A smartphone without a display? It sounds like nonsense, but the enthusiastic founders of Hu.ma.ne managed to turn this idea into a functional product. That is, functional enough to put together an impressive demonstration video.
Through the American operator T-Mobile, they offer their own connectivity for a monthly subscription. Everything else is handled by “artificial intelligence”, on whose abilities the whole concept stands or falls.
Check out the Humane AI Pin wearable:
Photo: Hu.ma.ne
The box is about the size and shape of a wireless-earbud case. It involves two batteries: a smaller one built directly into the device and a larger one that sits on the other side of the garment and holds the device in place magnetically.
What can the Humane AI Pin do?
From a hardware point of view, the AI Pin is a standalone device that connects to the Internet over a mobile network. It is equipped with a camera, a depth sensor, a laser projector and a surround speaker. It uses GPS and a compass to know where it is and which way its wearer is facing.
In many ways the Humane AI Pin recalls Google Glass from 2013. That device raised great expectations at the time, but it gradually became synonymous with failure. One reason was the suspicion it aroused in bystanders: whenever someone showed up wearing Google Glass, everyone around immediately began to wonder whether they were being photographed or filmed.
The maker of the AI Pin wants to avoid these concerns: there is an “unbypassable” diode on the top of the device that lights up whenever the device is recording or listening. The device shoots videos and takes pictures with a wide-angle lens, so you do not need to aim carefully; you simply point it roughly in the right direction.
Artificial intelligence as advisor and manager
The box also has a projector that can beam basic information onto your palm. It looks genuinely futuristic, but also rather impractical. In essence, it is a single-colour display with the resolution of a small watch, and its readability depends on how flat you hold your palm.
The projector also handles the music playback controls, displaying them right on the hand. By tilting the palm, the user can skip tracks or pause the music. Text can be displayed as well, but only a few lines at a time, not a whole article.
Photo: Hu.ma.ne
You dictate a text message and, thanks to the laser projector, can read it on your hand before sending it.
Most meaningful communication is therefore supposed to happen by voice. The presentation showed some impressive examples; a selection (with translated instructions) can be found in the video at the beginning of this article.
The device ordered a book it was looking at over the Internet. It took pictures and recorded videos. It started a call, answered a text message, helped with navigation around the city and answered quite complex questions. Although… we will get back to those answers later.
Why no display?
There are dozens of similar visions every year. Why pay attention to this particular piece of science fiction? Perhaps because the founders of Hu.ma.ne are no beginners. “I spent 22 amazing years at Apple,” recalls Imran Chaudhri, the visionary behind the project. Among other things, he helped design the interface for Steve Jobs’ landmark project, the iPad, and it was there that he met Bethany Bongiorno. They are united not only by marriage, but also by their joint departure from Apple and the founding of Hu.ma.ne in 2017.
Photo: Hu.ma.ne
Spouses Imran Chaudhri and Bethany Bongiorno during the presentation of Hu.ma.ne’s first commercial product.
In six years, they managed to turn the vague vision of a “device without a display” into a concrete, functional and sellable product. That is no small feat, and it would not have been possible without influential investors. Sam Altman, co-founder and head of OpenAI, backed the project with his own money. After all, it is OpenAI’s technology (the GPT-4 language model known from ChatGPT) that provides the artificial intelligence meant to let the device do without a display.
“As the performance of our computers increased, their size decreased,” recalls Chaudhri. From the personal computer to the laptop, the smartphone and the smartwatch.
“What’s the next step? Some people believe it will be VR or AR glasses, but those just move the display a few millimeters in front of our eyes,” complained Chaudhri. According to Chaudhri and Bongiorno, the best and most “humane” technology is one that is virtually invisible.
Photo: Hu.ma.ne
The futuristic vision will surely find its enthusiastic supporters.
The basic wake-up gesture resembles the activation of the classic Star Trek communicator. That is probably no coincidence. But we sometimes forget why futuristic technology is usually depicted as voice communication in classic films: it was the easiest way to write and the cheapest way to film future technology. The actors simply speak a command to the computer, and the “omniscient” computer’s response is calmly read out in a machine voice. Creating a credible-looking futuristic display for the scene would be many times more expensive.
We do not just look for music on our phones, we also look for partners. And that is not something you want to entrust to an artificial intelligence that communicates with you by voice alone, with no display on which to check its work.
But imagine having to handle everything you now handle on your phone by voice alone. It would be like having someone walk next to you holding your phone while you tell them what to look up. If they do not know exactly what you want, they will either ask or decide for you. Suddenly this seems less like an advantage and more like a recipe for countless frustrating misunderstandings.
Embarrassing mistakes and practical problems
After all, attentive viewers noticed two factual errors even in the demonstration itself, which was edited footage rather than a live performance on stage.
“When is the next eclipse and where will it be best seen?” asked Chaudhri. While the AI Pin searched for an answer, Bongiorno explained: “Here, the AI is traversing the web, looking for knowledge all over the internet.”
“The next total solar eclipse will be on April 8, 2024,” the AI replied after a few seconds. “It will be best seen from Exmouth, Australia, and East Timor.”
The presentation moved on, and apparently no one on the production team even thought to check whether the answer was correct (evidently they did not learn from Google’s similar astronomical embarrassment). Attentive viewers, however, did check it: the date is right, but the location is not.
Photo: Hu.ma.ne
Where will the solar eclipse of April 8, 2024 actually be visible? Definitely not from Australia or East Timor…
The next example is even funnier. Chaudhri holds up a handful of almonds and asks how much protein they contain.
Photo: Hu.ma.ne
“How much protein?” asks Chaudhri, holding what looks like 10 to 15 almonds.
“These almonds have 15 grams of protein,” the AI estimates. In fact, 15 almonds contain less than four grams of protein: a typical almond weighs a little over a gram and is roughly one-fifth protein. So once again an impressively quick answer that, on closer inspection, is completely wrong.
The language model is clearly doing what we pointed out a year ago: it bullshits without hesitation. More precisely, it strings words together so that the answer sounds credible in the given context, regardless of whether it is true.
In a promotional video, that is forgivable. We all understand that for show purposes it does not really matter where the solar eclipse can be watched or how much protein is in a handful of almonds. After all, we also come across false information on the display of an ordinary smartphone.
Photo: Hu.ma.ne
It is worth noting that in all the published AI Pin ads the actors wear clothes made of firm fabric: jackets or dresses. On a loose T-shirt, the box would probably not fare so well.
But on a display we also see the source of the information and can weigh whether we trust it or whether to keep searching. We consider the context, the credibility of the author, perhaps subconsciously even the font or the overall professional appearance of the website.
We do this tens or hundreds of times every day, constantly evaluating different sources and deciding how much to trust them. That is also why we keep the display in front of our eyes: today, besides consuming funny videos and alarming news, we also use the smartphone to shop, plan trips, write e-mails and share photos.
Finding out how much protein is in almonds is one thing. But we handle things a hundred times more important on our phones. We are not just looking for music to listen to; we also buy plane tickets, choose schools, sell cars, look for business and romantic partners… These are usually not things we want to settle with a single voice command.
In short, we have moved a large part of our lives onto the screens of our phones. If someone wants to replace that display for us and free us from its clutches (and I understand how tempting that sounds), that “someone” has to be extremely trustworthy. And current AI systems are not there yet.
Were they ahead of their time?
Yet the vision of the “invisible computer” is not meaningless. Legendary technology journalist Walt Mossberg summed it up nicely in his farewell column, The Disappearing Computer: “Technology is a great thing, but for 40 years it has been too unnatural, a sort of appendage to life. What is now brewing in laboratories around the world promises to change that.”
According to him, the future development of technology will no longer focus on specific devices, but rather on overall experiences. “As a gadget lover, I’m a little sad,” Mossberg wrote back in 2017. “But as someone who believes in technology, I’m really excited about it.”
The Humane AI Pin – in its current form, with current AI capabilities and in the current internet environment – is not a realistically usable tool. Definitely not for most users.
Even Sam Altman himself, Hu.ma.ne’s most famous investor and the head of OpenAI, is not sure this is the right approach. “That will be up to the customers to decide,” Altman told the New York Times. “Maybe they’re ahead of their time, and maybe people will welcome it as a better alternative to the phone.” He also added that many devices that looked like surefire hits ended up being sold at a 90 percent discount…
But there are sure to be a few enthusiastic experimenters who will test this concept in different situations. And from the experience they gather and the dead ends they discover, regular users will also benefit over time.
Photo: collage by Pavel Kasík, Seznam Zprávy, AI visualization
Would you like to have a robot that sees what you see and can help you make decisions? Much will depend on how much trust you place in such “personal artificial intelligence”.
The current “robot” clipped to a jacket offers, in theory, everything needed to make the founders’ vision work. The only thing missing is an artificial intelligence capable enough to solve, in the background, all the problems that come with the missing display.
But that can change, in five years, in ten, or in one. People will probably get used to ubiquitous AI. They may come to trust artificial intelligence more than they trust themselves. And then it will no longer seem so strange that there is no display to look at.