

The Interface Is Still Early
Our relationship with AI still feels provisional.
Despite rapid progress in capabilities, the way we interact with AI today is awkward and incomplete. Intelligence is mostly accessed through the chatbot UI, a pattern that echoes interfaces designed decades ago: a repetitive loop of prompts and responses. Context barely carries over, memory remains shallow, and systems rarely understand users beyond the last query. It is increasingly clear that this is not the shape the relationship will ultimately take.
We have been here before.
Computers were powerful long before they were usable. IBM mainframes once filled entire rooms, impressive but detached from everyday life. The real shift came not from more compute but from a vision of accessibility: hardware constraints eased, and interface design allowed computing to fade into the background. From that point on, computers stopped feeling like machines and became tools people could live with, eventually turning into a constant presence across work and daily life.
By the same logic, chat-based interfaces are unlikely to be the final form of human–AI interaction. They are transitional. Prompting feels more like negotiating with a system than interacting with something that understands continuity or intent. Context is fragmented, memory is limited, and the interaction lacks a sense of humanity. AI is still a statistical system, but it is beginning to resemble an intelligent layer that anyone can access. Over time, that layer is unlikely to remain confined to screens. It will sit on top of existing infrastructure, reshaping interaction itself rather than forcing humans to adapt to it.


Most transitions get awkward before they get good.
Most interface revolutions pass through an uncomfortable middle ground. New forms of interaction are needed, but they must coexist with existing ones for a long time. Any product that tries to fully replace phones or computers today asks for a binary choice users are not ready to make. Humane's AI Pin illustrates this tension well: it assumes displacement rather than augmentation, and its core use cases (ambient queries, lightweight assistance) aren't frequent or valuable enough to justify a standalone device. Similar dynamics appear in products like Rabbit or Friend, which lean toward novelty or companionship without delivering deep contextual intelligence. These experiments are useful, but they reveal how difficult it is to align interface, context, and utility at the same time.






Staying close to human experience matters.
Despite being early (and imperfect in execution), Meta appears better positioned than most. Its approach stays close to human experience instead of abstracting away from it. Vision, audio, and social context provide much richer grounding than isolated devices. Wearables that sit near the focal points of perception, like glasses, can integrate into existing behaviors without disrupting them. Crucially, this approach doesn't attempt to replace current devices overnight. By augmenting what already exists, it creates a more plausible path to short-term adoption and long-term relevance.






What matters next is how intelligence is experienced, not exposed.
Jony Ive’s involvement with OpenAI is a subtle but important signal. It suggests that the next meaningful leap in AI will come not from marginal model improvements alone, but from how intelligence blends into interaction. As returns to scaling flatten and model gains become less dramatic, interface design starts to matter more. While this shift currently appears consumer focused, it won’t remain there. Once AI becomes seamless, contextual, and humane, it will stop feeling like a tool and start feeling like part of how work and life actually happen.