Have I gotten your attention with this provocative headline? Give me a bit of rope on this.
If you believe the rumors, as well as the carefully choreographed media leaks, OpenAI might be preparing to launch something that could change everything. Not just the way we use computers but how we interact with information, with each other, and with the physical world around us.
Forget screens. Forget apps. Forget pulling a glowing slab out of your pocket 200 times a day. Sam Altman and legendary former Apple designer Jony Ive are reportedly deep into building what they believe is the successor not just to the iPhone but to the very idea of the smartphone.
It’s not a phone, we’re told. It’s not glasses — and it’s not just another voice assistant stuck in a plastic shell. So what is it? That remains deliberately vague, but the vision is clear: a discreet, possibly wearable, and certainly AI-infused device that exists at the intersection of presence, awareness, and utility — an invisible interface between you and your life.
Interestingly enough, this isn’t about smarter devices. It’s about something more profound: ambient computing. A system so integrated, so subtle, and so intelligent, it essentially disappears — while still knowing enough to be useful.
Designing a New Paradigm
To understand the ambition of this project, look no further than the partnership behind it. Altman is the face of OpenAI, the company behind ChatGPT, Sora, and a suite of tools rapidly redefining human-computer interaction. Jony Ive is the industrial design genius behind the iMac, iPod, iPhone, and Apple Watch, whose fingerprints are all over modern consumer electronics.
Together, they’ve formed a new venture under the OpenAI umbrella by acquiring Ive’s design studio, io. The goal: to build a family of AI-native products from scratch, unshackled by the legacy expectations of keyboards, screens, or app stores. Altman has called it the “biggest thing we’ve ever done as a company.” Ive compared the excitement to what he felt three decades ago, when he was designing his first products at Apple.
The device — or, more accurately, this upcoming category of devices — will not be just an accessory. It is meant to be a companion or, more to the point, a system. Something you carry, wear, or keep nearby. Something that understands where you are, what you’re doing, and what matters most and then acts on your behalf with minimal friction.
In a private meeting with OpenAI staff, Altman made it clear: this isn’t a phone. It’s also not glasses. It’s designed to rest on a desk, slip into a pocket, or clip onto clothing. Something lightweight. Something aware. Something ambient. And, most importantly, something you barely notice is there.
Perhaps most crucially — it’s not meant to show you the future. It’s meant to know it.
A Device Without a Screen
So, how does a device without a screen even function? Quite easily, it turns out, if you redefine what an interface is.
The new product is expected to rely primarily on voice and audio interaction. You speak. It listens. It interprets and responds. The vision here is less “Hey Siri” and more “Why didn’t you remind me I promised to call my mother today?”
This interface goes beyond commands and queries. It requires context. It requires memory. It requires intuition. It isn’t a computer you use — it works for you.
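None of this is confirmed, of course, but the underlying loop is conceptually simple. Here is a minimal Python sketch of what an ambient, memory-backed assistant could look like; the transcription and response functions are stand-ins for real on-device models, and every name here is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Rolling store of things the assistant has heard or inferred."""
    facts: list[str] = field(default_factory=list)

    def remember(self, fact: str) -> None:
        self.facts.append(fact)

    def relevant(self, utterance: str) -> list[str]:
        # Naive keyword overlap; a real system would use embeddings.
        words = set(utterance.lower().split())
        return [f for f in self.facts if words & set(f.lower().split())]

def transcribe(audio_chunk: bytes) -> str:
    """Stand-in for an on-device speech-to-text model."""
    return audio_chunk.decode("utf-8", errors="ignore")

def respond(utterance: str, context: list[str]) -> str:
    """Stand-in for a language-model call; returns a canned reply."""
    if context:
        return f"On it. Relevant context: {context[0]}"
    return "Noted."

def ambient_loop(audio_stream) -> None:
    memory = Memory()
    for chunk in audio_stream:
        heard = transcribe(chunk)
        print(respond(heard, memory.relevant(heard)))  # a device would speak
        memory.remember(heard)

# Simulated audio stream: each item stands for one spoken utterance.
ambient_loop([
    b"I promised to call my mother today",
    b"remind me about my mother",
])
```

The point of the sketch is the shape of the loop: everything heard feeds the memory, and every reply is grounded in what the assistant already knows about you.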
Without a screen to distract you or dominate your attention, the device could be, paradoxically, more present. Always there, constantly aware, yet never the center of your focus.
Already, we see prototypes from other companies testing similar ideas. Google has showcased smart glasses that pair conversational AI with floating displays. Meta has experimented with voice-activated Ray-Bans. Humane tried (and struggled) to introduce a wearable AI pin. But OpenAI and Ive seem to be aiming for something less flashy and more foundational.
Not a gadget. A companion.
Listening Is the Feature, Not the Flaw
For this device to be truly useful, it needs to know everything — or at least almost everything — about your life. That includes your location. Your schedule. Your conversations. Your messages. Your heart rate. Even your mood.
In short, this isn’t just a new product. It’s a new contract between humans and machines. A promise that, in exchange for persistent awareness and frictionless assistance, users will need to consent to unprecedented levels of access.
It will likely be the most expansive consent model in tech history. To get real value from the system, users will need to allow the device to listen to, transcribe, and analyze their surroundings continuously. Emails, texts, voice memos, live conversations: nothing is off limits if the assistant is to be truly proactive.
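What might that consent model look like in practice? Purely as a thought experiment, here is one way to express scoped, revocable permissions; every scope name below is hypothetical, not a real API:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Scope(Enum):
    # Hypothetical permission scopes; none of these are a real API.
    MICROPHONE_ALWAYS_ON = auto()
    LOCATION_CONTINUOUS = auto()
    MESSAGES_READ = auto()
    CALENDAR_READ = auto()
    HEALTH_SENSORS = auto()

@dataclass
class Consent:
    granted: set[Scope]

    def allows(self, scope: Scope) -> bool:
        return scope in self.granted

    def revoke(self, scope: Scope) -> None:
        self.granted.discard(scope)

# A user who accepts ambient listening but not health sensing.
consent = Consent({Scope.MICROPHONE_ALWAYS_ON, Scope.CALENDAR_READ})
assert consent.allows(Scope.MICROPHONE_ALWAYS_ON)
assert not consent.allows(Scope.HEALTH_SENSORS)
consent.revoke(Scope.MICROPHONE_ALWAYS_ON)  # consent must stay revocable
assert not consent.allows(Scope.MICROPHONE_ALWAYS_ON)
```

The design choice that matters is revocability: trust of this magnitude only works if users can withdraw any scope at any time.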
Many users may accept this trade. Why? Because the upside is enormous. Imagine a device that detects you’re frustrated during a meeting and silently suggests rescheduling your next one. Or hears you cough and offers a health recommendation. Or observes your silence and offers to draft a reply to a difficult email.
This technology isn’t passive AI. It’s participatory intelligence. But it also means the barrier to adoption isn’t hardware — it’s trust.
Battery Life Breakthrough
One of the most immediate technical concerns around a device like this is power. A phone can last a day. A laptop, maybe two (thank you, Apple Silicon and Qualcomm Snapdragon X Elite chips). However, a truly ambient, wearable AI assistant must last for days.
The good news is the pieces are already falling into place.
Low-power AI cores are becoming increasingly efficient. Custom silicon optimized for always-on voice recognition and local inference can operate for extended periods without requiring bulky batteries. Wireless charging and modular battery accessories could extend uptime without compromising design.
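Some rough, back-of-the-envelope arithmetic shows why those low-power cores and duty cycling matter. All figures below are illustrative assumptions, not the specs of any announced device:

```python
# Back-of-the-envelope battery life, with illustrative numbers only.
BATTERY_WH = 1.5         # a small wearable cell, ~400 mAh at 3.7 V
WAKE_WORD_W = 0.010      # always-on wake-word DSP, ~10 mW
INFERENCE_W = 1.0        # full on-device model while actively responding
ACTIVE_FRACTION = 0.02   # assistant actively working ~30 min per day

avg_draw_w = (WAKE_WORD_W * (1 - ACTIVE_FRACTION)
              + INFERENCE_W * ACTIVE_FRACTION)
hours = BATTERY_WH / avg_draw_w
print(f"Average draw: {avg_draw_w * 1000:.0f} mW -> ~{hours / 24:.1f} days")
```

Under those assumptions, a tiny wearable cell clears roughly two days; offloading heavy queries to the cloud or trimming the active-inference draw stretches the budget further.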
Apple’s recent patents on “wearable loop” devices suggest a flexible, sensor-rich form factor with haptics, microphones, and dynamic feedback, all in a design that can be worn or stored with ease. These loop-like wearables could shift shape or offer modular functionality, expanding use cases while staying lightweight and power-efficient.
The engineering challenge, in other words, isn’t insurmountable. What matters is how seamlessly power, sensors, and AI can be integrated into something people actually want to wear or carry.
Tipping Point for Interaction
Whether this upcoming device succeeds or not will likely hinge on one critical factor: behavior change.
Altman and Ive aren’t just designing a new product. They’re trying to shape a new set of habits — replacing 15 years of swipe, tap, and scroll with something more natural, more human.
Instead of unlocking your phone and typing into a search bar, you simply ask aloud, “What’s the latest on my flight?” Instead of staring at your calendar, you mutter, “Anything I can reschedule today?”
Instead of doomscrolling newsfeeds, you hear a quiet summary of things that matter to you — and ignore the rest.
The challenge, of course, is cultural. People aren’t just attached to their phones. They’re addicted to the dopamine hit of notifications, the comfort of visual control, and the illusion of productivity. Replacing that with something voice-driven, proactive, and screenless isn’t just a UI overhaul. It’s a psychological reboot.
But if it works, it might feel less like using a computer and more like having an invisible butler who actually understands you.
The Future Hums, Not Glows
Interestingly, Apple’s WWDC 2025 came and went without any hint of a dedicated ambient computing device despite growing industry expectations.
While Apple introduced notable software updates, such as its “Apple Intelligence” platform and continuity features, it avoided unveiling anything to rival emerging screenless AI devices like Humane’s Ai Pin or Meta’s smart glasses.
However, Apple might still be laying groundwork behind the scenes. A recent patent describes a flexible, loop-shaped wearable with sensors, microphones, and haptics that could serve as a discreet AI companion. The design hints at Apple’s potential long-term interest in ambient computing, even if no product is imminent.
In contrast, the team of Sam Altman and Jony Ive is racing ahead with bold ambitions to deliver an always-on AI device that forgoes traditional screens and embraces continuous context sensing.
Apple’s restraint may reflect a cautious approach to user trust, battery life, and privacy, but it could also risk falling behind if a competitor defines the category first.
My key question is whether Apple’s slow, privacy-first strategy will prove wiser in the long run or whether it will miss the moment as others shape the future of ambient AI. For now, the race belongs to those outside Cupertino, and OpenAI may be first out of the gate.
Let’s face it: In a world where AI is no longer confined to keyboards and screens, the next revolution may come not from what you see but from what you barely notice.
A Device That Anticipates You
OpenAI’s upcoming wearable — or ambient device, companion, or whatever name it eventually earns — could very well be the beginning of that shift. If it succeeds, it will mark the moment when computing stopped demanding our attention and started supporting it, when we moved from interacting with technology to living alongside it.
That won’t happen overnight, and the risks, especially those around privacy, surveillance, and overdependence, will be real. Opting in to a device like this demands a level of trust so immense and comprehensive that I’m still not sure most consumers will grant it unless the benefits are equally substantial.
However, if Sam Altman and Jony Ive get this right, the smartphone may one day look like the typewriter does now: elegant, iconic, and fundamentally obsolete.
Perhaps, in a not-so-distant future, when someone asks, “Where’s your phone?” you might smile and say, “I don’t use one anymore.”
Because by then, you won’t need to. The device — small, ambient, and always aware — will already be with you. Not in your hand or your pocket, but in your environment. Listening. Learning. Anticipating.
If I’m right, this invisible companion won’t just change how we compute — it will change what computing is. You won’t have to ask. It will already know.