In an era where AI assistants have become commonplace on our phones and smartwatches, one San Francisco startup is pushing the boundaries of what’s possible with artificial intelligence and wearables. Meet Omi, an innovative AI device that promises to transform how we capture, process, and act on information in our daily lives. Developed by Based Hardware Inc., Omi represents a bold new direction in the wearable AI revolution—one that prioritizes open-source transparency, genuine utility, and the ambitious goal of eventually reading your mind.
What Is Omi?
At its core, Omi is a personal AI that listens, remembers conversations, takes notes, and handles tasks on your behalf, helping you stay organized and proactive with real-time notifications and comprehensive memory assistance. But describing it simply as a voice recorder would undersell its ambitions and capabilities.
Omi comes in two physical forms. The most accessible version is a sleek, pendant-like device roughly the size of an Apple AirTag that you wear around your neck. The second form factor is an experimental button that attaches to the side of your forehead with medical tape. Both versions connect to your smartphone via the Omi app, available on iOS, Android, and Mac.
The philosophy behind Omi is captured in the company’s tagline: “Thought to Action.” Rather than replacing your smartphone, Omi is designed to augment your cognitive abilities by capturing conversations and interactions throughout your day, transcribing them, and automatically extracting summaries, key insights, and actionable tasks.
How It Works
The user experience is straightforward. When wearing the pendant, the AI assistant can be activated by saying “Hey Omi,” though the device also operates in ambient listening mode, continuously capturing conversations. The captured audio is streamed to your phone, where the Omi app uses AI models to transcribe it, analyze the content, and generate structured summaries.
What makes this particularly useful for professionals and students is the automatic extraction of action items. Instead of manually reviewing hours of meeting notes, Omi extracts the tasks you need to complete and exports them directly to productivity tools like Todoist, Asana, or Google Tasks. The device also offers real-time web search capabilities, allowing it to fetch live information about weather, prices, or facts during conversations.
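The action-item extraction described above can be illustrated with a rough sketch. To be clear, this is not Omi's actual implementation (which relies on AI models, not keyword rules); it is a minimal, hypothetical heuristic showing how candidate tasks might be pulled out of a transcript:

```python
import re

# Hypothetical sketch only: Omi uses AI models for this step. These
# simple commitment-phrase patterns just illustrate the general idea
# of turning a transcript into a list of candidate action items.
ACTION_PATTERNS = [
    re.compile(r"\b(?:I'll|I will)\s+(.+?)(?:\.|$)", re.IGNORECASE),
    re.compile(r"\b(?:we need to|you should|let's)\s+(.+?)(?:\.|$)", re.IGNORECASE),
]

def extract_action_items(transcript: str) -> list[str]:
    """Return candidate action items found in a transcript."""
    items = []
    for sentence in transcript.split("\n"):
        for pattern in ACTION_PATTERNS:
            for match in pattern.finditer(sentence):
                items.append(match.group(1).strip())
    return items

transcript = (
    "Thanks everyone for joining.\n"
    "I'll send the revised draft by Friday.\n"
    "We need to book the venue before the end of the month."
)
print(extract_action_items(transcript))
# ['send the revised draft by Friday', 'book the venue before the end of the month']
```

A real pipeline would then push each extracted item to a task manager's API (Todoist, Asana, and Google Tasks all expose REST endpoints for creating tasks), which is the export step the article describes.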
The Brain Interface: Omi’s Moonshot
The most intriguing aspect of Omi is its vision for the future. The team notes that a separate brain-interface module may eventually let the device read brain data. Founder Nik Shevchenko has stated that the company's ultimate goal is full mind reading. Rather than attempting this immediately, the team is taking a phased approach, starting with the simplest use case: device activation through brain signals.
The first batch of Omi devices shipping in Q2 2025 will be audio-only. However, the first 5,000 Omi orders will get priority access to the brain-computer module once it is released, with full brain-interface functionality not expected until 2026-2027.
Pricing and Accessibility
One of Omi's most attractive features is its price point. At $89, the lightweight pendant is one of the most affordable AI wearables on the market. Omi includes 1,200 minutes of free transcription every month, and users can upgrade to Omi Unlimited for unlimited usage. Notably, the device works with both iPhone and Android phones without requiring any additional hardware; the app bridges everything together.
Open-Source Philosophy and Privacy
In a landscape crowded with closed-ecosystem devices, Omi stands apart through its commitment to open-source development. Shevchenko built Omi on an open-source platform that lets users see where their data is going or choose to store it locally. This transparency extends to developers as well.
The platform's open-source nature has already sparked significant community interest. Developers have created more than 250 apps on Omi's app store, expanding the device's capabilities beyond what the core team could build alone. For privacy-conscious users, conversations can be stored locally on your phone or in the cloud with encryption, and everything can be deleted in one click in the Omi app.
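Community apps of the kind mentioned above generally work by reacting to conversation data. As a hypothetical sketch (the payload shape and field name `transcript` are assumptions for illustration, not Omi's documented developer schema; the open-source repository defines the real interface), a minimal app might look like:

```python
import json

# Hypothetical sketch of a community app handler. The payload field
# "transcript" is an assumed shape, not Omi's documented schema.
def handle_conversation(payload: dict):
    """Return a notification string if the conversation mentions a deadline."""
    transcript = payload.get("transcript", "")
    if "deadline" in transcript.lower():
        return "Heads up: a deadline came up in this conversation."
    return None

event = json.loads('{"transcript": "The deadline for the grant is March 3."}')
print(handle_conversation(event))
# Heads up: a deadline came up in this conversation.
```

Because the platform is open source, a developer can inspect exactly what data such a handler receives and where it is sent, which is the transparency argument the article makes.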
The Cautious Path Forward
Unlike other AI wearables that have launched to mixed reviews—think of the disappointing Humane AI Pin or the rocky rollout of the Rabbit R1—Omi’s creator is taking a methodical approach. Rather than rushing to mass consumer adoption, Shevchenko is prioritizing developer kits first to validate real-world utility before courting everyday consumers.
The company is focused on building something people genuinely want to use rather than chasing mass adoption right away. This strategy reflects a broader maturation in the wearable AI space, where startups are learning that hype without substance doesn't translate to lasting success.
Looking Ahead
The future of Omi extends beyond transcription and task management. Next, the company is working on AI-powered glasses similar to Meta's AI glasses, with more ambitious plans, including brain-computer interface functionality, further out.
These developments suggest that Omi isn’t content to be just another AI assistant. Instead, it’s positioning itself at the intersection of multiple emerging technologies—wearables, ambient computing, and neurotechnology—to create a genuinely transformative tool for human productivity and memory.
The Bigger Picture
Omi’s rise represents a shift in how we think about AI integration into our lives. Rather than forcing users to adapt to new interfaces or abandoning privacy for convenience, Omi is exploring a different path: a device that understands context, respects privacy, and genuinely enhances human capabilities without replacing the cognitive work of being human.
Whether Omi ultimately delivers on its ambitious vision—including the promised brain-computer interface—remains to be seen. But the fact that a small San Francisco startup is even attempting this, backed by a community of developers and early adopters, suggests we’re entering an exciting new chapter in the story of artificial intelligence and human augmentation.