Imagine being promised a genius-level upgrade to your intellect, only to end up with a digital assistant that feels like a relic from the early days of computing, hovering right in front of your eyes. That's the reality I faced with AI glasses: a gadget that claimed to make me smarter but instead felt like Clippy had taken up residence on my face. So is this the future of wearable tech, or just a glorified distraction?
This is Optimizer, a weekly newsletter from The Verge’s senior reviewer, Victoria Song, that dissects the latest in phones, smartwatches, apps, and other gadgets promising to revolutionize your life. Delivered every Friday at 10 AM ET, Optimizer is your go-to guide for separating hype from reality. Subscribe here [https://www.theverge.com/newsletters] and join the conversation. We’ll be back on November 7th after a brief hiatus.
As I mentioned last week [https://www.theverge.com/column/797938/optimizer-newsletter-wearable-hell-smart-glasses-smart-rings-ai-hardware], my body parts are becoming scarce real estate for testing gadgets. Recognizing my limits, I enlisted senior editor Sean Hollister, a fellow smart glasses enthusiast, to help me test Halo Glass, an always-listening AI companion embedded in a pair of glasses. The promise of a second memory sounds appealing, but the ethical and practical challenges turned out to be staggering.
Halo Glass is the brainchild of AnhPhu Nguyen and Caine Ardayfio, two former Harvard students who made headlines last year for a controversial project that used Ray-Ban Metas for real-time doxxing [https://www.theverge.com/2024/10/2/24260262/ray-ban-meta-smart-glasses-doxxing-privacy]. In August, they announced [https://techcrunch.com/2025/08/20/harvard-dropouts-to-launch-always-on-ai-smart-glasses-that-listen-and-record-every-conversation/] their latest venture: AI glasses that listen, record, transcribe, and provide real-time answers to your conversations. Think of it as a blend of Cluely [https://www.theverge.com/ai-artificial-intelligence/654223/cheat-on-everything-ai], an AI tool for "cheating" on everything, and Bee [https://www.theverge.com/reviews/627056/bee-review-ai-wearable], a wearable AI that claims to be your second memory, but in glasses form.
Naturally, I was eager to test them. Sean and I spoke with Ardayfio, who revealed that while Halo will eventually have its own hardware, we’d be among the first to test their app on the Even Realities G1 Glasses [https://www.evenrealities.com/]. Even Realities may not be a household name, but they impressed at CES 2025 [https://www.theverge.com/2025/1/10/24340208/ces-2025-smart-glasses-rokid-halliday-xreal-vuzix-nuance-audio]. Our task? Test the prototype, compare notes, and share our experience. Simple, right?
Wrong.
The allure of a second memory is undeniable. Who wouldn't want to stop forgetting tasks, or have definitions pop up mid-conversation? But always-on AI wearables raise a host of ethical questions. Sean, for instance, lives in California, where recording conversations requires consent from all parties. Is he breaking the law by wearing these glasses without disclosing them? And what about his wife, whose job demands confidentiality? These concerns forced Sean to test the glasses outside his home. My spouse, already fed up with always-listening devices after my Bee review [https://www.theverge.com/reviews/627056/bee-review-ai-wearable], made it clear these glasses weren't welcome indoors either. Our solution? Wear the glasses on a video call and test them together.
In theory, Halo works seamlessly: a live transcription of your conversation, occasional factoids, and a post-conversation summary with action items. In practice, it was a comedy of errors. Our call began with a 20-minute troubleshooting session of firmware updates and disconnections. To activate the display, you have to look up at a 40-degree angle, throwing your head back like a sea lion. We adjusted it down to 15 degrees, but it still felt absurd.
While prototype quirks are expected, the idea of AI glasses making you appear smarter without others knowing feels unsettling. Sean and I debated whether these devices help us stay present or if they alter our authenticity. Can you truly be yourself when you’re constantly being recorded? How do you protect the privacy of loved ones? These questions lingered as we tested the glasses.
The experience was surreal. Every time the AI interjected, one of us had to throw our head back to view the alert. Picture two adults bobbing like sea lions mid-conversation. The AI often served up useless trivia, like defining "ensconced" after I had used it correctly, or got stuck in loops, repeatedly telling us that mobile phones emerged in the 1970s and '80s. It wasn't all bad; occasionally it offered something helpful, like defining "nits" during a discussion about displays. But overall, it was more distraction than aid.
Sean's interest in Halo stemmed from a desire to "remember better," a sentiment many share. Yet the experience felt like Microsoft's Clippy: constantly interrupting with irrelevant tidbits. For now, I'll stick to my analog Post-its and to-do lists. Looking dumb by asking for clarification beats bobbing my head like a puppet.
What do you think? Are AI glasses the future, or just a high-tech nuisance? Let me know; I'm eager to hear your take.