Better Than AirPods? Meta’s Smart Glasses Just Got a Major Audio Upgrade

Will Smith
7 Min Read

Meta is fundamentally shifting the utility of its smart glasses, moving beyond simple photography and into the realm of augmented hearing and contextual AI. On Tuesday, the company began rolling out its v21 software update for Ray-Ban Meta and Oakley models in the U.S. and Canada, a release that positions the device as a conversational aid, a Spotify DJ, and a hands-free fitness tracker.

The update is currently restricted to Meta’s Early Access Program, requiring users to navigate a waitlist for approval. While there is no confirmed timeline for a general release, the features signal Meta’s intent to compete directly with hearing health tech and dedicated sport wearables.

Solving the Cocktail Party Problem

The most significant addition in v21 is “Conversation Focus,” a feature aimed at the perennial audio engineering challenge of isolating a single voice in a crowded room. Meta’s chief technology officer, Andrew Bosworth, described the feature as “very handy to have for all the holiday parties,” noting its ability to amplify the person speaking directly to you.

The system utilizes dual beamforming microphones to create a roughly 60-degree cone of attention in front of the wearer. An on-device AI model suppresses ambient noise while boosting speech frequencies within that zone. Unlike static noise cancellation, the amplification dynamically adjusts to changing noise levels, though users can manually override the volume via the temple touch controls.
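Meta has not published the details of its model, but the behavior described above can be sketched as a toy gain schedule: sounds inside a forward cone get a boost that scales with ambient noise, while off-axis sounds are attenuated. The angles, thresholds, and decibel figures below are illustrative assumptions, not Meta’s actual parameters.

```python
import math

CONE_HALF_ANGLE = 30.0  # degrees: half of the roughly 60-degree attention cone


def in_focus_cone(source_angle_deg: float) -> bool:
    """True if a sound source lies inside the forward attention cone."""
    return abs(source_angle_deg) <= CONE_HALF_ANGLE


def focus_gain(source_angle_deg: float, ambient_db: float) -> float:
    """Toy gain schedule: boost in-cone speech as ambient noise rises,
    attenuate out-of-cone sound. All constants are illustrative only."""
    if in_focus_cone(source_angle_deg):
        # Scale the boost with ambient level, capped so amplification
        # cannot run away in very loud rooms.
        boost_db = min(12.0, max(0.0, (ambient_db - 60.0) * 0.4))
        return 10 ** (boost_db / 20.0)
    # Fixed ~10 dB suppression for everything outside the cone.
    return 10 ** (-10.0 / 20.0)


# A friend speaking head-on in an 85 dB restaurant gets a ~10 dB boost,
# while a voice 90 degrees off-axis is suppressed instead.
print(round(focus_gain(0.0, 85.0), 2))
print(round(focus_gain(90.0, 85.0), 2))
```

This also makes the reported failure modes intuitive: once a speaker drifts past the cone boundary, the gain flips from boost to suppression, which matches testers’ reports of the audio lock “drifting.”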

Early field tests suggest the feature is effective but situational. One San Francisco-based tester noted a distinct improvement in an 85-decibel restaurant environment:

“It feels like someone turned down the restaurant and turned up my friend.”

However, the technology has clear limitations. The audio lock can drift if the speaker moves outside the target zone or if multiple people speak simultaneously within the cone. Testers also reported disorientation in moving vehicles when amplification was set high.

Crucially, this is not a medical device. Unlike Apple’s AirPods Pro 2, which recently received FDA authorization for clinical-grade hearing aid functionality, Meta’s solution lacks regulatory clearance and personalized audiogram tuning.

“Meta is walking a fine line. It can help with mild difficulties in noisy spaces, but it is not a medical device, and it’s not tuned to an individual’s hearing profile.”

Dr. Lila Fernandez, an audiologist based in New York, warned that without clinical calibration or volume limiters, users—particularly older adults—might inadvertently rely on the device as a substitute for proper hearing healthcare.

Contextual AI and the ‘Red Sweater’ Glitch

The update also expands Meta’s visual AI capabilities through a new integration with Spotify. The feature allows users to prompt the glasses to “play something for this vibe,” triggering the camera to analyze the wearer’s surroundings and generate a playlist based on visual cues.

Meta claims this process relies on image recognition—identifying album art, scenery, or decor—without storing audio or location data, a necessary distinction given the company’s checkered history with user privacy.

However, the contextual engine is still prone to hallucinations. In independent testing, the AI struggled to differentiate between everyday objects and thematic cues. In one instance, a simple red sweater was misidentified as holiday decor, triggering Christmas music in October. Reliability in uncontrolled environments was reportedly near 50 percent.

Currently, this integration is exclusive to Spotify Premium subscribers, reinforcing Meta’s strategy of locking features behind paid partnerships rather than open developer ecosystems.

Oakley and the Risks of Hands-Free Navigation

For the Oakley Vanguard line, the v21 update introduces specific utility for endurance athletes. Runners and cyclists can now access real-time metrics—pace, cadence, and heart rate—and turn-by-turn navigation via voice command, eliminating the need to check a watch or phone.

The update also introduces hazard alerts. By combining camera data with sensor fusion, the glasses attempt to detect vehicles approaching from blind spots, issuing an audio warning. While promising, the implementation has raised safety concerns regarding latency.

Reports indicate the system has a processing delay of roughly 0.8 seconds. At cycling speeds of 20 mph, a rider travels nearly 24 feet in that time—a margin that safety advocates argue is too wide for urban traffic.
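The distance figure follows directly from unit conversion: one mile per hour is 5,280/3,600 feet per second. A two-line sketch makes the claim checkable at any speed or delay:

```python
MPH_TO_FPS = 5280 / 3600  # feet per second per mile per hour


def delay_distance_ft(speed_mph: float, delay_s: float) -> float:
    """Distance a rider covers during the alert system's processing delay."""
    return speed_mph * MPH_TO_FPS * delay_s


# At 20 mph with a 0.8 s delay, a rider covers about 23.5 feet
# before the audio warning even fires.
print(round(delay_distance_ft(20.0, 0.8), 1))
```

The same formula shows why the concern compounds at higher speeds: at 30 mph the delay alone consumes roughly 35 feet.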

“At city speeds, a fraction of a second is the difference between a scare and an impact. This can’t become a crutch.”

A representative from a Seattle cycling advocacy group emphasized that while the bone-conduction audio allows riders to hear their surroundings, reliance on delayed digital warnings could be dangerous.

Privacy in the Always-On Era

As Meta’s hardware becomes more capable of filtering reality, privacy concerns are shifting from simple recording to data processing. While Meta asserts that Conversation Focus processing occurs locally on the device, the option for users to share “anonymized snippets” for model training remains a point of contention.

Furthermore, there is no external indicator that the glasses are actively processing and amplifying specific conversations. Unlike the LED light that signals video recording—a requirement for the hardware’s release—audio enhancement is invisible to bystanders.

“There is no visible signal that your glasses are actively enhancing one person’s voice in a crowded café. That raises questions about informed consent from everyone else in the room.”

European digital rights campaigners have already flagged this as a potential violation of the spirit, if not the letter, of EU privacy norms. Additionally, the World Health Organization has previously urged manufacturers of “hearables” to implement mandatory volume caps to prevent noise-induced hearing loss, a safeguard that appears absent in the current v21 iteration.

Meta is effectively staking its claim on the next frontier of wearables: audio augmented reality. By attempting to filter what users hear and see, the company is moving past the novelty of face-worn cameras and into the complex, regulated territory of sensory augmentation.
