Google XR glasses, AI cheating on interviews, and a new agent library


Hi Reader,

Here are three things I found interesting in the world of AI and tech in the last week:

Google’s Android XR re-imagines glasses and headsets for the Gemini era - review

The AR glasses race just got a lot more interesting with Google showing its hand. They recently released a TED demo of real-time visual processing and AI assistance, showing off live translation of signs from English to Farsi to Hindi, visual memory capabilities, and navigation with 3D map overlays directly in the wearer’s field of view.

It's all part of Android XR, which launched last December and has a somewhat naff slogan: "the first platform built entirely for the Gemini era." But I actually think it is bang on. AI completely changes the game in the whole "computers on your face" category. The Verge’s Victoria Song, who tested an early version of Android XR, described feeling “as close to Tony Stark in a controlled demo as I’ll ever be” with Gemini functioning as her “Jarvis.”

Google is opting to rely on external hardware partners, so we'll see a headset (Samsung's Project Moohan) before the glasses hit the market. Meanwhile, Meta’s Ray-Ban glasses have been selling well (bestsellers in 60% of European(ish) Ray-Ban stores, according to EssilorLuxottica’s earnings call last October). Meta and Ray-Ban have extended their partnership into the 2030s and are expected to release glasses with a built-in display later in 2025, but from what I've seen of Gemini and Llama 4, Google will crush them on the AI front.

Apple is also working on glasses, with reports that they are "the only thing Tim Cook spends time on from a product development standpoint". The earliest release date would be 2027, and Apple's AI situation is worse than Meta's.

I think there is a real chance that AI-powered glasses will be the iPhone moment of this decade, and if I had to bet right now, my money would be on Google. While it will face the same "are your glasses recording me" challenge of Google's initial foray into smart glasses a decade ago, my gut feeling is that we are in for less of a "glassholes v2" and more of a "new era of computing". But not for a couple of years.

Cluely AI launches as “undetectable assistant” with ethical controversy - news article

This company is so repellent that I'm reluctant to link to them, but I did find it interesting. The ex-Columbia students who got expelled for using AI to cheat on coding interviews just raised over $5 million for Cluely, a tool explicitly designed to help people “cheat at everything.” The app quietly runs on your screen during video calls, monitors conversations, and feeds you real-time answers without the other participants knowing.

They know how to use controversy to sell and published a cringe video to really emphasize the "cheat on everything" tagline. While they seem dead set on tarnishing the reputation of the AI industry (lol), I think it is more evidence of just what is technically possible. Real-time, multi-modal AI that can see, hear, and respond fast enough to be useful is here, and not too expensive to build things with. The potential applications are limitless.

Sure, pretending you're something you're not and hiding AI use is boring and scammy, but that doesn't make the tech less useful for legitimate applications. It will just put a bunch of people off and slow adoption, which maybe isn't a bad thing if you're feeling entrepreneurial.

Google launches Agent Development Kit (ADK) for multi-agent AI systems - announcement

Google just released their Agent Development Kit - an open-source framework designed specifically for building multi-agent AI systems. I wouldn't call myself a fan yet, but I've spent a bunch of time with it and it passes an initial sniff test. It's properly open source, I can use it with non-Google models and systems, it integrates with MCP, and I don't mind a bunch of the choices they made.

Google went really heavy on agents at their annual cloud conference two weeks ago. I've reviewed all the videos, and a lot of them made me wince, but I think they are a sign of things to come. My picks are the customer service agent, Agentspace, this data science agent, and the MLB hackathon winner.

What makes ADK particularly interesting is its modularity and flexibility. Unlike some agent frameworks that lock you into specific models, ADK lets you use Gemini models or any model from Vertex AI’s Model Garden, plus integration with LiteLLM for accessing models from Anthropic, Meta, Mistral and others. I’m particularly interested in its hierarchical approach to agent design - you can create specialized agents that handle specific tasks and then compose them into systems where agents delegate to each other.
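To make the hierarchical idea concrete, here is a toy sketch of the delegation pattern itself in plain Python - note this is not ADK's actual API (in ADK you'd attach sub-agents to a parent LlmAgent), just an illustration of specialists plus a coordinator that routes to them. All class and function names here are made up for the example:

```python
# Toy sketch of hierarchical agent delegation: specialized agents handle
# narrow tasks, and a coordinator routes each request to the right one.
# Illustrative only -- not ADK's real classes or signatures.

class ToyAgent:
    """A minimal 'agent': a name, keywords it specializes in, and a handler."""
    def __init__(self, name, handles, respond):
        self.name = name
        self.handles = handles      # keywords this agent claims
        self.respond = respond      # callable that produces the answer

    def run(self, request):
        return self.respond(request)

class Coordinator:
    """Delegates each request to the first sub-agent whose specialty matches."""
    def __init__(self, sub_agents):
        self.sub_agents = sub_agents

    def run(self, request):
        for agent in self.sub_agents:
            if any(kw in request.lower() for kw in agent.handles):
                return f"[{agent.name}] {agent.run(request)}"
        return "[coordinator] no specialist matched, answering directly"

searcher = ToyAgent("search", ["find", "search"], lambda r: "here are some results")
coder = ToyAgent("code_exec", ["run", "execute"], lambda r: "executed your snippet")
root = Coordinator([searcher, coder])

print(root.run("find me agent frameworks"))  # routed to the search agent
print(root.run("run this python snippet"))   # routed to the code agent
```

In a real ADK system the coordinator's routing decision is made by the LLM itself based on each sub-agent's description, rather than by keyword matching - that's the part that makes the composition interesting.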

The toolkit comes with pre-built tools for common tasks like search and code execution, but also supports the Model Context Protocol (MCP) introduced by Anthropic, integrates with other frameworks like LangChain and CrewAI, and even supports bidirectional audio and video streaming. Google’s clearly betting that multi-agent systems are the next evolution of AI apps, and the same framework is already powering Google’s own products. I’m seeing this as a direct challenge to OpenAI’s Agents SDK and Amazon’s Agents on Bedrock - the battle for who controls the agent ecosystem is heating up fast.

cheers,

JV

Code With JV

Each week I share the three most interesting things I found in AI
