Brand new Claude, ChatGPT in-chat purchases, and Meta smart glasses


Hi Reader,

Here are three things I found interesting in the world of AI in the last week:

Anthropic drops Sonnet 4.5 and Claude Code 2.0 - blog post

Anthropic just launched Claude Sonnet 4.5, which they’re calling “the best coding model in the world,” alongside Claude Code 2.0 with some genuinely useful features. Sonnet 4.5 is world-leading on a bunch of coding benchmarks, but more impressively, it can maintain focus on complex tasks for over 30 hours autonomously - more than four times longer than Claude Opus 4 (which maxed out at 7 hours). Anthropic used it to rebuild the entire Claude.ai web app in 5.5 hours using over 3,000 tool uses.

The technical specs are solid: 200K context window (1M available), up to 64K output tokens, and another 64K reasoning tokens for extended thinking. Pricing is $3 per million input tokens and $15 per million output - more expensive than GPT-5’s $1.25/$10. It's not a cheap model, but it's a good one.
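
To put those prices in perspective, here's a quick back-of-the-envelope comparison. Only the per-million-token prices come from the published pricing; the example workload of 10M input / 2M output tokens is made up:

```python
# Back-of-the-envelope cost comparison using the published per-million-token prices.
# The example workload (10M input / 2M output tokens) is hypothetical.
PRICES = {
    "claude-sonnet-4.5": {"input": 3.00, "output": 15.00},  # USD per million tokens
    "gpt-5": {"input": 1.25, "output": 10.00},
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return (input_tokens / 1e6) * p["input"] + (output_tokens / 1e6) * p["output"]

for model in PRICES:
    print(f"{model}: ${cost_usd(model, 10_000_000, 2_000_000):.2f}")
# claude-sonnet-4.5: $60.00
# gpt-5: $32.50
```

Same workload, roughly double the bill - which is why the speed and autonomy gains matter more than the benchmark scores.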

Claude Code 2.0 brings checkpointing (double-tap Esc to rewind changes), a VS Code extension, and subagents for parallel development workflows. They also released the Claude Agent SDK simultaneously, using the same infrastructure that powers Claude Code.
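
If you want to kick the tyres on the Agent SDK, the Python flavour exposes an async query() entry point that streams back the same message types Claude Code works with. Here's a minimal sketch - the exact option names reflect my current understanding of the SDK, so treat them as an assumption:

```python
# Minimal sketch of driving the Claude Agent SDK from Python.
# Assumes `pip install claude-agent-sdk`, the Claude Code CLI installed, and an
# ANTHROPIC_API_KEY in the environment; exact option names may differ slightly.
import anyio
from claude_agent_sdk import ClaudeAgentOptions, query

async def main():
    options = ClaudeAgentOptions(
        system_prompt="You are a careful refactoring assistant.",
        allowed_tools=["Read", "Grep", "Edit"],  # same tool surface Claude Code exposes
        max_turns=5,
    )
    # query() streams messages (assistant text, tool calls, results) as they arrive
    async for message in query(prompt="Find and fix the failing test in ./tests", options=options):
        print(message)

anyio.run(main)
```

The appeal is that your agent inherits Claude Code's tool and session plumbing rather than you wiring up your own agent loop from scratch.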

Personally, I noticed a productivity bump immediately. It's smarter, but most importantly, faster. They've done some interesting things with running tool-use calls in parallel, which makes the whole thing feel quite snappy. Reactions from devs are mixed overall, and I'm seeing people prefer GPT-5 or GPT-5-Codex for some tasks.

The competitive dynamics are fascinating. Flagship models now ship every few months - Sonnet 4.5 came less than two months after Claude Opus 4.1. But more interestingly, everyone is following Anthropic's lead on building coding agents that run in the terminal and letting devs use them on their standard monthly plans instead of paying per API call. One thing is for sure: AI coding agents get stronger every month.

OpenAI launches instant checkout in ChatGPT - blog post

OpenAI is coming for Amazon and Google’s lunch money with ChatGPT Instant Checkout, launched September 29 for U.S. users. You can now buy products directly through ChatGPT conversations, starting with Etsy (live now) and over 1 million Shopify merchants rolling out soon. The feature is powered by the Agentic Commerce Protocol (ACP), which OpenAI open-sourced under Apache 2.0 and co-developed with Stripe. I guess they saw how well MCP worked out for Anthropic and are going to try and run a similar playbook.

The technical implementation is interesting - it uses Shared Payment Tokens (SPT) that let ChatGPT initiate payments without exposing buyer credentials. For Stripe users, integration is “one line of code.” OpenAI claims product results are “organic and unsponsored, ranked purely on relevance” - a direct shot at Amazon and Google’s sponsored listing model. The catch? Currently single-item purchases only, with multi-item carts coming soon.
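
Conceptually, the token handoff works something like the sketch below: the merchant gets a scoped, single-use token rather than card details, and the agent can't spend beyond the cap. To be clear, this is my own illustration of the idea - every field and function name here is hypothetical, not the actual ACP or Stripe schema:

```python
# Hypothetical illustration of a delegated-payment-token checkout flow.
# Field names and helpers are made up for clarity; they are NOT the ACP spec.
import uuid

def issue_shared_payment_token(buyer_id: str, merchant_id: str, max_amount_usd: float) -> dict:
    """Payment provider mints a token scoped to one merchant with an amount cap."""
    return {
        "token": f"spt_{uuid.uuid4().hex}",
        "merchant_id": merchant_id,
        "max_amount_usd": max_amount_usd,  # the agent can't spend beyond this
        "single_use": True,                # token dies after one checkout
    }

def agent_checkout(cart: dict, spt: dict) -> dict:
    """The AI agent forwards the scoped token to the merchant - never raw card details."""
    assert cart["total_usd"] <= spt["max_amount_usd"], "token not authorised for this amount"
    return {
        "order_id": f"ord_{uuid.uuid4().hex[:8]}",
        "merchant_id": spt["merchant_id"],
        "paid_with": spt["token"],
        "status": "confirmed",
    }

spt = issue_shared_payment_token("user_123", "etsy_shop_42", max_amount_usd=40.0)
print(agent_checkout({"items": ["ceramic mug"], "total_usd": 28.50}, spt))
```

Whatever the real schema looks like, the core trust move is the same: the agent only ever holds a scoped, capped credential, never the buyer's card.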

The strategic play here is fascinating. ChatGPT has 700 million weekly active users, but Google Search still handles 373 times more searches. OpenAI is betting they can collapse the entire funnel - Discovery, Comparison, Conversion - into one conversation. No ads, no sponsored listings, just AI recommendations. Of course, OpenAI takes a “small fee” (percentage undisclosed) on completed purchases while not being the merchant of record. Classic platform economics.

What makes this messier is the protocol wars already starting. Google launched a competing Agent Payments Protocol (AP2) in September with 60+ partners including Mastercard and PayPal. Turns out ACP and AP2 are actually complementary - ACP handles checkout and merchant integration (in production now), while AP2 tackles trust and authorization with a “mandates” system (early adoption). Both are open source. Coinbase also launched x402 for programmable payments. Three major payment protocols launching within weeks of each other in September 2025 is not a coincidence.

But let’s talk about the elephant in the room: trust and privacy. When ChatGPT recommends products, you’re essentially taking its word for it - there’s no source link or direct reference. How do you verify the AI isn’t biased? Or bribed with bigger kickbacks? And while OpenAI emphasizes a “minimal data sharing approach,” you’re now running purchases through a centralized AI platform that’s processing transactions across multiple merchants. That creates data flows we’ve never had before, with unclear retention policies and potential for surveillance. Security researchers are calling this a data minefield.

To me it's just one rent-seeking corporate overlord trying to take business from another rent-seeking corporate overlord. But I do think it will be possible to duplicate the same kind of convenience and behavior with your own local AI, so at least there is some hope.

Meta’s $799 Ray-Ban smart glasses with always-on AI - official announcement

Meta just launched the Ray-Ban Display glasses for $799, with an EMG wristband that reads your muscle signals to control the glasses. The Display model has a 600x600 pixel screen integrated into the right lens (monocular, not binocular) with a 20-degree field of view. It includes a 12MP camera that shoots 3K video at 60fps with HDR, five microphones for real-time translation and captions, and up to 6 hours of battery (30 hours with the charging case).

The EMG wristband - Meta calls it the Neural Band - uses 16 electrode pods sampling at 2kHz to detect muscle movements before they’re even visible. You can pinch your thumb and middle finger to toggle the display, write with your index finger on a surface and it converts to digital text, or swipe your thumb along your index finger to navigate. All processing happens on-device with haptic feedback confirming gestures. It’s made with Vectran (same material as Mars Rover crash pads) and lasts 18 hours on a charge.

Cue the return of “Glassholes.” Everything we didn’t learn from Google Glass in 2013 is back. There’s a small white LED that’s supposed to indicate recording, but European privacy regulators already criticized it as too small to be effective. Harvard students demonstrated using an earlier version of these glasses with PimEyes to identify strangers in real-time - names, phone numbers, home addresses from just their faces. And in April 2025, Meta eliminated the option to prevent voice recordings from being stored. You can’t opt out of cloud storage anymore. The always-on AI features are now required if you want to use “Hey Meta” commands.

Zuckerberg said in July that people without smart glasses may face a “significant cognitive disadvantage” compared to those who use them. That’s either describing the future or threatening it. My bet is that they will be useful in workplace environments, but people using them in public will face "significant social disadvantages", no matter how cool Ray-Ban makes them look.

The competitive landscape is weird. Apple has no smart glasses product but has just pulled teams off Vision Pro development to work on its own smart glasses. Google has things in the works but is at least 12-18 months behind Meta. There are a bunch of smaller firms with products, but Meta is clearly leading.

But let’s be real: Meta removed privacy opt-outs in April, then launched glasses requiring always-on AI in September. That timing is not coincidental. When the CTO admits they DDoS’d themselves at their own product launch because “Hey Meta” triggered every pair of glasses in the building simultaneously, should we trust this is ready for 10 million units a year? And if Harvard students can weaponize these with facial recognition in 2025, what’s possible in 2027?

cheers,
JV

PS: It's been pretty hectic in JV land for the last few months. I'm back in New Zealand with a whole bunch of exciting projects for the rest of the year. I know there are people waiting for the next round of courses to go live and I'll have some updates about them soon.

Code With JV

Each week I share the three most interesting things I found in AI
