Here are three things I found interesting in the world of AI in the last week: 1. MoltBot is wildly popular and wildly insecureMoltBot (formerly ClawdBot, renamed after Anthropic sent a trademark demand) is an open-source AI agent gateway by Peter Steinberger, the founder of PSPDFKit. It connects Claude, OpenAI, or local models to 13+ messaging platforms including WhatsApp, Telegram, Slack, Discord and iMessage, with full system access: shell execution, file read/write, browser automation, cron jobs. Think "Claude with hands." It has 70,000+ GitHub stars and is in full hockey stick hype mode. Everywhere I look people are talking about it. The problem is that the security posture doesn't match the adoption. Security researchers found over 1,000 instances exposed on the public internet, eight of which had no authentication at all. The default config trusts localhost connections, so anyone running it behind a reverse proxy (which is most people) inadvertently bypassed auth entirely. That bug has been patched, but the deeper issues remain: plaintext credential storage vulnerable to infostealer malware, an unmoderated skills marketplace where a researcher inflated fake download counts to 4,000+ as a proof of concept, and a demonstrated prompt injection attack where a researcher extracted a private key from a compromised instance in five minutes. Google Cloud VP Heather Adkins advised people not to install it at all. I agree. If you're going to anyway: Tailscale-only access, Docker sandboxing, audit every skill manually, and run it on dedicated hardware. If that doesn't make sense then definitely don't install it. The core problem is architectural: any system that processes untrusted input (emails, web pages, public chat) and also has the ability to take actions (shell, file access, messaging) is fundamentally vulnerable. Telling the AI "don't do bad things" doesn't work. You need to separate the systems that read untrusted content from the ones that can act. That tension isn't going away. But I get why people are excited. I've been building a lot of agentic workflows over the last few months and once you taste what it's like for AI to really accomplish stuff you delegate (and watch it fail specactacularly) there is no going back. Every time I use software where I click buttons and type stuff I'm asking myself how I could get my AI to do this for me. 2. OpenAI is putting ads in ChatGPTOn January 16th, OpenAI officially announced that ads are coming to ChatGPT for free and Go ($8/month) tier users in the U.S. Paid tiers (Plus, Pro, Business, Enterprise) stay ad-free. The ads appear at the bottom of responses, labeled as sponsored, and are triggered by conversational context. The numbers tell the story of why. Only 5% of ChatGPT's 800 million weekly users pay, and OpenAI has $1.4 trillion in infrastructure commitments to service. Internal targets are $1 billion in ad revenue for 2026, scaling to $25 billion by 2029. Early pricing is reportedly around $60 CPM which puts it at live NFL broadcast rates and 3x Meta's average. No conversion tracking. Advertisers get impressions and clicks, nothing else. The quotes in that last link are particlarly brutal (and funny) and my take away is that you're ever a famous CEO of a fast growing company never say anything ever. Probably bad advice but that's my take away regardless! This was already foreshadowed in December when promotional messages for Peloton and Target showed up in ChatGPT responses, even for $200/month Pro subscribers. OpenAI's Chief Research Officer Mark Chen admitted "we fell short" and disabled the feature within days. The bigger concern is whether ads will influence responses. OpenAI says "answer independence" is a core principle, but the December incident already showed that the line between "suggestion" and "ad" is blurry in practice. Meanwhile, Google says Gemini won't have ads and can afford to hold that line indefinitely from existing search revenue. Anthropic has no ad plans either. If ChatGPT's experience degrades, the alternatives are right there. I've seen a bunch of commentary that this is a move of desparation, but I dunno. Seems to me that people have gotten so used to advertising that most free users won't blink. It's easy for people who pay attention (and read newsletters like this) to lose touch with what drives the 900 million people who use ChatGPT each week. 3. Claude wants to turn into a workspaceTwo days ago, Anthropic launched MCP Apps: third-party tools like Slack, Figma, Canva, Asana, and monday.com that render interactive UIs directly inside Claude's chat window. You link your accounts and Claude can send Slack messages, create Figma diagrams, or build Canva presentations without leaving the conversation. It's available to all paid users at no extra cost. MCP Apps is built on the Model Context Protocol, which is an open standard. It's not Claude-exclusive. The same integrations will work with ChatGPT, VS Code, and Goose. Anthropic is betting that owning the protocol layer matters more than locking in the integrations. Previously MCP only let Claude fetch data from external services. Now it can display and operate full interfaces through iframed UIs with sandbox isolation. Smart move in my opinion. Lots of companies have already made MCP servers and it's a small step to bundle a little bit of UI with it. Fascinating direction of travel if this gets adoption. Like most things the key will be if OpenAI and Gemini implement the standard. This reminds me of how browsers evolved back in the day. One team implements something and then it's a question of who else will follow suite. Only now it's ChatGPT, Claude and Gemini - anything all of them agree on will be a standard by default. cheers, JV PS: I've got a new website and blog PPS: I've also upgraded the CodeWithJV site - we've booked about 70% of the teams capacity for the year (which is the highest I've started a year with) but I'd love to offer more internships to junior devs. So doors are open for new client conversations. |
Each week I share the three most interesting things I found in AI
Here are three things I found interesting in the world of AI in the last week: 1. Grammarly turned expert identity into a product - Nieman Lab / TechCrunch Grammarly launched a paid feature that gave users feedback "from" named experts like Julia Angwin, Casey Newton, Kara Swisher and Stephen King. The experts had not agreed to this. The feedback was AI-generated, the product was charging for it, and the disclaimer saying it was not actually endorsed by those people was buried in the fine...
Here are three things I found interesting in the world of AI in the last week: 1. The "best AI model" era is over - Every.to / Digital Applied comparison OpenAI launched GPT-5.4 on March 5 to the usual amount of noise. Self-described Claude loyalists are excited. Augment Code made it their default model, calling it "a reliable orchestrator" that uses 18-20% fewer tokens on complex tasks. The headline number: 75% on OSWorld, the first frontier model to beat human experts (72.4%) at autonomous...
Instead of a newsletter this week I thought I'd experiment with a longer form email on an idea that I think is worth sharing. Let me know what you think and if you want more / less of this format. In my head I've been calling this the 'single good idea'. One of my favourite questions to ask people is "what is a new thing you've recently done with AI", often followed up by a "what do you want to be able to do next". It's a pretty quick way to find out where their learning edge is. "I have a...