Hi Reader,

Here are three things I found interesting in the world of AI in the last week:

Figma Make brings AI “vibe-coding” to design workflows - official announcement

Figma just launched “Make”, their AI-powered prototype generation tool that aims to deliver on the promise of converting designs and ideas into functional code. It lets designers transform their work into interactive prototypes via text prompts, or convert existing Figma designs directly into working code. This is a meaningful evolution in the AI design space that raises the stakes for competitors like UXPilot, Uizard, and Plasmic.

The backstory here adds extra spice - Figma has apparently learned from their embarrassing mid-2024 controversy, when their original “Make Design” feature (later rebranded “First Draft”) was caught generating designs suspiciously identical to Apple’s Weather app. That incident forced Figma to temporarily disable the feature while they addressed the originality issues.

On the business side, Figma is making this a premium feature only available on Full seats ($16 per user per month, billed annually), clearly positioning AI-powered design as a premium capability worth paying for.

Let’s not miss the strategic desperation underlying this move. Figma is facing existential pressure as AI design tools scramble for their market share. I recently heard from a small dev shop that they’ve completely eliminated designer roles from their contracts because tools like UXPilot give them sufficient design capabilities without the overhead. Make strikes me as a (necessary) defensive play to stay relevant.

I’m particularly curious about how this will impact specialisation of roles in the future. Dylan Field (CEO and lovely fellow, part of EHF) noted in an interview with CNBC, “The more time has gone on, the more that I’m confident in a designer’s role — and believe that it’s going to be one of the critical roles in building software in the future.” I wonder what the overall balance will be between specialists (design, dev, product, sales) and generalists who can do a little bit (or a lot) across all of it.

Claude Integrations face rapid but feeble OpenAI GitHub response - announcement

Anthropic launched Claude Integrations on May 1st, and just 8 days later OpenAI fired back with their GitHub connector for deep research. Classic AI arms race in full effect, but there’s a much bigger strategic battle brewing here.

Claude Integrations connects to 10 initial services including Atlassian’s Jira and Confluence, Zapier, Asana, Square, Sentry, and more, with Stripe and GitLab coming soon. Built on their open-sourced Model Context Protocol (MCP), it lets developers create their own integrations in as little as 30 minutes.
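To give you a feel for why the 30-minute claim is plausible, here’s a minimal sketch of an MCP server using Anthropic’s official TypeScript SDK (@modelcontextprotocol/sdk). The tool name and its stub lookup logic are my own invention for illustration, not from any real integration:

```ts
// Minimal MCP server exposing one tool over stdio (ESM module).
// Shape follows the MCP TypeScript SDK quickstart; the tool itself is a stub.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "ticket-lookup", version: "1.0.0" });

// Register a single tool; the input schema is declared with zod, and the
// client (e.g. Claude) discovers it automatically once connected.
server.tool(
  "get_ticket_status",
  { ticketId: z.string() },
  async ({ ticketId }) => ({
    // A real integration would query Jira/Asana/etc. here.
    content: [{ type: "text", text: `Ticket ${ticketId}: status unknown (stub)` }],
  })
);

// Serve over stdio so a local MCP client can launch and talk to it.
const transport = new StdioServerTransport();
await server.connect(transport);
```

Point an MCP client at that script and the tool just shows up - most of your 30 minutes goes on the actual business logic, not the plumbing.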
OpenAI’s GitHub connector enables ChatGPT to analyze users’ codebases and engineering documentation, returning comprehensive cited reports. It's in beta and only works with deep research - meh. Anthropic is a long way ahead of OpenAI on this front, and I'm really hoping that OpenAI embrace MCP integrations as well. If you agree, here is a LinkedIn post you can comment on, telling an OpenAI product lead that you'd love MCP integrations too.

But I think the OpenAI / Anthropic contest hides the real dynamic. Will premium AI platforms (Anthropic, OpenAI) become the central interface for all tools, or will the tools themselves (Notion, Slack, Salesforce) win with their native AI assistants? This is a battle over who captures margin and who gets commoditized.

If you’re paying $200/month for Claude Max or ChatGPT Pro, you’re not likely interested in also paying for Notion AI or Slack AI - you want your premium AI to seamlessly interact with all your tools. Conversely, if Notion AI, Slack AI, and Salesforce Einstein become good enough for most users, they could reduce the general-purpose AIs to mere utilities powering tool-specific experiences. My bet is that the AI providers will build a ton of integrations and the platform-specific AIs will wither, as no one wants to pay for an AI that is useful in only one tool.

Incidentally, Anthropic just linked their Max plan to Claude Code, so you can code a bunch without having to buy extra credits.

ElevenLabs drops open-source Next.js Audio Starter Kit - GitHub repo

ElevenLabs just released an open-source Next.js Audio Starter Kit that lets developers add text-to-speech, speech-to-text, sound effects, and conversational AI to their projects “in minutes.” Insert developer celebration noises here (generated by the starter kit, naturally).

I think this is more than just another GitHub repo - it’s a shift in how AI companies win developer mindshare. ElevenLabs has recognized that great APIs, documentation, and educational material are table stakes now. The next competitive edge comes from providing full-blown starter kits and templates that vibe coders can hack with Cursor, or that no-code generators and low-code platforms can easily integrate. DevRel (developer relations) for the vibe coding crowd. It's a strong strategy.

By releasing a comprehensive Next.js starter with TypeScript, Turbo, shadcn/ui components, and Tailwind CSS v4, ElevenLabs isn’t just helping individual developers - they’re making it trivial for platforms like Lovable and Bolt to add “voice powered by ElevenLabs” as a core feature. Each integration would cement ElevenLabs’ position as the default voice layer for the web.
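For a sense of what the kit is wrapping, here’s a rough sketch of a Next.js route handler calling ElevenLabs’ text-to-speech REST endpoint. This is my approximation, not code from the actual repo, and the voice and model IDs are placeholders:

```ts
// app/api/speak/route.ts - hand-rolled approximation of a TTS route,
// not taken from the ElevenLabs starter kit itself.
export async function POST(req: Request) {
  const { text } = await req.json();

  // Placeholder voice ID - swap in a real one from your ElevenLabs account.
  const voiceId = "YOUR_VOICE_ID";

  const res = await fetch(
    `https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`,
    {
      method: "POST",
      headers: {
        "xi-api-key": process.env.ELEVENLABS_API_KEY!,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ text, model_id: "eleven_multilingual_v2" }),
    }
  );

  if (!res.ok) {
    return new Response("TTS request failed", { status: 502 });
  }

  // Stream the returned audio straight back to the browser.
  return new Response(res.body, {
    headers: { "Content-Type": "audio/mpeg" },
  });
}
```

A dozen lines of fetch plus an API key - which is exactly why shipping the whole thing pre-wired as a starter kit is such cheap, effective distribution.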
Consider the network effects: every no-code platform that adopts this starter kit exposes thousands of users to ElevenLabs. Those users build voice-enabled applications that consumers interact with, expanding ElevenLabs’ reach exponentially. Meanwhile, their $3.3 billion valuation starts looking like a bargain if they become the de facto standard for voice AI across the web.

OpenAI just bought Windsurf for around $3 billion; Windsurf was the clear number two behind Cursor (valued at something like $9 billion). I'm expecting to see more providers who sell to developers take steps to make themselves more vibe-coder friendly. Maybe they'll buy a virtual stall at VibeCon (that post is a dev joke btw, just ask an AI if you don't get it).

Cheers,

PS: last call on AI Coding Essentials, which kicks off this Monday.