Figma Make (take two), AI integrations hide power struggle, ElevenLabs nailing Vibe Coding DevRel


Hi Reader,

Here are three things I found interesting in the world of AI in the last week:

Figma Make brings AI “vibe-coding” to design workflows - official announcement

Figma just launched "Make" - their AI-powered prototype generation tool that aims to deliver on the promise of turning designs and ideas into functional code. It lets designers generate interactive prototypes from text prompts or convert existing Figma designs directly into working code.

This is a meaningful evolution in the AI design space that raises the stakes for competitors like UXPilot, Uizard, and Plasmic. The backstory here adds extra spice - Figma has apparently learned from their embarrassing mid-2024 controversy when their original "Make Design" feature (later rebranded "First Draft") was caught generating designs suspiciously similar to Apple’s Weather app. That incident forced Figma to temporarily disable the feature while they addressed the originality issues.

On the business side, Figma is making this a premium feature only available on Full seats ($16/user/month, billed annually), clearly positioning AI-powered design as a premium capability worth paying for.

Let’s not miss the strategic desperation underlying this move. Figma is facing existential pressure as AI design tools scramble for their market share. I recently heard from a small dev shop that they’ve completely eliminated designer roles from their contracts because tools like UXPilot give them sufficient design capabilities without the overhead. Make strikes me as a (necessary) defensive play to stay relevant.

I’m particularly curious about how this will impact specialisation of roles in the future. Dylan Field (CEO and lovely fellow, part of EHF) noted in an interview with CNBC, “The more time has gone on, the more that I’m confident in a designer’s role — and believe that it’s going to be one of the critical roles in building software in the future.”

I wonder what the overall balance will be between specialists (design, dev, product, sales) and generalists who can do a little bit (or a lot) across all of it.

Claude Integrations draw a rapid but feeble GitHub response from OpenAI - announcement

Anthropic launched Claude Integrations on May 1st, and just 8 days later, OpenAI fired back with their GitHub connector for deep research. Classic AI arms race in full effect, but there’s a much bigger strategic battle brewing here.

Claude Integrations connects to 10 initial services including Atlassian’s Jira and Confluence, Zapier, Asana, Square, Sentry, and more, with Stripe and GitLab coming soon. Built on their open-sourced Model Context Protocol (MCP), it lets developers create their own integrations in as little as 30 minutes. OpenAI’s GitHub connector enables ChatGPT to analyze users’ codebases and engineering documentation, returning comprehensive cited reports. It's in beta and only works with deep research - meh.
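
To make that "30 minutes" claim concrete, here’s a minimal sketch of a custom MCP server using the official TypeScript SDK (@modelcontextprotocol/sdk). The lookup_ticket tool and its one-line logic are hypothetical - stand-ins for whatever internal service you’d actually expose to Claude:

```typescript
// Minimal MCP server sketch using the official TypeScript SDK.
// The "lookup_ticket" tool and its logic are made up for illustration.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "my-integration", version: "1.0.0" });

server.tool(
  "lookup_ticket",
  { ticketId: z.string().describe("Internal ticket ID") },
  async ({ ticketId }) => ({
    // A real integration would call your service's API here.
    content: [{ type: "text", text: `Ticket ${ticketId}: open` }],
  })
);

// Serve over stdio so an MCP client (e.g. Claude Desktop) can launch it.
await server.connect(new StdioServerTransport());
```

Point an MCP-capable client at that script and the tool shows up automatically - you can see why the integration catalogue is growing so quickly.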

Anthropic is a long way ahead of OpenAI on this front, and I'm really hoping OpenAI embraces MCP integrations as well. If you agree, here's a LinkedIn post you can comment on to tell an OpenAI product lead that you'd love MCP integrations too.

But I think the OpenAI / Anthropic contest hides the real dynamic. Will premium AI platforms (Anthropic, OpenAI) become the central interface for all tools, or will the tools themselves (Notion, Slack, Salesforce) win with their native AI assistants?

This is a battle over who captures margin and who gets commoditized. If you’re paying $200/month for Claude Max or ChatGPT Pro, you’re unlikely to also want to pay for Notion AI or Slack AI - you want your premium AI to seamlessly interact with all your tools. Conversely, if Notion AI, Slack AI, and Salesforce Einstein become good enough for most users, they could reduce the general-purpose AIs to mere utilities powering tool-specific experiences.

My bet is that the AI providers will build a ton of integrations and the platform-specific AIs will wither, as no one wants to pay for an AI that's useful in only one tool. Incidentally, Anthropic just linked their Max plan to Claude Code, so you can code a bunch without having to buy extra credits.

ElevenLabs drops open-source Next.js Audio Starter Kit - GitHub repo

ElevenLabs just released an open-source Next.js Audio Starter Kit that lets developers add text-to-speech, speech-to-text, sound effects, and conversational AI to their projects “in minutes.” Insert developer celebration noises here (generated by the starter kit, naturally).
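
For a sense of what "in minutes" looks like in practice, here's roughly the shape of a text-to-speech route in a Next.js App Router project, calling ElevenLabs' REST API directly. This is a hedged sketch rather than the starter kit's actual code - the voice ID and model choice are my own placeholders:

```typescript
// app/api/tts/route.ts - illustrative sketch, not the starter kit's code.
const VOICE_ID = "your-voice-id"; // pick a voice from your ElevenLabs dashboard

export async function POST(req: Request) {
  const { text } = await req.json();

  // ElevenLabs' REST TTS endpoint; auth is a simple xi-api-key header.
  const upstream = await fetch(
    `https://api.elevenlabs.io/v1/text-to-speech/${VOICE_ID}`,
    {
      method: "POST",
      headers: {
        "xi-api-key": process.env.ELEVENLABS_API_KEY!,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ text, model_id: "eleven_multilingual_v2" }),
    }
  );

  // The API streams back MPEG audio; pipe it straight through to the client.
  return new Response(upstream.body, {
    headers: { "Content-Type": "audio/mpeg" },
  });
}
```

From the browser you'd POST { text: "..." } to /api/tts and feed the response into an audio element - the starter kit wraps this plumbing up for you.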

I think this is more than just another GitHub repo - it’s a shift in how AI companies win developer mindshare. ElevenLabs has recognized that great APIs, documentation, and educational material are table stakes now. The next competitive edge comes from providing full-blown starter kits and templates that vibe coders can hack on with Cursor, and that no-code generators and low-code platforms can easily integrate. DevRel (developer relations) for the vibe coding crowd.

It's a strong strategy. By releasing a comprehensive Next.js starter with TypeScript, Turbo, shadcn/ui components, and Tailwind CSS v4, ElevenLabs isn’t just helping individual developers - they’re making it trivial for platforms like Lovable and Bolt to add “voice powered by ElevenLabs” as a core feature. Each integration would cement ElevenLabs’ position as the default voice layer for the web.

Consider the network effects: every no-code platform that adopts this starter kit exposes thousands of users to ElevenLabs. Those users build voice-enabled applications that consumers interact with, expanding ElevenLabs’ reach exponentially. Meanwhile, their $3.3 billion valuation starts looking like a bargain if they become the de facto standard for voice AI across the web. For comparison, OpenAI just bought Windsurf - the clear number two behind Cursor (itself valued at something like $9 billion) - for around $3 billion.

I'm expecting to see more providers who sell to developers take steps to make themselves more vibe-coder friendly. Maybe they'll buy a virtual stall at VibeCon (that post is a dev joke btw, just ask an AI if you don't get it).

Cheers,
JV

PS: last call on AI Coding Essentials, which kicks off this Monday.

