Invisible supply chain attacks, Sora's $15M/day bonfire, and domain experts build their own tools


Here are three things I found interesting in the world of AI in the last week:

1. AI-generated commits are the new supply chain attack vector - Aikido Security / The Hacker News

Between March 3 and March 9, a threat actor called Glassworm injected malicious code into 151 GitHub repositories, 88 npm packages, and 72 VS Code extensions. The payloads were encoded using Unicode variation selectors that render as nothing. Not obfuscated. Invisible. Your editor, your terminal, your code review interface, none of them show anything.
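To make the trick concrete: variation selectors are code points that modify how a preceding character renders, and on their own most renderers draw them as nothing at all, so they can smuggle arbitrary bytes through a diff unseen. Here is a minimal illustrative sketch of that general idea (not Glassworm's actual scheme, which I haven't seen published in full), hiding each byte as two selectors from the U+FE00..U+FE0F block:

```python
# Illustrative only: hide bytes inside Unicode variation selectors
# (U+FE00..U+FE0F), which render as nothing in most editors and diffs.
# This is NOT the attackers' real encoding, just the general trick.

VS_BASE = 0xFE00  # first of the 16 basic variation selectors

def encode(payload: bytes) -> str:
    """Each byte becomes two selectors: high nibble, then low nibble."""
    return "".join(
        chr(VS_BASE + (b >> 4)) + chr(VS_BASE + (b & 0xF)) for b in payload
    )

def decode(hidden: str) -> bytes:
    """Recover the bytes from any variation selectors in the string."""
    nibbles = [ord(c) - VS_BASE for c in hidden if VS_BASE <= ord(c) <= 0xFE0F]
    return bytes(
        (nibbles[i] << 4) | nibbles[i + 1] for i in range(0, len(nibbles), 2)
    )

# A "harmless" line of code with a payload riding along invisibly:
line = "version = '1.2.3'" + encode(b"evil")
print(line)            # prints exactly: version = '1.2.3'
print(decode(line))    # prints: b'evil'
```

Paste that `line` into most editors and it looks identical to the clean string, which is the whole point.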

The invisible Unicode trick itself is old news. Cambridge researchers published "Trojan Source" in 2021, showing how Unicode could make source code behave differently from how it reads. What's new is the delivery. The injections came wrapped in version bumps, documentation tweaks, and small refactors that were stylistically consistent with each target project. Security researchers at Aikido noted the cover commits strongly suggest LLM generation. Attackers used AI to write pull requests that look exactly like something a real contributor would submit.

That is the part that should bother you. The Unicode payload is the bullet, but AI-generated camouflage commits are the silencer. It means "review the diff" is no longer a reliable security control against a motivated attacker. The cover story is too good.

Once decoded, the payloads used Solana blockchain transactions as dead-drop resolvers for command-and-control servers, then exfiltrated credentials, environment variables, CI/CD tokens, and crypto wallet contents. Some of the targeted repos are dependencies for enterprise software. One compromised package can propagate through thousands of production systems.

If you maintain open-source packages: `grep` will not find this. The malicious characters are literally zero-width. You need tooling that flags Unicode anomalies in source files. If you consume open-source dependencies: pinning versions and running supply chain scanners like Socket or Aikido just moved from "nice to have" to "table stakes."

I talk a lot about AI helping developers write code faster. Turns out it also helps attackers write better disguises.

2. OpenAI shut down Sora after burning $15 million a day - Axios / Variety

Sora (the app version) is dead. OpenAI's AI video app lasted six months. In that time it hit #1 on the App Store, peaked at 3.3 million downloads in November, and generated $2.1 million in total revenue.

It also cost roughly $15 million per day to run. That is about $5.4 billion annualized, or roughly $1.30 per 10-second clip.

Sora's own head, Bill Peebles, said the economics were "completely unsustainable" back in October. They ran it five more months anyway. Downloads fell 67% by February. Users kept generating deepfakes of dead celebrities faster than moderation could catch them.

Disney had a $1 billion investment deal tied to Sora. According to Variety, they found out it was being killed 30 minutes after a working meeting with OpenAI. No money had changed hands. One source called it "a big rug-pull."

OpenAI says the team will pivot to robotics and the GPU budget will go to ChatGPT, enterprise tools, and reasoning models. Things that actually make money.

The lesson is not that AI video failed. Seven models already outperform Sora. The lesson is that impressive demos do not guarantee a business. Compute is expensive, novelty fades, and even OpenAI cannot fund everything. If you are building on top of any AI product right now, Disney's experience should be on your risk register. The platform you depend on might be someone else's cost-cutting decision away from disappearing.

3. When the fuel crisis hit, domain experts built their own tools - nzoilwatch.com / nz-fuel.netlify.app

Like many places, NZ has a bad case of the car-owner virus. (I couldn't resist sharing)

When the Strait of Hormuz crisis started threatening NZ fuel supplies, live fuel tracking dashboards started popping up. The two builders we know about are finance professionals, not software developers.

Kael De Herrera (CFA, 16 years in financial markets) built nzoilwatch.com with Perplexity Computer. Three prompts. It has real-time AIS vessel tracking, reserve gauges, and a countdown to depletion. Brian Kearney (Head of OCIO at Shaw and Partners) built nz-fuel.netlify.app with AI assistance and scenario modelling. Already on Version 10. A third team shipped fuelwatch.nz.

How many developer jobs did these replace? Zero. No one was going to commission them. No product manager would have prioritised this. The market did not exist three weeks ago.

This is Jevons paradox playing out in real time. When the cost of building software drops far enough, you don't get the same software cheaper. You get software that was never going to exist at all. Domain experts who deeply understand the problem can now build the solution directly, without waiting for a developer to translate their requirements into something approximate.

At least one of these dashboards contradicts the Government's official fuel estimate by about six days. NZ closed its only refinery (Marsden Point), imports 100% of refined fuel, and over 70% of that comes from just three countries. When the Strait of Hormuz gets disrupted, we are literally counting days. These are not academic exercises. They are citizen-built accountability tools, and they exist because the barrier to building them finally dropped below the threshold where someone who cares enough can just do it.

The question everyone keeps asking about vibe coding is "will AI replace developers?" The better question is "what happens when the people who actually understand the problem can build their own tools?"

cheers,

JV

PS: Agentic Coding Essentials is kicking off this week with a few spots left for last-minute enrollments. Live calls are at 12.30pm on Thursdays, and the self-paced version will open at the end of the month. I'll keep enrollments for the plus tier open until this Friday.

Code With JV

Each week I share the three most interesting things I found in AI
