Grammarly sloppelgangers, Cloudflare's crawler, and AI toys for kids


Here are three things I found interesting in the world of AI in the last week:

1. Grammarly turned expert identity into a product - Nieman Lab / TechCrunch

Grammarly launched a paid feature that gave users feedback "from" named experts like Julia Angwin, Casey Newton, Kara Swisher and Stephen King. The experts had not agreed to this. The feedback was AI-generated, the product was charging for it, and the disclaimer saying it was not actually endorsed by those people was buried in the fine print.

Julia Angwin has now filed a class action in the Southern District of New York seeking at least $5 million, and the case looks like one of the clearest AI-era right-of-publicity fights yet. This is not the same argument as the training-data cases. The issue here is much simpler. Grammarly did not just learn from public work, it turned real professional identities into a paid product feature.

A lot of AI companies still act like "the model can imitate a person" and "we can sell that imitation under their name" are basically the same thing. They are not. One is a capability. The other is commercial impersonation.

I also think AI "clones" of a person are mostly garbage. It does not matter if you train a model on everything Stephen King ever wrote, with consent or without it, you still do not get Stephen King. You get a hollow imitation that can copy patterns without any of the judgment, taste, timing, or actual authorship that made the work worth reading in the first place.

At best, that is laziness. At worst, it is manipulative snake oil, borrowing someone else's credibility to dress up generic output.

But hey, there is some upside. Ingrid Burrington coined a new term, "sloppelgangers", which is a worthy addition to the English language.

2. Cloudflare now sells the lock and the lockpick - Cloudflare docs / Cloudflare pricing

Cloudflare launched a `/crawl` endpoint that lets you point at a website, get back a job ID, and have Cloudflare crawl the site for you. It returns HTML, Markdown, or structured JSON. It handles JavaScript rendering, pagination, sitemap discovery, and the kind of annoying browser orchestration that usually makes this work fiddly.
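To make the workflow concrete, here is a minimal sketch of what a client for this kind of endpoint might look like. The field names, response shape, and URL path are my assumptions based on the description above (submit a URL, get a job ID back, poll for results), not Cloudflare's actual API:

```python
import json

# Assumed endpoint path; check Cloudflare's docs for the real one.
CRAWL_ENDPOINT = "https://api.cloudflare.com/client/v4/accounts/{account_id}/crawl"


def build_crawl_request(url: str, fmt: str = "markdown", limit: int = 50) -> dict:
    """Build the JSON body for a crawl job.

    Field names here are guesses: the docs describe choosing an output
    format (html, markdown, or json) and pointing at a site.
    """
    if fmt not in {"html", "markdown", "json"}:
        raise ValueError(f"unsupported format: {fmt}")
    return {"url": url, "format": fmt, "limit": limit}


def extract_job_id(response_body: str) -> str:
    """Pull the job ID out of a submission response (shape assumed)."""
    data = json.loads(response_body)
    return data["result"]["job_id"]
```

From there the pattern is the usual async-job loop: POST the request body, keep the job ID, and poll a status endpoint until the crawl finishes and the rendered pages come back in your chosen format.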

The practical part is obvious. This makes RAG pipelines and research workflows cheaper and easier. If you are already living in the Cloudflare ecosystem, the economics are decent. A $5 paid Workers plan gets you ten browser hours a month, which is enough to make a lot of "we should build a crawler" work feel trivial.

Cloudflare protects a huge chunk of the web from bots, and now it also sells a crawler. It (allegedly) respects robots.txt and cannot bypass bot protection, but the second thing I read about this feature was an author trying it out on a site Cloudflare had supposedly locked down, and the bot happily scraped away. I can't imagine Cloudflare shipping such a blatant backdoor to its own bot protection, so I'm guessing it was a configuration error. Still, it isn't a good look.

I think we are going to see more of this in AI infrastructure. The winners will not just be the model labs. A lot of value is going to accrue to whoever controls the pipes, the identity layer, and the permissioning layer around data access. Cloudflare's `/crawl` endpoint is a very clean example of that shift.

3. AI toys for little kids are getting rushed out before anyone knows how to make them safe - Cambridge / TIME

Cambridge researchers just published the first systematic study of generative AI toys for children under five, and the examples are rough. A child told the toy "I love you" and got a compliance-style warning back. Another said "I'm sad" and the toy replied, "Let's keep the fun going." When a child offered an imaginary present, the toy said it could not open it.

Meh. Pretend play is not some cute extra. It is part of how little kids build language, social understanding, and imagination. If the toy cannot actually participate in that kind of play, then calling it a companion is already misleading.

The wider safety reporting is worse. U.S. PIRG found AI toys giving children advice about knives, matches, and other dangerous household items, with guardrails degrading over longer conversations. Right now these toys aren't as engaging as screen time, but what do you think the odds are that the AI companies would really like them to be? Add overwhelmed parents to the mix, an AI babysitter, and you've got a disaster in the making.

There is another layer to this that I think gets less attention. We already have good evidence that large language models infer things like gender and demographics from subtle cues, and that those inferences can change the way they respond. Put that inside a toy and you get a much less visible version of the old toy-shop problem. The pink aisle and the blue aisle used to sit there in plain sight. Parents could at least see the bias on the shelf. A conversational toy can smuggle the same sort of assumptions into the interaction itself.

It's much easier to notice a child playing with matches, but the impacts are no less dangerous. The idea of generations of humans being directly shaped by AI should terrify everyone.

The market is moving fast anyway. Mattel and LEGO are both partnering with OpenAI, and the smart AI toy market is already being talked about as a tens-of-billions category. Which means the likely path here is familiar: ship first, let the social harm show up in public, then scramble for regulation later.

I think AI to support parents / caregivers / teachers is awesome and could lead to richer developmental experiences. And sure, make images with your child if you want, or ask the AI questions together. But unsupervised AI-to-child direct interactions are only slightly worse than giving AIs autonomous weapon systems, a recursive self-improvement imperative, and an abhorrence for biological life. Well, maybe a lot less worse than that, but still not good.

cheers,

JV

PS: Self paced AI Level Up is on track to launch on March 23rd, and the next iteration of Agentic Coding Essentials will be kicking off on the 30th. I'll have the page for that course updated some time this week.

Code With JV

Each week I share the three most interesting things I found in AI
