Hi Reader,

Here are three things I found interesting in the world of AI in the last week:

Wikipedia reports AI is killing human traffic - 404 Media

Wikipedia just reported an 8% decline in human pageviews compared to last year. The Wikimedia Foundation is blaming AI chatbots and search engines that extract its content without sending traffic back. Almost every major AI model trains on Wikipedia, and now those models are strangling the platform that feeds them.

The economics are perverse. AI companies scrape volunteer-created content, then answer questions directly so users never visit the source. Fewer visitors means fewer volunteers and donors, which degrades the content quality, which undermines the training data AI depends on. Marshall Miller, Senior Director of Product at the Wikimedia Foundation: “With fewer visits to Wikipedia, fewer volunteers may grow and enrich the content, and fewer individual donors may support this work.” It’s a death spiral where destroying the commons destroys your own foundation.

But Wikipedia’s just the canary. Zero-click searches increased from 56% to 69% in the year since Google’s AI Overviews launched. The top 500 publishers worldwide lost 64 million visits per month - a 27% drop year-over-year. When AI Overviews appear, position 1 CTR drops 34.5%; one lifestyle publisher went from a 5.1% CTR to 0.6% despite maintaining its page-one ranking. Pew Research found that only 1% of Google searches with AI summaries result in clicks to original sources. Google trained its AI on publisher content, then used that AI to keep 99% of users from ever clicking through. It’s extraction disguised as innovation.

Publishers are responding in three ways. First, 51 active copyright lawsuits against OpenAI, Google, Meta, Microsoft, and Anthropic. Second, $2.92 billion in licensing deals that just legitimize the extraction. Third, 56% pivoting to owned channels - newsletters, apps, communities - instead of depending on platform algorithms.

The economic incentives are fundamentally misaligned. AI platforms profit from keeping users on their sites. Publishers and Wikipedia need traffic to survive. Someone loses this game, and right now it’s the people creating the content who lose while AI companies extract the value. Wikipedia can’t pivot to owned distribution. But many other content businesses can, and increasingly, they are.

Anthropic ships useful stuff, then reminds you why vendor lock-in sucks - announcement

Anthropic launched Claude Skills on October 16, and unlike many AI product releases, this one actually feels useful. Skills are self-contained folders with instructions, scripts, and resources that Claude loads dynamically when relevant. The genius is in the "progressive disclosure" - each skill consumes only 30-50 tokens until needed, then loads the full context. This means you can have effectively unbounded context for specialized tasks without bloating your context window.

Compare that to Custom GPTs from ChatGPT, which max out at about 2,000 tokens of instructions. Or Model Context Protocol (MCP) implementations that can consume tens of thousands of tokens, leaving little context window space for the model to perform its actual work. Skills sit in the sweet spot - more capable than Custom GPTs, simpler than MCP servers, and actually portable across Claude.ai, Claude Code, and the API.

I’ve been implementing Skills in my coding CLI projects and workflows, and the difference is noticeable.
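If you want a concrete picture of how progressive disclosure works, here's a rough sketch of the pattern in Python. To be clear, this is my own illustration rather than Anthropic's code - the skills/ directory and the idea that SKILL.md leads with a short name and description are assumptions based on the published format, and the real loading happens inside Claude itself:

```python
# A minimal sketch of the progressive-disclosure idea behind Skills.
# Assumption: each skill is a folder under skills/ containing a SKILL.md
# whose first couple of lines give a short name and description.
from pathlib import Path

SKILLS_DIR = Path("skills")

def skill_index() -> str:
    """The always-resident part: one name + description line per skill.

    This stub is all the model carries up front - roughly 30-50 tokens
    per skill - no matter how big the skill's full contents are.
    """
    stubs = []
    for folder in sorted(SKILLS_DIR.iterdir()):
        manifest = folder / "SKILL.md"
        if folder.is_dir() and manifest.exists():
            lines = manifest.read_text().splitlines()
            name = lines[0] if lines else folder.name
            description = lines[1] if len(lines) > 1 else ""
            stubs.append(f"- {name}: {description}")
    return "Available skills:\n" + "\n".join(stubs)

def load_skill(skill_name: str) -> str:
    """The on-demand part: the full instructions (plus any bundled scripts
    or resources) only get pulled in when a task actually calls for them."""
    return (SKILLS_DIR / skill_name / "SKILL.md").read_text()

# The system prompt carries only the index; a full skill body is injected
# once it becomes relevant, so unused skills never eat context window.
system_prompt = skill_index()
```

The point is that the index stays tiny no matter how many skills you install - the heavy content only enters the context window when a task actually needs it.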
Skills are a much cleaner way of managing context than scattered CLAUDE.md files or trying to cram everything into a system prompt. And because Skills can include executable code alongside instructions, you get deterministic results for algorithmic tasks instead of burning tokens on LLM-generated sorting algorithms.

But here’s where the enshittification warning kicks in. Anthropic also launched Claude Code for the web on October 20, bringing browser-based AI coding to Pro ($20/month) and Max ($100-200/month) subscribers. The product itself is solid - parallel task execution across GitHub repos, strong sandboxing that reduces permission prompts by 84%, mobile iOS access, and a clever “teleport” feature to move work between browser and local CLI.

The problem? Usage limits so aggressive that even $200/month Max plan subscribers report hitting their weekly cap in 1-2 days of normal use. That’s “effectively unusable for 5-6 days per week” according to multiple users. And Anthropic keeps changing the limits without warning - they did it in July, and they did it again in October. Users are canceling subscriptions and losing confidence.

This is the pattern with all AI companies, but it’s especially noticeable with Anthropic. They’re massively subsidizing costs to get people dependent, then they’ll hike prices once switching costs are high. The compute costs are real - these companies burn billions - but the game is always the same: hook users with unsustainable pricing, then crank up the rates once you’re locked in.

This is exactly why I’m obsessed with making sure I’m never dependent on any single provider. I can switch between OpenAI, Anthropic, Google, and open models without rewriting my entire workflow. Skills are great, but they’re also portable - as Simon Willison notes, you can grab a skills folder, point Codex CLI or Gemini CLI at it, and it will work. The format isn’t inherently locked to Anthropic.

Build your systems to be provider-agnostic. Use Skills if they’re useful, but don’t architect your entire operation around features that only work with one vendor’s API. When the inevitable price hikes come - and they will - you want to be able to walk away instead of being held hostage by migration costs.

Sora 2 launches to immediate Hollywood panic and safety bypass - announcement

OpenAI released Sora 2 on September 30, calling it "the GPT-3.5 moment for video" - a massive leap from their original "GPT-1 moment" back in February 2024. The technical achievements are genuinely impressive: synchronized audio generation with accurate lip-sync, dramatically improved physics simulation (basketballs actually rebound instead of teleporting), and advanced controllability for multi-shot sequences.

But the launch immediately exposed the difference between technical capability and social readiness. Within 48 hours, users had created full South Park episodes that "looked and sounded identical to the real show." Deepfakes of deceased celebrities flooded the platform - Robin Williams, George Carlin, Tupac, Fred Rogers. Someone made Dr. Martin Luther King Jr. wrestling Malcolm X and making monkey noises. OpenAI had to scramble to add restrictions around historical figures after families complained.

The cameo feature is basically a deepfake generator with a friendlier name. You record a selfie video, and Sora creates a personal avatar that can be inserted into any scene. Reality Defender bypassed the anti-impersonation safeguards in 24 hours, using deepfakes to fool the verification system.
OpenAI’s detection failed completely while Reality Defender’s API caught every fake.

This isn’t just about Hollywood or celebrity deepfakes. Trust and safety professionals are calling Sora 2 "a turning point in the history of the internet," the moment when deepfakes went from a mostly one-off phenomenon to the status quo. What used to require specialized skills and tools is now available for $20/month. Believable video "proof" of events that never happened. Historical misinformation at scale. Political manipulation with photorealistic evidence.

The C2PA watermarks and metadata are easily stripped. Pro users can remove visible watermarks entirely. Detection tools lag generation capabilities. User media literacy is nowhere near sufficient to distinguish real from fake. We’re heading into a world where "I’ll believe it when I see it" becomes meaningless, and OpenAI’s safety measures are already failing.

I've seen a lot of excitable commentary around deepfakes for the last couple of years, and I dunno. The tech for creating them keeps getting better with each iteration, but it also feels like most people just develop a "don't believe what you see on the internet" filter and go on with their business. Maybe I'm just a frog being boiled slowly and we really are at a turning point, with deepfakes at the heart of it.

cheers,

PS: Enrolments are open for the next AI Coding Essentials course, which will kick off on November 3rd.