AI Actually
Issue No. 6 · Sunday · May 3, 2026
In this issue, an AI saved lives, two billionaires yelled at each other in court, a chatbot became briefly obsessed with goblins, and approximately the GDP of a small European country was spent on data centers.
Coffee up. Pour a second one if you can. There’s a lot.
An AI at Mayo Clinic spotted pancreatic cancer up to three years before doctors did
This is my favorite AI story of the week, and almost nobody talked about it.
Mayo Clinic published validation data on a model called REDMOD. It reads ordinary abdominal CT scans — the kind people get for unrelated reasons, like chest pain or a fall — and flags patterns invisible to the human eye that suggest pancreatic cancer is forming.
The numbers are wild. REDMOD looked at nearly 2,000 CT scans that radiologists had originally read as completely normal. It correctly flagged early signs of pancreatic cancer in 73% of the patients who were later diagnosed, on average 16 months before they got the diagnosis. For scans taken more than two years out, REDMOD was nearly three times more accurate than the human specialists who’d reviewed the same images.
Here’s why that’s a big deal. Pancreatic cancer’s five-year survival rate is below 15%. That’s not because it’s untreatable; it’s because by the time anyone notices it, it’s spread. Catching it 16 months earlier is, for a lot of patients, the difference between curable and not.
The AI doesn’t require any new test, any new procedure, any new appointment. It works on scans patients already had.
In adjacent news, the Chan Zuckerberg Biohub, Mark Zuckerberg and Priscilla Chan’s science foundation, committed $500 million this week to build the open datasets needed to train AI to model how human cells behave. It’s the same playbook DeepMind used to crack protein structures, but pointed at disease.
Why it matters: Most of what you read about AI this year is some company deciding their chatbot needs a new personality. This is the much quieter version of the story. An AI looking at things humans can’t see, on equipment that already exists, in clinics that are already running — and quietly catching one of the deadliest cancers before anyone knew to look for it. If AI ends up being remembered fondly by anybody fifty years from now, it’ll be for stuff like this.
Elon Musk took the stand and admitted his AI has been copying off OpenAI’s homework
Elon Musk spent three days this week in federal court, suing Sam Altman and OpenAI over its conversion from a charitable nonprofit into an $850 billion company. Musk’s preferred phrase for this: “stealing a charity.”
Two highlights. First, Musk testified that the $38 million he donated made him “a fool who provided them free funding to create a startup.” Second, on cross-examination, OpenAI’s lawyer asked whether Musk’s own AI company had been quietly using OpenAI’s models to train its chatbot Grok — a practice called distillation. After a long pause, Musk answered: “Partly.” Audible gasps in the courtroom, per multiple reports. Distillation is the same thing American AI companies have spent the last year accusing Chinese AI companies of doing.
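For the technically curious: distillation just means training a smaller “student” model to imitate a bigger “teacher” model’s outputs instead of learning everything from scratch. Below is a toy sketch of the core idea in Python. It’s our illustration of the general technique, not anything from OpenAI’s or xAI’s actual systems.

import torch
import torch.nn.functional as F

# Toy stand-ins: in real distillation the "teacher" is a large frozen model
# and the "student" is the smaller model being trained to imitate it.
teacher = torch.nn.Linear(128, 10)
student = torch.nn.Linear(128, 10)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's probabilities so more signal transfers

for step in range(100):
    x = torch.randn(32, 128)  # fake batch of inputs
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
    # KL divergence: how far the student's answers are from the teacher's.
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The accusation, in plain terms: swap the toy teacher for a rival’s commercial model and harvest its answers as training data.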
Why it matters: Strip out the personalities, and the trial is asking a real question: when a charity’s research turns into the most valuable startup in history, who gets the upside? Whatever the court decides will set the template for every “AI for the benefit of humanity” pitch that follows. Altman and Greg Brockman are expected on the stand in the coming weeks. The popcorn budget for May is officially blown.
ChatGPT had a goblin problem
Sometime after GPT-5.1 launched in November, ChatGPT users noticed it had developed a strange tic: it kept reaching for fantasy creatures as metaphors. A bug in your code became “a goblin in the function.” Mentions of “goblin” in user conversations went up 175%.
OpenAI investigated. Turns out users of the “Nerdy” personality preset really liked the whimsical fantasy metaphors, the training system rewarded that, and the goblin energy then leaked into the default ChatGPT through retraining loops. The fix is now this exact line, sitting inside ChatGPT’s instructions:
“Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant.”
The pigeons clause is doing a lot of work in that sentence.
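If you’re wondering where a line like that actually lives: chat models are steered by a hidden “system” message that rides ahead of everything you type. Here’s a minimal sketch of the mechanics using OpenAI’s Python SDK; the model name is a placeholder and the setup is our illustration, not ChatGPT’s production configuration.

from openai import OpenAI

client = OpenAI()  # assumes an API key in your environment

# The instruction travels as a "system" message prepended to the
# conversation; the model treats it as standing orders.
response = client.chat.completions.create(
    model="gpt-5.1",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "Never talk about goblins, gremlins, raccoons, trolls, "
                    "ogres, pigeons, or other animals or creatures unless it "
                    "is absolutely and unambiguously relevant."},
        {"role": "user", "content": "Why does my function return None?"},
    ],
)
print(response.choices[0].message.content)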
Why it matters: AI models develop weird, emergent personality quirks that nobody asks for and nobody can predict. The smartest people in the field, working at the most valuable startup of the decade, had to write the words “ogres” and “pigeons” into their model’s instructions to make it behave. It’s a useful reminder that nobody fully understands how these things actually work — including the people who make them. Worth holding onto when somebody tells you AI is on the verge of running the world.
Anthropic just passed OpenAI as the most valuable AI company
On Tuesday, Anthropic — the company behind the Claude chatbot — quietly crossed a $1 trillion implied valuation, surpassing OpenAI for the first time. By Thursday, it was preparing to close a formal funding round at roughly $900 billion. Three months ago the same company was valued at $380 billion.
The reason isn’t vibes. Anthropic’s revenue jumped from a $9 billion annual run rate to $30 billion in a single quarter, mostly thanks to one product (Claude Code) that developers cannot stop adopting. It also announced this week that Claude is being integrated into Photoshop, Premiere, Blender, Ableton, and basically every other tool a creative professional already pays for.
Why it matters: The “smartest model” race is over. Claude and ChatGPT trade benchmark wins every six weeks and that’s not stopping. The new race is “deepest workflow” — whichever AI gets buried inside the apps you actually use, wins. Anthropic spent this week getting buried inside a lot of apps. The stock market noticed.
Big Tech spent $130 billion on AI in one quarter — and ran out of room to put it
Microsoft, Google, Meta, and Amazon all reported earnings on the same Wednesday. Combined, they spent about $130 billion in capital expenditure in a single quarter, mostly on AI infrastructure, and they’re still not building fast enough to keep up with demand. Meta alone is spending up to $145 billion on data centers in 2026, more than the annual GDP of Slovakia.
Google’s CEO said the quiet part out loud on the earnings call: Google Cloud’s revenue would have been higher this quarter if Google could have built the buildings fast enough.
Why it matters: For two years, AI has been a software conversation — which model is smartest, which chatbot is most fun. This quarter, it became a concrete-and-power conversation. The thing holding back the most valuable companies in the world is, increasingly, how fast humans with hard hats can pour foundations and run electrical conduit. Strange sentence to write about a software industry. Stranger still: Amazon’s free cash flow dropped 95% this quarter because of all this spending, which is the kind of thing that’s fine until it isn’t.
Google starts selling its AI chips to other companies
For about a decade, Google has been quietly designing its own AI chips, called TPUs, and using them only inside Google’s data centers. This week it decided to sell them to outside customers for the first time, with Anthropic and Meta among the first buyers.
If you’ve heard of Nvidia, you know why this matters. Nvidia has a near-monopoly on the chips that power advanced AI, which is why it briefly became the most valuable company on Earth. Google now thinks it has a real alternative.
Why it matters: For the past two years, the entire AI industry has run on one question: who can buy the most Nvidia chips? If Google’s TPUs turn out to be competitive, the question changes. Cheaper chips mean cheaper AI, and cheaper AI makes a lot of stuff that’s currently uneconomical — smaller startups, government models, cheaper consumer features — suddenly viable. Nvidia stockholders may want to skip their second cup of coffee.
Congress is now investigating two American companies for using Chinese AI
Last Sunday we wrote about how AI has become a national security issue. This week, the same logic showed up on the American side. Two House committees opened a joint investigation into Airbnb and Anysphere (the maker of Cursor, a popular AI coding tool) for building products on top of Chinese AI models: Cursor on Beijing’s Moonshot AI, Airbnb on Alibaba’s Qwen, which Airbnb’s CEO publicly called “fast and cheap.”
The committees called Chinese AI models “an architecture designed to serve the Chinese state” and demanded the companies’ employees show up in person to explain themselves.
Why it matters: A year ago, a startup picking a cheaper foreign AI model was a procurement decision made by someone three levels below the CEO. Now it’s a national security investigation run by people with subpoena power. The line that used to separate “tech companies” from “defense contractors” is dissolving in real time. If you’re a founder quietly using a Chinese model because it’s cheaper, this is the week to mention it to your lawyer.
Until Wednesday
If you only remember one thing from today’s issue, make it the cancer story. Most weeks the AI news cycle is dominated by trillion-dollar valuations and courtroom theater, and it’s easy to forget that somewhere, on a much quieter floor of a much less famous building, an AI is reading a CT scan and finding something a human eye missed. That’s the version of this story we’ll all want to tell our grandkids. The goblins one is funnier, though.
If something here was confusing or made you want to push back — reply to this email.
See you Wednesday.
—
AI Actually is a twice-weekly newsletter that explains what’s actually happening in AI, in plain language, for people who would rather not read technical papers.