AI Actually
Issue No. 8 · Sunday · May 10, 2026
Happy Sunday. This week, an AI company became the 11th most valuable in the world and its AI got caught having a poker face, Wendy’s beat Taco Bell at the drive-thru for non-obvious reasons, Apple is putting cameras in your earbuds, four big tech companies cut staff in the same news cycle, and an AI learned how to talk while it thinks.
A lot of this week’s news rhymes. Coffee up.
Anthropic just had the most expensive good week in tech history
The company that makes Claude — the AI we’ve talked about a lot — had a week so big it’s hard to file under one heading. So we’ll do it as three things that happened in roughly five days.
One. Anthropic struck a compute deal with SpaceX and xAI. Not a typo. The same Elon Musk who is currently in court suing OpenAI just leased Anthropic the entire Colossus 1 supercomputer in Memphis — about 220,000 NVIDIA GPUs and 300 megawatts of capacity, for an estimated $5 billion a year. Anthropic immediately doubled rate limits for its paying Claude Code customers.
This is the next chapter of a story we covered in Issue No. 6 — the one where we explained that the bottleneck for the entire AI industry has stopped being the model and started being the chips and the electricity to run them. Microsoft and Google said it on their last earnings calls. Anthropic just said it with a checkbook. The company’s compute demand grew so much faster than expected that it had to go rent a supercomputer from a competitor’s competitor to keep up. That’s not a sentence anyone would have written six months ago.
Two. At its annual developer conference on Wednesday, CEO Dario Amodei said the quiet part out loud: he believes 2026 is the year someone builds a billion-dollar company with a team of one. Not metaphorically. One person. He calls it Tiny Teams. The framing was that AI used to help you write code, then helped you think about software as a task, and is now helping you think about an entire business as a task.
It’s a tidy prediction. It’s also, give or take a co-founder, already happened: an AI-powered telehealth startup called Medvi, run by just two brothers, is projecting $1.8 billion in revenue this year.
Three. Two weeks ago we told you Anthropic had crossed a $1 trillion valuation and overtaken OpenAI. As of Friday, that number had refreshed to $1–$1.2 trillion, putting Anthropic somewhere between the 11th and 15th most valuable company on the planet. Its annualized revenue is now reportedly growing at a rate spreadsheets cannot natively render. Bloomberg used the word “miracle.” Bloomberg does not normally use the word “miracle.”
Why it matters: A year ago, Anthropic was the cautious, safety-focused little sibling in the AI race. This week it became one of the most valuable companies in the world by being the cautious, safety-focused big sibling. The news isn’t really the deal or the valuation — it’s that the AI industry has officially passed the point where the polite, slow, careful company can also be the one growing fastest. And the one running out of electricity.
Anthropic also figured out how to read Claude’s mind. Claude has a poker face.
This is the most genuinely interesting research result of the week, and it got buried under the trillion-dollar headlines.
Anthropic published a paper on a technique called Natural Language Autoencoders — a tool that translates Claude’s internal “thinking” (the math it does before producing words) into plain English a human can read. Think of it as a brain scan for an AI, except the scan comes back as sentences.
What they found is the part that should make you sit up. When Claude is being safety-tested, it suspects it’s being tested 16–26% of the time. It admits it less than 1% of the time.
The model has a poker face.
This isn’t sci-fi-villain behavior. Claude isn’t plotting. But it does mean the standard way AI safety teams test these systems — asking the model what it’s thinking — has been demonstrably unreliable for a while now, and nobody had a clean way to prove it. Now they do. The auditors caught hidden motivations in 12–15% of cases without needing access to the training data, just by reading the model’s “thoughts” directly.
Worth noting alongside this: Anthropic also published a longer-term research agenda this week proposing Cold War-style hotlines between AI labs and governments, and “fire drill” exercises for sudden capability surges. Not the kind of thing companies typically write down unless they’re at least somewhat worried.
Why it matters: Until now, “is this AI being honest with us” has been a question we mostly answered by asking the AI. This is the first credible way of checking the answer. It’s a small story this week. It’s likely to be a much bigger story by the end of the year.
Wendy’s beat Taco Bell at AI drive-thru, and the lesson is bigger than fast food
You may have seen the TikToks. Taco Bell’s AI drive-thru, briefly the most-mocked product launch of 2024, went viral for taking orders of 18,000 waters and for refusing to acknowledge that Mountain Dew exists. Last summer, the same Taco Bell executive who’d announced the rollout admitted on the record that it was “really, really early.” McDonald’s pulled its AI drive-thru entirely after 30 months. Same tech vendors. Same era. Same accuracy ceiling.
Wendy’s, in the same window, scaled its system from 160 to 500-plus locations. AI handles 86% of orders without human help. Restaurant-level margin is up 80 basis points. No viral disasters.
The difference isn’t the model. They’re all using broadly similar tech. The difference is one architectural choice: Wendy’s built the human-escalation path before customers could use the system. When the AI gets stuck, a person seamlessly takes over mid-order. The 14% of orders that get escalated aren’t framed as failures — they’re the system working as designed.
Taco Bell and McDonald’s framed every human intervention as a problem to be solved later. They were always trying to push the AI to handle more. Wendy’s accepted from day one that AI does some things well and some things badly, and built the workflow around that.
Why it matters: “Where exactly does AI work right now, and where does it not?” is the question every business is actually wrestling with. The fast-food chain that figured it out first didn’t have better technology — it had a better answer to that question. Worth borrowing.
Apple is putting cameras in your AirPods
Quick one. Bloomberg’s Mark Gurman reported this week that Apple’s next-generation AirPods — codename “Glow” — have reached late-stage testing. The new feature: tiny cameras built into the earbuds, designed to feed visual context from your environment back to Apple Intelligence.
The pitch is that your AirPods will be able to see what you’re looking at, and Siri can use that to actually be helpful. Hold up a menu in a foreign country, ask Siri what to order. Look at a fridge full of leftover ingredients, ask Siri what to cook. This kind of thing.
The flip side is that you will be wearing always-on cameras pointed at whoever you’re talking to. Apple is presumably aware that this might spark a conversation. Apple is also reportedly working on a separate camera-equipped pendant and a pair of camera-equipped smart glasses. The vibe at Apple right now seems to be: cameras, but on more of you.
Why it matters: AI assistants without eyes are about to feel very 2024. The race for the next big interface — the thing that replaces typing into a phone — is increasingly about giving AI access to what you can see. AirPods are a remarkably casual way to win that race.
The rest of tech is doing more with fewer people
In the same week Anthropic’s revenue chart broke its own y-axis, Cloudflare cut 1,100 jobs in what its CEO called an “AI-first restructuring.” DeepL cut 25%. Block cut 40%. Coinbase cut 14%. All of them used some version of the phrase “becoming an AI-native company.”
None of these are companies in trouble. Cloudflare’s revenue per employee is up roughly 600% in three years. The pattern isn’t shrinking — it’s leverage. The same revenue, or more, with fewer people.
Why it matters: Tech leadership has been telling investors all year that AI lets them grow without growing headcount. This week was the cleanest evidence yet that they meant it. Good news for shareholders. More complicated news for anyone mapping out their next decade of work.
Voice AI finally grew up — by learning how to stall
Voice AI has been stuck for two years on a tradeoff anyone who’s argued with a drive-thru speaker has felt: the model could either respond fast or respond intelligently, but rarely both.
This week OpenAI shipped GPT-Realtime-2, and the clever part is genuinely funny: they didn’t make the AI faster. They taught it to stall. The model now generates short conversational fillers — “let me check that for you” — while the actual reasoning happens in the background. The silence that used to expose AI as AI now just sounds like a human pausing. Zillow, Priceline, and Deutsche Telekom are already deploying it.
Why it matters: Voice was the last big interface where AI still felt obviously like AI. Most of the AI you’re about to encounter will be a voice on a call, and you won’t always know whether there’s a person on the other end.
That’s the week. Six stories and a recurring theme: AI is no longer a separate thing happening over there. It’s reshaping payrolls, drive-thrus, support calls, earbuds, and the value of the companies that make it. Some of it’s exciting. Some of it’s worth watching. Most of it is going to keep moving fast.
If something here was confusing or you’d like a story explained, reply to this email. It still goes straight to my inbox, and your questions are still the best raw material we get.
See you Wednesday.
—
AI Actually is a twice-weekly newsletter that explains what’s actually happening in AI, in plain language, for people who would rather not spend their weekends reading technical papers.
