§ News
By AI Blog Editor
Apr 23, 2026 · 11 min read
Anthropic and Amazon, Round Three — $5 billion in, $100 billion out
Amazon wrote Anthropic a $5 billion cheque. Anthropic wrote Amazon a $100 billion promise. What's in between is the clearest picture yet of how frontier-AI infrastructure gets paid for.

On April 20, 2026, Amazon wrote Anthropic a $5 billion cheque. Anthropic wrote Amazon a $100 billion promise. Somewhere in the difference is five gigawatts of AI compute, more than a million Trainium chips, and the clearest picture yet of how frontier-model infrastructure actually gets paid for.
Depending on how you count, this is either the biggest cloud commitment in history or a very structured way for Amazon to buy itself twenty dollars of AWS revenue for every one dollar it puts in. The press releases prefer the former framing.
What is actually being promised
Both companies' own announcements confirm the headline figures:
- $5 billion in fresh Amazon equity into Anthropic today, on top of Amazon's existing $8 billion stake. Up to another $20 billion is tied to commercial milestones.
- More than $100 billion committed by Anthropic to AWS technologies over the next decade.
- Up to 5 gigawatts of new compute capacity, with "nearly 1 gigawatt total" of Trainium2 and Trainium3 online by the end of 2026.
- Over one million Trainium2 chips already in use to train and serve Claude, anchored by Project Rainier — the AWS/Anthropic cluster of around half a million Trainium2s that went into production earlier this cycle.
- Coverage extends through Trainium3, Trainium4, and "future generations" — a polite way of saying Anthropic has prepaid for silicon it hasn't seen yet.
Buried in Anthropic's own announcement is the number that probably matters most to anyone watching the AI economy from the outside: the company's run-rate revenue has now surpassed $30 billion, up from approximately $9 billion at the end of 2025. More than tripling in four months is not normal. Spending $10 billion a year on AWS looks very different if that revenue line holds than if it stalls in Q3.
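The scale of those commitments is easier to judge with the arithmetic written out. A back-of-envelope sketch using only the figures quoted in this piece — the derived ratios are rough illustrations, not guidance from either company:

```python
# Back-of-envelope arithmetic on the deal's headline figures.
# All inputs are the article's own numbers; nothing here is
# an official disclosure from Amazon or Anthropic.

equity_in = 5e9            # new Amazon equity, announced Apr 20, 2026
committed_spend = 100e9    # Anthropic's AWS commitment over the decade
years = 10

run_rate_now = 30e9        # Anthropic run-rate revenue, Apr 2026
run_rate_prior = 9e9       # run-rate at the end of 2025

annual_spend = committed_spend / years
print(f"spend per dollar invested: {committed_spend / equity_in:.0f}x")        # 20x
print(f"implied AWS spend per year: ${annual_spend / 1e9:.0f}B")               # $10B
print(f"AWS spend vs current run-rate: {annual_spend / run_rate_now:.0%}")     # 33%
print(f"run-rate growth since end-2025: {run_rate_now / run_rate_prior:.1f}x") # 3.3x
```

The 33% line is the one to hold onto: even at today's run-rate, a third of Anthropic's revenue is notionally spoken for by this one vendor.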
The circularity problem
TechCrunch filed the announcement under "another circular AI deal," and the shape is hard to miss. Amazon invests $5 billion. Anthropic commits to spend $100 billion at AWS. Viewed from a high enough altitude, it starts to look less like an investment and more like a very long discount coupon.
This is the same choreography Microsoft and OpenAI have been running for three years: hyperscaler puts money in, lab spends most of it back on the hyperscaler's cloud, revenue flows up both income statements, and the market reads the whole round-trip as validation of both. Amazon now runs the same play on both sides of the table — the company reportedly put $50 billion into OpenAI in February 2026, and is now matching it with today's Anthropic expansion. Amazon is hedged against either frontier lab winning. Anthropic is hedged against… Amazon. That is not the same shape of hedge.
Dario Amodei's quote in the joint release leans on the demand side: "Our users tell us Claude is increasingly essential to how they work, and we need to build the infrastructure to keep pace." Andy Jassy's counterweight is about supply: "Our custom AI silicon offers high performance at significantly lower cost for customers, which is why it's in such hot demand." Between them, both sides of the trade have been pre-written for the next earnings call.
The Trainium bet, expanded
The part of the deal that is not circular is the silicon. Google's in-house TPUs aside, Amazon's Trainium line is the most serious non-Nvidia attempt at AI training hardware in production. Anthropic is the biggest customer willing to run real frontier training on it. Everyone else with a model that matters is still on H100s and B200s.
By committing to Trainium3 this year and Trainium4 when it arrives, Anthropic is underwriting AWS's chip roadmap. In return it gets guaranteed supply, priority on new generations, and — if Jassy's "significantly lower cost" claim survives contact with a real training run — a cost-per-token that Microsoft's OpenAI deal does not obviously match. Nvidia does not appear in either company's press release, which is its own kind of statement.
Five gigawatts is a useful reference point. AWS's current global data-center fleet runs on the order of 30 to 50 GW depending on whose estimate you trust. Dedicating up to five of those gigawatts to a single customer is large enough to shape Amazon's site-selection, transmission, and cooling-water negotiations for years. This is why the deal is stretched across a decade: you cannot stand up five gigawatts of generative-AI compute in a hurry, no matter how confident the press release sounds.
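For a sense of proportion, a quick sketch of what dedicating 5 GW to one customer means against the fleet estimates above. The 30 to 50 GW bounds are the third-party range cited in this piece, not Amazon disclosures:

```python
# Share of AWS's estimated global data-center fleet that a 5 GW
# single-customer dedication would represent. The fleet bounds are
# the 30-50 GW third-party range cited above; real figures vary.
dedicated_gw = 5.0
fleet_low_gw, fleet_high_gw = 30.0, 50.0

share_if_small_fleet = dedicated_gw / fleet_low_gw    # upper bound on share
share_if_large_fleet = dedicated_gw / fleet_high_gw   # lower bound on share
print(f"single-customer share: {share_if_large_fleet:.0%} to {share_if_small_fleet:.0%}")
# prints "single-customer share: 10% to 17%"
```

Ten to seventeen percent of a hyperscaler's entire fleet, reserved for one lab, is the kind of ratio that normally describes a government customer, not a startup.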
The political backdrop nobody wants to reprint
None of this is happening in a neutral news environment. In late February, Defense Secretary Pete Hegseth designated Anthropic a "supply chain risk to national security." A federal judge blocked the designation in March; the Trump administration is appealing. The lab's largest investor committing up to $25 billion in new capital while that appeal is pending is a political bet as much as a commercial one.
Amazon has its own file in the same cabinet. It is the cloud host for Project Glasswing — the defensive-cybersecurity program Anthropic used to justify keeping Mythos Preview off the public API — an investor in both Anthropic and OpenAI, and a vendor to every major US federal agency. The regulators trying to draw lines around "safe" AI compute are staring at one company that sits on every side of the diagram.
What is actually new here
Two things are genuinely new in this announcement, and they are worth separating from the capex theatre.
- Anthropic's revenue line is public now. "$30 billion run-rate, up from $9 billion in four months" is the kind of figure you put in a press release when you want investors and regulators both to take the company seriously. Whether the number holds is a separate question — run-rate is a moving target — but stating it sets a floor under every valuation conversation from here on, including the reported VC offers north of $800 billion.
- Custom silicon has a frontier customer. Google's TPUs have had that for years with DeepMind. Nvidia has almost everyone else. Until this announcement, Trainium was a credible threat with no flagship reference. It now has one, and the reference comes with a decade-long budget attached.
What to watch
Three concrete things over the next twelve months will tell you whether this was a real infrastructure commitment or a very expensive way of restating an existing relationship.
- Whether Project Rainier expands visibly. About half a million Trainium2 chips are in the ground today. If the 5 GW is real, new sites should be publicly announced and permitted through 2026.
- What model gets trained on Trainium3 first. If Claude Opus 5 — the successor to 4.7 and the presumed public descendant of Mythos — ships as a Trainium3-trained model, the silicon thesis is vindicated. If it quietly lands on Nvidia H200s, the deal was mostly financial.
- Whether the $20 billion milestone tranche ever triggers. Commercial milestones in deals of this shape tend to be defined narrowly and paid out selectively. Watch for the 10-Q disclosure, not the press release.
The thing to sit with, regardless of how the specifics land, is that frontier-AI is now being financed the way semiconductors and airlines have been for decades — through take-or-pay commitments stretched over timelines long enough that the counterparty can plan capital spending against them. The round-number drama of a $100 billion pledge is the cover story. The actual deal is ten years of predictable AWS revenue, priced in chips that don't exist yet.
* * *
Thanks for reading. If a line here was useful — or plainly wrong — the comments are below, and replies to the newsletter are welcome.
Elsewhere in this issue
- News · A trillion-dollar Anthropic — the number that lives only on Forge · Apr 27, 2026
- News · Decoupled DiLoCo — Google teaches frontier training to survive a bad fibre and a dead chip · Apr 26, 2026
- News · GPT-5.5 "Spud" — twice the price, split benchmarks, and a polite request to start your prompts over · Apr 25, 2026
Letters
Arguments, corrections, questions. Anonymous comments allowed; be kind, be specific.