Anthropic partners with Broadcom and Google for AI chips


In one of the largest infrastructure commitments in AI history, Anthropic has announced a multi-year partnership with Google and Broadcom to secure a massive infusion of next-generation computing power.

The deal centers on the development and supply of custom Tensor Processing Units (TPUs) and networking components, ensuring Anthropic has the hardware “runway” needed to train and deploy its increasingly complex Claude models through 2031.


1. The “Gigawatt” Scale

The sheer scale of the agreement highlights the transition from “data center” thinking to “energy grid” thinking in AI development.

  • 3.5 Gigawatts of Power: Broadcom and Google will provide Anthropic with access to approximately 3.5 GW of TPU-based compute capacity. For context, that is enough power to supply over 2.6 million U.S. homes.
  • Timeline: The first waves of this new capacity are expected to come online starting in 2027.
  • Domestic Focus: Aligning with Anthropic’s $50 billion commitment to U.S. infrastructure, the majority of this new compute will be sited in data centers across the United States.
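The homes comparison above can be sanity-checked with quick arithmetic. The per-home figure it implies is an inference from the article's numbers, not a stated fact:

```python
# Back-of-envelope check: what average household load does the
# "3.5 GW ~ 2.6 million homes" comparison imply?
capacity_w = 3.5e9   # 3.5 GW of compute capacity (from the article)
homes = 2.6e6        # number of homes cited in the article
implied_load_w = capacity_w / homes
print(f"{implied_load_w:.0f} W per home")  # ~1346 W
```

That implied ~1.35 kW continuous draw per household is roughly in line with average U.S. residential consumption, so the comparison holds up.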

2. Broadcom’s Strategic Role

While Google provides the cloud environment, Broadcom acts as the primary architect of the silicon.

  • Custom TPU Development: Broadcom has entered a long-term agreement with Google to develop and supply future generations of TPUs. These chips are designed specifically for AI inference and training, offering a high-efficiency alternative to Nvidia’s H100/B200 GPUs.
  • Networking Infrastructure: Beyond the chips, Broadcom is supplying the specialized networking gear and “AI racks” required to connect thousands of TPUs into a single, cohesive supercluster.
  • Long-term Roadmap: The partnership between Broadcom and Google is formalized through 2031, ensuring a long-term pipeline for custom AI hardware.

3. Anthropic’s Triple-Threat Hardware Strategy

Despite this massive commitment to Google’s hardware, Anthropic takes a “chip-agnostic” approach to preserve resilience and performance.

  Hardware Platform | Primary Purpose                    | Partner
  AWS Trainium      | Primary training & Bedrock         | Amazon (Project Rainier)
  Google TPUs       | Specialized inference & training   | Google / Broadcom
  Nvidia GPUs       | General purpose & high flexibility | Nvidia / Microsoft Azure

4. Financial Context: The $30 Billion Run Rate

The announcement included a rare glimpse into Anthropic’s explosive financial growth, likely shared to reassure investors of its ability to pay for such massive infrastructure.

  • Revenue Surge: Anthropic’s annualized revenue run rate crossed $30 billion in April 2026—a 233% increase from the $9 billion reported at the end of 2025.
  • Enterprise Adoption: The company now has over 1,000 business customers spending more than $1 million annually, doubling its high-value customer count in just two months.
  • Funding Context: This growth follows a Series G funding round in February 2026, which valued the company at an estimated $60–$70 billion.
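The reported growth rate is easy to verify from the two run-rate figures the article gives:

```python
# Sanity check of the reported growth: $9B run rate -> $30B run rate.
prev_run_rate = 9e9      # end of 2025 (from the article)
curr_run_rate = 30e9     # April 2026 (from the article)
growth_pct = (curr_run_rate - prev_run_rate) / prev_run_rate * 100
print(f"{growth_pct:.0f}% increase")  # 233% increase
```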

5. Why the Deal Matters

For Google and Broadcom, the deal is a “Supply Assurance” win that locks in high-volume, long-term orders for their custom silicon.

  • The “Antitrust” Angle: Analysts note that by using Google’s TPUs and AWS’s Trainium in tandem, Anthropic is positioning itself as a “multi-cloud” leader that isn’t entirely dependent on any single Big Tech ecosystem.
  • The “Nvidia Hedge”: The move signals a broader industry trend toward custom silicon, as companies seek to reduce reliance on expensive, supply-constrained Nvidia GPUs.

“We are making our most significant compute commitment to date to keep pace with our unprecedented growth,” said Krishna Rao, Anthropic’s Chief Financial Officer. “We are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development.”

