Anthropic PBC, the AI developer behind the Claude model family, announced a major expansion of its compute infrastructure through strengthened partnerships with Google LLC and Broadcom Inc. The multi‑party agreement will deliver multiple gigawatts of next-generation AI computing capacity beginning in 2027, a scale critical for training and serving frontier artificial intelligence systems.
Under the expanded arrangement, Anthropic will secure access to approximately 3.5 gigawatts of compute capacity based on next-generation tensor processing units (TPUs), leveraging Google’s custom AI accelerators and Broadcom’s role as producer and supply partner for future TPU generations and the associated networking infrastructure.
“This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure,” said Krishna Rao, Chief Financial Officer of Anthropic. “We are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development.”
Anthropic said the new capacity commitment, its largest to date, will support both training and serving of its most advanced AI offerings. The company did not disclose pricing or other financial terms of the compute arrangement.
In a separate but related transaction, Broadcom and Google also sealed a long-term chip supply agreement extending through 2031, under which Broadcom will co-develop and deliver future generations of Google’s custom AI chips and associated hardware for Google’s AI infrastructure.
The compute deals come amid surging demand for Anthropic’s Claude models, demand the company says has driven its run-rate revenue past $30 billion, more than triple its level at the end of 2025. Anthropic has cited this rapid growth as the driver of its need to scale underlying compute resources.
Market reaction was positive: Broadcom’s shares climbed sharply on the news, reflecting investor optimism about the company’s role in AI infrastructure amid fierce competition in custom silicon and data-centre compute.
Industry analysts say the trio’s expanded cooperation underscores a broader compute arms race among AI developers and cloud providers, as the cost, scale, and optimisation of AI training and inference hardware become critical competitive edges in the next phase of generative AI deployment.
Senior Reporter/Editor
Bio: Ugochukwu is a freelance journalist and Editor at AIbase.ng, with a strong professional focus on investigative reporting. He holds a degree in Mass Communication and brings extensive experience in news gathering, reporting, and editorial writing. With over a decade of active engagement across diverse news outlets, he contributes in-depth analytical, practical, and expository articles exploring artificial intelligence and its real-world impact. His seasoned newsroom experience and well-established information networks provide AIbase.ng with credible, timely, and high-quality coverage of emerging AI developments.