The most consequential development in Alphabet's AI infrastructure strategy is the emergence of a deeply interconnected tripartite alliance among Google, Broadcom, and Anthropic. This arrangement has fundamentally transformed Google's Tensor Processing Unit program from a proprietary internal capability into a burgeoning third-party revenue stream with strategic implications extending well beyond any single quarter's results.
Massive compute capacity commitments of 3.5 to 5 gigawatts, combined with next-generation custom silicon (TPU 8t and TPU 8i) and purpose-built networking infrastructure (Virgo), position Google's TPU ecosystem as a credible structural alternative to NVIDIA's GPU dominance. However, NVIDIA CEO Jensen Huang's assertion that Anthropic alone accounts for 100% of Google TPU demand growth introduces a concentration risk that demands rigorous examination.
The Organizational Architecture: Google, Broadcom, and Anthropic
Partnership Structure
Broadcom serves as Google's key design and manufacturing partner for TPU silicon, providing custom ASIC design, intellectual property, and rack-level systems integration. This relationship extends across multiple TPU generations, including the newly announced TPU 8i inference accelerator, and encompasses accelerator chips and advanced Ethernet networking. One source indicates that half of Broadcom's revenues derive from Google's TPU business, underscoring how structurally critical this partnership is for both companies.
The Anthropic Expansion
On April 6–7, 2026, Anthropic formally announced a major expansion of its partnerships with both Google and Broadcom to scale development of its foundation models, agents, and enterprise applications. Anthropic described this as its most significant compute commitment to date.
The deal provides Anthropic with approximately 3.5 gigawatts of computing capacity using Google's TPUs, with Broadcom acting as a key intermediary supplying the custom silicon. Some reports reference a larger 5-gigawatt figure, which may represent a broader aggregate commitment or a separate dimension of the agreement. The compute capacity is expected to come online starting in 2027, with the vast majority sited in the United States.
Multi-Cloud Strategy
Anthropic's compute architecture now spans a three-provider model encompassing Google Cloud, Broadcom, and AWS—assembled in less than three weeks—reflecting a multi-cloud, multi-silicon strategy designed to secure massive compute allocation. This rapid assembly suggests Anthropic's leadership recognizes the structural imperative of avoiding single-provider dependency.
Anthropic as Dominant TPU Customer: A Concentration Risk Analysis
Jensen Huang's Claims
NVIDIA CEO Jensen Huang stated in multiple venues that Anthropic is responsible for 100% of Google TPU growth and 100% of AWS Trainium growth. Huang further asserted that without Anthropic, Google's TPU and AWS Trainium programs would have no meaningful production volume. While this assertion must be viewed in its competitive context, the consistency of this message across multiple reports gives it structural weight.
Scale of Anthropic's Commitment
The scale of Anthropic's commitment to TPUs is staggering:
- A supply-chain-derived projection forecasts Google's total TPU volume will reach 4.3 million units in 2026
- An earlier Semianalysis estimate from November 2025 pegged total TPUv7 deployment at 1 million units, with 400,000 hosted by Anthropic on its own infrastructure and 600,000 rented from Google Cloud
- Other reports indicate Anthropic may be purchasing close to one million TPUv7 units directly
- The Alphabet–Anthropic deal includes compute expansion of up to one million TPUs
- Prior to its separate Amazon agreement, Anthropic held approximately 3.5 GW of Google TPU allocations
These figures paint a picture of Anthropic as the anchor tenant—and potentially the only external tenant at scale—for Google's third-party TPU business.
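These estimates can be roughly reconciled against one another. The Python sketch below uses only the numbers cited above; the resulting share is illustrative and assumes the "close to one million units" scenario, not a confirmed allocation.

```python
# Rough reconciliation of the TPU volume estimates cited above.
# All figures come from the reports referenced in this section;
# the resulting shares are illustrative, not confirmed allocations.

semianalysis_total = 1_000_000     # TPUv7 units (Nov 2025 estimate)
anthropic_self_hosted = 400_000    # hosted on Anthropic infrastructure
anthropic_rented = 600_000         # rented from Google Cloud

assert anthropic_self_hosted + anthropic_rented == semianalysis_total

google_2026_forecast = 4_300_000   # supply-chain-derived 2026 projection
anthropic_direct = 1_000_000       # "close to one million units" scenario

share = anthropic_direct / google_2026_forecast
print(f"Anthropic's direct purchases ≈ {share:.0%} of forecast 2026 volume")
# ≈ 23% of total volume — consistent with anchor-tenant status, while
# Huang's "100%" framing refers to demand growth, not total volume.
```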
Strategic Lock-In
The $40 billion Anthropic investment guarantee further reinforces this dependency, ensuring that future Anthropic models will run natively on Google's 8th-generation TPUs. Alphabet's sales of TPUs to large AI labs such as Anthropic are explicitly identified as a growing revenue stream, and deploying third-party TPUs represents a horizontal scaling of Alphabet's silicon business beyond its own data centers.
Organizational Tension
A structural tension emerges here. Google must allocate limited TPU supply between its own AI services and external customers. Demis Hassabis has noted that the company is prioritizing supply for its elite internal teams, a dynamic that could constrain external revenue growth precisely when it might otherwise accelerate.
TPU Hardware: Architectural Design for the Agentic Era
Bifurcated Architecture
Google has bifurcated its 8th-generation TPU architecture into two distinct silicon offerings:
- TPU 8t: Optimized for training
- TPU 8i: Optimized for inference and reasoning workloads
This workload-specific split is explicitly framed as being "for the agentic era," aligning hardware strategy toward autonomous AI agents.
TPU 8i Specifications
The TPU 8i inference accelerator features:
- 288 GB of high-bandwidth memory with 384 MB of on-chip SRAM
- 19.2 terabytes per second of memory bandwidth
- Interconnect bandwidth doubled to 19.2 Tb/s, targeting Mixture-of-Experts models
- 56% reduction in network diameter—7 hops versus 16 hops for a 3D torus architecture
- A claimed 5x efficiency gain over the prior generation (sanity-checked in the sketch below)
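A quick sanity check on these specifications, using only the numbers above. The full-HBM read time is a standard lower bound on memory-bound inference latency; treating it as a per-forward-pass floor is an illustrative assumption.

```python
# Back-of-envelope checks on the TPU 8i spec figures above.

hbm_capacity_tb = 0.288        # 288 GB of HBM
hbm_bandwidth_tbps = 19.2      # 19.2 TB/s memory bandwidth

# Minimum time to stream the entire HBM contents once — a floor on
# per-forward-pass latency for a model that fills memory (illustrative).
full_read_s = hbm_capacity_tb / hbm_bandwidth_tbps
print(f"Full HBM read: {full_read_s * 1e3:.0f} ms "
      f"(~{1 / full_read_s:.0f} memory-bound passes/sec)")  # ~15 ms, ~67/sec

# Network diameter claim: 7 hops versus 16 for a comparable 3D torus.
reduction = 1 - 7 / 16
print(f"Diameter reduction: {reduction:.0%}")  # 56%, matching the claim
```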
TPU 8t and Superpod Specifications
- TPU 8t delivers up to 4x bandwidth per accelerator versus the previous generation
- A single TPU superpod houses 9,600 chips delivering 121 exaflops of compute performance
- 2 PB of shared memory and 10 TB/s throughput
- A combined Google TPU cluster achieves 1,700 exaflops of aggregate AI compute (worked out below)
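These headline figures imply some useful derived numbers. A minimal sketch, assuming the 121-exaflop superpod and 1,700-exaflop cluster figures are quoted at the same numerical precision (the source does not say):

```python
# Implied per-chip and fleet-level figures from the superpod specs above.

chips_per_superpod = 9_600
superpod_exaflops = 121
cluster_exaflops = 1_700

per_chip_pflops = superpod_exaflops * 1_000 / chips_per_superpod
print(f"Implied per-chip compute: ~{per_chip_pflops:.1f} PFLOPs")    # ~12.6

superpods = cluster_exaflops / superpod_exaflops
print(f"Implied superpods in cluster: ~{superpods:.0f}")             # ~14

mem_per_chip_gb = 2_000_000 / chips_per_superpod   # 2 PB shared memory
print(f"Implied shared memory per chip: ~{mem_per_chip_gb:.0f} GB")  # ~208
```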
Networking Infrastructure
Google's Virgo Network connects up to 134,000 TPU chips in a single network fabric, and Google Cloud AI Hypercomputer supports over 1 million TPUs pooled across multiple sites. Google has referenced gigawatt-level TPU clusters, implying power and infrastructure at the scale of an entire power plant dedicated solely to AI compute.
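Read together, these two figures imply that the million-TPU pool spans several Virgo fabrics. A trivial check, assuming pooling federates whole fabrics (an assumption; the source does not describe the mechanism):

```python
# Implied fabric count for the pooled-TPU figure above (illustrative;
# assumes pooling is achieved by federating full Virgo fabrics).

chips_per_fabric = 134_000
pooled_chips = 1_000_000
print(f"Fabrics needed: ~{pooled_chips / chips_per_fabric:.1f}")  # ~7.5
```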
Software and Integration
Both TPU 8t and TPU 8i run on Google's Axion Arm-based CPU host, support native JAX, MaxText, PyTorch, SGLang, and vLLM, and offer bare-metal access. Google Cloud's full-stack AI infrastructure spans custom silicon, networking (Virgo), storage (Managed Lustre), data (Agentic Data Cloud), security (Agentic Defense), and applications (Workspace, Commerce).
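To illustrate what this software story means in practice, here is a minimal JAX sketch of backend-portable code. It uses only standard JAX APIs (jax.devices, jax.jit, jnp.einsum) and assumes nothing TPU-generation-specific; the same script runs on TPU, GPU, or CPU.

```python
import jax
import jax.numpy as jnp

# JAX dispatches to whatever backend is available (TPU, GPU, or CPU),
# which is what makes the TPU stack accessible without code changes.
print("Backend devices:", jax.devices())

@jax.jit  # XLA-compiles the function for the available accelerator
def attention_scores(q, k):
    """Scaled dot-product scores — the memory-bandwidth-bound core
    of the inference workloads TPU 8i is optimized for."""
    return jnp.einsum("...qd,...kd->...qk", q, k) / jnp.sqrt(q.shape[-1])

q = jnp.ones((8, 128, 64))   # (batch, query_len, head_dim)
k = jnp.ones((8, 256, 64))   # (batch, key_len, head_dim)
print(attention_scores(q, k).shape)  # (8, 128, 256)
```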
Competitive Positioning: TPU Versus NVIDIA
Cost and Efficiency Claims
Multiple claims position Google's TPU ecosystem as a cost- and efficiency-competitive alternative to NVIDIA:
- TPUs hold a 52% efficiency advantage over NVIDIA's Blackwell architecture
- At standard 9,000-chip configurations, Google TPUs are approximately 2x cheaper than comparable NVIDIA GPU deployments (the cost-per-token implication is sketched below)
- The Ironwood platform achieved a 3.7x improvement in Compute Carbon Intensity compared to TPU v5p
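To make the 2x cost claim concrete, the sketch below works through its cost-per-token implication. Every dollar and throughput figure is a hypothetical placeholder, not disclosed Google or NVIDIA pricing.

```python
# Illustrative cost-per-token comparison under the "2x cheaper at equal
# throughput" claim above. The hourly rate and token throughput are
# hypothetical placeholders, NOT disclosed Google or NVIDIA pricing.

gpu_hourly_cost = 2.00                  # hypothetical $/accelerator-hour
tpu_hourly_cost = gpu_hourly_cost / 2   # the claimed 2x cost advantage
tokens_per_sec = 10_000                 # hypothetical batch-inference rate
tokens_per_hour = tokens_per_sec * 3600

gpu_cost = gpu_hourly_cost / (tokens_per_hour / 1e6)
tpu_cost = tpu_hourly_cost / (tokens_per_hour / 1e6)
print(f"GPU: ${gpu_cost:.3f} per 1M tokens")   # $0.056
print(f"TPU: ${tpu_cost:.3f} per 1M tokens")   # $0.028
# At fleet scale, a halved unit cost compounds: the same inference budget
# serves twice the token volume, which is the economic core of the claim.
```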
Nuanced Competitive Picture
However, the competitive picture is nuanced. One account notes that Anthropic appeared to value NVIDIA hardware most but was steered toward Google TPUs and AWS Trainium due to capacity constraints, suggesting that NVIDIA remains the preferred architecture when available. Google itself maintains a multi-architecture strategy combining TPU, GPU, and CPU for AI workloads, implicitly acknowledging that no single silicon solution dominates all use cases.
Google's Competitive Advantages
- Google is the only cloud provider with proprietary top-tier AI semiconductor hardware
- Its decade-long TPU development history—with early users including Boston Dynamics—represents a meaningful moat
- Google's AI "tripod" consists of silicon (TPUs), scale (data centers for large models and Android for smaller models), and "smarts" (in-house models and AI talent)
Strategic Implications for Alphabet
Transformation of the TPU Business
Google has transformed its custom silicon program from a purely internal infrastructure advantage into a commercial product serving external AI labs. This is structurally analogous to Amazon's evolution from running AWS for itself to selling cloud services externally—a shift with profound revenue implications that took a decade to fully materialize.
Revenue and Growth Implications
- Alphabet's sales of TPUs to large AI labs are a growing revenue stream
- The horizontal scaling of its silicon business beyond its own data centers opens a new growth vector
- The $40 billion investment in Anthropic and the multi-gigawatt compute commitments lock in a long-term, capital-intensive relationship
- The guarantee that future Anthropic models will run natively on 8th-generation TPUs creates a potential virtuous cycle
Operational Advantages
Google's advanced orchestration and scheduling enable high utilization rates across its TPU fleet, further enhancing the platform's economic viability.
Concentration Risk: The Structural Vulnerability
The Core Risk
The most significant risk emerging from these claims is the extraordinary concentration in Anthropic as a customer. Jensen Huang's repeated assertion that Anthropic accounts for 100% of TPU demand growth—even if hyperbolic—raises a fundamental organizational question: if Anthropic were to shift its compute strategy, whether due to its Amazon relationship, internal silicon development, or simply securing more NVIDIA allocation, what would happen to Google's TPU revenue trajectory?
Mutual Lock-In
Anthropic's dependence on Google TPUs is mirrored by Google's dependence on Anthropic's demand, creating a mutual lock-in that is simultaneously a competitive advantage and a vulnerability. The fact that Anthropic has already assembled a three-provider compute architecture spanning Google Cloud, Broadcom, and AWS suggests active diversification, which could dilute Google's share of Anthropic's compute wallet over time.
Broadcom's Central Role
Broadcom's position as the indispensable manufacturing and design partner for Google's TPU program—with half its revenue coming from this business—makes it a critical dependency in Alphabet's AI supply chain. Broadcom's engagements span custom TPUs, accelerator chips, and advanced Ethernet networking, and it maintains agreements with both Google and Anthropic for AI accelerator development.
Any disruption to Broadcom's production capacity would directly impact Google's ability to deliver on its Anthropic commitments. Conversely, Broadcom's success with Google's TPU program provides a validated blueprint for its custom ASIC business with other hyperscalers like Meta.
The Scale of the Infrastructure Bet
Magnitude of Commitment
The compute capacity figures are breathtaking by any historical standard. At 3.5 to 5 gigawatts, Google is building what amounts to power-plant-scale AI infrastructure. The 4.3 million TPU unit forecast for 2026 implies a massive manufacturing and deployment effort.
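The section's own numbers allow a rough per-chip power budget. The sketch below deliberately conflates the 2026 unit forecast with the 2027-onward capacity commitments, so it is strictly an order-of-magnitude illustration.

```python
# Implied per-chip power budget from the figures above. This conflates
# the 2026 unit forecast with the 2027+ capacity commitments, so treat
# it strictly as an order-of-magnitude illustration.

units = 4_300_000                 # forecast 2026 TPU volume
for gw in (3.5, 5.0):             # the committed capacity range
    watts_per_chip = gw * 1e9 / units
    print(f"{gw} GW / {units:,} chips ≈ {watts_per_chip:,.0f} W per chip")
# ≈ 814–1,163 W per chip — a figure that would include cooling, networking,
# and host overhead rather than accelerator TDP alone, and that lands in
# the same range as modern rack-scale accelerator deployments.
```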
Capital and Sustainability Implications
Energy costs and sustainability questions at the 5-gigawatt scale are already being raised, and these will only intensify as the infrastructure comes online starting in 2027. Investors must consider whether Alphabet's capital expenditure requirements to support this buildout are fully reflected in current financial models.
Technical Differentiation and the Agentic Thesis
Google's split of its TPU architecture into training (TPU 8t) and inference/reasoning (TPU 8i) variants, explicitly framed for "the agentic era," signals a bet that autonomous AI agents will drive the next wave of compute demand. The 5x efficiency gain in the inference TPU and the dramatic reduction in network diameter are architectural responses to the unique demands of reasoning workloads—latency-sensitive, memory-bandwidth-intensive, and requiring tight inter-chip coordination.
If the agentic thesis proves correct, Google's workload-specific silicon could provide a meaningful cost advantage over general-purpose GPU alternatives, much as specialized manufacturing equipment outperformed general-purpose machinery during the industrial era.
Key Takeaways
1. Anthropic as TPU Anchor Tenant
Anthropic is Google's TPU anchor tenant—and potentially its only external customer at scale. Jensen Huang's repeated assertion that Anthropic drives 100% of TPU demand growth demands serious investor attention. While self-serving in origin, the claim is corroborated by multiple independent reports showing Anthropic absorbing hundreds of thousands to millions of TPU units. Alphabet's TPU-as-a-service revenue stream is a promising new growth vector, but it carries extreme customer concentration risk that warrants a discount on forward revenue projections.
2. The Agentic Era Architecture
The 8th-generation TPU split (8t for training, 8i for inference) represents a sophisticated architectural bet on the "agentic era." With 5x efficiency gains, 56% lower network diameter, and integration into a full-stack AI platform spanning silicon, networking, storage, and applications, Google's TPU ecosystem is technically competitive with—and potentially superior to—NVIDIA's offerings on a total-cost-of-ownership basis for inference-heavy workloads. The 52% efficiency advantage over Blackwell and the 2x cost advantage at rack scale are claims that deserve independent verification but, if accurate, position Google's TPU as a formidable competitor.
3. Broadcom's Strategic Dependency
Broadcom's role as the primary manufacturing and design partner for Google's TPU program creates both strategic dependency and shared upside. With half of Broadcom's revenues tied to this partnership, the two companies are tightly interwoven for the foreseeable future. Investors should monitor Broadcom's production capacity, any signs of supply constraints, and the potential for Google to diversify its manufacturing partners—any of which could materially impact Alphabet's AI infrastructure timeline and cost structure.
4. Gigawatt-Scale Capital Implications
The gigawatt-scale compute commitments (3.5–5 GW) imply massive capital expenditure that may not yet be fully priced into financial models. With operations starting in 2027 and demand described as explosive, Alphabet is making a long-duration infrastructure bet of staggering proportions. The energy, sustainability, and capital allocation implications at this scale are material and deserve rigorous scrutiny in Alphabet's quarterly disclosures. Investors should weigh the potential for outsized cloud revenue from Anthropic's training and inference needs against the upfront capital burden and execution risk inherent in building power-plant-scale AI infrastructure.