The titans of artificial intelligence have been cutting massive deals among themselves—and investors are getting anxious.
In September, AI chipmaker Nvidia signed an agreement to invest up to $100 billion in OpenAI, a leading AI research lab, which in turn has agreed to purchase Nvidia’s chips.1 Earlier that month, Oracle announced a five-year, $300-billion cloud-computing contract with OpenAI, while also agreeing to spend tens of billions on Nvidia chips.2 (Oracle’s stock vaulted 43% on the announcement, adding $234 billion to its market value in one day.)
Critics wonder whether these vertiginous arrangements—structured such that the same handful of companies have become each other’s customers, suppliers and investors—are inflating an unsustainable bubble rather than creating long-term value.
Yet what might look like money looping around Silicon Valley is, in our view, a set of strategic partnerships designed to ensure that the industry has access to enough computational horsepower to meet ravenous future demand for AI, in all its forms.
Unlike giants Google and Amazon, which have piles of cash to finance their own AI infrastructure buildouts, many AI startups looking to scale up fast have little choice but to forge partnerships with investors and suppliers. And it’s not just the modelmakers looking to cut deals; old-school telecom players are doing it, too. In October, Nvidia announced a $1 billion stake in Nokia,3 maker of base-station equipment that transmits and receives radio signals to connect mobile devices to broader telecom networks. (Nokia’s stock jumped 22% on the news.) Working together, the companies aim to develop next-generation mobile communication networks—infrastructure designed and built for AI, from the ground up.
In our view, the primary goal of all this dealmaking is to prevent a supply-side bottleneck, not mask weak demand. Tokens—the fundamental unit of output for large language models, and the basis on which AI companies tend to charge for their services—are the clearest metric for AI usage. We find that token throughput continues to accelerate as AI moves from pilot projects to daily workflows in search, productivity, coding and an array of agentic tasks. The pattern resembles a classic S‑curve: Early adoption gives way to compounding usage once the rails are in place.
For example, Google’s Gemini/Vertex AI processed roughly 480 trillion tokens in April 2025, up from about 9.7 trillion a year earlier—a 50x surge—while Azure OpenAI handled over 100 trillion tokens in Q3 2025, with a monthly peak near 50 trillion in June 2025.4 Usage expands as capacity constraints ease, reinforcing demand for multi‑year commitments within the industry. For its part, OpenAI has taken a multi-vendor approach to managing execution risk: Rather than tying its fate to a single supplier, OpenAI has partnered with multiple chip and cloud vendors to fortify its supply chain.
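To put that trajectory in perspective, the Gemini/Vertex AI figures cited above imply that token volume compounded at roughly 38% per month over that twelve-month span, a back-of-the-envelope calculation based only on the numbers reported:

$$\left(\frac{480\ \text{trillion}}{9.7\ \text{trillion}}\right)^{1/12} \approx 49.5^{1/12} \approx 1.38$$

That is, month-over-month growth of nearly 40%, sustained for a full year.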
Some skeptics warn that AI fever is reminiscent of the dotcom bubble of 2000. Yet in contrast to the dotcom era, we find that AI-related valuations and capex outlays are reasonably disciplined relative to underlying financial performance.
To take just one example, Google’s cloud-computing unit reported $13.6 billion of revenue in the second quarter, up 32% from the same period a year ago, as well as a $106 billion backlog.5 Similarly, we believe Nvidia’s performance is reasonably supported by chip sales to AI data centers and overall gross margin strength.
Meanwhile, TSMC, by far the leading manufacturer of the world’s most advanced chips, has kept a measured hand on the throttle, pacing its output in line with firm customer commitments to preserve pricing power and maintain capital efficiency. In our view, TSMC does not appear to be in a rush to flood the market with capacity, which could help support the industry’s profitability over the longer term.
While we acknowledge the risks associated with concentration of capital and growing interdependence among major AI players, we believe that underlying market demand, capacity, financing and execution still appear to be reasonably aligned. For now, mind the S-curve.