As the artificial-intelligence spotlight shifts its gaze from model training to agentic applications, we believe commodity-component suppliers could get a welcome boost.

For the past few years, the semiconductor narrative has largely revolved around one theme: training the large language models at the core of the AI revolution. While capital, talent and investor attention clustered around the most powerful chips, legacy semiconductor and hardware components were left to muddle through a prolonged downcycle.

Now, we believe that bifurcation has begun to unwind: As AI workloads shift from model training to “agentic” applications that can infer what users need and automatically perform those tasks, the action is broadening beyond leading-edge chips to less-glamorous and investment-starved parts of the IT stack.

Even small constraints within these hardware supply chains can ripple into system-level bottlenecks: Bringing new capacity online can take two to three years, and daunting capital expenditures tend to keep new entrants at bay. Meanwhile, modern AI infrastructure continues to raise the technical bar for many of these commodity components.

This combination of chronic undersupply and rising performance demands has been pushing hardware prices higher. Given the market’s shifting supply-demand dynamics, we believe this recent recovery in prices is less of a cyclical bounce and more of a longer-term, structural reset.

Greasing the Agentic Gears

Thus far, the AI story has been dominated by specialized chips known as “accelerators.” These parallel-processing workhorses can perform billions of simultaneous calculations—the raw computational firepower required to train large language models. Accelerators live in large data centers filled with clusters of GPUs.

Yet as AI expands beyond model training into myriad agentic applications, the workloads demand a fundamentally different mix of hardware. Today, millions of users are simultaneously asking AI agents to search the web, compare products, book appointments and execute transactions. Meeting that demand around the clock and at unpredictable scale requires vast fleets of general-purpose components—processors, memory chips and logic components—working in concert with those cutting-edge accelerators.

To keep up, those traditional components now require faster processing speeds, lower latency, greater energy efficiency and tighter manufacturing tolerances. We believe that gap between yesterday’s specifications and today's requirements—exacerbated by the lack of overall hardware supply—is creating a host of potential investment opportunities.

Optical Networking

At the turn of the century, internet scaffolding was built to handle data flowing at 10 to 100 gigabits per second (Gbps), a turtle’s pace by today’s standards. Back then, data centers ran on copper cables, the same electrical wiring technology that underpinned telephone networks. Even as data volumes ballooned, cloud infrastructure generally remained a sleepy corner of the broader tech sector. For example, while copper cables gave way to fiber-optic ones, those quickly became standardized, with manufacturers competing essentially on price.

We believe AI has reinvigorated demand for traditional hardware within the guts of the AI ecosystem.

For example, consider the optical transceivers that convert electrical signals into optical signals. To train large AI models, the industry has had to develop transceivers that can handle 400 to 800 Gbps—up to 80 times the throughput of their legacy counterparts. Now, we believe agentic AI is pushing those limits even further, putting the technology on a clear trajectory toward 1.6 to 3.2 terabits per second (Tbps). And as demand for greater data speeds grows, the cables lacing together all those accelerators, servers and storage systems must be built to tighter tolerances to preserve signal integrity.

Meanwhile, years of underinvestment in infrastructure hardware have begun to push up prices for various key components: For example, in January 2026, the cost of a single-mode fiber-optic cable rose 75% year-on-year, hitting its highest level in seven years.1

Multilayer Ceramic Capacitors

Multilayer ceramic capacitors (MLCCs) might be among the most unglamorous components in electronics. They protect circuits from electrical spikes and smooth out signal fluctuations, and for most of their commercial history, the core technology changed only incrementally, with suppliers competing mainly on price.

Then came AI. Because the latest GPUs run hotter, faster and at higher power densities, they demand capacitors with dramatically greater energy-storage capacity that can be packed into a much smaller physical space. We believe this is a genuine manufacturing feat that only a handful of suppliers can deliver at scale—a potential bottleneck now being revealed in rising spot prices for MLCCs used in AI, industrial and automotive applications across Taiwan, Korea and China.2

Expanding the AI Opportunity Set

In light of these trends, we believe thematic tech investors should be looking beyond the hyperscalers and high-performance chipmakers and instead focus more closely on the potential beneficiaries of the broader AI rollout.

Even high-flying Nvidia appears to realize that historically boring parts of the IT stack are no longer an afterthought. In March 2026, CEO Jensen Huang forecast that Nvidia's annual revenue from AI chips would hit $1 trillion in 2027, up from an estimated $500 billion for 2025 and 2026 combined.3 We believe reaching that number implies massive agentic AI adoption and therefore rising demand for the traditional hardware that took a backseat during the model-training phase.

Putting its money where its mouth is, Nvidia invested $2 billion each in Lumentum and Coherent, two makers of optical-networking laser chips, essential components of modern data centers. To us, this $4 billion commitment signals that reliable supply of core optical components may no longer be something that the industry—and tech investors—can take for granted.4

As the AI buildout expands, we believe the spotlight may increasingly fall on the more prosaic parts of the IT stack where current capacity constraints could bolster pricing across the wider electronics supply chain. In this environment, we think investors shouldn’t be surprised if “legacy” hardware suppliers are rewarded less like late-cycle leftovers and more like scarce inputs.