Are We Still Early On The AI Infrastructure Super Cycle?

Nvidia’s recent wave of AI infrastructure investments may be signaling something far larger than simple capital deployment. The company’s aggressive positioning across AI models, hyperscale compute, optical networking, and data-center infrastructure points increasingly toward one unavoidable conclusion: the future of artificial intelligence will not run on copper alone. Energy efficiency is the canary in the AI data-center coal mine, and the copper canary is dead.
According to CNBC, Nvidia has committed more than $40 billion in equity investments during the first five months of 2026, anchored by its massive reported OpenAI position and supplemented by multi-billion-dollar investments into infrastructure-focused companies including Corning and IREN. These are not random bets. They are strategic moves aimed at controlling the full AI stack — from compute to networking to power distribution and optical connectivity.
The Corning investment may be the most important signal of all.
For years, the semiconductor industry focused primarily on GPU horsepower. But AI workloads have evolved so rapidly that networking and data movement are becoming the next major bottleneck. AI clusters are now moving toward 800G and 1.6T interconnect speeds, forcing hyperscalers to confront the physical limitations of copper-based architectures. Heat, latency, signal degradation, and power consumption are all becoming critical problems at scale.
This is where photonics enters the picture.
The AI infrastructure market is increasingly transitioning toward optical interconnects, co-packaged optics (CPO), silicon photonics, and advanced photonic integration. $NVDA itself validated this shift with the launch of Quantum-X and Spectrum-X photonic switching systems, effectively acknowledging that the future AI data center will require optical fabrics capable of moving enormous volumes of data with dramatically lower power consumption.
In simple terms: AI infrastructure is beginning to migrate from electrons to photons. That shift matters because it potentially creates an entirely new layer of strategic winners beyond GPU manufacturers.
Market researchers at LightCounting forecast explosive demand growth for 1.6T optical transceivers over the coming years, with AI networking demand expected to accelerate sharply as hyperscalers race to build next-generation AI factories. Every major AI deployment now depends on solving one core problem: how to move data between accelerators faster, cooler, and more efficiently.
This is where companies focused on photonic integration could become increasingly important.
Among the smaller-cap names attracting investor attention is POET Technologies (NASDAQ: POET), a company developing an Optical Interposer platform designed to integrate lasers, modulators, waveguides, and photonic components into compact semiconductor-based optical engines.
The company’s approach attempts to solve one of the biggest challenges in AI networking: reducing complexity, power draw, packaging costs, and manufacturing friction associated with advanced optical systems.
Importantly, POET is not simply selling a standalone transceiver. The broader thesis centers around enabling scalable photonic integration — effectively acting as a bridge layer between AI compute and optical networking infrastructure.
That positioning could become increasingly relevant as hyperscalers move deeper into co-packaged optics architectures.
Over the past year, $POET has announced a series of partnerships that appear strategically aligned with the broader industry migration toward optical AI infrastructure. These include collaborations with Sivers Semiconductors on external light-source modules for co-packaged optics, Lessengers on 1.6T optical connectivity solutions, and LITEON Technology on AI-focused optical modules targeted toward future production ramps.
The timing is notable.
As Nvidia deploys capital deeper into AI infrastructure and optical-networking ecosystems, investors are beginning to ask where the next bottlenecks will emerge. GPUs alone are no longer enough. The market increasingly needs advanced packaging, optical engines, photonic integration, lower-power interconnect architectures, and scalable manufacturing solutions capable of supporting AI factories operating at unprecedented scale. That potentially opens the door for companies like POET.
The bullish case for POET Technologies is not merely tied to revenue today. It is tied to positioning inside a rapidly emerging infrastructure transition that could reshape how AI systems are physically connected over the next decade.
If hyperscalers aggressively adopt co-packaged optics and photonic integration at scale, the companies solving those integration challenges may become highly strategic assets.
Investors should also recognize that strategic acquisition potential exists throughout the photonics sector. Large semiconductor players, networking companies, cloud providers, and optical manufacturers are all competing to secure infrastructure advantages in the AI arms race. As the industry shifts from copper-heavy architectures toward optical fabrics, smaller companies with differentiated IP and manufacturable photonic platforms may increasingly draw attention. The shift from copper to light is not an “if” scenario; it’s a “when.”
Nvidia’s investment activity suggests the company understands that controlling AI compute alone is not enough. The networking layer, optical layer, and integration layer are becoming equally important. And that may be exactly where the next major opportunity in AI infrastructure emerges.