All AI Data Center Interconnects Will Be Optical Within 5 Years

The issue is no longer demand alone; it is whether the surrounding infrastructure is ready.

Editor's Brief
  1. Semiconductor Engineering reports that high-bandwidth data center interconnects are moving to optics, a shift with direct consequences for hyperscaler and cloud planning.
  2. The practical issue is whether demand can be converted into reliable capacity on schedule.
  3. Watch execution details, customer commitments, and any bottlenecks around power, cooling, silicon, or permitting.

Semiconductor Engineering reported: InP and SiPho join CMOS as critical technologies. Lasers, CPO, and OCS will be everywhere (indium phosphide, silicon photonics, co-packaged optics, optical circuit switching).

The author spent several days at OFC (Optical Fiber Communications Conference) 2026 in LA. The crowds were huge and the enthusiasm intense, and long-time attendees noted the shift from telecom to data center AI in just a few years. Nvidia GTC 2026 took place simultaneously in San Jose. OFC and GTC are entangled because data center AI needs optical interconnect to keep compute fed.

Optical interconnect enabled the internet with transoceanic and transcontinental high-bandwidth fiber connections, and it has since taken over scale-out links in the data center; the overhead racks of bright yellow cables are fiber optics. We are on the verge of several more transitions that will result in all high-bandwidth data interconnects becoming optical everywhere in the data center within the next five years.

Inference is driving AI now. ChatGPT started the revolution, and AI keeps evolving, becoming more useful and exponentially more compute-intensive. Anthropic and OpenAI are both at roughly $25 billion annual revenue run rates, up from zero in a few years. Most users are still low on the AI learning curve, so growth should accelerate as they come up that curve.

Fig. 1: Inference is evolving to require exponentially more compute.
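To make the "keep compute fed" point concrete, here is a minimal back-of-envelope sketch in Python. The cluster size, per-GPU link speed, and cross-rack share are illustrative assumptions, not figures from the report; the point is only that aggregate scale-out bandwidth at this scale exceeds what copper can carry beyond the rack.

```python
# Back-of-envelope sketch: why "keeping compute fed" pushes scale-out
# links to optics. Every number below is an assumption for illustration,
# not a figure from the Semiconductor Engineering report.

gpus_per_cluster = 100_000      # assumed size of a large AI cluster
link_gbps_per_gpu = 800         # assumed scale-out NIC speed per GPU (Gb/s)
cross_rack_share = 0.9          # assumed fraction of links leaving the rack

# Aggregate scale-out bandwidth the fabric must carry, in Tb/s.
aggregate_tbps = gpus_per_cluster * link_gbps_per_gpu / 1_000
print(f"Aggregate scale-out bandwidth: {aggregate_tbps:,.0f} Tb/s")

# Passive copper at 100G+ per lane reaches only a couple of meters, so
# links that cross rack boundaries effectively must be optical.
optical_links = int(gpus_per_cluster * cross_rack_share)
print(f"Links needing optics under these assumptions: {optical_links:,}")
```

Vary the assumptions and the conclusion holds: once per-lane speeds pass roughly 100 Gb/s, passive copper reach shrinks to a few meters, and any link leaving the rack becomes an optical link.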

Read narrowly, this is one more item in the daily flow of infrastructure news. Read against the buildout cycle, it points to a more practical question for cloud infrastructure: can the operating environment around compute keep up with demand? The constraint is not just chip supply. Advanced compute depends on packaging, memory, networking, power delivery, and the ability to land systems inside facilities that can actually run them at high utilization.

That makes the second-order detail more important than the announcement language. Cooling design standardization may determine who can actually monetize higher-density deployments on schedule.

That matters for buyers because the useful capacity is the installed, cooled, powered cluster, not the purchase order. It also matters for suppliers because component shortages can shift bargaining power quickly across the stack.

The financial question is whether this development improves pricing power, locks in scarce capacity, or exposes execution risk that the market may still be discounting. The operating question is procurement timing, facility readiness, network design, and the likelihood that adjacent constraints will slow realized deployment. The customer question is whether this changes build sequencing, partner dependence, or the economics of scaling regions and clusters over the next few quarters.

The market tends to price the demand story first and the delivery work later. That can hide the hardest parts of the buildout: grid queues, procurement windows, permitting, vendor capacity, and the coordination needed to turn a plan into a running site.

For a board focused on AI infrastructure, the item matters because it clarifies where leverage may sit. Sometimes that leverage belongs to chip suppliers or cloud platforms. In other cases it moves to utilities, landlords, financing partners, equipment vendors, or regulators that control the pace of deployment.

The signal to watch is the next round of disclosures on customer commitments, infrastructure readiness, and any evidence that power, cooling, silicon supply, or permitting becomes the real gating factor. The test is whether delivery schedules, memory availability, and deployment readiness move together or start to diverge.

Source

Read the original report

#gpu