With the advent of 800G and NVIDIA's new twin (dual) MPO-8/12 interface, this AI cabling guide illustrates how NVIDIA optical transceiver modules and Corning's EDGE8® solution contribute to the evolution of AI and Machine Learning innovations.
Chapter one of our NVIDIA 800G guide offers a comprehensive exploration of point-to-point and structured cabling applications. It explains how these cabling strategies are employed not only to interconnect servers and Leaf switches, but also to facilitate switch-to-switch connectivity, whether Leaf-to-Spine or Spine-to-Core. This chapter also provides insight into the different Corning components and part numbers that can be utilized in various network designs.
Chapter two delves into the cabling strategy for an NVIDIA DGX SuperPOD. Using the NVIDIA DGX H100 as a reference, this section explains how similar cabling components and infrastructure can be adapted for other DGX models. It also demystifies how the configuration of scalable units (SUs) determines the structure of a SuperPOD and the total number of GPUs. The chapter underscores the pivotal role of the POD or cluster size in determining the required number of InfiniBand Leaf, Spine, and Core switches, and consequently, the number of cables or connections needed.
These GPU clusters can require over 10 times more optical fiber in the same space than legacy server racks. Corning has been preparing for this hyperscale data center trend for years. We've invented new optical solutions that help data center operators quickly get these dense, next-generation GPU clusters up and running. We're deploying these solutions today with the world's leading hyperscale data center operators — ushering in the age of Generative AI.
Download our AI Architecture Cabling Guide today.