NVIDIA 800G InfiniBand and Ethernet Connectivity AI Cabling Guide | Corning

NVIDIA 800G InfiniBand and Ethernet Connectivity
AI Cabling Guide


Our NVIDIA 800G InfiniBand and Ethernet Connectivity AI Cabling Guide provides an in-depth examination of various fiber optic connectivity options available to interface with 200G, 400G, and 800G transceivers. This document specifically elaborates on how 400G NDR InfiniBand Quantum-2 and 400GbE Spectrum-4 Ethernet switches are integrated into the fiber optic cabling system.

The world of AI and Generative AI will demand tailored connectivity solutions, future-proofed and delivered at a speed never before seen in the data center industry.


With the advent of 800G and NVIDIA's new Twin (dual) MPO-8/12 interface, this AI cabling guide illustrates how NVIDIA optical transceiver modules and Corning's EDGE8® solution contribute to the evolution of AI and Machine Learning innovations.

Chapter one of our NVIDIA 800G guide offers a comprehensive exploration of Point-to-Point and Structured Cabling applications. It explains how these cabling strategies are employed not only to interconnect servers and Leaf switches, but also to facilitate switch-to-switch connectivity, applicable for connections from Leaf to Spine or Spine to Core switches. This chapter also provides insight into the different Corning components and part numbers that can be utilized in various network designs.

Chapter two delves into the cabling strategy for an NVIDIA DGX SuperPOD. Using the NVIDIA DGX H100 as a reference, this section explains how similar cabling components and infrastructure can be adapted for other DGX models. It also demystifies how the configuration of scalable units (SUs) determines the structure of a SuperPOD and the total number of GPUs. The chapter underscores the pivotal role of the POD or cluster size in determining the required number of InfiniBand Leaf, Spine, and Core switches, and consequently, the number of cables or connections needed.

These neural network clusters can require over 10 times more optical fiber in the same space than legacy server racks. Corning has been preparing for this hyperscale data center trend for years. We've invented new optical solutions that help data center operators quickly get these dense, next-generation GPU clusters up and running. We're deploying these solutions today with the world's leading hyperscale data center operators — ushering in the age of Generative AI.

Download our AI Architecture Cabling Guide today.

Fill out the form below and let us connect you to our HSDC Innovation Team.

Time is a resource you cannot afford to waste. Corning provides free, specialized design services and works closely with the world's largest AI manufacturers.
