Trends in 400G Optics for the Data Center
Data Center Connections are Driving Optics Volume
Driven by the ongoing surge in bandwidth demand, data center connections are expected to move from 25G/100G to 100G/400G.
- Within the Data Center Racks: 10GE is still being deployed, 25GE is starting to be deployed in volume, and 100GE or 50GE will follow.
- Between Data Center Racks: 40GE is still being deployed, 100GE is starting to be deployed in volume, and 400GE will follow at large cloud service providers.
- Long Spans, DCI and WAN: 10G DWDM/tunable is still being deployed, 100G/200G coherent is starting to be deployed, and 400G coherent will follow, then 600G or 800G.
Figure: Forecasted Data Center Ethernet Port Shipments
Figure: Forecasted 400GE Shipments by Market Segment
Mainstream 1RU Ethernet Switch Roadmap
3.2Tb/s switches based on 100G QSFP28 modules are being deployed in cloud data centers today.
Given the multiple switching ICs expected to be available, the market is likely to be fragmented in the future.
Large growth in bandwidth demand is pushing the industry to work on technologies and standards to support future 12.8T switches.
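The faceplate arithmetic behind these switch generations is straightforward. As a minimal sketch (the helper name is ours, not from any specification):

```python
# Hypothetical helper: aggregate 1RU switch capacity from port count and per-port rate.
def switch_capacity_tbps(ports: int, port_rate_gbps: int) -> float:
    """Faceplate capacity in Tb/s for a fixed-form-factor 1RU switch."""
    return ports * port_rate_gbps / 1000

# A 1RU faceplate fits 32 QSFP28 (100GE) or 32 QSFP-DD/OSFP (400GE) ports:
print(switch_capacity_tbps(32, 100))  # → 3.2  (Tb/s, deployed today)
print(switch_capacity_tbps(32, 400))  # → 12.8 (Tb/s, next generation)
```

The same 32-port faceplate thus quadruples in capacity when 400GE modules replace 100GE modules, which is why form-factor density (QSFP-DD/OSFP, below) matters so much.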
400G and Next-Gen 100G Ethernet Optical Standardization
IEEE Standards

| Interface | Link Distance | Media Type | Optical Technology |
| --- | --- | --- | --- |
| 400GBASE-SR16 | 100m (OM4) | 32f parallel MMF | 16x25G NRZ parallel (VCSEL) |
| 400GBASE-DR4 | 500m | 8f parallel SMF | 4x100G PAM4 parallel (SiP) |
| 400GBASE-FR8 | 2km | 2f duplex SMF | 8x50G PAM4 LAN-WDM (DML) |
| 400GBASE-LR8 | 10km | 2f duplex SMF | 8x50G PAM4 LAN-WDM (DML) |
| 100GBASE-SR2 | 100m (OM4) | 4f parallel MMF | 2x50G PAM4 850nm (VCSEL) |
| 100GBASE-DR | 500m | 2f duplex SMF | 100G PAM4 1310nm (EML) |
| 400GBASE-SR8 | 100m (OM4) | 16f parallel MMF | 8x50G PAM4 850nm (VCSEL) |
| 400GBASE-SR4.2 | 100m (OM4) | 8f parallel MMF | 8x50G PAM4 BiDi 850/910nm (VCSEL) |

100G Lambda MSA

| Interface | Link Distance | Media Type | Optical Technology |
| --- | --- | --- | --- |
| 400G-FR4 | 2km | 2f duplex SMF | 4x100G PAM4 CWDM (EML) |
| 100G-FR | 2km | 2f duplex SMF | 100G PAM4 1310nm (EML) |
| 100G-LR | 10km | 2f duplex SMF | 100G PAM4 1310nm (EML) |
- VCSEL technology targets link distances up to 100m.
- Silicon Photonics technology targets link distances up to 1km.
- DML/EML technology targets link distances up to 40km.
- SWDM technology is expected to enable 400GE over duplex MMF in the future.
Note: The 400GBASE-SR16 standard is not expected to be deployed.
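The aggregate rates in the tables above all follow from lanes × per-lane rate, with PAM4 carrying 2 bits per symbol versus 1 for NRZ. An illustrative check (function names are ours):

```python
# Illustrative arithmetic behind the interface names in the tables above.
def aggregate_gbps(lanes: int, lane_gbps: int) -> int:
    """Aggregate interface rate = lane count x per-lane rate."""
    return lanes * lane_gbps

assert aggregate_gbps(16, 25) == 400   # 400GBASE-SR16: 16x25G NRZ
assert aggregate_gbps(8, 50) == 400    # 400GBASE-FR8/LR8/SR8: 8x50G PAM4
assert aggregate_gbps(4, 100) == 400   # 400GBASE-DR4, 400G-FR4: 4x100G PAM4
assert aggregate_gbps(2, 50) == 100    # 100GBASE-SR2: 2x50G PAM4

# PAM4 encodes 2 bits per symbol, so a 50Gb/s lane runs at roughly half
# the symbol rate an NRZ lane would need for the same bit rate.
def bits_per_symbol(modulation: str) -> int:
    return {"NRZ": 1, "PAM4": 2}[modulation]
```

This is why the move from 25G NRZ to 50G and 100G PAM4 lanes cuts lane and fiber counts: the same 400G aggregate needs 16, 8, or 4 lanes depending on the per-lane rate.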
400GE Optical Transceiver Form Factor MSAs
CFP8 is the 1st-generation 400GE module form factor, targeted at core routers and DWDM transport client interfaces. Its dimensions are slightly smaller than those of CFP2. CFP8 supports either CDAUI-16 (16x25G NRZ) or CDAUI-8 (8x50G PAM4) electrical I/O.
QSFP-DD and OSFP modules are being developed as 2nd-generation 400GE form factors for high-port-density data center switches. Both enable 12.8Tb/s in 1RU via 32x 400GE ports and support CDAUI-8 (8x50G PAM4) electrical I/O. Of the two, only the QSFP-DD host is backwards compatible with QSFP28.
400G Ethernet Is Taking Shape in the Cloud Data Center
- Metro DCI Links (< 80km): 100G/200G coherent modules are being deployed. 400GE LR8/ER8 and ZR Coherent (400ZR) modules are on the roadmap.
- Tier 2 Switch to Tier 1 Switch Links: 100GE CWDM4/PSM4 modules are being deployed. 400GE FR8/FR4/DR4 modules are on the roadmap.
- Tier 1 Switch to TOR Switch Links: 100GE SR4 modules and AOC cables are being deployed. 400GE DR4/SR4.2/SR8 modules and AOC cables are on the roadmap.
- TOR Switch to Server Links: 25GE SR modules and AOC/DAC cables (DAC up to 3m) are being deployed. 50GE/100GE SR modules and AOC cables are on the roadmap.
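The tier-by-tier picture above can be captured as a simple lookup table, e.g. for capacity-planning scripts. A sketch, with tier keys and module names chosen by us for illustration:

```python
# Illustrative mapping of data center link tiers to optics deployed today
# vs. on the 400GE roadmap (key and module names are ours, not standardized).
LINK_TIERS = {
    "metro_dci":      {"today": ["100G coherent", "200G coherent"],
                       "roadmap": ["400GE LR8", "400GE ER8", "400ZR"]},
    "tier2_to_tier1": {"today": ["100GE CWDM4", "100GE PSM4"],
                       "roadmap": ["400GE FR8", "400G-FR4", "400GBASE-DR4"]},
    "tier1_to_tor":   {"today": ["100GE SR4", "100GE AOC"],
                       "roadmap": ["400GBASE-DR4", "400GBASE-SR4.2", "400GBASE-SR8"]},
    "tor_to_server":  {"today": ["25GE SR", "25GE AOC/DAC"],
                       "roadmap": ["50GE SR", "100GE SR", "AOC"]},
}

# Example query: what is deployed today on Tier 1 switch to TOR links?
print(LINK_TIERS["tier1_to_tor"]["today"])
```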
General Trends in Data Center Optical Interconnects
- Continuous increase in bandwidth density: On-board optics vs. pluggable optics discussion
- Increasing adoption of optics in Server-to-TOR Switch links
- Low-latency optics for certain niche cognitive-computing applications
- Maturity of key technologies: High-speed VCSELs and Silicon Photonics
- Arrival of coherent optics for data center interconnects