The Cloud Data Center is the infrastructure of Cloud Computing. Its IT infrastructure is mainly composed of switches and servers, together with the fiber optic cables, optical transceivers, and active optical cables that interconnect them. For the optical interconnection of the Next-Generation 200G/400G Cloud Data Center, Gigalight offers a complete solution, introduced below.
The network architecture of the Next-Generation Cloud Data Center is generally divided into three layers: Spine Core, Edge Core, and ToR (Top of Rack).
The transmission distance between the ToR access switches and the server NICs is generally less than 5 m. For this case, the 200G solution is 25G or 50G DAC/AOC interconnects, and the 400G solution is 50G or 100G DAC/AOC interconnects. DACs (direct attach copper cables) have the advantages of lower cost, power consumption, and heat dissipation, while AOCs (active optical cables) have the advantages of lower weight, longer transmission distance, and easier installation and maintenance.
The transmission distance between the ToR access switches and the Edge Core switches is generally less than 100 m. Optical transceivers with MTP/MPO cables can be used here, but in practice AOCs are more common. For this case, the 200G solutions are the 200G QSFP-DD AOC and the 200G QSFP56 AOC: the former uses NRZ modulation, while the trend is toward the latter, which uses PAM4 modulation. The 400G solution is the 400G QSFP-DD AOC, which uses 8x 50G PAM4 modulation.
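To see where these aggregate rates come from, here is a minimal arithmetic sketch (illustrative Python, not vendor code): the aggregate data rate is the lane count times the per-lane symbol rate times the bits carried per symbol, 1 for NRZ and 2 for PAM4. The symbol rates used below are the standard Ethernet lane values.

```python
# Minimal sketch: aggregate rate = lanes x symbol rate (GBd) x bits per symbol.
MOD_BITS = {"NRZ": 1, "PAM4": 2}  # PAM4 carries 2 bits per symbol, NRZ carries 1

def aggregate_gbps(lanes: int, gbaud: float, modulation: str) -> float:
    """Aggregate line rate in Gbps for a parallel-lane module."""
    return lanes * gbaud * MOD_BITS[modulation]

print(aggregate_gbps(8, 25.78125, "NRZ"))   # 206.25 -> 200G QSFP-DD (8x 25G NRZ)
print(aggregate_gbps(4, 26.5625,  "PAM4"))  # 212.5  -> 200G QSFP56  (4x 50G PAM4)
print(aggregate_gbps(8, 26.5625,  "PAM4"))  # 425.0  -> 400G QSFP-DD (8x 50G PAM4)
```

The three results match the Max. Data Rate column in the product table below.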
The transmission distance between the Edge Core switches and the Spine Core switches is generally less than 2 km. 200G FR4 and 400G FR8 optical transceivers are used here, interconnected with duplex LC cables. For this case, the 200G solutions are the 200G QSFP56 FR4 2km optical transceiver and the 200G QSFP-DD PSM8 2km optical transceiver (the PSM8, as a parallel single-mode module, connects over MTP/MPO rather than duplex LC), while the 400G solution is the 400G QSFP-DD FR8 2km optical transceiver.
From the Spine Core switches to the Core Router, which belongs to the DCI metro interconnect, the transmission distance is generally less than 100 km. CFP/CFP2 coherent optical transceivers are used here, interconnected with duplex LC cables. For this case, the 200G solution is the 200G CFP2-DCO optical transceiver.
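Taken together, the four segments above amount to a distance-driven selection rule. The sketch below (hypothetical Python; the product names are taken from the preceding paragraphs) makes that logic explicit:

```python
def pick_interconnect(distance_m: float, rate: str) -> str:
    """Illustrative selector for the link segments described above.
    `rate` is "200G" or "400G"; reaches and product names follow the article."""
    if distance_m <= 5:           # server NIC to ToR
        return "25G/50G DAC or AOC" if rate == "200G" else "50G/100G DAC or AOC"
    if distance_m <= 100:         # ToR to Edge Core
        return "200G QSFP-DD or QSFP56 AOC" if rate == "200G" else "400G QSFP-DD AOC"
    if distance_m <= 2_000:       # Edge Core to Spine Core
        return ("200G QSFP56 FR4 or 200G QSFP-DD PSM8" if rate == "200G"
                else "400G QSFP-DD FR8")
    if distance_m <= 100_000:     # Spine Core to Core Router (DCI metro)
        return "200G CFP2-DCO coherent transceiver"
    raise ValueError("beyond the reaches covered here")

print(pick_interconnect(80, "400G"))  # -> 400G QSFP-DD AOC
```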
Gigalight has a complete line of cloud data center products, and its optical interconnection products for the Next-Generation 200G/400G Cloud Data Center have either launched or are in development. The following product list is for reference.
Optical Transceivers
| Product Name | Max. Data Rate | Form Factor | Wavelength | Max. Distance |
|---|---|---|---|---|
| 400G QSFP-DD SR8 | 425 Gbps | QSFP56-DD | 850 nm | 100 m |
| 400G QSFP-DD PSM8 | 425 Gbps | QSFP56-DD | 1310 nm | 2 km |
| 400G QSFP-DD LR8 | 425 Gbps | QSFP56-DD | LAN-WDM | 10 km |
| 200G QSFP-DD LR4 | 212.5 Gbps | QSFP56-DD | LAN-WDM | 10 km |
| 200G QSFP-DD ER4 | 212.5 Gbps | QSFP56-DD | LAN-WDM | 40 km |
| 200G QSFP56 SR4 | 212.5 Gbps | QSFP56 | 850 nm | 100 m |
| 200G QSFP56 DR4 | 212.5 Gbps | QSFP56 | 1310 nm | 500 m |
| 200G QSFP56 FR4 | 212.5 Gbps | QSFP56 | CWDM4 | 2 km |
| 200G QSFP56 uFR4 | 212.5 Gbps | QSFP56 | CWDM4 | 2 km |
| 200G QSFP56 LR4 | 212.5 Gbps | QSFP56 | LAN-WDM | 10 km |
| 200G QSFP56 ER4 | 212.5 Gbps | QSFP56 | LAN-WDM | 40 km |
| 200G QSFP-DD SR8 | 206.25 Gbps | QSFP28-DD | 850 nm | 100 m |
| 200G QSFP-DD PSM IR8 | 206.25 Gbps | QSFP28-DD | 1310 nm | 2 km |
| 200G QSFP-DD PSM LR8 | 206.25 Gbps | QSFP28-DD | 1310 nm | 10 km |
| 200G QSFP-DD CWDM8 | 206.25 Gbps | QSFP28-DD | CWDM8 | 2 km |
| 200G QSFP-DD LR8 | 206.25 Gbps | QSFP28-DD | LAN-WDM | 10 km |
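For reference, the Wavelength column above names either a single nominal wavelength (850 nm multimode or 1310 nm parallel single-mode) or a WDM grid. The following sketch lists the standard center wavelengths of the grids mentioned; the 8-lane LAN-WDM grid used by the LR8 parts extends the 4-lane grid and is omitted here.

```python
# Standard center wavelengths (nm) for the WDM grids named in the table above.
# A reference sketch; 8-lane LAN-WDM modules use an extended grid not listed here.
WDM_GRIDS = {
    "CWDM4":   [1271, 1291, 1311, 1331],                # 20 nm channel spacing
    "CWDM8":   [1271, 1291, 1311, 1331, 1351, 1371, 1391, 1411],
    "LAN-WDM": [1295.56, 1300.05, 1304.58, 1309.14],    # 4-lane grid, ~800 GHz spacing
}

for grid, lanes in WDM_GRIDS.items():
    print(f"{grid}: {len(lanes)} lanes centered at {lanes} nm")
```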
Active Optical Cables