100G data centers are maturing; from whatever angle you look at them, the technologies are becoming more and more complete. 100G data centers can already carry most of what we want, but so far they mostly carry the memories and spiritual nourishment of human life. We do see some computation, but such computation is no different from the supercomputing of the previous era. If we do not understand the goals of data center applications, we cannot design a data center whose technologies match its applications.
Optical interconnection technology is moving from 100G to 200G and 400G. Arguably, 100G optical interconnection only reached robust performance in 2018. So we can more or less assert that the 100G data centers built before 2018 are dangerous buildings, houses built on sand, and we must be aware of the risks.
Today's large data centers basically follow the 100G CWDM4 structure of the last era and use AOC and DAC at the same time. Here we need to revisit a topic. The proposal is to divide the data center into two parts: a transmission structure and an interconnection structure. An active WDM architecture is used in the transport layer, while a parallel PSM structure (including parallel optics and parallel electronics) is used firmly in the interconnection layer. A Facebook-style architecture is indeed very concise, but it is also costly. It is therefore necessary to examine the relationship between economic cost and network structure. What we need is an ordering, based on a fixed principle, that guides us to the best option when the choices are difficult.
Data centers have paid a high price for the wide adoption of the 100G CWDM4 structure. The main reason is that in the past era the stability and consistency of optical chips were poor, and data center optical interconnection was in a period without standards. Fortunately, Gigalight's product design standards, at least, are in line with expectations and applications. The industry now knows that reliability, product lifetime and maintenance cost are related to each other. Current conclusions basically support that CWDM4 matches the mainstream characteristics of the 100G data center in terms of technology implementation, such as saving optical fibers and consolidating maintenance from multiple products down to one. From a different point of view, however, this concise structure is also problematic, for the following three reasons.
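To make the fiber-saving point concrete, here is a minimal back-of-the-envelope sketch in Python. The per-link fiber counts are the standard ones: CWDM4 multiplexes four wavelengths onto a duplex pair (2 fibers), while PSM4 runs four parallel lanes each way over an MPO cable (8 fibers). The link count is a hypothetical input, not a figure from this article.

```python
# Back-of-the-envelope fiber-count comparison: 100G CWDM4 vs 100G PSM4.
# CWDM4 multiplexes 4 wavelengths onto a duplex pair -> 2 fibers per link.
# PSM4 uses 4 parallel lanes per direction over MPO  -> 8 fibers per link.
FIBERS_PER_LINK = {"CWDM4": 2, "PSM4": 8}

def total_fibers(module: str, links: int) -> int:
    """Total fibers needed to cable `links` point-to-point connections."""
    return FIBERS_PER_LINK[module] * links

# Hypothetical example: 10,000 switch-to-switch links in a large fabric.
links = 10_000
for module in ("CWDM4", "PSM4"):
    print(f"{module}: {total_fibers(module, links):,} fibers")
# CWDM4 needs 20,000 fibers; PSM4 needs 80,000 (4x the structured cabling).
```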
Two years ago, I published an article about the choice between PSM and WDM in the data center. In that article I argued that PSM was the more realistic choice, which attracted some criticism. Reality has also run contrary to my view: the data center is moving toward a structure in which CWDM4 covers PSM4. However, as so often on the human road, it is common for the right vision to be displaced by the wrong path. A child who grew up in a poor environment surely has a completely different worldview, and view of money, from a child of rich origin. At OFC 2018 the topic of 400G was very popular, but it was very immature. As people understood 400G at the beginning of 2018, the plan was basically to skip PAM4 technology and use 100G single-lambda DSP technology directly in 400G transceivers; that is, to skip 200G and jump straight to an unimaginable 400G. That leap is not one generation but two. We now know that this wish was obviously too optimistic.
Is the path from NRZ to PAM4 and then to DSP a sequence of gradual steps, or one leap that reaches the final goal in a single bound? We still need to discuss these technologies from the perspective of transmission versus interconnection. I think the first two belong to the interconnection architecture, while DSP technology is basically only used in the field of optical transmission.
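As a quick sanity check on these generations, the sketch below computes the aggregate line rate from symbol rate, bits per symbol and lane count (NRZ carries 1 bit per symbol, PAM4 carries 2). The three example configurations are the commonly cited ones; the baud figures include encoding/FEC overhead, which is why the totals land slightly above the nominal rates.

```python
# Aggregate line rate = symbol rate (GBd) x bits per symbol x lane count.
# NRZ encodes 1 bit per symbol; PAM4 encodes 2 bits per symbol (4 levels).

def aggregate_gbps(gbaud: float, bits_per_symbol: int, lanes: int) -> float:
    return gbaud * bits_per_symbol * lanes

configs = [
    ("100G CWDM4 (4 x 25G NRZ)",        25.78125, 1, 4),
    ("200G FR4 (4 x 50G PAM4)",         26.5625,  2, 4),
    ("400G DR4 (4 x 100G PAM4 + DSP)",  53.125,   2, 4),
]
for name, gbaud, bps, lanes in configs:
    print(f"{name}: ~{aggregate_gbps(gbaud, bps, lanes):.1f} Gb/s line rate")
# ~103.1, ~212.5 and ~425.0 Gb/s respectively (overhead included).
```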
There is a fundamental difference between what a DSP does and PAM4 modulation. Whether DSP can succeed in client-side modules is still unknown. I believe it is impossible to rely on a DSP to clean up the recovered signal's distortion while doing no processing at all at the optical layer of the link. Of course, just as many of my views have gradually been corrected by the progress of the times, arguing, exploring and making mistakes is the only road for the progress of human technology and markets. Setting aside the unpredictability of implementation, we have four candidate architectures to analyze for 200G and 400G networks.
So far we have not discussed one popular 400G network structure: 400G DR4 & FR4. Basically, we believe this architecture is extremely difficult to realize. It is a beautiful illusion that glosses over the technical difficulties, and from a practical point of view it is not necessarily economical.
We understand that people, ourselves included, have been looking for a data center that is concise, reconfigurable and cost-effective. But people usually prioritize simplicity, then reconfigurability, then cost, then technology, and that order runs against the nature of things. Going against the nature of things incurs extra expense. There is little that humans cannot do, and sometimes they are capricious enough to ruin the economics. From a professional point of view, we believe cost should come first, followed by technology, then conciseness, and finally reconfigurability, as the sketch below illustrates.
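Purely as an illustration of that fixed ordering, the toy sketch below ranks candidate architectures by a tuple key: cost first, then technology maturity, then conciseness, then reconfigurability. The candidate names and scores are made up for the example; only the priority order comes from the argument above.

```python
# Toy ranking that encodes the proposed priority order: cost first,
# then technology maturity, then conciseness, then reconfigurability.
# All names and scores below are hypothetical.

candidates = [
    # (name, cost [lower is better], tech, concise, reconfig [higher is better])
    ("CWDM4 everywhere",                  8, 7, 9, 5),
    ("PSM4 interconnect + WDM transport", 6, 8, 7, 6),
    ("400G DR4/FR4 overlay",              9, 4, 8, 8),
]

# Sort key compares cost ascending, then the remaining scores descending.
ranked = sorted(candidates, key=lambda c: (c[1], -c[2], -c[3], -c[4]))
for name, cost, tech, concise, reconfig in ranked:
    print(f"{name}: cost={cost} tech={tech} concise={concise} reconfig={reconfig}")
```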
As an explorer of open optical network devices, GIGALIGHT integrates the design, manufacture and sale of active and passive optical devices and subsystems. Its main products are optical transceivers, silicon photonics transceivers, liquid cooling transceivers, optical passive components, AOCs & DACs, coherent optical modules and open DCI BOX subsystems. GIGALIGHT is a hardware solution provider of innovatively designed high-speed optical interconnects focused on data centers, 5G carrier networks, metro WDM transmission, UHD broadcast video and other application fields.