“As the demand for bandwidth, capacity, and lower latency continues to grow, the migration to faster network speeds accelerates. Every year, the adaptability and survivability of large-scale cloud data centers are put to the test. 100G optics are now flooding the market, and 400G is expected to arrive next year. Even so, data traffic keeps rising, and the pressure on data centers will continue unabated.”
Wu Jian, Technical Director, North Asia, CommScope
Seeking balance
The capacity of a data center is determined by three factors: servers, switches, and the connections between them. The three constrain and balance one another, while at the same time pushing each other toward higher speed and lower cost. For many years, switching technology has been the main driving force. With the introduction of Broadcom’s StrataXGS® Tomahawk® 3, data center managers can now increase switching and routing capacity to 12.8 Tbps while cutting the cost per port by 75%. Is the CPU the limiting factor now? Not really. Earlier this year, NVIDIA introduced its new Ampere architecture for servers, and processors originally designed for gaming have proved well suited to the training and inference workloads of artificial intelligence (AI) and machine learning (ML). So what is the limiting factor?
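The headline figures above follow from simple arithmetic. A minimal sketch, assuming the 12.8 Tbps is reached as 32 ports of 400G (a common 1RU configuration; the port count and the baseline cost value are illustrative assumptions, not figures from the article):

```python
# Switch capacity: ports x per-port speed (assumed 32 x 400G configuration)
PORTS = 32
PORT_SPEED_GBPS = 400

total_tbps = PORTS * PORT_SPEED_GBPS / 1000
print(total_tbps)  # 12.8

# A 75% reduction in cost per port leaves a quarter of the original cost.
# The starting cost is a placeholder value for illustration only.
old_cost_per_port = 100.0
new_cost_per_port = old_cost_per_port * (1 - 0.75)
print(new_cost_per_port)  # 25.0
```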
The network becomes a bottleneck
As switches and servers continue to evolve toward 400G and 800G, the pressure to keep the network in balance has shifted to the physical layer. The IEEE 802.3bs standard, approved in 2017, paved the way for 200G and 400G Ethernet, and the IEEE has recently completed its bandwidth assessment for 800G and beyond. Given the time required to develop and adopt a new standard, progress may lag. As the industry works to make the leap from 400G to 800G, and then to 1.2 Tb and even higher bandwidths, cabling and optics OEMs are racing to keep up. Below are some of the development trends we have seen.
Switches are upgrading
First, server configurations and cabling architectures keep being upgraded. Aggregation switches are moving from the top of the rack (ToR) to the middle of the row (MoR), connected to switch ports through structured cabling patch panels. Now, when a data center needs to migrate to higher speeds, only the server jumpers need replacing, rather than the original, longer switch-to-switch backbone links. With this structured cabling design, there is no need to install and manage 192 active optical cables between the switch and the servers.
Transceiver form factors are changing
New pluggable optical module designs give network designers more options, including the QSFP-DD and OSFP form factors that support 400G. Both form factors carry eight electrical lanes, so the optics can run 8 × 50G PAM4. Deployed in a 32-port configuration, QSFP-DD and OSFP modules deliver 12.8 Tbps in a 1RU device. Both sizes support current 400G optical modules and next-generation 800G modules; with 800G optics, a switch will reach 25.6 Tbps per RU.
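The lane and faceplate figures quoted above can be checked with a short calculation (the constants mirror the numbers in the paragraph; nothing here is vendor-specific):

```python
# Lane arithmetic for QSFP-DD / OSFP as described above
LANES = 8            # eight electrical lanes per module
LANE_GBPS = 50       # 50G PAM4 per lane
PORTS_PER_RU = 32    # 32-port 1RU faceplate

module_gbps = LANES * LANE_GBPS                      # 400G module
faceplate_tbps = PORTS_PER_RU * module_gbps / 1000   # 12.8 Tbps per RU

# Doubling the lane rate to 100G PAM4 yields 800G modules and 25.6 Tbps/RU.
module_800_gbps = LANES * 100
faceplate_800_tbps = PORTS_PER_RU * module_800_gbps / 1000

print(module_gbps, faceplate_tbps)          # 400 12.8
print(module_800_gbps, faceplate_800_tbps)  # 800 25.6
```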
The new 400GBASE standards
More connector options now support 400G short-reach multimode fiber (MMF) modules. Under the 400GBASE-SR8 standard, either a 24-fiber MPO connector (suited to legacy applications) or a single-row 16-fiber MPO connector can be used. The single-row MPO16 is the preferred choice for early cloud-scale server connections. Another option is 400GBASE-SR4.2, which uses a single-row MPO12 with bidirectional signaling and is well suited to switch-to-switch connections. IEEE 802.3 400GBASE-SR4.2 is the first IEEE standard to use bidirectional signaling over MMF, and it introduces OM5 multimode fiber. OM5 extends multi-wavelength support for applications such as BiDi, enabling network designers to achieve transmission distances 50% longer than OM4.
But are we moving fast enough?
The industry predicts that 800G optics will be needed within the next two years. The 800G Pluggable MSA, launched in September 2019, supports the development of new applications, including a low-cost 8 × 100G SR multimode module for spans of 60 to 100 meters. Its goal is an early, low-cost 800G SR8 solution that lets data centers support low-cost server applications. Pluggable 800G will support a higher switch radix with fewer servers per rack.
At the same time, the IEEE 802.3db task force is studying low-cost VCSEL solutions at 100G per wavelength and has demonstrated the feasibility of reaching 100 meters over OM4 MMF. If successful, this work could move server connections from in-rack DACs to MoR/EoR high-density switches, providing low-cost optical connectivity and longer-term application support over familiar MMF.
So, where are we now?
The market is changing rapidly, and it will change faster still. The good news is that standards organizations and the industry alike have made significant progress toward helping data centers upgrade to 400G and 800G. But removing technical obstacles is only half the challenge; the other half is seizing the opportunity. With refresh cycles of every two to three years and new technologies arriving ever faster, operators find it difficult to time the transition accurately, and the cost of a misjudgment is high.
The industry is constantly evolving. A technology partner like CommScope can help you navigate these changes and provide guidance that best serves your long-term interests.