Despite a surge in demand for AI networks, revenue from data center switch sales declined for the first time in more than three years in the first quarter of 2024. According to Sameh Boujelbene, group vice president at Dell'Oro, backlog normalization, inventory digestion, and spend optimization at cloud service providers and enterprise customers were the culprits. But it appears that some vendors have managed to escape the downturn.
Boujelbene said, "Arista was able to outperform the market because of its diverse cloud service provider base and increasing penetration among key enterprise customers, as well as new customer acquisitions, which started to contribute to Arista's revenue this quarter."
Huawei also managed to grow, though not thanks to its home market.
"All of their growth has been driven by regions outside of China, particularly EMEA," Boujelbene said of Huawei. "Most of the large projects they've won have been with governments."
Network realities
200G, 400G and 800G switches combined accounted for about a quarter of the total revenue, with the remainder coming from switches with speeds of 100G and below.
While adoption of 400G and 800G switches is expected to accelerate this year and next, Boujelbene explained that the current mix partly reflects the different roles each switch speed plays in the data center.
For front-end networks connecting general-purpose servers, speeds of 100G and below are sufficient. Part of the reason is that these networks are "typically compute-related, not network-related, which means compute is the bottleneck, not the network." Network utilization typically sits at 50 percent or less, which keeps bandwidth requirements down.
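To make the utilization point concrete, here is a minimal back-of-envelope sketch in Python. The roughly 50 percent utilization figure comes from Boujelbene's remarks; the link speeds are simply the common front-end tiers, and nothing else below comes from Dell'Oro:

```python
# Back-of-envelope: average traffic actually carried on a front-end server
# link. The ~50% utilization figure is from the article; the speed tiers
# (10G/25G/100G) are the common front-end rates it mentions.

def effective_demand_gbps(link_speed_gbps: float, utilization: float) -> float:
    """Average traffic carried on a link at a given utilization."""
    return link_speed_gbps * utilization

for speed_gbps in (10, 25, 100):
    carried = effective_demand_gbps(speed_gbps, 0.5)
    print(f"{speed_gbps}G link at 50% utilization carries ~{carried:.0f} Gb/s on average")
```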
In front-end networks, for example, 100G switches can be used for core aggregation or leaf-and-spine roles, while 25G and 10G are used for server access. That said, some hyperscale organizations are starting to use 200G and 400G in their front-end networks.
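How those tiers fit together can be sketched with a hypothetical leaf-spine sizing calculation. Only the speed tiers (10G/25G access, 100G aggregation) come from the article; the port counts below are illustrative assumptions:

```python
# Hypothetical leaf switch in a leaf-and-spine fabric: 25G server-facing
# ports, 100G uplinks to the spine. Port counts are assumptions chosen
# for illustration, not vendor or Dell'Oro figures.

def oversubscription(server_ports: int, server_gbps: int,
                     uplink_ports: int, uplink_gbps: int) -> float:
    """Ratio of downstream server bandwidth to upstream spine bandwidth."""
    return (server_ports * server_gbps) / (uplink_ports * uplink_gbps)

# e.g. 48 x 25G server ports fed by 6 x 100G spine uplinks
ratio = oversubscription(48, 25, 6, 100)
print(f"leaf oversubscription: {ratio:.1f}:1")  # 2.0:1 is tolerable when compute is the bottleneck
```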
In general, switch speeds of 400G and above are now used primarily in the back-end networks connecting AI servers. In these back-end use cases, the bottleneck is the network, not the compute, so faster speeds are needed to maximize the utilization of expensive GPUs.
"Ideally, the network needs to run at 100 percent so that very expensive gas pedals (e.g., gpu) don't sit idle waiting for the network to respond," Boujelbene said.
Looking ahead to the rest of 2024, Boujelbene said Dell'Oro expects "spending on front-end networks will be weak, but spending on AI back-end networks will be huge."
She concluded, "We'll see what share of the back-end network Ethernet can capture."