The US hyperscalers (Amazon, Google, Meta, and Microsoft) are set to deploy over 5 million AI training-capable accelerators in 2024.
“Demand for accelerators has been growing at a breakneck pace as the hyperscalers race to deploy infrastructure for the training and inference of large language models,” said Baron Fung, Senior Research Director at Dell’Oro Group.
“In addition to commercially available GPUs, the US hyperscalers are also increasing their deployment of AI infrastructure with custom accelerators. As large language models continue to grow in size, driving the need for larger compute clusters, hyperscalers are accelerating their adoption of custom accelerators. Often co-developed with chipmakers like Broadcom and Marvell, these custom solutions aim to boost performance efficiency, lower costs, and reduce dependency on NVIDIA GPUs.”
The Server and Storage Systems Component market is forecast to increase by over 100 percent in 2024, with accelerators driving most of the incremental growth, followed by memory and storage drives.
NVIDIA led all vendors in component revenues, capturing nearly half of the total reported revenues, with Samsung and SK Hynix trailing behind. Meanwhile, hyperscalers deploying custom solutions are rapidly gaining ground.
Smart NIC and DPU revenues nearly doubled in 3Q 2024, driven by strong deployment of network adapters in AI clusters.
Growth is expected to moderate in 2025, though it will remain at a double-digit rate, with a potential slowdown in general-purpose server components early in the year due to inventory adjustments.