Servers shipped with specialized co-processors for artificial intelligence (AI) and machine learning (ML) will make up over 10 percent of global server shipments by 2022, according to the Data Center Server Equipment Market Tracker from IHS Markit. Fueled by the adoption of Internet of Things (IoT) connected devices, network functions virtualization (NFV) and new software technologies like AI and ML, demand for data center server computation will accelerate, leaving the CPU struggling to keep up with demand. As a result, general-purpose programmable parallel compute co-processors will go mainstream to offer the CPU a much-needed helping hand.
Hyperscale cloud service providers lead in adoption of servers with specialized processing units for AI and ML. These providers are increasingly deploying servers with general-purpose programmable parallel compute co-processors, spurred by investments in AI for speech recognition, search engine optimization and cybersecurity. The latest IHS Markit Data Center Compute Strategies and Leadership Survey of over 150 North American enterprises confirms that businesses plan to also ramp investment in servers with co-processors, showing a preference for servers with general-purpose graphics processing units (GPUs) and field-programmable gate arrays (FPGAs).
Composable compute with PCI Express (PCIe) switches, which allow pools of compute, storage, networking and co-processors within a rack to be grouped together to form a virtual compute node, creates better economics for service providers and enterprises alike, enabling workload sharing and easy redistribution of co-processor pools.
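The pooling-and-regrouping idea above can be sketched in a few lines. This is a purely illustrative model, assuming hypothetical `ResourcePool`, `VirtualNode` and `Rack` abstractions (not any vendor's real composability API): co-processors sit in a shared rack-level pool behind the PCIe fabric, get grouped into a virtual compute node on demand, and return to the pool when the workload finishes.

```python
# Hypothetical sketch of composable infrastructure: pools of CPUs and
# co-processors behind a PCIe switch are grouped into virtual compute
# nodes on demand. All names here are illustrative, not a real API.

class ResourcePool:
    def __init__(self, kind, units):
        self.kind = kind
        self.free = list(units)

    def allocate(self, count):
        if count > len(self.free):
            raise RuntimeError(f"not enough free {self.kind} resources")
        taken, self.free = self.free[:count], self.free[count:]
        return taken

    def release(self, units):
        self.free.extend(units)


class VirtualNode:
    """A logical server composed from rack-level pools over the PCIe fabric."""
    def __init__(self, cpus, gpus):
        self.cpus = cpus
        self.gpus = gpus


class Rack:
    def __init__(self):
        self.cpu_pool = ResourcePool("cpu", [f"cpu{i}" for i in range(16)])
        self.gpu_pool = ResourcePool("gpu", [f"gpu{i}" for i in range(8)])

    def compose(self, cpus, gpus):
        return VirtualNode(self.cpu_pool.allocate(cpus),
                           self.gpu_pool.allocate(gpus))

    def decompose(self, node):
        # Returning co-processors to the pool lets the next workload
        # reuse them -- the economic benefit the article describes.
        self.cpu_pool.release(node.cpus)
        self.gpu_pool.release(node.gpus)


rack = Rack()
node = rack.compose(cpus=4, gpus=2)   # an AI training node borrows 2 GPUs
rack.decompose(node)                  # GPUs go back to the shared pool
```

The point of the sketch is the lifecycle: unlike a fixed server, nothing here is permanently wired to a node, so idle co-processors can be redistributed across workloads in the rack.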
Features that make it possible to virtualize co-processors continue to be added to multi-tenant server software, increasing the utilization of servers shipped with a co-processor and making them more attractive.
Generational improvements in co-processor performance have also made servers with co-processors more attractive than traditional CPU-only servers. In addition, co-processor options are multiplying, enabling diverse server architectures that allow customers to better match server to workload.
- Servers shipped with general-purpose graphics processing units (GPGPUs) are forecast to comprise 7 percent of global server units in 2022. (Forecast does not include GPUs for video.)
- Servers shipped with FPGAs, including PCIe form factor with Ethernet ports for FPGA clustering, are anticipated to make up 1.5 percent of worldwide server units in 2022. (Forecast does not include I/O cards used to connect servers to an Ethernet network.)
- Servers shipped with tensor processing units (TPUs) are projected to account for 0.8 percent of global server units in 2022.
- Servers shipped with other programmable co-processors are forecast to reach 1.5 percent of worldwide server units in 2022. Included in “other” are Xeon Phi co-processors, PEZYs, deep learning units (DLUs), neural network processors (NNPs), machine learning units (MLUs) and deep learning processors (DLPs).
Data Center Server Equipment Market Tracker
Part of the IHS Markit Data Center Compute Intelligence Service, this report provides analysis and trends for data center servers, including form factors, server profiles, market segments and servers by CPU type and co-processors. The report also includes information about Ethernet network adapters, including analysis by adapter speed, CPU offload, form factors, use cases and market segments. Other information includes analysis and trends of multi-tenant server software by type (e.g., server virtualization and container software), market segments and server attach rates. Vendors tracked in this report include Dell, HPE, IBM, Inspur, Broadcom, Cavium, VMware, Red Hat, Docker and white-box OEMs (e.g., QCT and Wiwynn).