The Open Compute Project (OCP) recently commissioned IHS Markit to analyze the current and future state of data center energy efficiency in the European market. For the study, which was concluded in September 2018, IHS Markit interviewed both data center equipment suppliers and data center operators. The findings presented in this research note reflect the combined answers of the operators and equipment suppliers due to consistency between the groups’ responses.
- When asked what recent improvements they have made to their data center's energy efficiency, survey respondents most often cited cooling improvements.
- Regarding the average energy savings realized as a result of improvements made to the data center, the largest share of respondents (33 percent) reported saving between 21 percent and 30 percent on their energy usage.
- The top driver for improving energy efficiency in data centers among those polled was a reduction in operational expenses (60 percent of respondents). Conversely, the leading barriers were budget and the inability to replace legacy equipment – tied at 30 percent of respondents each.
- When asked about their plans for improving data center energy efficiency over the next two years, respondents’ top two responses were heat energy reuse and liquid heatsink cooling (22 percent each).
Much has been written about energy efficiency in data centers, but there has been a noticeable lack of quantifiable research on the topic. For this reason, the OCP commissioned IHS Markit to study which efficiency measures companies have already implemented and what they are looking to do in the future.
Cooling, then compute
IHS Markit kicked off the survey by asking participants about the recent steps they’ve taken to improve data center energy efficiency. Respondents most often indicated that they increased the use of free air cooling, meaning they grew their use of air handlers or non-compressor-based cooling equipment – something that’s becoming mainstream in the industry. Other frequently mentioned cooling improvements include the installation of containment panels and increases in server inlet temperatures.
Beyond cooling, the next most frequently mentioned improvement was compute efficiency. Addressing the equipment (servers) that is doing the important work is indeed a next logical step. Companies are seeking to get the most out of the servers they have – readying them to not only manage existing workloads, but also the increasingly processing-intensive business of artificial intelligence (AI) and machine learning (ML).
Respondents realized significant savings
The average energy savings realized as a result of the various improvements made to the data center were significant. One-third of companies polled reported saving between 21 percent and 30 percent on their energy usage. As expected, the top improvements cited had a lot to do with cooling systems; because cooling can account for up to half of a data center’s energy consumption, addressing cooling efficiency can yield big results.
Other areas where efficiencies were made include:
- The use of OCP servers and 12V servers, which reduces the number of voltages used in the data center.
- Increased rack density, which is helpful when switching to alternative cooling methods such as immersion cooling.
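To make the arithmetic behind these savings concrete, here is a minimal sketch (not from the study) of how a cooling-efficiency gain translates into total facility savings. The 40 percent cooling share and 50 percent reduction below are hypothetical inputs chosen for illustration; the note states only that cooling can reach up to half of total consumption.

```python
# Hypothetical illustration: fraction of total facility energy saved
# when only the cooling subsystem is made more efficient.

def total_savings(cooling_share: float, cooling_reduction: float) -> float:
    """Return the fraction of total facility energy saved.

    cooling_share     -- portion of total energy used by cooling (0..1)
    cooling_reduction -- fractional cut in cooling energy (0..1)
    """
    return cooling_share * cooling_reduction

# Assumed example values: cooling is 40% of the load, upgrade halves it.
saving = total_savings(cooling_share=0.40, cooling_reduction=0.50)
print(f"{saving:.0%}")  # 20% of total facility energy
```

Under these assumed inputs, halving cooling energy saves roughly a fifth of the facility's total consumption – in line with the 21–30 percent band most respondents reported when cooling work was combined with other measures.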
Operational expenses must come down
Understandably, a majority of respondents cited improved total cost of ownership (TCO) and/or reduced operational expenses as their primary driver for improving energy efficiency in the data center. After all, companies that make an investment in new equipment want to see some savings. The second and third drivers share this same theme of “doing more with less” – increasing compute without adding more utility power, and maximizing an existing data center’s footprint (going taller or denser to avoid having to build out new data center space).
Of note, nearly a third of companies surveyed named social responsibility as a major driver; reducing their carbon footprint is important to these businesses.
Budget constraints, legacy equipment and switching vendors impede progress
On the flip side, when looking at some of the barriers for improving energy efficiency, the bottom line is still money. Many companies just don’t have the budget to make improvements. But there is also an issue with replacing legacy equipment, as there may be applications that can’t be moved or were designed for specific equipment that is no longer available.
Additionally, nearly a quarter of respondents noted “lack of support from traditional vendors” as a barrier. What they are referring to here is what many companies are trying to avoid: vendor lock-in. For example, a colocation provider may have a long-standing agreement with a major infrastructure supplier – and that supplier has not evolved its product portfolio to the point where efficiency is a differentiator.
Liquid cooling is more than just a science experiment
Looking ahead, respondents’ highest-rated plans for improving data center efficiency over the next two years are heat energy reuse and liquid heatsink cooling. These are on the cutting edge, particularly liquid heatsink cooling, which is still very niche in terms of adoption.
Heat energy reuse is not a new concept – IBM made headlines 10 years ago for using waste heat from a data center to warm a community swimming pool – but it’s an interesting topic in the context of this study, primarily because it doesn’t actually help the data center save energy. It is more of a sustainability consideration – organizations thinking about how to reduce their climate impact rather than their costs. Heat energy reuse also involves a lot of coordination with other entities in the community. Data center managers and operators need to look outside their own company and industry to find partners for these endeavors, and that does involve some extra work.
A newer concept is liquid heatsink cooling. Instead of the more traditional approach of cooling by pushing cold air through the servers, liquid heatsink cooling brings liquid directly to the chip. Google recently announced that it is using direct-to-chip cooling for the first time in its data centers to cool tensor processing units (TPUs) used for ML and AI. TPUs are extremely powerful processors that consume a lot of power and generate a good amount of heat.
OCP European Data Center Energy Analysis Report - 2018
Additional findings from the study can be found in the OCP European Data Center Energy Analysis Report, which is available as part of the following IHS Markit Intelligence Services: Multi-Tenant Data Centers, Data Center Rack Systems and Uninterruptible Power Supplies (UPS).