Cooling IT in Data Centers

Saman Berookhim
December 20, 2021

What goes up must come down. As rack and cabinet computing densities climb in the data center, so do the temperatures.

At the heart of rising rack densities are newer generations of central processing units (CPUs) and graphics processing units (GPUs) that generate higher thermal power densities in servers. With multiple active components per rack, traditional air-cooling methods such as in-room, in-row, or rack-based containment systems can no longer take the heat once rack densities rise above 20kW. Data centers require heat rejection systems that run 24/7, 365 days a year, without failure. Air-cooled systems have peaked, and liquid cooling technology is becoming the mainstay for bringing those temperatures down as high-density racks reach 30kW of power and beyond.
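
A quick energy balance illustrates why air struggles at these densities. The sketch below is a rough illustration, not a vendor figure: the 12 °C server air-temperature rise is an assumption, and the air properties are textbook values.

```python
# Rough airflow estimate for an air-cooled rack, from P = rho * cp * V * dT.
# All values are illustrative assumptions, not sizing guidance.
RHO_AIR = 1.2          # kg/m^3, air density near sea level
CP_AIR = 1005.0        # J/(kg*K), specific heat of air
CFM_PER_M3S = 2118.88  # conversion: 1 m^3/s expressed in cubic feet per minute

def required_airflow_cfm(rack_kw: float, delta_t_c: float = 12.0) -> float:
    """Airflow (CFM) needed to remove rack_kw of heat at a delta_t_c rise."""
    m3_per_s = rack_kw * 1000.0 / (RHO_AIR * CP_AIR * delta_t_c)
    return m3_per_s * CFM_PER_M3S

for kw in (5, 15, 20, 30):
    print(f"{kw:>2} kW rack -> ~{required_airflow_cfm(kw):,.0f} CFM")
# A 20 kW rack already needs ~2,900 CFM through a single footprint; 30 kW needs ~4,400.
```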

Considerations when selecting the proper cooling system include the following (a rough sizing sketch follows the list):
•    Computer room size
•    Maximum kW load per cabinet
•    Number of cooling units
•    Room location relative to outdoors
•    Ceiling and access floor heights
•    Future expansion needs
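
As a minimal illustration of how a few of these factors interact, the sketch below estimates a room's cooling-unit count from cabinet count, per-cabinet load, and unit capacity, with one redundant unit and headroom for expansion. Every number is a hypothetical placeholder, not vendor sizing guidance.

```python
import math

def cooling_units_needed(cabinets: int, max_kw_per_cabinet: float,
                         unit_capacity_kw: float, growth_factor: float = 1.2,
                         redundancy: int = 1) -> int:
    """Cooling units for a projected room load, plus redundant spares.

    growth_factor pads the load for future expansion (20% assumed here).
    """
    projected_load_kw = cabinets * max_kw_per_cabinet * growth_factor
    base_units = math.ceil(projected_load_kw / unit_capacity_kw)
    return base_units + redundancy

# Hypothetical room: 40 cabinets at up to 8 kW each, 100 kW cooling units.
print(cooling_units_needed(40, 8, 100))  # -> 5 units (4 for load + 1 spare)
```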

There is no single cooling solution that is appropriate for all data centers. But to understand why liquid cooling methods are gaining attention, let's take a step back and look at the different systems.  

Air-cooling systems  
An essential part of thermal management in air-cooled facilities is air management. Legacy air-cooling systems such as computer room air conditioners (CRACs) or computer room air handlers (CRAHs) were designed to cool the entire room. Typically, this method can cool racks of up to about 15kW each. Air distribution options include underfloor supply, open room returns, ducted supply and returns, and vertical heat collars that direct hot air to the plenum return areas.

In-row air-cooling systems are usually placed near servers and racks and use hot/cold aisle containment methods. Row-integrated cooling pushes cool air in at the front and discharges hot exhaust air out the back of the racks and cabinets. By placing the cooling units close to the heat source, the hot-air return path to the air conditioner or handler (CRAC or CRAH) can be significantly shortened, minimizing the potential for mixing hot and cold airstreams.

A rack-based system dedicates cooling units to specific racks; these can be mounted directly to or within the IT racks. Airflow paths are even shorter than those of in-row cooling systems, and airflow is unaffected by objects within the room or by room constraints such as doorways. Rack-based cooling can be expensive, however, because the larger number of cooling units drives up power consumption.

Liquid Cooling Ramps Up
Liquid coolants, whether water, glycol mixtures, or dielectric fluids such as mineral oil, have a significantly higher heat capacity than air by volume, which means liquid cooling can handle higher-density facilities. Other benefits include reduced energy consumption (which lowers OPEX), quieter operation than air-cooling systems, and a smaller footprint in the data center.
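
To put "higher heat capacity by volume" in numbers, here is a back-of-envelope comparison using textbook properties of water and air at roughly room temperature (standard physical constants, not figures from this article):

```python
# Volumetric heat capacity = density * specific heat, in J per m^3 per K.
water = 1000.0 * 4186.0   # ~4.19e6 J/(m^3*K)
air   = 1.2    * 1005.0   # ~1.21e3 J/(m^3*K)

print(f"water: {water:.3g} J/(m^3*K)")
print(f"air:   {air:.3g} J/(m^3*K)")
print(f"ratio: ~{water / air:,.0f}x")  # water holds ~3,500x more heat per unit volume
```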

Various liquid-cooling technologies have emerged to meet the cooling requirements of high-density racks and cabinets, including direct-to-chip cooling, rack/server cooling, and immersion cooling. These work by pumping cold liquid close to or directly over hardware to carry out the heat exchange. The fluid constantly circulates, removing heat almost as quickly as it's generated. 
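
For a sense of scale on that circulating loop, the sketch below estimates the water flow needed to carry a rack's heat at a given coolant temperature rise. The 10 °C rise is an assumption for illustration only.

```python
# Coolant flow from the energy balance P = m_dot * cp * dT.
CP_WATER = 4186.0   # J/(kg*K), specific heat of water
RHO_WATER = 1000.0  # kg/m^3, density of water

def water_flow_lpm(rack_kw: float, delta_t_c: float = 10.0) -> float:
    """Liters per minute of water needed to absorb rack_kw at a delta_t_c rise."""
    kg_per_s = rack_kw * 1000.0 / (CP_WATER * delta_t_c)
    return kg_per_s / RHO_WATER * 1000.0 * 60.0  # kg/s -> L/min

print(f"30 kW rack: ~{water_flow_lpm(30):.0f} L/min")  # ~43 L/min, a modest pump duty
```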

In direct-to-chip cooling, a liquid coolant is brought through tubes directly to the electronics, such as the CPU or GPU, where the heat is absorbed through a cold plate. The fluid never makes direct contact with the electronics; instead, it "absorbs" the heat by turning into vapor, which then carries the heat out of the IT equipment through evaporation.
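
Evaporation moves far more heat per kilogram of coolant than simply warming the liquid does. The comparison below uses an assumed latent heat of 100 kJ/kg and an assumed liquid specific heat, both illustrative figures in the range of engineered two-phase coolants, not product specifications:

```python
# Heat absorbed per kg of coolant: sensible (warming) vs latent (evaporating).
CP_COOLANT = 1100.0   # J/(kg*K), assumed dielectric-coolant specific heat
H_VAP = 100_000.0     # J/kg, assumed latent heat of vaporization

sensible = CP_COOLANT * 10.0   # warming one kg of liquid by 10 K
latent = H_VAP                 # fully evaporating the same kilogram

print(f"sensible (10 K rise): {sensible/1000:.0f} kJ/kg")  # 11 kJ/kg
print(f"latent (evaporation): {latent/1000:.0f} kJ/kg")    # 100 kJ/kg
print(f"evaporation moves ~{latent/sensible:.0f}x more heat per kg")
```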

With rack liquid-cooling techniques, the rear door of the rack is replaced with a liquid heat exchanger. In a passive design, the servers' own fans push heated air through a liquid-filled coil mounted in place of the rear door; the coil absorbs the heat before the air passes into the data center. Active heat exchangers add their own fans to pull air through the coils, removing heat from even higher-density racks.

Immersion cooling, both single- and two-phase, is making headway in the data center space. It involves submerging hardware (servers or even individual processors) in special dielectric liquids that conduct heat but not electricity. In single-phase immersion, electronic components sit in dielectric liquid inside a sealed but readily accessible enclosure, where heat from the components transfers to the fluid; pumps typically move the heated fluid to a heat exchanger, where it is cooled and cycled back into the enclosure. In two-phase immersion cooling, the fluid is boiled and condensed, dramatically increasing heat-transfer efficiency. Components are directly immersed in dielectric liquid in a sealed enclosure, and the heat they produce boils the fluid into vapor that rises from the liquid. The vapor condenses on a heat exchanger (also called a condenser) within the tank and returns to the bath to repeat the cycle.

Decisions, decisions
Liquid cooling requires additional capital expenditure on hardware, expensive fluids, safety gear, and the specialized training needed to implement and operate these systems. With the next generations of processors and GPUs expected to evolve rapidly, it may be a challenge to incorporate newer hardware into existing air-cooled server installations. IT management and data center designers will be faced with determining whether their servers emit enough heat to justify the investment in liquid cooling technology to support higher-density racks. Inevitably, that means weighing the initial CAPEX against the long-term OPEX savings over a data center's life.
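
One way to frame that trade-off is a simple payback estimate. Every number in the sketch below is a hypothetical placeholder: the PUE figures, electricity price, and capital cost are assumptions for illustration only.

```python
def simple_payback_years(it_load_kw: float, pue_air: float, pue_liquid: float,
                         price_per_kwh: float, extra_capex: float) -> float:
    """Years for liquid-cooling OPEX savings to repay its extra CAPEX."""
    hours_per_year = 8760
    # Facility energy savings come from the PUE gap on the same IT load.
    saved_kwh = it_load_kw * (pue_air - pue_liquid) * hours_per_year
    annual_savings = saved_kwh * price_per_kwh
    return extra_capex / annual_savings

# Hypothetical 500 kW IT load, PUE 1.6 (air) vs 1.2 (liquid), $0.10/kWh,
# and $1.5M of additional liquid-cooling CAPEX.
print(f"{simple_payback_years(500, 1.6, 1.2, 0.10, 1_500_000):.1f} years")  # ~8.6
```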