The Raritan Blog

It's NOT about the Aisles, it's about the Inlets!

September 8, 2010

So many people have lost sight of the fact that cooling a data center is not a philosophical exercise: it is about moving heat away from sensitive electronics and keeping them within the temperature range specified by the manufacturer. Unfortunately, the smallest element in this world is typically a semiconductor, most commonly a chip buried deep inside a circuit board, inside an enclosure, inside a rack, somewhere within a row or pod, and ultimately within a room. The trick is, how can one cost-effectively move heat away from these components, and JUST these components? The most cost-effective cooling would use some technology that removed heat directly from only the devices that need it. There is really no need to cool everything (the circuit boards, the cabinets, the racks, or even the rows) IF we could pull heat from the heat-generating active devices at the chip level. But that is VERY, VERY hard to do cost-effectively.

Various approaches have been attempted over the years, using combinations and exotic applications of air and liquid plumbing and channeling schemes to minimize the amount of collateral cooling. In the end, though, the vast majority of us simply focus on moving vast amounts of air through the aisles closest to each piece of equipment’s inlet air vent, where it gets sucked into the active devices by yet more fans. The simple goal is to move enough cooler air across the whole surrounding environment that the chips can comfortably bathe in it. Tons of inefficiency (or should that be tonnes?), but also years and years of tried-and-true best practices, experience, and products that do just that.

 

A couple of details about moving air are important here.

  • Air FANS account for almost 47% of all energy used to cool a data center. This is huge! We all think about exotic condensers and chillers, Freon and other refrigerants, but the reality is that the fans used to generate airflow and move heat account for almost half of all cooling energy. FANS are VERY important. There has been a quiet revolution over the past 5 years to deploy variable-speed FAN drive controllers to address just this opportunity. A quick fix, so to speak. As a rule of thumb, if a fan is operated at 80% speed, it consumes only about 50% of the power! (It’s called the “Rule of Cubes”; see the sketch after this list.)
  • Most active equipment includes FANS inside the enclosure. Most of these fans operate at at least two speeds, LO and HI. The most common first transition point for these fans is about 76 degrees Fahrenheit. Yes, across all manufacturers, they all use about the same figure. Inlet air at 76-degrees or less allows the internal fans to operate at LO speed; anything above that and the fans kick into HI. Does it matter? You bet! It’s the variable-speed issue all over again: fans typically run at 60%-80% of full speed in LO and at 100% in HI.
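If you like to see the math, here is a minimal Python sketch of that fan affinity (“Rule of Cubes”) relationship, assuming power scales with the cube of fan speed; the wattage used is purely illustrative, not a measured figure.

```python
# Minimal sketch of the fan affinity ("cube") law: for a given fan,
# power draw scales roughly with the cube of its speed.
# The rated wattage below is an illustrative assumption.

def fan_power_fraction(speed_fraction: float) -> float:
    """Approximate fraction of full-speed power drawn at a given speed fraction."""
    return speed_fraction ** 3

full_speed_watts = 750.0  # hypothetical fan rated at 750 W (assumption)

for pct in (100, 80, 60):
    frac = fan_power_fraction(pct / 100)
    print(f"{pct}% speed -> ~{frac:.0%} of full power (~{frac * full_speed_watts:.0f} W)")

# 80% speed -> ~51% of full power, which is where the
# "80% speed, 50% power" rule of thumb in the list above comes from.
```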

So, what can we all do? Keep the INLET temperature for all active devices at 76-degrees. Period. NOT higher or lower. Treat it as a magic number: anything lower wastes cooling energy, and anything above 76-degrees kicks the internal FANS into HI speed and wastes a ton of energy. The core goal today should be to get the inlet temperature of ALL devices to 76-degrees, and to use active real-time monitoring, such as the Raritan PX-series environmental monitors, to ensure it stays there.
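As a rough illustration of what that monitoring boils down to, here is a minimal Python sketch that flags inlet sensors drifting away from the 76-degree target; the rack names, readings, and tolerance band are assumptions for illustration, not the Raritan PX interface.

```python
# Minimal sketch of "watch the inlets, not the aisles".
# Sensor readings and the tolerance around the 76 F setpoint are
# illustrative assumptions; this is not the Raritan PX API.

TARGET_F = 76.0
TOLERANCE_F = 1.0  # assumed acceptable band around the setpoint

inlet_readings_f = {  # hypothetical rack-inlet sensor data
    "rack-A01": 74.2,
    "rack-A02": 76.3,
    "rack-B01": 79.1,
}

for rack, temp in inlet_readings_f.items():
    if temp > TARGET_F + TOLERANCE_F:
        print(f"{rack}: {temp:.1f} F -- too warm, internal fans likely at HI speed")
    elif temp < TARGET_F - TOLERANCE_F:
        print(f"{rack}: {temp:.1f} F -- overcooled, wasting cooling energy")
    else:
        print(f"{rack}: {temp:.1f} F -- at the magic number")
```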

 

Lastly, you will be rewarded! As a rule of thumb, when adjusting data center temperatures within the range specified by ASHRAE TC 9.9, every degree can be estimated at about 4% of cooling energy costs. Just turning UP the temperature by 1-degree could save 4% off your energy bill. BUT remember that magic 76-degree figure, or you may bite off more than you can chew!
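For a back-of-the-envelope feel for that rule of thumb, here is a minimal Python sketch that compounds roughly 4% savings per degree raised; the baseline cooling cost is an assumption, and actual savings will depend on your plant.

```python
# Minimal sketch of the "~4% per degree" rule of thumb, applied cumulatively.
# The baseline annual cooling cost is an illustrative assumption.

SAVINGS_PER_DEGREE = 0.04

def estimated_bill(annual_cooling_cost: float, degrees_raised: int) -> float:
    """Estimated cooling cost after raising the setpoint, compounding 4% per degree."""
    return annual_cooling_cost * (1 - SAVINGS_PER_DEGREE) ** degrees_raised

baseline_cost = 100_000.0        # hypothetical annual cooling spend
for degrees in range(0, 5):      # e.g. nudging inlets from 72 F up toward 76 F
    cost = estimated_bill(baseline_cost, degrees)
    print(f"+{degrees} F -> est. ${cost:,.0f} ({100 * (1 - cost / baseline_cost):.0f}% saved)")
```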
