Posted on October 11, 2012 by Greg More
Today, data center equipment racks require more power than ever before. The reasons for this come under two general headings: the still-growing need for computing power and the push toward virtualization and consolidation.
The decision to ramp up power provisioning generally comes from senior management, often because systems are becoming strained and threatening performance, and the additional server equipment needed to fix that brings a higher power demand with it. Beyond containing the cost of that additional power, management may also want to reduce overall power consumption for corporate green initiatives.
Computing Power Demands Keep Growing
Trading companies, for example, have massive compute power needs, as their competitive advantage relies on running many different algorithms on vast amounts of data with very fast processors. In this environment, tenths of a second can make a difference between profit and loss. Enormous data volumes, growing in storage area networks (SANs), are also driving the demand for high power. SANs also grow as industries are required to keep more data by such regulation as Sarbanes-Oxley.
Consolidating Colocation Facilities and Servers
These two needs—for computing power and cost savings—may be met at the facility level by consolidating whole data centers and colocation sites. Think of social media and online auction companies that have grown so fast that they have been forced to deploy their IT assets in a number of rented facilities. Some of these companies now find that having data centers scattered all over really isn’t the most efficient way to cut power consumption or to manage equipment and personnel. So they consolidate data centers into fewer locations and reduce total power consumption, but in each remaining center the power draw may actually go way up, because they are packing racks more densely.
Up to 42 1U-format “pizza box” servers or a few high-performing blade servers may be stacked in a rack, and some racks go even higher. The densely concentrated processors can draw quite a bit of power.
Raising Power Delivery: Voltage or Amps?
Given the choice of raising power delivery to racks by upping voltage or amperage, voltage carries some significant advantages.
First, it may save you some rewiring. That’s because wire gauge isn’t determined by voltage; it’s determined by current. So if you can move from, say, a 120V, 20A power feed to a 208V, 20A power feed, you can use the exact same cable and deliver roughly 73% more power.
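The arithmetic behind that claim can be sketched as follows. This is a minimal illustration assuming a single-phase feed at unity power factor (P = V × I); real branch circuits are typically derated (for example, to 80% of breaker rating for continuous loads under the NEC), so usable power is lower than the nameplate figure.

```python
# Deliverable power on the same 20 A conductors at two voltages,
# assuming single-phase power at unity power factor: P = V * I.
# (Illustrative only; continuous-load derating is ignored here.)

def single_phase_watts(volts, amps):
    """Apparent power (VA) for a single-phase circuit;
    equals real watts at unity power factor."""
    return volts * amps

p_120 = single_phase_watts(120, 20)   # 2400 W
p_208 = single_phase_watts(208, 20)   # 4160 W

increase_pct = (p_208 - p_120) / p_120 * 100
print(f"120V/20A: {p_120} W, 208V/20A: {p_208} W, +{increase_pct:.0f}%")
```

Same wire gauge, same breaker rating, roughly three-quarters more power at the rack.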
Admittedly, 208V single-phase is not what we would consider high power these days. Today, high power more likely means three-phase 208V, which delivers three circuits instead of just one, or even 400V, where we get three circuits of 230V at the PDU itself. So higher voltages and three-phase systems are a nice way to deliver a lot of power to a rack with little or no wiring changes, depending on what you already have and how you change the infrastructure in your data center.
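The relationship between those voltage figures can be sketched with the standard three-phase formulas: in a wye (star) system, line-to-neutral voltage is line-to-line voltage divided by √3, and total power at unity power factor is √3 × V(line-to-line) × I. A minimal sketch:

```python
import math

# Three-phase voltage and power relationships (wye system,
# unity power factor assumed for illustration).

def line_to_neutral(v_line_to_line):
    """Per-phase (line-to-neutral) voltage in a wye system."""
    return v_line_to_line / math.sqrt(3)

def three_phase_watts(v_line_to_line, amps):
    """Total three-phase power at unity power factor."""
    return math.sqrt(3) * v_line_to_line * amps

# A 400 V three-phase feed yields ~230 V per phase at the PDU:
print(round(line_to_neutral(400)))        # ~231 V

# Same 20 A conductors: three-phase 208 V vs single-phase 208 V:
print(round(three_phase_watts(208, 20)))  # ~7205 W, vs 4160 W single-phase
```

This is why a 400V three-phase feed "gets three circuits of 230V at the PDU," and why three-phase 208V roughly triples what the same conductors could deliver single-phase.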
From Single-phase to Three-phase Power Cabling
If you were going to change your voltage from, say, 208V single-phase to 208V three-phase, you would need to rewire and therefore temporarily move your servers to a different area. The advantage here is that the three-phase wires are contained in one cable that’s only slightly larger than the single-phase cable it replaces. That makes it a relatively simple change, and you are not adding many new cables that block the cooling airflow under the data center’s raised floor.
It’s a great idea to make these kinds of changes when you add or change out capacity, bringing in the new servers and then cutting over from the old racks to the new. Over time, you proceed to change out those old servers as well.
Solving for Higher Wattage = Higher Heat
Another factor that you need to consider is heat. With IT equipment, 1 watt of power consumed generates 1 watt of heat, so as watts go up, so does your data center temperature. You may be able to deploy more power through higher voltage, for example, but your cooling system has to have enough capacity to take out this additional heat as well.
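To size that cooling capacity, the IT load in watts can be converted to a cooling load using the standard conversions 1 W ≈ 3.412 BTU/hr and 1 ton of refrigeration = 12,000 BTU/hr. A minimal sketch (the 7.2 kW rack figure is an illustrative assumption, not a number from this article):

```python
# Estimating the cooling load implied by a rack's power draw.
# Every watt the IT equipment consumes becomes heat that the
# cooling plant must remove.

BTU_HR_PER_WATT = 3.412   # standard conversion
BTU_HR_PER_TON = 12_000   # 1 ton of refrigeration

def cooling_load(rack_watts):
    """Return (BTU/hr, tons of refrigeration) for a given IT load."""
    btu_hr = rack_watts * BTU_HR_PER_WATT
    tons = btu_hr / BTU_HR_PER_TON
    return btu_hr, tons

# Hypothetical example: one densely packed ~7.2 kW rack
btu, tons = cooling_load(7200)
print(f"{btu:.0f} BTU/hr, about {tons:.1f} tons of cooling")
```

Running the same estimate across every rack you plan to upgrade gives a quick first check on whether the existing cooling plant can absorb the added heat.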
We’ve seen customers dramatically reduce their power consumption and solve the heat problem by retrofitting existing data centers. One in downtown Manhattan changed their cooling to include air-side economizers, which bring in cooler outside air instead of running air conditioning units. They also brought in efficiency through our equipment, and through more efficient IT equipment from various vendors. It took lots of planning, since as a financial services firm they could not miss a day of trading for cutover. Over more than a year, however, they pulled their power consumption down dramatically.