Posted on August 11, 2021 by Gento
This article from Raritan is part of a two-part series on High Performance Computing. This post addresses the challenges Research I universities face when deploying the high-density data center applications that support HPC and supercomputers.
In the world of data centers and computing, there is a subculture built around the supercomputer—that noble computing beast whose processing abilities are expressed in an esoteric language foreign to most in the industry. Capacity is expressed in “flops” and “cores,” and these machines earn names like battleships receiving their commissions: Fugaku, Sierra, and Summit are currently the largest at sea. More and more, this equipment is the domain of academic institutions and government labs, powering the compute projects of, and delivering the grant money to, doctoral universities that perform the highest level of research activity. These institutions, designated “Research I” by the Carnegie Foundation, are at the forefront of supercomputing applications.
Welcome to the World of Power Density
So, what can we learn from these institutions of higher learning? Once inside the walls of the data center powering these supercomputers, they seem to play by their own rules. High Performance Computing (HPC) equipment is at the top of the mountain in terms of power capacity needs, some edging into the territory of 50 to 70kW per rack, yet the need for redundancy is on the low end of the Tier scale. If a process is interrupted or a computing project halted in its tracks, no worries—you can just run it again.
Despite their unconventional relationship with redundancy, supercomputers in the higher education realm do have some special considerations:
HPC racks support the highest capacities in the industry, using the highest-capacity power distribution, busway, and cabinet PDUs—even up to 480V, which Raritan PDUs support.
HPC racks must be concerned with heat loads and airflow. Keeping the abundance of power cords and power distribution devices out of the path of the hot air being rejected by the computer is essential.
HPC racks must contend with the highest temperatures at the back of the rack. The heat ratings of the cabling and rack mount PDUs must meet these higher requirements.
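To put these capacities in perspective, the sketch below works through the basic three-phase arithmetic. The figures (a 68kW rack, 415V and 480V feeds, unity power factor) are illustrative assumptions, not Raritan specifications:

```python
import math

def three_phase_current(power_kw, line_voltage, power_factor=1.0):
    """Line current (amps) drawn by a balanced three-phase load."""
    return power_kw * 1000 / (math.sqrt(3) * line_voltage * power_factor)

# A hypothetical 68 kW HPC rack: higher distribution voltage
# means less current, and therefore lighter cabling at the rack.
amps_415 = three_phase_current(68, 415)  # ~94.6 A per phase
amps_480 = three_phase_current(68, 480)  # ~81.8 A per phase
print(f"68 kW at 415 V: {amps_415:.1f} A per phase")
print(f"68 kW at 480 V: {amps_480:.1f} A per phase")
```

This is why higher-voltage distribution matters at these densities: the same rack load at 480V draws noticeably less current per phase than at 415V, easing conductor sizing behind the cabinet.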
While the data center staff gets to take a breather from the usual rigor of uptime, they instead pay their dues by entering power density’s uncharted waters. Thinking through the details of what takes place at the back of the cabinet is paramount to HPC installations.
Pulling It Together on a Supercomputer Scale
Pondering heat, airflow, and power leads to deploying PDUs that support high-compute racks of upwards of 68kW, mounting four or more PDUs at the rear of the cabinet, distributing power at higher voltages, and installing sensors to keep eyes on airflow temperatures. A tall order, but one that Research I data centers must tackle. Raritan’s PDUs offer key benefits, such as reliability, high power capabilities, and remote environmental and power monitoring.
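The payoff of those airflow sensors is simple threshold alerting on rear-of-rack temperatures. Here is a minimal sketch of the idea; the rack names, readings, and the 60°C limit are invented for illustration and do not reflect any particular product's ratings:

```python
# Hypothetical exhaust-air readings (deg C) at the rear of four HPC racks.
EXHAUST_LIMIT_C = 60.0  # assumed rating for rear-of-rack cabling and PDUs

readings = {"rack-01": 48.2, "rack-02": 61.5, "rack-03": 55.0, "rack-04": 63.1}

def over_limit(readings, limit):
    """Return the names of racks whose exhaust temperature exceeds the limit."""
    return sorted(name for name, temp in readings.items() if temp > limit)

for rack in over_limit(readings, EXHAUST_LIMIT_C):
    print(f"ALERT: {rack} exhaust at {readings[rack]:.1f} C exceeds {EXHAUST_LIMIT_C} C")
```

In practice the readings would come from the PDU's environmental sensors rather than a hard-coded dictionary, but the alerting logic is the same.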
Our next blog on HPC will cover some of the ways Raritan’s rack PDUs are deployed in Research I universities. Can’t wait until then? Contact us to learn more about HPC applications and how Raritan rack PDUs can power your Research I supercomputing needs by delivering high-density rack power, rock-solid uptime, and best-in-class security.