Modular data center infrastructure has become synonymous with energy efficiency, with companies like Facebook and Google adopting a modular approach to deliver better, more energy-efficient services than ever before.
But what is it about a modular data center that makes it such a green data center solution?
The answer lies in the materials used and in the way the system is designed and configured. This blog post focuses on data center cooling and the energy efficiency of the cooling used in the Datapod System. Specifically, it discusses the internal design elements; geographical and climatic considerations (for example, economizers and free-air cooling) will be covered in the next blog in this series.
According to a study by Jonathan Koomey, Ph.D. (Stanford), commissioned by The New York Times and entitled 'Growth in Data Center Electricity Use 2005 to 2010', data centers in 2010 accounted for about 1.3% of all electricity use worldwide and about 2% of all electricity use in the US.
This, combined with the fact that 30-60% of data center energy costs are attributable to cooling infrastructure, means significant environmental and economic savings can be made if cooling is made more efficient.
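To put those two figures together, here is a back-of-envelope sketch (illustrative only; the calculation and variable names are ours, not from the study) of what share of world electricity data center cooling alone might represent:

```python
# Back-of-envelope estimate of cooling's share of world electricity,
# combining the two fractions cited above. Illustrative only.

datacenter_share_world = 0.013  # ~1.3% of world electricity (Koomey, 2010)
cooling_fraction_low = 0.30     # low end of cooling's share of DC energy
cooling_fraction_high = 0.60    # high end

cooling_share_world = (datacenter_share_world * cooling_fraction_low,
                       datacenter_share_world * cooling_fraction_high)

print(f"Cooling's share of world electricity: "
      f"{cooling_share_world[0]:.2%} to {cooling_share_world[1]:.2%}")
# -> roughly 0.39% to 0.78% of all electricity used worldwide
```

Even at the low end, shaving a fraction off cooling energy moves a globally significant amount of electricity.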
As computing equipment density increases, more heat is generated in data centers. Cooling and heat-rejection equipment is used to collect this unwanted heat energy and transport it to the outside atmosphere. How we remove this heat is central to improving energy efficiency and thereby lowering the operational costs of the data center.
Key factors in modular data center energy efficiency
The cooling of a modular data center is defined by three key factors: the proximity of the cooling units to the IT equipment, the method of heat removal, and the method of air distribution.
The modular approach means essential cooling and power components are intrinsically located closer to the IT equipment and feature an innovative design that delivers a more energy-efficient outcome. This close proximity greatly reduces inefficiencies and energy consumption, and it is the single most important design factor in making a data center more energy efficient.
Internally, cooling inefficiencies such as air stratification, bypass air and recirculation are eliminated through innovative modular design. An example of this is the 'close-coupled' design of the Inrow cooling units.
The Inrow cooling units operate not as traditional cooling units, but as 'heat-capture' devices. By capturing all of the heat exhausted by the IT racks into the hot aisle, the Inrow units ensure that the remainder of the data hall stays at the ambient temperature, which is also the supply temperature for the IT equipment. Depending on the density of the data center, the Inrow units can use a number of cooling methods, including chilled water, refrigerant or condensed water, all of which enable operation in a contained environment with both 'group' and 'individual' operating modes.
The modular nature of the Datapod System also means the data hall can be configured as one contiguous array of racks positioned back-to-back, separated by Inrow cooling units where necessary. This configuration allows for optimal airflow and, with sealed, insulated entry into the central (hot) aisle, assures segregation of hot (exhaust) air and cool (supply) air.
Hot and Cold Aisle Containment Systems
Datapod uses a Hot-Aisle Containment System (HACS) to improve the efficiency of the cooling systems by close-coupling the heat capture to the source of the heat. The Datapod System goes one step further, effectively creating a Cold-Aisle Containment System (CACS) as well, which further improves efficiency outcomes.
The pairing of these two approaches creates a more efficient design: heat dissipated by the IT systems is effectively 'captured', while the cool air is efficiently channelled to where it is needed, enabling the data center operator to run racks at up to 35 kW each, compared to the very low densities typical of traditional data centers.
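To give a sense of what a 35 kW rack demands of its cooling, here is a minimal sketch of the standard sensible-heat relation Q = ṁ·c_p·ΔT. The air-temperature rise and property values are our own illustrative assumptions, not Datapod specifications:

```python
# Sketch: volumetric airflow needed to remove a given rack heat load,
# using Q = mass_flow * c_p * delta_T. Figures are assumptions.

RHO_AIR = 1.2    # kg/m^3, approximate air density at ~20 C
CP_AIR = 1005.0  # J/(kg K), specific heat of air at constant pressure

def airflow_m3_per_s(load_w: float, delta_t_k: float) -> float:
    """Airflow (m^3/s) required to carry away `load_w` watts of heat
    with an air-temperature rise of `delta_t_k` kelvin across the rack."""
    mass_flow = load_w / (CP_AIR * delta_t_k)  # kg/s of air
    return mass_flow / RHO_AIR                 # convert to m^3/s

# A 35 kW rack with an assumed 12 K rise across the servers:
flow = airflow_m3_per_s(35_000, 12.0)
print(f"{flow:.2f} m^3/s ({flow * 3600:.0f} m^3/h)")
```

The result is on the order of a few cubic metres of air per second for a single rack, which is why containment (keeping that hot exhaust strictly separated from supply air) matters so much at these densities.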
According to Datacenter Dynamics in the article 'The evolution of design and data center cooling', more efficient cooling can significantly reduce energy spending.
In the article, author John Collins notes that legacy cooling schemes typically chill return air to 55ºF (12.8ºC), whereas containment-based cooling systems completely isolate return air and can therefore safely deliver supply air at 65ºF (18.3ºC) or higher. As a result, containment cooling strategies typically reduce Computer Room Air Conditioning (CRAC) unit power consumption by an average of 16%.
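The whole-facility impact of that 16% CRAC reduction depends on how much of the facility's energy goes to cooling. A hedged sketch, combining the article's figure with the 30-60% cooling share cited earlier (the combination is our own illustration, not a claim from either source):

```python
# Illustrative estimate: translating a 16% CRAC power reduction into
# whole-facility savings, given cooling's share of total energy use.

def fahrenheit_to_celsius(f: float) -> float:
    """Convert a Fahrenheit temperature to Celsius."""
    return (f - 32) * 5 / 9

CRAC_REDUCTION = 0.16  # 16% CRAC power saving from containment (Collins)

for cooling_share in (0.30, 0.60):  # cooling's share of facility energy
    facility_saving = CRAC_REDUCTION * cooling_share
    print(f"cooling share {cooling_share:.0%} -> "
          f"facility-level saving {facility_saving:.1%}")

# Sanity check on the temperature figures quoted above:
print(f"55F = {fahrenheit_to_celsius(55):.1f}C, "
      f"65F = {fahrenheit_to_celsius(65):.1f}C")
```

Under these assumptions, containment alone is worth roughly 5-10% of total facility energy, before any gains from economizers or free-air cooling.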
If you would like more information about the Datapod System download the Datapod Components Guide.