Guest blog by Terry Vergon: Building Mission Critical Facilities Organizations

Mar 12, 2014


From a data-center-space perspective, regardless of the temperature setpoint, the heat energy generated by the servers and equipment must be transported out of the space. It is in this process that you can make the system efficient or not. Consider the following diagram:

[Figure: 100kwdcheatflowmechanical (heat flow in a conventional, mechanically cooled 100 kW data center)]

In this conventional design, it doesn't matter what temperature the data center is held at; 100 kW of heat still needs to be removed, and moving that heat energy costs energy. A centrifugal chiller takes about 0.6 kW per ton to remove heat, so for this example:

100 kW = 341,200 BTU/hr = 28.4 tons = about 17 kW of electrical power

That is only the chiller's power; we also have cooling towers, pumps, and other equipment to consider. For most systems, you can add roughly another third of the chiller load for the supporting equipment. In this case, that would be about 5.7 kW, for a total of 22.7 kW. The bottom line is that to run a 100 kW data center, we pay for an additional 23 kW of cooling. For a 5- to 6-MW data center, this means we would typically see about 1 to 1.2 MW of the total load going to cooling alone, which can translate to $50,000 to $60,000 per month just to move heat out of the building.

So how does this all play with raising data center temperature? It depends on how you remove the heat load and how much of that "moving heat" cost you can reduce. In an ideal situation (and many of the newer, huge data centers are moving in this direction) you would have the following:

[Figure: 100kwdcheatflowmechanical2 (ideal heat flow with ambient-air cooling only)]

In the ideal situation, no cost is incurred for mechanical cooling systems: no pumps, no towers, no chillers. However, you are totally at the mercy of the environment, relying entirely on the ambient temperature of the incoming air to remove your heat energy.
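The arithmetic above can be sketched as a short script. The constants (3,412 BTU/hr per kW, 12,000 BTU/hr per ton of cooling, 0.6 kW per ton for a centrifugal chiller) come from the article; treating the supporting-equipment load as a fixed fraction of chiller load is a simplifying assumption.

```python
# Cooling-power estimate for a conventionally cooled data center.
IT_LOAD_KW = 100.0            # heat load from servers and equipment
BTU_PER_KW = 3412.0           # 1 kW of heat ~ 3,412 BTU/hr
BTU_PER_TON = 12000.0         # 1 ton of cooling = 12,000 BTU/hr
CHILLER_KW_PER_TON = 0.6      # typical centrifugal chiller
SUPPORT_FRACTION = 1 / 3      # towers, pumps, etc., as a share of chiller load

heat_btu_hr = IT_LOAD_KW * BTU_PER_KW            # 341,200 BTU/hr
tons = heat_btu_hr / BTU_PER_TON                 # ~28.4 tons
chiller_kw = tons * CHILLER_KW_PER_TON           # ~17 kW
support_kw = chiller_kw * SUPPORT_FRACTION       # ~5.7 kW
total_cooling_kw = chiller_kw + support_kw       # ~22.7 kW

print(f"{tons:.1f} tons of cooling")
print(f"chiller: {chiller_kw:.1f} kW, support: {support_kw:.1f} kW")
print(f"total cooling overhead: {total_cooling_kw:.1f} kW")
```

Scaling the same ratio up (about 23% of IT load) is what yields the 1 to 1.2 MW of cooling load for a 5- to 6-MW facility.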
Several experiments have exposed servers to these conditions, but the findings held that relying on the environment alone for cooling generally shortened server equipment lifetimes. [Note: As manufacturers make their equipment more robust and capable of handling greater temperature extremes and swings, a natural-convection model may well become the norm in the future.] For now, the best solution is normally a combination of natural (economization, etc.) and mechanical processes; but for this to work and take full advantage of the environmental cooling available, the data center's temperature setpoint should be chosen to maximize effectiveness, that is, to maximize cost savings.

[Figure: 100kwdcheatflowmechanical3 (heat flow in a data center with economization)]

In this scenario (a data center with economization), the temperature we want to maintain directly affects how much we can use outside air for cooling. In a location like Phoenix, if our data center supply air temperature is set at 62°F, we can use outside air to cool the data center for approximately 3 months of the year. If we raise the setpoint to 72°F, that increases to approximately 6.5 months, a change that represents a potential cost savings of $120,000 to $160,000 a year. Average outside air temperature plays an important role in data center costs, so it pays to find a location that lets you use outside air for cooling. (This explains why the northern latitudes are sought-after data-center locations.) Many areas in the northern United States can support almost 8,000 hours a year of free cooling with a supply air temperature of 72°F.

Simply raising the temperature of the data center, then, will not by itself give you a large reduction in costs. Couple free cooling (economization) with raised data center temperatures, and you should see significant savings.

Guest blog by industry veteran Terry Vergon.
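The Phoenix savings estimate can be sketched the same way. The mechanical cooling load (~1.1 MW for a 5- to 6-MW facility) and the free-cooling windows come from the article; the $0.05/kWh electricity rate is an assumption chosen only to show how a figure in the article's $120,000 to $160,000 range can arise.

```python
# Rough annual savings from economizer (free-cooling) hours.
HOURS_PER_MONTH = 730.0       # average hours in a month
COOLING_LOAD_KW = 1100.0      # mechanical cooling load of a ~5-6 MW site
RATE_PER_KWH = 0.05           # assumed utility rate, $/kWh (hypothetical)

def annual_free_cooling_savings(free_months: float) -> float:
    """Dollars per year saved by idling mechanical cooling for
    `free_months` months of economizer operation."""
    free_hours = free_months * HOURS_PER_MONTH
    return free_hours * COOLING_LOAD_KW * RATE_PER_KWH

# Raising Phoenix supply air from 62 F to 72 F roughly doubles the
# economizer window: about 3 months -> about 6.5 months.
extra_savings = annual_free_cooling_savings(6.5) - annual_free_cooling_savings(3.0)
print(f"additional savings: ${extra_savings:,.0f}/yr")
```

The model ignores partial economization (mixed outside-air/mechanical hours), so it is a floor-level sketch rather than a design calculation.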