Tsohost Blog, News & Announcements

Data Centre Build: Managing Airflow

Posted Wednesday 9th Oct, 2013

9 Comments

At full capacity our 9,300 sq ft data hall will house 3,600 high spec servers that in total require a megawatt of electricity - that's enough to power a whole street of houses.
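As a quick sanity check on those headline figures (assuming the 1 MW is shared evenly across all 3,600 servers, which the figures above don't spell out), the budget per server comes out at a little under 300 W:

```python
# Rough per-server power budget at full capacity.
# Assumption: the quoted 1 MW is spread evenly across all 3,600 servers.
total_load_w = 1_000_000   # 1 megawatt
servers = 3_600

per_server_w = total_load_w / servers
print(f"Average budget per server: {per_server_w:.0f} W")  # ~278 W
```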

One of the biggest challenges we face is managing the humidity and temperature within this environment to ensure that our power consumption and impact on the environment is kept to a minimum, whilst ensuring optimal performance from our hardware.

To maintain consistent component temperatures, the internal fans within a server draw cold air in from the front and exhaust it to the rear. We therefore have to provide a constant supply of cool air to the front of each rack without unnecessarily cooling the surrounding space. We achieve this by utilising the latest cold aisle containment technology. The containment unit sits over the top of the racks, creating a 'bank' of cold air for the servers to draw from. Air is delivered from the cooling system to the cold aisle through the 40cm raised floor we have installed throughout the data centre. The floor void is pressurised slightly above the room, and air then exits through the vented floor tiles installed in the cold aisle.
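To give a feel for what managing that airflow looks like in practice, here is a minimal monitoring sketch (hypothetical sensor values and thresholds, not our actual tooling): the two things worth watching continuously are the supply temperature in the cold aisle and the over-pressure of the floor void, because if the void drops back to room pressure the vented tiles stop delivering air.

```python
# Minimal cold-aisle monitoring sketch (hypothetical readings and thresholds).
# The floor void should stay slightly above room pressure so the vented tiles
# keep delivering air, and the supply temperature should stay near its setpoint.

SUPPLY_TEMP_RANGE_C = (20.0, 24.0)   # assumed acceptable band around a 22C setpoint
MIN_VOID_OVERPRESSURE_PA = 5.0       # assumed minimum floor-void over-pressure

def check_cold_aisle(supply_temp_c, void_overpressure_pa):
    """Return a list of alarm messages for one set of sensor readings."""
    alarms = []
    low, high = SUPPLY_TEMP_RANGE_C
    if not low <= supply_temp_c <= high:
        alarms.append(f"Supply temperature {supply_temp_c:.1f}C outside {low}-{high}C band")
    if void_overpressure_pa < MIN_VOID_OVERPRESSURE_PA:
        alarms.append(f"Floor void over-pressure {void_overpressure_pa:.1f} Pa too low")
    return alarms

# Example: slightly warm supply air, healthy floor-void pressure.
print(check_cold_aisle(supply_temp_c=24.5, void_overpressure_pa=9.0))
```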

Inside the cold aisle containment the temperature will be approximately 22°C. Once heated by the servers, the air returning to the cooling units will be relatively warm, at approximately 30°C. We will continue to adjust settings and temperatures within the cooling system over time to achieve the maximum possible efficiency.
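Those two temperatures also give a feel for how much air has to move. Treating the figures above as roughly an 8°C rise across the servers and the full megawatt as heat to be removed (both simplifying assumptions), the required volumetric flow is V = P / (ρ × c_p × ΔT):

```python
# Back-of-the-envelope airflow needed to carry away the heat load.
# Assumptions: 1 MW of heat, 22C supply / 30C return (8 K rise),
# and typical air properties at around these conditions.
heat_load_w = 1_000_000      # W
delta_t_k = 30.0 - 22.0      # K, supply-to-return temperature rise
air_density = 1.2            # kg/m^3
air_specific_heat = 1005.0   # J/(kg*K)

flow_m3_per_s = heat_load_w / (air_density * air_specific_heat * delta_t_k)
print(f"Required airflow: ~{flow_m3_per_s:.0f} m^3/s "
      f"(~{flow_m3_per_s * 3600:,.0f} m^3/h)")   # roughly 104 m^3/s
```

The wider that temperature rise, the less air the cooling units have to move for the same load, which is one reason the ongoing tuning mentioned above is worth the effort.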


Comments

As we move towards more efficient processors with better temperature management (Haswell, Broadwell, etc.) and SSDs instead of HDDs, I wondered if cooling requirements would reduce. Certainly my new Haswell PC and MacBook seem to run much cooler and quieter.

@Alan, the point of having the server fronts in the cold aisle is that they draw air in at optimum temperatures, which means the cooling system can be tweaked to output at higher temperatures. That saves money and the environment, which, considering the vast number of data centres globally, is a requirement these days.
The efficiency comes from cold aisle containment providing cooling only to the parts of a data centre that need it, i.e. the server intakes. It also helps that the warm return air can be reused to heat other things - offices, nearby flats, etc.

The issue you pointed out with working in cold aisles is a real one - it isn't ideal, as working for hours on end you can get cold, and it's often dry air too. But 22 or 23 degrees isn't too bad. You get used to it.

In terms of containment efficiency, there will be some air escape when you open and close doors, and some through the gaps between server and cabinet - there is no perfect scenario, as you can't achieve 0% leakage.
However, cold aisle containment is vastly more efficient than going without. Without it you're either trying to cool the entire room to 22 degrees (with no under-floor supply), which is highly inefficient, or, even with an under-floor supply through vents to the server fronts, the hot and cold air mixes and the servers don't get the optimum temperatures.
Controlling the airflow and temperatures then becomes an ongoing job, changing the supply temps whenever people are working in there or the ambient temperature drops or rises. With containment, you can usually balance the supply temperature with only a few tweaks as the seasons change.

Very interesting. I see from the diagram and text that the fronts of the racks are in the “cool side”, which is where most maintenance tasks are conducted, so every time something requires attention the airflow is disrupted by opening the door. I understand the server/cabinet fans themselves would not push/pull the air in the other direction, which would allow the cool side to be at the rear, but if they did, maintenance could be carried out much more easily without the cool area being disturbed. Just a thought.

@ Will - That was what the data centre guys thought they had done, but alas the cold airflow still found a way!

We had one instance where there was a flow of cooler air that defied the intended route it was meant to take. As demonstrated in the impressive graphic you have made above, the cooler air was coming out from the CRAC units and flowing under the floor into the cold aisle, only to then snake its way across the ceiling and straight back into the CRAC unit - so essentially the CRAC unit was just cooling already-cold air!

Do you use thermal imaging to monitor the air flow at all? Recently we inspected a data centre in Fareham and although they had the hot and cold aisles, there were some anomalies where the airflow did not quite match up to the model.

If you house newer servers, you can save even more money (and help the environment) by raising the supply air temp to around 25 or 26 degrees, or even higher. The old magical 21-22 number is fading, with newer servers designed to run efficiently even with higher supply air temperatures.

Of course this depends on the efficiency of the air con units too - if they have to use condensers to cool because of the higher return air temps then it might not save anything. But I'm sure you've done all these calculations ;)
In the winter, if you have dry coolers outside, then you can usually get away with raising the supply temp to 25.

All looking good though! More reasons I’m glad I switched hosting to you :)

This is really good stuff. How many ACUs can fail before it becomes a problem?

Takes me back to the early 80s and my days in computer rooms, working mainly with IBM mainframes… water and air chilling was a BIG thing back then, involving huge equipment almost the same size as the mainframes themselves. Good to see that things haven't changed too much… maybe there's a job there for me still lol.