Hacker News
The 100 Degree Data Center (datacenterknowledge.com)
10 points by 1SockChuck on March 19, 2009 | 7 comments



When I first read the title, my first thought was, "Wow. A data center so hot that water will boil." I felt cheated when I realised it was only 100 degrees Fahrenheit and not 100 degrees Celsius.

Having said that, it still raises some interesting questions. As data centers get hotter, how will humans work in them? Will there be redundant cooling capacity that is only used when people are working in the data center? Will people be limited to 20-minute shifts? Will special suits with cooling be used?


Fortunately, there are a lot of electronic components that can't take the kind of heat you allude to (100C), and it would cost a lot to engineer components that could. 100F is a hot day in the southern US, and nothing compared to some of the warmer climates of the world. Compared to digging ditches or laying brick, standing next to a rack loading code onto a router is small potatoes.

I'll just be happy to work in a room where I can expect to be in short-sleeve shirts instead of having my fingers go numb next to a cold air vent.

If heat were ever to become that much of a problem, I'd expect we'd start seeing more solutions like HP's thermal management rack, where the racks are closed to the environment, take in cold water, and circulate hot water out.


So-called "cloud", "warehouse", and "megascale" data centers are designed to be pretty hands-off. 20-minute shifts would probably work.


Is there a way data centers could use all the heat generated to create more energy?


It's very hard to use such low-temp heat for anything useful... except possibly heating the rest of the building and (pre-heating) hot water.
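To put a rough number on "hard to use" (my own back-of-envelope, with assumed temperatures, not figures from the article): the Carnot limit on turning ~40C exhaust air into work against a ~25C ambient is only a few percent.

    # Carnot limit for low-grade server exhaust heat.
    # Temperatures below are illustrative assumptions.
    def carnot_efficiency(t_hot_c, t_cold_c):
        """Max fraction of heat convertible to work between two reservoirs."""
        return 1.0 - (t_cold_c + 273.15) / (t_hot_c + 273.15)

    print(f"{carnot_efficiency(40.0, 25.0):.1%}")  # ~4.8%: 40C exhaust vs 25C ambient
    print(f"{carnot_efficiency(60.0, 25.0):.1%}")  # ~10.5% even with 60C hot-aisle air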


IBM is experimenting with reselling waste heat: http://www.research.ibm.com/journal/abstracts/rd/533/brunsch...

In theory you might be able to run an absorption chiller on server waste heat, but I've only seen it done with generators.


I think it's more cost effective to just design components that are more efficient. I believe Google's datacenter engineers published a whitepaper that points this out. I can't find it right now.

Edit: Sorry about the appeal to authority... It can be explained in simpler terms. It's more cost effective to put less energy into the system in the first place than to try to gather some of the wasted energy after the fact.
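A made-up example of what I mean (all numbers are mine, not from any Google paper): for a hypothetical 1 MW IT load, shaving 10% off the input power saves more than a plausible waste-heat recovery scheme gets back, and needs no extra plant to buy and run.

    # Illustrative comparison; every number here is an assumption.
    it_load_kw = 1000.0        # hypothetical 1 MW IT load
    efficiency_gain = 0.10     # assume 10% saved by more efficient components
    recovery_fraction = 0.05   # assume ~5% of waste heat reclaimed as useful work

    saved_at_source_kw = it_load_kw * efficiency_gain   # 100 kW never drawn at all
    recovered_kw = it_load_kw * recovery_fraction       # 50 kW, plus recovery gear to build and maintain
    print(saved_at_source_kw, recovered_kw)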



