
I am sure they are. That is the rationale for the 'PUE' adjustment.

A typical data center charges a power 'rate' for your actual power (either metered or fixed) and then an 'environment' or 'rent' charge which covers the cost of the power they are buying to keep your stuff cool. The ratio of the total power used, inclusive of the data center's overhead, to the power you use in your machines, is the "PUE factor". Old school data centers, which were built on a model similar to the mainframe 'machine room' that larger corporations had, had very expensive and clunky cooling, which meant that while you paid maybe 11 cents per kWh of power to compute, the data center was charging you 25 cents per kWh to remove that heat, so the ratio would be (25+11)/11, or 3.27! Facilities built in the 90's had ratios in the 2.2 - 2.7 range, ones built in the oughts are usually 1.8 - 2.3, and ones built in the 10's are closer to 1.3 - 1.8. Dedicated facilities like those Facebook, Amazon, and Google build get even better ratios.
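As a minimal sketch, the PUE arithmetic above works out like this (illustrative rates, not quotes from any particular facility):

```python
# PUE = total facility power / IT power; expressed here via the
# per-kWh rates from the machine-room example above.
compute_rate = 0.11  # $/kWh for power delivered to your machines
cooling_rate = 0.25  # $/kWh charged to remove that heat

pue = (compute_rate + cooling_rate) / compute_rate
print(round(pue, 2))  # 3.27
```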

Another thing these places will offer is called 'smart hands' or 'remote hands' or 'tech on demand': for $50 - $150/hr someone will go out and swap a part that you've drop shipped to the place. Assuming you've got power strips that you can remotely power cycle and IPMI boards for doing 'boot from BIOS' type work, you generally need no staff at all on-site, so your grossed up power/cooling/rent charge is all you end up paying. This makes it easy for ops guys like me to compare options, which range from a low of about $150/kW-month to $250/kW-month (some 'retail' co-location facilities go as high as $600/kW-month, but that is not a bulk deal for someone like AOL or a search engine like Blekko). So at the low end of $150/kW-month, that is $150 x 12, or $1,800 per kW per year, and a 1U server pulling < 500 watts costs around $750/year in all-up data center 'recurring' costs. At $250/kW-month it's only $1,500 a year. My estimate of $1,084/year earlier is pretty achievable for anyone putting 9,500 machines into a data center.
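A quick sketch of the per-server math, assuming the per-kW-month rates quoted above (a rough illustration, not a real quote):

```python
def annual_colo_cost(rate_per_kw_month, watts):
    """All-up recurring data center cost per year for a given power draw."""
    return rate_per_kw_month * 12 * watts / 1000.0

# A 1U server drawing 500 W at the $250/kW-month rate:
print(annual_colo_cost(250, 500))  # 1500.0, matching the figure above
```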

[edit: remove asterisks]



Interesting. Do you have pointers to some background material? I'd like to see what made cooling more efficient. Thanks!


Well, there are the papers Google presented at their data center summit [1], and the whole Open Compute Project [2] touches on this as well.

Simply put the evolution has followed this path:

Machine Room -> air temp is 68 degrees 24/7/365

Modified Machine Room -> rows of machines are lined up alternately facing toward each other and away from each other, cold air is preferentially directed up from the floors in the 'cold' aisles.

Tolerance Limits -> Google and others establish that 'commodity' machines work just fine at an ambient temperature of 80 - 90 degrees F, so they cut back on the level of cooling and substitute external air when it's cooler than 75 degrees outside.

Full containment -> various systems provide cooling just to the active hardware; places like Switch's SuperNAP in Vegas build structures around the rows, and third parties put plastic enclosures around rows to contain the cold air, or to force all the hot air out through a plenum.

Most of the 'win' has come from reducing the temperature differential between the data center and the ambient air, and from reducing the volume of air that has to be cooled.

Once you do that, alternate air cooling methods (like evaporative cooling) can be used rather than compressor chillers.

[1] http://www.google.com/about/datacenters/best-practices.html (they have additional tricks up their sleeve)

[2] http://opencompute.org/project_category/data-center-technolo...




