Jeff Atwood notes, in "When Hardware is Free, Power is Expensive":
Over the last three generations of Google’s computing infrastructure, performance has nearly doubled, Barroso said. But because performance per watt remained nearly unchanged, that means electricity consumption has also almost doubled.
Thus, Google has decided to build its data centers near cheap power.
It has also decided to build its servers using especially efficient power supplies:
The power supply to servers is one place that energy is unnecessarily lost. One-third of the electricity running through a typical power supply leaks out as heat, [Hölzle] said. That's a waste of energy and also creates additional costs in the cooling necessary because of the heat added to a building. Rather than waste the electricity and incur the additional costs for cooling, Google has power supplies specially made that are 90% efficient.
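The arithmetic behind that quote can be sketched in a few lines. This is a hypothetical illustration, not Google's actual accounting: a supply that leaks one-third of its input as heat is roughly 67% efficient, and the heat it sheds is simply wall draw minus useful DC output.

```python
# Hypothetical sketch (illustrative numbers, not Google's): watts a power
# supply dumps into the room as heat, for a given DC load and efficiency.
def psu_heat_watts(dc_load_w: float, efficiency: float) -> float:
    """Wall-power draw minus useful DC output equals waste heat."""
    wall_w = dc_load_w / efficiency  # power pulled from the outlet
    return wall_w - dc_load_w        # everything not delivered as DC is heat

# For a 200 W DC load: a 2/3-efficient supply wastes 100 W as heat,
# while a 90%-efficient one wastes about 22 W.
```

Every watt of that heat then has to be removed by the building's cooling, which is why the waste costs twice.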
Of course, the power supplies Google uses reach peak efficiency at fairly high loads, which home users rarely hit:
Unless you’re a gamer, you won’t even come close to 200 watts of power usage, even under full load. And how often is your PC operating at full load? If you’re like most users, almost never. Your PC is statistically idle 99% of the time it is turned on. Idle power consumption for a typical desktop PC ranges between 120 and 150 watts. Thus, the real challenge is to deliver 90%+ efficiency at typical idle power consumption levels — 120–150 watts.
At the “insanely expensive California power rates” in his area, Atwood would save $30 per year with a more efficient power supply — which doesn’t justify buying a new power supply. Some people, like commenter Daniel Lehmann, don’t get it though:
The first priority should always be energy savings, not your personal money issues…
I found this comment intriguing:
Cisco’s newest labs have most devices running off DC (48V). They don’t have the overhead (heat) of AC-DC conversion in every device.