There was an article in the January edition of Water Energy & Environment with several intelligent insights into this problem, for example:
The root of the problem (argues the article) is that application architects have no idea how much power each application uses, let alone incentives to consider reducing power consumption. Some of these same observations were highlighted in an excellent 2014 NRDC report that also makes the point about aligning IT incentives with power conservation objectives.
I’d argue that the best way to conserve datacenter energy is to question the simplistic assumption that “cheap commodity servers,” scaled horizontally as necessary, are the solution to every problem. You can cut server count substantially by choosing higher density solutions. For our clients, deploying a single Solace appliance allows them to retire (or not buy in the first place) 10 to 50 servers running software-based messaging (the specific ratio depends on the use case and the software they’re using). The energy footprint of a Solace appliance is approximately the same as a single server, so the energy saving is 90-98% across the messaging estate – with better performance, higher reliability and lower cost (you don’t have to buy those 50 servers). Some of our largest customers have more than 10,000 servers primarily doing messaging, so the power and monetary savings can really add up. This one change doesn’t solve the runaway datacenter sprawl problem by itself, but it’s a step in the right direction.
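The arithmetic behind those percentages is straightforward, and worth making explicit. Here’s a back-of-envelope sketch in Python; the wattage figures are illustrative assumptions (the text only says the appliance draws roughly what one server does), not measured values:

```python
# Back-of-envelope estimate of messaging-tier power savings when one
# appliance replaces N commodity servers. Wattages are assumed values
# for illustration only; the key premise from the text is that the
# appliance draws about the same power as a single server.

SERVER_WATTS = 400      # assumed average draw per commodity server
APPLIANCE_WATTS = 400   # appliance draw ~ one server (per the text)

def power_saving(servers_replaced: int) -> float:
    """Fraction of messaging-tier power saved by the consolidation."""
    before = servers_replaced * SERVER_WATTS
    after = APPLIANCE_WATTS
    return 1 - after / before

for n in (10, 50):
    print(f"{n} servers -> 1 appliance: {power_saving(n):.0%} saved")
```

At a 10:1 consolidation ratio the saving is 90%; at 50:1 it is 98% – which is where the 90-98% range comes from. Note the absolute wattage assumptions cancel out: as long as the appliance draws about as much as one server, only the consolidation ratio matters.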