Thursday, November 19, 2009

Cutting the Electric Bill for Internet-Scale Systems

This was an interesting paper, and a bit of a shift from the types of papers we have read so far in the class. There are two main motivators for the paper:
  1. Internet-scale systems are already widely replicated. Major providers (the Googles and Amazons of the world) already have data centers in varied geographic locations and can route traffic for a particular application to any of a number of those data centers. In short, a request can be serviced (almost) equally well by any of several data centers.
  2. Power does not cost the same amount in disparate geographic locations. The paper presents a lot of data on this front and determines that there is a large hour-by-hour pricing differential between energy markets.
Given the above motivators, it makes sense for an Internet-scale system to route requests to data centers where power is as cheap as possible. This of course assumes that keeping data centers at low load allows one to save power. The paper calls this property energy elasticity, and the authors estimate that current systems draw about 60% of their peak power even when idle. This is not a great number, but this is an active area of research and one can probably safely assume that it will only get better. Systems that can actually turn off machines under low load could drive idle power toward zero, i.e., perfect elasticity.
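To make the elasticity point concrete, here is a minimal sketch (not from the paper) of a linear power model. The function name, wattage, and the exact 60% idle fraction are illustrative assumptions; the point is that with high idle power, routing load away from a data center saves only the load-proportional slice of its draw.

```python
def power_draw(utilization, peak_watts=1000.0, idle_fraction=0.6):
    """Power consumed at a given utilization (0.0 to 1.0).

    Hypothetical linear model: a machine draws idle_fraction of its
    peak power at zero load, and ramps linearly up to peak at full load.
    """
    idle_watts = idle_fraction * peak_watts
    return idle_watts + (peak_watts - idle_watts) * utilization

# With 60% idle power, draining a data center to zero load still
# leaves 60% of peak draw on the meter; only the remaining 40% can
# be shifted to a cheaper market.
print(power_draw(1.0))                      # full load: 1000.0 W
print(power_draw(0.0))                      # idle: 600.0 W
print(power_draw(0.0, idle_fraction=0.0))   # perfect elasticity: 0.0 W
```

Under this model, the savings from price-aware routing scale with how close `idle_fraction` gets to zero, which is why the authors' estimate matters so much to the bottom line.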

The authors use numbers from Akamai to run some simulations and show that significant savings are possible. One roadblock is the 95/5 billing model, in which traffic is sampled in 5-minute intervals and the 95th-percentile sample is used for billing. This inhibits the amount of work the system can actually offload, since shifting traffic to a cheap-power site can raise that site's billed bandwidth. By relaxing this constraint the authors can do quite a bit better.
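The 95/5 mechanism can be sketched in a few lines. This is a generic nearest-rank percentile computation with hypothetical traffic numbers, not Akamai's actual billing code; it shows why bursting above the billed rate for less than 5% of the month is "free", and conversely why sustained offloaded traffic is not.

```python
import math

def billed_rate(samples_mbps):
    """95th percentile of 5-minute traffic samples (nearest-rank method).

    Under 95/5 billing, the top 5% of samples are discarded and the
    customer pays for the highest remaining sample.
    """
    ordered = sorted(samples_mbps)
    rank = math.ceil(0.95 * len(ordered))  # 1-indexed nearest rank
    return ordered[rank - 1]

# Illustrative: 20 samples at 100 Mbps with one burst to 900 Mbps.
# The burst falls in the discarded top 5%, so the bill stays at 100.
samples = [100] * 19 + [900]
print(billed_rate(samples))  # 100
```

The flip side is the roadblock the authors describe: routing extra load to a data center for more than 5% of the billing period pushes up its 95th-percentile sample, so bandwidth costs can eat into the power savings.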

I thought this was an interesting paper and one I would keep on the syllabus.
