Love them or hate them, you have to give Google credit for finding ways to optimize their energy use. For a company as large as Google, running as many servers as they do, it's important to both their bottom line and the environment that things run efficiently. Today they released a white paper detailing how they're using machine learning to optimize energy use in their data centers and drive consumption even lower.
From the Google blog post:

It's no secret that we're obsessed with saving energy. For over a decade we've been designing and building data centers that use half the energy of a typical data center, and we're always looking for ways to reduce our energy use even further. In our pursuit of extreme efficiency, we've hit upon a new tool: machine learning. Today we're releasing a white paper (PDF) on how we're using neural networks to optimize data center operations and drive our energy use to new lows.
It all started as a 20 percent project, a Google tradition of carving out time for work that falls outside of one's official job description. Jim Gao, an engineer on our data center team, is well-acquainted with the operational data we gather daily in the course of running our data centers. We calculate PUE (power usage effectiveness), a measure of energy efficiency, every 30 seconds, and we're constantly tracking things like total IT load (the amount of energy our servers and networking equipment are using at any time), outside air temperature (which affects how our cooling towers work) and the levels at which we set our mechanical and cooling equipment. Being a smart guy (our affectionate nickname for him is "Boy Genius"), Jim realized that we could be doing more with this data. He studied up on machine learning and started building models to predict, and improve, data center performance.
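To make the idea concrete: the paper describes models that predict PUE from operational features like IT load, outside air temperature, and cooling setpoints. Here is a minimal toy sketch of that idea in Python. Everything in it is synthetic and hypothetical (the data, the feature ranges, the "ground truth" formula, and the use of a simple linear model instead of Google's neural networks); it only illustrates the shape of the problem.

```python
# Toy sketch in the spirit of the white paper: fit a simple model that
# predicts PUE from operational features (IT load, outside air temperature,
# cooling setpoint). Everything here is synthetic and hypothetical --
# Google's actual models are neural networks trained on real telemetry.

import random

random.seed(0)

def synthetic_sample():
    """One fake observation: (IT load in MW, outside temp in C, setpoint in C) -> PUE."""
    it_load = random.uniform(5.0, 20.0)
    outside_temp = random.uniform(-5.0, 35.0)
    setpoint = random.uniform(15.0, 25.0)
    # Invented ground-truth relationship (NOT Google's model): hotter weather,
    # higher load, and warmer setpoints each nudge PUE upward slightly.
    pue = 1.10 + 0.004 * it_load + 0.003 * outside_temp + 0.002 * setpoint
    return [it_load, outside_temp, setpoint], pue

def scale(features):
    """Rescale each raw feature to roughly [0, 1] so gradient descent behaves well."""
    it_load, outside_temp, setpoint = features
    return [(it_load - 5.0) / 15.0,
            (outside_temp + 5.0) / 40.0,
            (setpoint - 15.0) / 10.0]

data = [synthetic_sample() for _ in range(400)]

# Train a linear model with plain stochastic gradient descent on squared error.
weights, bias, lr = [0.0, 0.0, 0.0], 1.0, 0.05
for _ in range(300):
    for features, target in data:
        x = scale(features)
        pred = bias + sum(w * xi for w, xi in zip(weights, x))
        err = pred - target
        bias -= lr * err
        weights = [w - lr * err * xi for w, xi in zip(weights, x)]

# Check the fit on one observation.
features, target = data[0]
pred = bias + sum(w * xi for w, xi in zip(weights, scale(features)))
print(f"predicted PUE {pred:.3f} vs actual {target:.3f}")
```

Once a model like this predicts PUE accurately, operators can ask "what if" questions, such as how efficiency would change if a cooling setpoint were adjusted, which is the practical payoff the paper describes.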
Be sure to hit the source link below to read the entire Google blog post, and check out this video from CBS for an inside look at Google's data centers.
Source: Google Blog