From Sustainable IT to IT for Sustainability
Zhenhua Liu, HLF14 participant: It is now widely recognized that data centers are significant consumers of energy and a substantial source of greenhouse gas emissions. Statistics abound:
- Worldwide data centers consume as much electricity as the United Kingdom does.
- The Internet produces emissions comparable to the airline industry.
- An individual server has emissions nearly as large as a car's.
- And IT companies such as Facebook spend millions of dollars each month on electricity bills for their data centers.
Most tellingly, electricity use by data centers in the United States is growing more than 12 times as fast as total US electricity usage. Consequently, the typical stories surrounding data centers and energy are extremely negative: data centers are energy hogs.
Energy and sustainability have become among the most critical issues of our generation. While the abundant potential of renewable sources such as solar and wind offers a real opportunity for sustainability, their intermittency and unreliability present a daunting operating challenge for our somewhat outdated electricity grids. To cope with the higher supply volatility that goes hand in hand with greater renewable usage, reserve capacity would have to increase tremendously, which would essentially neutralize the benefits of the additional renewable generation.
The key idea behind my research is that these two challenges are in fact symbiotic: data centers can serve as virtual batteries for the electricity grid. Specifically, the energy-intensive loads of data centers are large but also flexible: they can often be shifted in time, curtailed via quality degradation, or even shifted geographically. Additional flexibility can come from the control of cooling and the use of power microgrids. If the grid can call on these flexibilities via demand response programs, they can become a crucial tool for easing the incorporation of renewable energy. This interaction has great potential to be a “win-win”, not least because the financial benefits of participating in demand response programs can help data centers ease the burden of skyrocketing energy costs. Our recent study with Southern California Edison shows that a typically sized (20 MW) data center can provide the same value to the grid as ∼1 MWh of optimally placed, fast-response storage. Taken worldwide, this places the potential of data center demand response at nearly $20 billion.
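To make the time-shifting flexibility concrete, here is a toy sketch of my own (a hypothetical illustration, not an algorithm from the study): a deferrable batch workload is greedily scheduled into the hours with the largest renewable surplus, reducing how much non-renewable power must be drawn from the grid. All names, numbers, and the per-hour capacity limit are invented for the example.

```python
# Toy illustration (hypothetical, not from any real data center):
# place a deferrable batch workload into the hours with the most
# unused renewable supply, subject to a per-hour capacity limit.

def schedule_flexible_load(renewable, baseline, flexible_total, cap):
    """Greedy time-shifting: assign `flexible_total` units of deferrable
    work to the hours with the largest renewable surplus, at most `cap`
    units per hour. Returns the per-hour schedule of flexible work."""
    hours = len(renewable)
    surplus = [renewable[t] - baseline[t] for t in range(hours)]
    # serve the hours with the biggest renewable surplus first
    order = sorted(range(hours), key=lambda t: surplus[t], reverse=True)
    schedule = [0.0] * hours
    remaining = flexible_total
    for t in order:
        if remaining <= 0:
            break
        amount = min(cap, remaining)
        schedule[t] = amount
        remaining -= amount
    return schedule

# Example: 6 hours with a solar-like renewable profile (arbitrary units).
renewable = [0, 2, 8, 9, 3, 0]
baseline = [4, 4, 4, 4, 4, 4]  # inflexible load in each hour
flex = schedule_flexible_load(renewable, baseline, flexible_total=6, cap=4)
# grid draw = load not covered by renewables, summed over hours
grid_draw = sum(max(0, baseline[t] + flex[t] - renewable[t])
                for t in range(len(renewable)))
print(flex)       # the flexible work lands in the sunny midday hours
print(grid_draw)
```

Real systems replace this greedy rule with an optimization that also accounts for deadlines, performance guarantees, and electricity prices, but the underlying lever is the same: moving flexible demand toward the renewable supply.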
Unfortunately, despite this great potential, the current reality is that data centers perform little, if any, demand response. There are many reasons for this, but perhaps the biggest is the interdisciplinary challenge spanning both engineering and economics.
- At the local level, it remains an open problem to operate data centers under the great uncertainties introduced by demand response programs while still providing performance guarantees.
- At the global level, the demand response programs that exist today are not suited to the load profiles and risk tolerance of data centers, for which availability and performance are crucial concerns.
These two challenges are closely coupled and have to be solved simultaneously. Further, large data centers are significant consumers for local utility companies (for example, Facebook's Oregon data center consumes as much electricity as the local county does) and therefore wield large market power, which existing programs do not handle well.
My overarching research goal on this topic is to develop an intellectual framework for understanding and guiding the realization of demand response from cloud data centers, addressing both the engineering and the economic challenges, in order to capture this historic opportunity and manage the daunting risks.
In particular, my research seeks
(i) to quantify the economic and environmental potential of demand response from cloud data centers, and
(ii) to tackle the algorithmic and economic challenges for cloud data centers to participate in demand response programs.
Thus, the results of this research will
(i) at a global level, help utility companies and load serving entities realize the great potential that lies in the Cloud, and furthermore, design demand response programs that provide the right incentives for data center operators to participate;
(ii) at a local level, help guide the management of (geographically distributed) data centers in participating in the right demand response programs, and in dealing with risk management and distributed control.
It is now my great pleasure to attend the 2nd HLF and to have the chance to interact with the world's brightest minds. If you are working in related fields, please let's chat!
Zhenhua Liu is currently an assistant professor of applied mathematics and computer science at Stony Brook University (on leave for the ITRI-Rosenfeld Fellowship at Lawrence Berkeley National Laboratory in 2014-15). He received his PhD in Computer Science in June 2014 from the California Institute of Technology (Caltech), where he was co-advised by Prof. Adam Wierman and Prof. Steven Low. His research interests include networking and systems, renewable energy integration, cloud-based platforms for big data and energy management, and optimization and scheduling. His PhD work is widely cited and recognized in academia, including Best Paper Awards at ACM GreenMetrics and the IEEE Green Computing Conference, and the Pick of the Month award from the IEEE Special Technical Committee on Sustainable Computing. In industry, he helped HP design and implement the industry's first Net-zero Energy Data Center, which was named a 2013 Computerworld Honors Laureate.