
DATA CENTER ENERGY EFFICIENCY

A Dissertation Submitted to the Faculty of Purdue University by Heather Brotherton

In Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

August 2014
Purdue University
West Lafayette, Indiana



TABLE OF CONTENTS

LIST OF TABLES ..... vi
LIST OF FIGURES ..... vii
ACRONYMS ..... viii
ABSTRACT ..... ix

CHAPTER 1 INTRODUCTION ..... 1
1.1 Introduction ..... 1
1.2 Introduction to dissertation ..... 1
1.3 Statement of Purpose ..... 4
1.4 Research Question ..... 4
1.5 Scope ..... 4
1.6 Significance ..... 5
1.7 Assumptions ..... 6
1.8 Limitations ..... 6
1.9 Delimitations ..... 6
1.10 Objectives ..... 7
1.11 Project deliverables ..... 7
1.12 Intellectual challenge ..... 8

CHAPTER 2 LITERATURE REVIEW ..... 9
2.1 Introduction ..... 9
2.2 Background ..... 9
2.3 Measuring data center efficiency ..... 12
2.4 Building ..... 13
2.5 Data center equipment/components ..... 17
  2.5.1 UPS ..... 17
  2.5.2 PDU ..... 18
  2.5.3 Power Supply ..... 18
  2.5.4 Storage ..... 19
  2.5.5 Server: Superfluous Components ..... 20
  2.5.6 Firmware: Advanced Configuration and Power Interface (ACPI) ..... 21
2.6 Software ..... 22
  2.6.1 Virtualization ..... 23
  2.6.2 Job scheduling ..... 25
  2.6.3 Power Management ..... 25
  2.6.4 Operating System ..... 26
  2.6.5 De-duplication ..... 28
  2.6.6 Policies ..... 28
2.7 Conclusion ..... 30

CHAPTER 3 RESEARCH METHOD ..... 31
3.1 Introduction ..... 31
3.2 Method ..... 31
  3.2.1 Case Studies ..... 31
  3.2.2 Experiment ..... 32
3.3 Researcher Bias ..... 35
3.4 Exploratory Sample ..... 35
3.5 Equipment ..... 36
3.6 Conclusion ..... 38

CHAPTER 4 DATA COLLECTED ..... 39
4.1 Introduction ..... 39
4.1 Server ..... 39
4.2 Server Operating Systems ..... 41
4.3 Preliminary data ..... 43
4.4 Conclusion ..... 44

CHAPTER 5 DATA ANALYSIS ..... 46
5.1 Introduction ..... 46
5.2 Operating system data: GUI vs. Non-GUI ..... 46
5.3 Preliminary data ..... 48
5.4 Conclusion ..... 49

CHAPTER 6 CASE STUDIES: INNOVATIVE DATA CENTERS ..... 51
6.1 Introduction ..... 51
6.2 Otherworld computing ..... 51
  6.2.1 Building ..... 51
  6.2.2 Power ..... 52
  6.2.3 Hardware ..... 53
6.3 Google data centers ..... 54
  6.3.1 Building ..... 54
  6.3.2 Power distribution ..... 55
  6.3.3 Servers ..... 55
6.4 Microsoft Dublin data center ..... 56
  6.4.1 Building ..... 57
6.5 Facebook Prineville data center ..... 60
  6.5.1 Building ..... 60
  6.5.2 Data center equipment ..... 61
6.6 Discussion/findings ..... 63
6.7 Conclusion ..... 65

CHAPTER 7 THE WAY FORWARD ..... 67
7.1 Introduction ..... 67
7.2 Educating for data center efficiency ..... 67
7.3 Introduction to grid independent inspiration ..... 69
7.4 Net-Zero data center framework ..... 70
  7.4.1 Location ..... 70
  7.4.2 Building ..... 72
  7.4.3 UPS ..... 73
  7.4.4 PDU ..... 73
  7.4.5 Power Supply ..... 74
  7.4.6 Server ..... 74
  7.4.7 Storage ..... 75
  7.4.8 Software and Policy ..... 76
7.5 Conclusion ..... 76

CHAPTER 8 CONCLUSION ..... 77
8.1 Introduction ..... 77
8.2 Research Overview ..... 77
8.3 Experimentation, Evaluation and Limitation ..... 78
8.4 Deliverables of Dissertation ..... 79
8.5 Contributions to the Body of Knowledge ..... 79
8.6 Future Work and Research ..... 81
8.7 Conclusion ..... 82

LIST OF REFERENCES ..... 83
APPENDICES
  Appendix A ..... 98
  Appendix B ..... 103
VITA ..... 104


LIST OF TABLES

Table 2.1 CEEDA Assessment Performance Areas (BCS) ..... 15
Table 2.2 LEED Certification Levels ..... 16
Table 3.1 Configuration of Dell PowerEdge 1950 Servers ..... 36
Table 3.2 Shuttle XS35v2 Barebones Mini PC ..... 36
Table 4.1 Energy consumption by operating system ..... 42
Table 4.2 Energy consumption by operating system ..... 44
Table 5.1 OS: Windows 2008 R2 Datacenter (GUI) ..... 47
Table 5.2 OS: Windows 2008 R2 Datacenter Core (Non-GUI) ..... 47
Table 6.1 Aspects of Energy Efficiency Matrix ..... 64


LIST OF FIGURES

Figure 1.1 World Energy Consumption ..... 3
Figure 2.1 Watts lost per server ..... 10
Figure 2.2 The Cascade Effect ..... 12
Figure 2.3 Free-cooling calculator ..... 14
Figure 2.4 Virtualization ..... 23
Figure 3.1 Sample top output to file ..... 33
Figure 3.2 Sample Typeperf output to file ..... 34
Figure 3.3 Custom Micro Server used in testing ..... 37
Figure 3.4 Watts up? Pro ..... 38
Figure 4.1 Baseline Watt Consumption of Barebones Server ..... 39
Figure 4.2 Server energy consumption after the addition of 4GB RAM ..... 40
Figure 4.3 Server energy consumption after Solid State Drive installation ..... 41
Figure 4.4 Comparison of watts consumed by operating system ..... 42
Figure 4.5 Data collected during preliminary testing ..... 43
Figure 5.1 Windows 2008 versions vs. baseline hardware power consumption ..... 48
Figure 6.4 Power Usage Effectiveness Comparisons: PUE ..... 63
Figure 7.1 U.S. Free cooling and installed wind power map ..... 71
Figure 7.2 Proposed active/active cluster configuration ..... 72
Figure 7.3 Logical request-storage access flow ..... 75


ACRONYMS

ACPI   Advanced Configuration and Power Interface
BIOS   Basic Input/Output System
BTU    British Thermal Unit
CRAC   Computer room air conditioning
EMS    Energy management system
EPA    Environmental Protection Agency
FAWN   Fast Array of Wimpy Nodes
GUI    Graphical User Interface
HDD    Hard Disk Drive
HVAC   Heating, ventilation, and air conditioning
I/O    Input/output
ICS    Industrial control systems
kWh    Kilowatt-hour
LEED   Leadership in Energy and Environmental Design
MWh    Megawatt-hour
OS     Operating System
OSPM   Operating System Power Management
PC     Personal Computer
PDU    Power Distribution Unit
PUE    Power Usage Effectiveness
RAM    Random Access Memory
SCADA  Supervisory control and data acquisition
SSD    Solid State Drive
USB    Universal Serial Bus
VM     Virtual Machine


ABSTRACT

Brotherton, Heather. Ph.D., Purdue University, August 2014. Data Center Energy Efficiency. Major Professor: J. Eric Dietz.

Data centers are estimated to consume at least one percent of the world's energy. This work discusses why data centers consume such a large amount of energy, presents methods to reduce energy consumption in data centers, and describes techniques to improve future data centers and the knowledge base of information technologists. The feasibility of operating high availability, grid independent data centers solely from renewable energy sources is explored. Findings derived from experimentation and research indicate that a data center designed to operate primarily from intermittent renewable energy resources is feasible. By implementing the findings of this dissertation and extending the research, enough efficiency gains are possible to allow current technology to be used to create a grid independent data center prototype. This dissertation concludes that the best way to increase energy efficiency is through education in the principles of data center energy efficiency. Sufficient information is available to reduce data center energy consumption by nearly half. A GUI based OS is shown to add an overhead of as much as 119.28 watts per server. Geoclustered, containerized data centers that are designed to exploit free cooling, contain non-GUI OS servers, and are co-located with wind turbines are the recommended implementation of a grid independent data center.


CHAPTER 1 INTRODUCTION

1.1 Introduction

This chapter presents the introduction to the dissertation in Section 1.2. The statement of purpose is presented in Section 1.3. The research question is stated in Section 1.4, followed by the scope of the dissertation in Section 1.5. The dissertation's significance is presented in Section 1.6. Assumptions are provided in Section 1.7, followed by limitations in Section 1.8 and delimitations in Section 1.9. The objectives are presented in Section 1.10 and the project deliverables in Section 1.11. The chapter closes with the intellectual challenge in Section 1.12.

1.2 Introduction to dissertation

Limited fossil fuel resources are a fact of our existence, and these non-renewable resources become increasingly scarce as the years pass and mankind continues to deplete them. Many of the solutions proposed to address our dependence on non-renewable resources are technology based, and most of the technologies that would be employed depend upon information technology. The research and development of these solutions are also dependent upon information technology.

Increasingly, information technology is being woven into the fabric of our daily lives. Some information technologies are obvious, such as cell phones, email, social media, and banking. Others are embedded into the infrastructure, such as the controlling and monitoring systems that manage the power grid. Information technology is rapidly becoming a major contributing factor to our unsustainable consumption of non-renewable energy sources. Information technology also generates landfill waste, some of which is toxic. Information technology, information systems, and supporting data centers are necessary evils of our modern landscape. However, information technology also presents many possible solutions to help us manage our resources more intelligently and, perhaps more importantly, automatically. Smart home technology is a good example of technology used to automatically and intelligently manage household resource or energy use. There are many new tools that can be installed in homes and businesses. A commonly used one is the programmable thermostat. Not only does it allow one to vary temperature based on household habits, but these thermostats are also increasingly tied to the "cloud," allowing immediate changes to be made remotely, for example adjusting the temperature when you find you are going home earlier than your set routine. Again, there is a hidden cost: the "cloud." This seemingly ephemeral word is often used to describe Internet based services. The supporting infrastructure is hidden from the end user, in this case our homeowner. Internet based services are anything but the ephemeral facade the world sees. Most Internet services are supported by data centers and highly skilled workers who make themselves available around the clock to maintain the effortless appearance. Data centers consume an estimated 1.1-1.5 percent of the world's energy.

Figure 1.1 World Energy Consumption (Central Intelligence Agency, n.d.)

The World Energy Consumption graphic gives a sense of scale; one percent is equivalent to the energy consumption of Sweden. Reduction of data center energy consumption is the focus of this work. More specifically, this work will discuss why data centers consume such a large amount of energy. Methods to reduce energy consumption in data centers will be presented. Techniques to improve future data centers and the knowledge base of information technologists will be discussed and presented. Finally, a theoretical framework for a net-zero energy, grid independent, high availability data center will also be presented.

1.3 Statement of Purpose

The purpose of this work is to create a framework for a net-zero energy, grid independent, and high availability data center.

1.4 Research Question

Is it feasible to operate a high availability, grid independent data center solely from renewable energy sources?

1.5 Scope

The scope of this research will focus on designing a logical framework for a data center that can be reliably operated solely from intermittent renewable energy sources. Also included in the scope of the project is educating for data center energy efficiency. The primary difficulty in achieving this is that technology is in a state of constant change. For example, a possible guideline could be that operating systems without graphical interfaces are more efficient and therefore would be recommended whenever possible. It is also possible that there is no substantial difference in power usage between an operating system with a graphical interface and one without. This work will not recommend a specific operating system such as Windows Server 2008 R2, because this sort of recommendation will quickly become outdated. Creating energy benchmarks for all software is impractical for this dissertation; benchmarking operating systems is far more accessible and far more useful to the community at large. Very few information systems run without an operating system, so any energy savings revealed between operating systems could have large-scale effects if more efficient operating systems are chosen.

1.6 Significance

Use of fossil fuels for power has been a concern for decades. Increased dependence upon computer information systems has resulted in the proliferation of high-energy-demand data centers. It is obvious that data centers are not going away. The good news is that computing may actually alleviate some resource issues. For example, Google states that the average search uses 0.0003 kWh of power (Google, n.d.). Considering this, using Internet searches to research and purchase products could be a major boon to the environment by reducing the carbon footprint of the purchase in comparison to the more traditional method of driving around to find, compare, and ultimately purchase products. In addition, computer information systems and computerized controllers are key components of energy conservation. Energy costs are now higher than the equipment costs in some data centers. Data centers are typically located in areas where the cost of electricity is higher than the national average, which exacerbates the problem of high energy cost (Fan, Weber, & Barroso, 2007). According to Google, "if all data centers operated at the same efficiency as ours, the U.S. alone would save enough electricity to power every household within the city limits of Atlanta, Los Angeles, Chicago, and Washington, D.C." (Google, n.d., ¶ 4). Reducing data center energy usage has proven to be a worthwhile pursuit with financial and environmental benefits. Data centers have evolved over the years as computing equipment evolved. Servers have become smaller and more powerful, but increases in computing power often result in more heat production, increasing cooling requirements. Increased demand for computing services has resulted in data centers that are strained for space despite the fact that servers have become smaller.

1.7 Assumptions

Assumptions for this dissertation include:
• Case study and document analysis is appropriate to explore the topic of interest.
• Publicly available documents will provide sufficient evidence of data center efficiency best practices.
• Data provided in documents are reliable and valid.

1.8 Limitations

The ability to create primary research data is limited because:
• It is not feasible to create an entire data center given the time and budget constraints of this dissertation.
• Access to server, software, and energy monitoring equipment is limited.
• Data centers are highly complex and require many skilled professionals with expertise in many fields. Therefore, the depth into which this dissertation will delve will be based on findings.
• There may be details revealed during the course of the dissertation that I may not reveal due to non-disclosure agreements.

1.9 Delimitations

• The number of case studies will be limited to ensure depth of research for each study.
• Due to proprietary interests, there may be a lack of detail in some areas of the case studies.
• This exploratory study will not attempt to create any new product, only review what exists and make suggestions for more efficient use and possible future improvements.

1.10 Objectives

The aim of this research is to reduce data center non-renewable resource power consumption. Creating a high-level net-zero data center design framework to convey the data center energy efficiency body of knowledge will help accomplish this goal. The goal of this work is to educate information technologists and to share with the existing computing community. This dissertation will endeavor to provide practical knowledge for data center planners or managers to make decisions that will result in an energy efficient data center.

1.11 Project deliverables

• Evidence supporting or refuting a relationship between operating system graphical user interface and power use
• Grid independent data center framework
• Case studies
• Suggestions for future improvements

1.12 Intellectual challenge

One of the major challenges presented by the use of grid independent renewable energy is that sources such as wind and solar are intermittent in nature. Currently this is not a good match for the high energy consumption of a high availability data center. Therefore, it will be necessary to research and combine techniques that appear to be useful in creating such a design. An important aspect of this is reducing data center energy requirements. It will be important to determine whether a number of individually insignificant changes, when combined, result in significant power savings. Finding this information in existing literature may be difficult, and there will be some factors that are infeasible to test in this dissertation due to resource limitations. For example, it will not be possible to design and configure both the data center and the servers to test theories that emerge during the course of this dissertation. However, information that may indicate significant effects on power consumption can be recorded for future analysis. Another challenge is establishing an easy-to-comprehend framework for a very complex system. This is necessary to provide easy-to-follow guidelines to be used in data center planning. Many factors affect data center efficiency; pulling these best practices together into a neat package for distribution that will be useful for years to come will be challenging. Major factors must first be identified and analyzed for inclusion in best practice materials.


CHAPTER 2 LITERATURE REVIEW

2.1 Introduction

This chapter reviews the current literature relating to data center energy efficiency. Topics covered include the data center building structure, data center components, and software. Basic concepts such as measurement of data center efficiency are introduced in this literature review. The topics covered provide an overview of the elements that comprise the current state of data center energy efficiency.

2.2 Background

One of the major problems facing the information technology industry is the pervasive one-size-fits-all server. This is akin to everyone buying an SUV: it is not the most energy efficient choice, but it will do most anything you need. The one-size-fits-all servers are generally very powerful and capable of performing a multitude of functions, but why pay for components you do not need? Just as important, why generate waste heat and wasted electricity for components you do not use?

The one-size-fits-all status quo emerged for a number of reasons. One is that vendors offer these servers at discounted rates via mass production quantity discounts. These mass produced servers are typically powerful enough to run a variety of resource intensive applications. Building customized, stripped down servers has not been feasible from a cost effectiveness standpoint. In addition, IT professionals charged with providing server specifications typically do not consider the energy efficiency cost of running the server over time. Another problem is that servers are reused for differing functions through their life. This can be a strong argument for simply buying the most powerful server that the budget allows. However, servers waste energy in ways that can be controlled through customization.

Figure 2.1 Watts lost per server (Source: EXP Critical Facilities Inc., Intel Corp.). Approximate losses by component: AC/DC losses 131 W, processors 86 W, drives 72 W, PCI cards 41 W, fans 32 W, DC/DC losses 32 W, chipset 32 W, memory 27 W.

Figure 2.1, Watts lost per server, provides a rough breakdown of the watts lost by server component. Imagine how many watts could be saved simply by building servers with only the components that are required. Take PCI cards, which include video and audio cards: if unnecessary PCI cards were not installed into servers, 41 watts could be saved based on the information in Figure 2.1. Servers rarely need such cards to perform their functions, so this would be a very easy place to begin reducing energy consumption. If unnecessary on-board video and sound components were removed, this could also result in energy savings at the server level, because these components waste energy even when they are not used. When these savings are multiplied across the number of servers in an average data center, the savings become very significant. According to the principles displayed in The Cascade Effect (Figure 2.2), savings at the server level result in even larger savings at the data center level, because each watt avoided at the server also avoids losses in power conversion and cooling. For example, if a PCI card such as a video card were removed for a savings of 41 watts on each of 500 servers in a data center, the cumulative facility-level saving, applying a cascade multiplier of roughly 2.84, would be about 58,220 watts. At an average of ten cents per kilowatt-hour, this results in a savings of $51,035.65 per year. Watts used by servers are converted to heat, which is expressed in British Thermal Units; each watt consumed by a server translates into approximately 3.4129 BTUs per hour (Barielle, 2011). This is the reasoning behind the cascade effect, and it is also the reason that the primary focus of this dissertation is on reducing the watts consumed at the server level.

Figure 2.2 The Cascade Effect (Source: Emerson)
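To make the arithmetic in the PCI card example above easier to verify, the short sketch below recomputes it. The cascade multiplier of roughly 2.84 facility watts per server watt and the 8,766-hour average year are assumptions inferred from the quoted figures rather than values stated explicitly in this chapter.

```python
# Hypothetical recreation of the PCI card example above. The cascade
# multiplier (~2.84 facility watts avoided per server watt avoided) and the
# 8,766-hour average year are assumptions, not values given in the text.
WATTS_PER_PCI_CARD = 41      # per-server saving from Figure 2.1
SERVERS = 500
CASCADE_MULTIPLIER = 2.84    # assumed facility-level cascade factor
HOURS_PER_YEAR = 8766        # 365.25 days
PRICE_PER_KWH = 0.10         # ten cents per kilowatt-hour, as in the text

facility_watts = WATTS_PER_PCI_CARD * CASCADE_MULTIPLIER * SERVERS
annual_kwh = facility_watts * HOURS_PER_YEAR / 1000
annual_dollars = annual_kwh * PRICE_PER_KWH

print(f"Facility load avoided: {facility_watts:,.0f} W")    # about 58,220 W
print(f"Annual energy avoided: {annual_kwh:,.0f} kWh")
print(f"Annual cost avoided:   ${annual_dollars:,.2f}")     # about $51,036
```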

Large web-scale computing providers, such as Facebook and Google, are also designing their own servers that have only the components essential to provide the processing required for their own business needs.

2.3 Measuring data center efficiency

The most widely accepted measure of data center efficiency is Power Usage Effectiveness, more commonly referred to as PUE. PUE attempts to measure how much of the energy consumed by the data center is being used for processing (Rawson, Pfleuger, & Cader, 2008). The ideal PUE is one; a PUE of one represents 100 percent efficiency. This would mean that all energy consumed in the data center was used for processing.


Equation 2.1 Power Usage Effectiveness

PUE = Total Facility Power / IT Equipment Power

The average data center PUE reported by the Uptime Institute in 2011 was 1.8, meaning that only about 56 percent of the power entering the facility reached the IT equipment. This represents an improvement: in 2009 the Energy Star program reported an average PUE of 1.91, meaning that for every watt used for computing processes nearly another watt was used as overhead. According to a 2006 Gartner press release, "traditional datacenters typically waste more than 60% of the energy they use to cool equipment" (The Green Grid, n.d., p. 2). As inefficient as that sounds, it is important to remember that, while PUE is widely accepted, there is still debate about its accuracy as a measure. Emerson criticizes the measure in a 2007 white paper called Energy Logic (Emerson, n.d.), arguing that the increase in compute power should also be taken into consideration. Emerson claims that from 2002 to 2007 "data center compute output increased fourteen-fold" while energy consumption doubled (Emerson, n.d., p. 1).
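As a quick illustration of how Equation 2.1 is read, the sketch below converts a reported PUE into the share of facility power that reaches the IT equipment and the overhead drawn for every watt of IT load; the sample values are the ideal PUE of one and the two averages quoted above.

```python
# Illustrative use of Equation 2.1: PUE = total facility power / IT power.
def pue_breakdown(pue: float):
    it_share = 1.0 / pue            # fraction of facility power reaching IT
    overhead_per_it_watt = pue - 1.0
    return it_share, overhead_per_it_watt

for label, pue in [("Ideal", 1.0),
                   ("Uptime Institute 2011 average", 1.8),
                   ("Energy Star 2009 average", 1.91)]:
    share, overhead = pue_breakdown(pue)
    print(f"{label}: PUE {pue} -> {share:.0%} to IT, "
          f"{overhead:.2f} W overhead per IT watt")
```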

2.4 Building

The foundation of any data center is the building in which it is housed. Data centers are often collocated with IT staff. In some cases, the data center will be housed in a more general office building. There are many options for location and therefore housing the data center. In the past, data centers have been squeezed into existing buildings requiring retrofitting to accommodate the various unique needs of a data center.

The electrical, HVAC, and space requirements have grown as technology has matured. Servers have become smaller, but due to expanded use of data center resources, server density has increased. As a result, according to Marty Poniatowski, the average data center is about 30% efficient, largely due to the difficulties of keeping the data center cool (Poniatowski, 2010). Ensuring the servers are adequately cooled is important because overheated servers will run slower and there is the possibility of damage to the hardware due to excess heat. The current recommendation is to build to take advantage of free cooling. Free cooling is appropriate in the 68-77 degrees Fahrenheit temperature range with a relative humidity of less than 60 percent. The Green Grid has provided a calculator, as seen in Figure 2.3, to estimate savings using free cooling (Spangler, 2010). The American Society of Heating, Refrigerating and Air-Conditioning Engineers recommends, under specified guidelines, temperatures above 80 degrees Fahrenheit.

Figure 2.3 Free-cooling calculator (Source: The Green Grid, http://cooling.thegreengrid.org/calc_index.html)
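The Green Grid calculator itself is not reproduced here, but a minimal sketch of the underlying test, counting the hours whose outdoor conditions fall inside the 68-77 degrees Fahrenheit, sub-60 percent relative humidity window quoted above, might look like the following; the weather records are made up for illustration.

```python
# Minimal sketch (not The Green Grid's calculator): count hours suitable for
# free cooling under the 68-77 F / <60% relative humidity rule quoted above.
def free_cooling_hours(hourly_weather):
    """hourly_weather: iterable of (temperature_f, relative_humidity_pct)."""
    return sum(1 for temp_f, rh in hourly_weather
               if 68.0 <= temp_f <= 77.0 and rh < 60.0)

# Made-up hourly records for illustration only.
sample = [(70.5, 45.0), (75.0, 62.0), (66.0, 50.0), (72.0, 55.0)]
print(free_cooling_hours(sample))   # -> 2 qualifying hours out of 4
```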

Free cooling uses economizers rather than chillers (Spangler, 2010). Free cooling saves both energy and water. New data centers are being built to best take advantage of free cooling.

Table 2.1 CEEDA Assessment Performance Areas (BCS, n.d.)
Performance areas: Data Centre Utilization; IT Equipment and Services; Cooling; Data Centre Power Equipment; Data Centre Building; Monitoring
Levels: Gold*, Silver, Bronze
*Requires a PUE of 1.5 or less

One example of this is the Lockport, New York Yahoo data center. This data center, shaped like a chicken coop, uses only "1 percent of its annual energy consumption to cooling" (Fehrenbacher, 2010, ¶ 3). This design is also faster to build and less expensive than traditional data centers. The PUE for this data center is 1.08, a substantial improvement over the EPA reported average of 1.92 (Fehrenbacher, 2010, ¶ 6). Microsoft's Dublin data center is taking advantage of free cooling thanks to low ambient temperatures in Ireland. This data center features a modular design to allow for future expansion without compromising the current design. There are other approaches as well; a London data center is designed to export excess heat to neighbors through a district heat network (Miller, R., 2009). The PUE for this data center was not included in the report. Presumably, this approach represents the use of otherwise traditional methods and equipment. Another approach is to locate a data center underground. One example of this is the Iron Mountain facility located in an old mine in Pennsylvania. The limestone walls absorb the heat, "eliminat[ing] the need for typical data center cooling systems" (Miller, R., 2010, ¶ 3).

Certifications for "green" data centers are still in the developmental phase. Energy Star has created a newer standard more specific to data centers, adding a data center assessment to its online Portfolio Manager tools. This efficiency certification system is based on a 100-point scale. "Those that score 75 or higher can request an audit from the EPA, which then awards the Energy Star certification" (Niccolai, 2010, ¶ 3). This efficiency measurement takes into account accepted data center metrics such as PUE. There are various LEED rating systems, such as those for new construction, existing buildings, retail, and homes. The rating system's 100-point scale, shown in Table 2.2, includes sustainable site selection, water efficiency, energy and atmosphere, materials and resources, indoor environmental quality, locations and linkages, awareness and education, innovation in design, and regional priority (U.S. Green Building Council, n.d.).

Table 2.2 LEED Certification Levels (score out of 100)
Certified: 40–49
Silver: 50–59
Gold: 60–79
Platinum: 80+

The British Computer Society (BCS), a non-profit organization dedicated to improving information technology worldwide, developed the Certified Energy Efficient Datacentre Award (CEEDA). This award certifies that data centers are following the European Union Code of Conduct for Data Centres best practices, originally published in 2008 (BCS, n.d.). The award is internationally recognized and has three levels: bronze, silver, and gold. The Gold Award is the highest level and is the only level with a PUE requirement: to be awarded CEEDA Gold, a data center must have a PUE of less than 1.5 during the previous 12 months (BCS, n.d.). CEEDA assessments measure the performance areas shown in Table 2.1.

2.5 Data center equipment/components

2.5.1 UPS

In 2009, Google presented custom server modifications to the public. A 12-volt battery was built into the server tray to act as an Uninterruptible Power Supply (UPS) (Shankland, 2009). This modification, enabled by the custom power supply, provides six minutes of power (Jennings, 2011). Google has been using this model since 2005 because it is financially advantageous: savings are the result of increased efficiency, and it is less expensive than a centralized UPS (Shankland, 2009). Facebook has followed Google's example and created its own rack level UPS that shares some similarities with the Google UPS. The Facebook UPS is integrated with the power supply (Park, 2011) and provides about 45 seconds of power for servers operating at full capacity (Sarti, 2011). The batteries used by Facebook are configured into a string of four 12V batteries and have an expected life span of 10 years (Sarti, 2011). Flywheels are an alternative to more traditional battery units. The efficiency of a flywheel or rotary UPS is about 97% (Barroso & Hölzle, 2009). Flywheels require less maintenance and are arguably more sustainable because they do not use lead-acid batteries. This type of UPS also does not require the ventilation and cooling equipment required by more fragile battery based units (National Renewable Energy Laboratory, 2011).

A new possibility, presented at Google's Efficient Data Center Summit 2009 by James Hamilton, is to not use generators and UPSs at all (Hamilton, 2009). In this configuration, networking software would be used to route around any non-responsive servers, and site failovers would be used to respond to widespread outages (Hamilton, 2009). This would be feasible only for very large organizations that have several data centers (Miller, R., 2010). However, those that do have the possibility of using the no-UPS, no-generator option could certainly save on wasted capital and lifecycle expenses. Yahoo is reportedly giving serious consideration to leaving out UPSs and generators in some future data center projects (Miller, R., 2010).

2.5.2 PDU

Power Distribution Units (PDUs) convert incoming power from the UPS down and distribute it throughout the data center. The power is generally converted from 480V to 120V or 277V (Martin, 2011). This power is then distributed via electrical outlets to the power supplies. PDUs can provide monitoring as well as distribution, and they can be combined with outlet switching and power management software. These types of PDU configurations allow data centers to measure and address power usage (Energy Star, 2011). It is important to remember that, while PDUs can provide information and control of power usage, they primarily convert energy, and energy conversions result in energy loss as heat. Outside of the United States PDUs are not required (Pratt, Kumar, & Aldridge).

2.5.3 Power Supply

In 2006, Google released a white paper on high efficiency power supplies. The document presented information on losses due to unnecessary power conversions that resulted in heat generation. At the time this document was released, the average power supply was 60 to 70% efficient (Hoelzle & Weihl, 2006). At that time, Google employees had developed custom power supplies that were 90% or more efficient. Google presented this technology to Intel in an effort to reduce power consumption in both desktops and servers in the future. This was accomplished by simplifying the power supply to output only 12 volts, rather than the multiple outputs provided by standard power supplies (Jennings, 2011). The inefficiencies related to power supplies have been improved largely thanks to the efforts of Google. Google partnered with Energy Star to develop standards for Energy Star certified servers, which took effect in May 2009 (The Casmus Group, Inc., 2010). The power supply portion of the program was so successful that it is being discontinued, in part because the required level of efficiency is now the industry standard. In addition, there is now a federally mandated minimum efficiency requirement (Biancomano, 2010).
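The effect of the power supply efficiencies quoted above can be illustrated with a small calculation; the 300-watt server load is an arbitrary assumption used only to show the scale of the conversion loss.

```python
# Wall power drawn for a fixed server load at different PSU efficiencies.
# The 300 W load is an assumed example, not a figure from the sources above.
def wall_power(it_load_watts: float, efficiency: float) -> float:
    return it_load_watts / efficiency

load = 300.0
legacy = wall_power(load, 0.65)    # mid-point of the 60-70% range quoted
custom = wall_power(load, 0.90)    # ~90% efficient custom supply

print(f"65% efficient PSU draws {legacy:.0f} W from the wall")   # ~462 W
print(f"90% efficient PSU draws {custom:.0f} W from the wall")   # ~333 W
print(f"Difference per server:  {legacy - custom:.0f} W lost as heat")
```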

2.5.4 Storage

Hard drives are used to store data long term. The traditional hard drive is a magnetic disk drive containing spinning platters. These drives are a significant improvement over the magnetic tape based storage of yesteryear, which is often used today for archival storage. Tape storage is typically slow due to linear access but is inexpensive. Magnetic disk hard drives can store and access data in a non-linear fashion, resulting in a performance boost. This storage can be further optimized by storing related data in adjacent sections of the disk to avoid performance reductions due to the movement of the physical components involved in reading many locations of the disk. Unfortunately, hard disk drives require a large amount of energy at start up to begin spinning the disk. Upon activation, a hard disk drive will consume 80% of its maximum utilization power usage (Tsirogiannis, Harizopoulos, & Shah, 2010). This results in wasted energy when utilization, defined in this instance as read/write activity, is less than 80%. Solid-state drives may emerge as a cost effective alternative to the widely used magnetic disk drive. Solid-state drives have no moving parts and are not subject to mechanical failures. They do not require the large boost of energy at start up and therefore have a linear power usage increase in proportion to the load (Tsirogiannis, Harizopoulos, & Shah, 2010). Currently it may be better to pursue options to increase the energy efficiency of storage. One option is storage consolidation; virtualization is one possible method of achieving this goal, and shared storage arrays are another possibility. Increased use of technologies such as memcached for frequently accessed data may also help reduce wasted energy due to suboptimal hard disk drive utilization. The memcached technique uses RAM caching, which, used in conjunction with compression, can be very useful for transactional loads.
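A toy model of the utilization behavior described above, an HDD that jumps to roughly 80 percent of its peak power as soon as it is active versus an SSD whose draw scales roughly linearly with load, can make the wasted energy at low utilization concrete. The peak wattages below are assumptions chosen only for illustration.

```python
# Toy model of the behavior described above; peak wattages are assumed.
HDD_PEAK_W = 10.0   # assumed peak draw of a spinning disk
SSD_PEAK_W = 5.0    # assumed peak draw of a solid state drive

def hdd_power(utilization: float) -> float:
    # An active HDD draws ~80% of peak regardless of read/write utilization.
    return 0.0 if utilization == 0 else max(0.8, utilization) * HDD_PEAK_W

def ssd_power(utilization: float) -> float:
    # SSD draw rises roughly linearly with load.
    return utilization * SSD_PEAK_W

for u in (0.1, 0.5, 0.9):
    print(f"utilization {u:.0%}: HDD {hdd_power(u):.1f} W, SSD {ssd_power(u):.1f} W")
```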

2.5.5 Server: Superfluous Components

Servers are used differently than personal computers. They are the workhorse class of computers, generally designed to process large loads. While the prevalence of the GUI has moved computers into the mainstream, there is a downside to GUIs: they are resource intensive in comparison with the command line and require additional hardware, a graphics card. For server administration, there is little need for a GUI or the overhead involved with processing graphics or graphics hardware. For this reason, Google servers do not use a GUI or contain a graphics card (Google, n.d.). Google has also stripped away excess USB, parallel port, and other superfluous components to create an elegant workhorse motherboard (Jennings, 2011).

2.5.6 Firmware: Advanced Configuration and Power Interface (ACPI)

The purpose of ACPI is to enable Operating System Power Management (OSPM) to be developed and employed. ACPI is a voluntary industry standard for hardware that allows operating systems to develop independently of hardware or BIOS. The most recent version is the Advanced Configuration and Power Interface Specification revision 5.0, released December 6, 2011 (Hewlett-Packard Corporation, Intel Corporation, Microsoft Corporation, Phoenix Technologies Ltd., Toshiba Corporation, 2011). The compatibility created via the ACPI specification enables operating systems to manage complex power management features. The goal of the power management features is to reduce power consumption through management of the hardware components. ACPI provides power management states for processors, devices, and thermal management. System wide sleep and wake states are also provided through ACPI, and power and related data collection are enabled through this specification. The various levels of control allow a computer to be managed in a granular manner that fits the machine usage to best conserve energy. Data collected can allow power management features to be optimized for the computer's unique function. The implications for servers are addressed in the ACPI 5.0 Specification:

    (S)erver machines often get the largest absolute power savings. Why? Because they have the largest hardware configurations and because it's not practical for somebody to hit the off switch when they leave at night. (Hewlett-Packard Corporation, Intel Corporation, Microsoft Corporation, Phoenix Technologies Ltd., Toshiba Corporation, 2011, p. 35)

The takeaway is that having ACPI compliant hardware and operating systems designed to take advantage of the power management capabilities provided via ACPI means energy savings. These savings, implemented on a large scale in a data center, have the potential to have a major positive impact on the bottom line.
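On a Linux server, one quick way to see what the platform's ACPI support exposes to the operating system is to read the sleep states advertised under /sys/power/state; this is only a small illustrative probe, not part of the cited specification.

```python
# Small illustrative probe of OSPM support on a Linux host: list the ACPI
# sleep states the kernel advertises via the standard sysfs entry.
from pathlib import Path

state_file = Path("/sys/power/state")
if state_file.exists():
    # Typical output is something like "freeze mem disk".
    print("Supported sleep states:", state_file.read_text().split())
else:
    print("No /sys/power/state on this system (not Linux, or sysfs hidden).")
```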

2.6 Software

There is much dispute concerning which operating systems cost the most to operate. The matter of interest in this dissertation is not training cost or the cost of more highly skilled administrators, but which server operating systems use the least power. There is no conclusive information on this aspect of server power consumption. In fact, there is very little literature on the power consumption of software, perhaps because there does not appear to be a direct relationship. However, software can have a large impact on power consumption based on the program's resource management. This will be an area of research in this exploratory investigation.

23 2.6.1

Virtualization

Virtualization is a software solution to address hardware related efficiency problems. Virtualization software platforms allow several servers, known as guests, to be consolidated onto one powerful server. The guest servers share a pool of system resources such as CPU, RAM, and storage. This consolidation can save on hardware costs, floor space, and power, and virtualization has become a predominant method of cutting data center costs by allowing many services hosted on virtual servers to share the resources of very powerful servers. However, there is still some resistance to virtualization from server administrators who feel their applications are too resource intensive to run in a shared environment. These concerns are well founded: not all services are good candidates for virtualization, and applications that run over 80% CPU usage may overtax shared environments. However, this can be compensated for through careful resource planning and monitoring. Ensuring that there are enough resources available to the virtual server housing a given application is quite possible, but it takes a carefully planned, monitored, and maintained environment. This translates into a requirement for highly skilled administrators for the virtual server environment.

Figure 2.4 Virtualization (Source: MJM Networks Pte Ltd., http://www.mjm.com.sg/img/virtual_machines.jpg)

There are a few important items to consider when planning virtualization. The first is identifying virtualization candidates. Underutilized servers tend to be the best candidates for virtualization, in that the most power savings will come from consolidating these servers. Watch out for over-allocation of virtual machine (VM) resources. This typically happens when heavily used servers are consolidated; the result is unmet resource demand, meaning sluggishness in high profile services or even server crashes. Unfortunately, if the resources are not managed properly, the server hosting the VMs could crash, impacting many services. This does not mean that high use or high profile services cannot be virtualized, simply that historical server resource utilization should be carefully reviewed before consolidation to avoid consolidating a mix of servers that would result in over-allocation. The next important consideration is changes in data center floor and rack space utilization, as these changes could require changes in the HVAC system. Virtualization often results in increased data center density and reduced floor space usage. These changes can have a negative impact on the efficiency of the cooling system. Due to the high power demands of cooling the data center, this must be carefully monitored and planned for in order to avoid losing virtualization savings to inefficient cooling costs.
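A first pass at the candidate-selection step described above can be as simple as filtering a utilization inventory, as in the hedged sketch below; the 80 percent CPU threshold comes from the discussion above, while the server names and numbers are invented.

```python
# Sketch of a first-pass candidate filter: underutilized servers are the best
# consolidation candidates, while sustained >80% CPU suggests reviewing the
# workload before moving it. Inventory values are invented examples.
peak_cpu_by_server = {"web-01": 0.22, "db-01": 0.91, "file-01": 0.15, "app-01": 0.55}

candidates = [name for name, cpu in peak_cpu_by_server.items() if cpu < 0.80]
review_first = [name for name, cpu in peak_cpu_by_server.items() if cpu >= 0.80]

print("Virtualization candidates:", sorted(candidates))
print("Review before consolidating:", sorted(review_first))
```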

2.6.2 Job scheduling

One framework for job scheduling is presented in "Energy-aware Scheduling in Virtualized Datacenters." This method dynamically consolidates workloads in virtualized server clusters, so that a smaller number of servers processes the workloads. Powering off spare servers using this technique resulted in a 15 percent power savings (Goiri, et al., 2010). Data center architectures that include both low power and high performance systems are optimized in this work, and the authors assert that the model can be used to optimize power usage across distributed locations (Goiri, et al., 2010).
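The consolidation idea behind this kind of scheduling can be sketched with a simple first-fit-decreasing packing of workloads onto as few hosts as possible; this is an illustration of the general principle only, not the algorithm of Goiri et al.

```python
# Illustration only: pack workloads onto as few hosts as possible so spare
# hosts can be powered off. This is a generic first-fit-decreasing heuristic,
# not the scheduler described by Goiri et al.
def consolidate(workload_cpu_demands, host_capacity=1.0):
    hosts = []  # each entry is the remaining capacity of one powered-on host
    for demand in sorted(workload_cpu_demands, reverse=True):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] = free - demand
                break
        else:
            hosts.append(host_capacity - demand)   # power on another host
    return len(hosts)

demands = [0.6, 0.3, 0.3, 0.2, 0.2, 0.1]   # invented VM CPU demands
print("Hosts needed after consolidation:", consolidate(demands))  # -> 2
```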

2.6.3 Power Management

James Hamilton of Amazon advocates over-allocation, or overselling, of data center power capacity. He likens the concept to airlines selling more tickets than available seats, banking on cancellations (Hamilton, 2009). According to Hamilton, this technique is even more effective in IT because data centers rarely top out the available capacity and are less efficient at low capacity (Hamilton, 2009). This method can be useful for cloud providers, but some method of load shedding must be imposed as a safety net for the rare spikes to ensure reliability. One method presented is Power Routing, which relies on a scheduling algorithm that balances loads across an under-provisioned "shuffled" PDU topology (Pelley, Meisner, Zandevakili, Wenisch, & Underwood, 2010). It is claimed that these "mechanisms reduce power infrastructure capital costs by 32% without performance degradation. With energy-proportional servers, savings reach 47%" (Pelley, Meisner, Zandevakili, Wenisch, & Underwood, 2010, p. 2). The "shuffled" PDU topology is somewhat complex and is not explained in this dissertation. The design provides a pre-set power cap to servers. If a server exceeds its cap, it requests more power; if the power is not available, a scheduling algorithm is employed to create "slack," or the server is throttled if no additional power can be found. The scheduler reserves enough power to prevent outages (Pelley, Meisner, Zandevakili, Wenisch, & Underwood, 2010). Another option, called Blink, is to manage the server active state (Sharma, Barker, Irwin, & Shenoy). Blink was tested using a low power server cluster. Blink uses specified policy to determine how often and which nodes are in the active state (Sharma, Barker, Irwin, & Shenoy), and it allows servers to perform effectively with intermittent power, which increases the feasibility of using renewable power sources (Sharma, Barker, Irwin, & Shenoy). Blink uses a custom version of memcached dubbed "BlinkCache" to reduce the response latency that could otherwise be exacerbated by reducing the active states of the servers in the cluster. Essentially, Blink is a power capping technique that can be used to manage energy costs, reduce usage during brownouts, or work effectively with renewable energy sources.
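The over-subscription and capping logic described above can be reduced to a small guard: grant a server's request for more power only if the provisioned budget, minus a reserve, can cover it; otherwise throttle. The numbers below are invented, and the sketch is far simpler than the Power Routing or Blink mechanisms it gestures at.

```python
# Simplified power-capping guard in the spirit of the mechanisms above
# (Power Routing, Blink); budgets and draws are invented example numbers.
PROVISIONED_W = 10_000      # facility budget for this PDU branch
RESERVE_W = 500             # slack the scheduler keeps to prevent outages

def grant_request(current_draws, server, extra_watts):
    """Return True if the server may raise its cap, else it must throttle."""
    total_after = sum(current_draws.values()) + extra_watts
    allowed = total_after <= PROVISIONED_W - RESERVE_W
    if allowed:
        current_draws[server] += extra_watts
    return allowed

draws = {"srv-a": 4200, "srv-b": 3100, "srv-c": 1800}
print(grant_request(draws, "srv-a", 300))   # True: total stays under budget
print(grant_request(draws, "srv-b", 600))   # False: would exceed the budget
```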

2.6.4 Operating System

As mentioned in Section 2.5.6, Firmware: Advanced Configuration and Power Interface (ACPI), operating systems have the potential to take advantage of the ACPI specifications. Hardware typically lags behind specifications, but new ACPI specifications are designed to be enabled on legacy hardware where possible. The Linux 3.3 kernel integrated ACPI 5.0 support according to a report dated January 17, 2012 (Larabel, 2012). This does not mean that it is safe to assume that your operating system is capable of implementing the newest ACPI specifications. In reality, due to the open source nature of Linux, it is likely to have the changes ready for a kernel patch before a proprietary OS such as Windows. Does this make Linux better? Not necessarily; the faster the changes are implemented, the less time there is for testing. Lack of testing could translate into instability, which is not ideal for enterprise applications. This does not mean that Linux based operating systems are to be avoided, simply that caution is warranted. It may not pay to be an early adopter on the bleeding edge; using a slightly older version for a few months is advisable. In addition, it is always advisable to do your own testing in a development and quality assurance environment to assure stability. According to a report dated January 18, 2010, Windows Server 2008 R2 was using the ACPI 3.0 specification adopted in September 2004 (Hewlett-Packard Corporation, Intel Corporation, Microsoft Corporation, Phoenix Technologies Ltd., Toshiba Corporation, 2011; De Gelas, Dynamic Power Management: A Quantitative Approach, 2010). ACPI 4.0 was adopted in June 2009 (Hewlett-Packard Corporation, Intel Corporation, Microsoft Corporation, Phoenix Technologies Ltd., Toshiba Corporation, 2011). This would indicate a minimum of a six-month lag for the Microsoft server operating system. This delay is interesting considering that Microsoft is listed as a major contributor to the ACPI specification, which one would assume would allow lead time to test the upcoming specifications (Hewlett-Packard Corporation, Intel Corporation, Microsoft Corporation, Phoenix Technologies Ltd., Toshiba Corporation, 2011).

2.6.5 De-duplication

Data de-duplication may play a key role in an efficient data center. Storage is not cheap; more precisely, the storage load is not cheap. Duplicate data causes backups to run longer and thus take more processing power and electricity. Historically, there has been a trade-off between the processing power required by de-duplication and the reduction in "the storage footprint, pressure on the storage system, and network traffic." (Beltrão Costa, Al-Kiswany, Vigolvino Lopes, & Ripeanu, p. 1) According to "Assessing Data Deduplication Trade-offs from an Energy and Performance Perspective," the trade-off has grown in modern energy proportional computer systems (Beltrão Costa, Al-Kiswany, Vigolvino Lopes, & Ripeanu). This is because a legacy computer system might use as much as 60 percent of its maximum energy draw while idling, whereas in newer systems idle usage could be 20 percent or lower. Proportionally, the energy added by de-duplication processing on top of the idle power consumed is therefore much greater in modern systems, so newer computer systems must have more duplicate data to justify the energy load of de-duplication (Beltrão Costa, Al-Kiswany, Vigolvino Lopes, & Ripeanu). Consequently, it is necessary to monitor when de-duplication should be performed.
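A rough sketch of this trade-off is shown below. The per-gigabyte energy figures are invented assumptions chosen only to illustrate the threshold behavior; they are not measurements from the cited study.

# Illustrative estimate of when de-duplication pays for itself in energy.
# The per-gigabyte energy costs are assumed values, not measured ones.

def dedup_saves_energy(duplicate_fraction, dataset_gb,
                       cpu_joules_per_gb=10.0,       # assumed extra CPU energy to fingerprint and compare
                       storage_joules_per_gb=15.0):  # assumed energy avoided per GB not stored or moved
    spent = dataset_gb * cpu_joules_per_gb
    saved = dataset_gb * duplicate_fraction * storage_joules_per_gb
    return saved > spent, spent, saved

for dup in (0.2, 0.5, 0.8):
    worthwhile, spent, saved = dedup_saves_energy(dup, dataset_gb=1000)
    print(f"{int(dup * 100)}% duplicates: spend ~{spent:.0f} J, save ~{saved:.0f} J "
          f"-> {'worthwhile' if worthwhile else 'not worthwhile'}")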

2.6.6 Policies

Policy is an important consideration in the greening of IT. One strategy is to make the customer aware of, and responsible for, the impact of resource consumption. This is a standard utility model. The objective is to provide a financial or other incentive for the data center customers to make behavioral changes that reduce their energy use or carbon impact, such as turning off unused servers, enabling power management functions, managing unnecessary storage, or virtualizing servers. The opportunity to save energy and reduce carbon impacts in a typical data center by managing IT for energy efficiency is enormous, ranging from 10% to 80% reductions depending on the existing level of maturity and virtualization in the data center. (Rasmussen, n.d., p. 3)

To make this successful, this information must be collected and disseminated to customers. The best way to accomplish this is open to debate, as is the effectiveness of such a toll system. Policy may be set at many levels, and monitoring systems are an important way to measure usage and collect the information that drives future policy. One area where this model may be particularly effective is storage. According to the U.S. Department of Energy, "Power consumption is roughly linear to the number of storage modules used." (National Renewable Energy Laboratory, 2011, p. 2) Storage capacity tends to grow out of control in the absence of data retention policies; customers hesitate to remove old documents, websites, and other files for fear that they may be needed.

Policy can also enable software control mechanisms to be used in the most efficient manner. There are software options that schedule batch jobs based on the availability of renewable energy sources, and many of these solutions can also schedule jobs to run at the most cost effective time if the power must be drawn from the grid. However, there must be policy to support the use of this software, including prioritizing workloads and establishing service level agreements (SLA) that support scheduled batch mechanisms. This would not be appropriate for workloads that require real-time processing; web services and data collection tend to need real-time processing. However, web page updates and data computation might be excellent candidates for batch processing.
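A policy-driven batch scheduler of the kind described above can be sketched in a few lines. The hourly renewable forecast and job list below are fabricated for illustration and do not correspond to any particular product.

# Minimal sketch: place deferrable batch jobs into the hours with the most
# forecast renewable power. Forecast values and jobs are invented examples.

renewable_forecast_kw = {0: 120, 1: 150, 2: 180, 3: 90, 4: 60, 5: 200}

batch_jobs = [              # (job name, estimated runtime in hours); real-time work is excluded by policy
    ("nightly-report", 1),
    ("image-recompression", 2),
    ("log-rollup", 1),
]

# Greedily assign jobs to the greenest remaining hours.
# (Contiguity of multi-hour jobs is ignored in this simplified sketch.)
free_hours = sorted(renewable_forecast_kw, key=renewable_forecast_kw.get, reverse=True)
for job, runtime in batch_jobs:
    assigned, free_hours = free_hours[:runtime], free_hours[runtime:]
    print(f"{job}: run during hour(s) {sorted(assigned)}")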

2.7 Conclusion

Many technologies and techniques for energy efficient data center design were presented in the literature review and case studies. Advances have been made in the areas of data center housing, HVAC, servers, software, and policy, resulting in significant improvements in PUE. However, the most effective approach may be to focus on the servers themselves first. This is primarily due to the cumulative effect of watt savings at the server level, and because replacing servers with more efficient models is far more accessible than designing and building a data center suited to its function. It also follows that less equipment is required to manage servers that do not produce excess heat. This chapter has provided an overview of factors impacting data center efficiency: measuring data center efficiency through the use of PUE, the building housing the data center, data center equipment, and software. It is apparent that leaning the data center by removing unnecessary components and choosing energy efficient software can result in cumulative improvements in data center efficiency. Chapter 3 presents the research equipment and methodology employed in this dissertation.


CHAPTER 3 RESEARCH METHOD

3.1 Introduction

This chapter outlines the equipment and methodology used for this exploratory dissertation, which employs both qualitative and quantitative techniques. Innovative data centers serve as the subjects of a multiple case study analysis; triangulation was used in the case studies presented in Chapter 6 to identify overlapping elements contributing to data center energy efficiency.

3.2 Method

3.2.1 Case Studies

Qualitative case study analysis was employed to identify major factors affecting data center power consumption. Cases were reviewed to determine the methods employed that created substantial gains in power savings. Multiple case study methodology was used in this project with purposeful sampling techniques. The subjects chosen serve as examples of state of the art energy efficient data centers and were selected largely based on reported PUE, with other factors taken into consideration as well. Subjects were chosen based on factors indicated in the literature review.

These factors include:

• The structure housing the data center
• Heating, ventilation, and air conditioning (HVAC)
• Servers
• Software
• Policy
• PUE

Many of the subjects were chosen from Data Center Knowledge articles. Possible

subjects were reviewed to ensure that enough data was available to contribute to the dissertation. The data collected were analyzed to determine data center efficiency best practices as well as possible areas for future improvement.

3.2.2 Experiment

Quantitative data were collected to measure the efficiency of server operating systems. Experimentation and observation were employed at the server and software levels. The data collected include observations of the watts consumed by the server with different hardware and operating system software configurations. All readings were collected for a minimum of one hour using the Watts up? meter described in Section 3.5 Equipment, and the hardware used was as described in that section. The software tested consisted of the following x86 operating systems:

• Ubuntu 9.10 (Linux)
• Ubuntu 11.10 (Linux)
• Windows Server 2008 R2 Datacenter Core
• Windows Server 2008 R2 Datacenter GUI
• Windows Server 2003 GUI

No operating system configuration was performed; all were installed using the defaults. The systems were not connected to the internet, and no updates were performed on the operating systems. The Linux based server operating systems ran the top command during the observations. Top provides data on running processes. A baseline sample was also taken without top running so that the load of running top could be determined. Top was configured via the command line to take readings at intervals of one second. The command used was:

top -d 1 > /home/testOSName.txt

Figure 3.1 Sample top output to file

The results were sent to a file for possible analysis. Unfortunately, the amount of data collected was very large, and the time stamp provided by the metering software was not reliable enough to ensure accuracy in matching processes to energy consumption. Figure 3.1 displays page one of the server process data collected by the top program. The information provided by top is valuable and is

worthy of future use in energy consumption studies. More reliable time stamps could allow energy consumption to be pinpointed at the process or thread level, and correlating processes or threads with energy consumption would allow work to be done to mitigate or eliminate inefficient processes. The Windows based operating systems do not have a system level program that collects the same data as the Linux top program. This dissertation utilized a command line tool called Typeperf that was configured to provide much of the same information. The command used was:

typeperf "\Memory\Available bytes" "\processor(*)\% processor time" "\Process(*)\Thread Count" > testOSName.csv

Figure 3.2 Sample Typeperf output to file

This tool was chosen in part because it runs with or without the standard GUI based Windows operating system. Another advantage of this tool is that it reduces the possibility of creating the energy overhead that might come with a more sophisticated program. As this dissertation's focus is on differences between operating systems, adding another program would unnecessarily complicate the readings. The readings shown in Figure 3.2 proved valuable during the data analysis phase,

as the data gathered from the Typeperf tool provided insight into what was happening at the operating system level.
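The summary statistics reported in Chapter 4 (mean, median, minimum, and maximum watts) can be computed from such a log with a few lines of code. The sketch below assumes a simple two-column CSV of timestamp and watts exported from the metering software; the file name and column layout are assumptions made for illustration.

# Sketch: summarize watt readings exported from the power meter's logging software.
# Assumes a CSV with a header row and columns: timestamp, watts (layout is assumed).

import csv
import statistics

def summarize_watts(path):
    watts = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)                     # skip the header row
        for row in reader:
            watts.append(float(row[1]))  # second column holds the watt reading
    return {
        "mean": statistics.mean(watts),
        "median": statistics.median(watts),
        "min": min(watts),
        "max": max(watts),
        "samples": len(watts),
    }

if __name__ == "__main__":
    print(summarize_watts("ubuntu_9_10_one_hour.csv"))  # hypothetical log file name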

3.3 Researcher Bias

I present here my personal bias on this topic to inform the reader of beliefs that may encroach upon the findings of this research. I believe that there are many advances that can be made in data center energy efficiency. I have followed this topic since 2011, was fortunate to attend the second Open Compute Project summit, and have interacted with key project members since my attendance. I believe it is very possible to operate a data center in an energy efficient, cost effective, and highly available manner. I also believe that it is our responsibility to use renewable resources to directly power data centers as far as it is possible to do so and to continue to push the boundaries of possibilities to that end.

3.4 Exploratory Sample

Preliminary tests were performed to provide exploratory samples. The variable of interest was operating system energy consumption; specifically, would there be any marked difference in power usage by operating system? The expected outcome was that an operating system not running a graphical user interface would use less energy. This intuitive expectation was based on the concept that a system that does not render graphics runs fewer processes. Observation was employed at the server level. Existing servers configured with different operating systems at a Purdue University Engineering Computing Network data center served as subjects for convenience. The servers have identical hardware

configurations. The service tag (ST) numbers listed in Table 3.1 provide the hardware configuration through Dell's enterprise support site.

Table 3.1 Configuration of Dell PowerEdge 1950 Servers
Operating System | Software/Firmware Version | Service Tag
Red Hat Linux 5.7 | Kernel 2.6.18.274.7.1.e15 | ST: 45BVYD1
Windows 2003 R2, x64 SP2 | BIOS 2.7.0 | ST: D5BVYD1
N/A | BIOS 2.7.0 | ST: GR8VYD1

3.5 Equipment

Custom hardware was used for this experiment. The hardware was chosen for its availability, cost effectiveness, and efficiency. The efficiency of the hardware is important to ensure that the hardware does not unduly influence the results of the analysis. The hardware components used are shown in Table 3.2.

Table 3.2 Shuttle XS35v2 Barebones Mini PC
Intel Atom D525 1.8GHz dual core processor
Integrated Intel Graphics Media Accelerator 3150
Gigabit LAN
SD card reader
5 USB connections
Fan-less external power supply
Intel Solid State Drive 80GB 320 Series
PNY 4GB PC3-10666 1.3GHz DDR3 SoDIMM

This system, as shown in Figure 3.3, eliminates as many components as possible to reduce overhead from the hardware. More specialized components, such as a motherboard without integrated video, might produce a more ideal testing environment but are not readily available. Fortunately, the elimination of fans, a magnetic hard disk drive, and oversized components produced a surprisingly energy efficient test server.

Figure 3.3 Custom Micro Server used in testing

The energy-monitoring tool was also chosen for availability and cost effectiveness. The meter used was a Watts up? Pro universal outlet version, as shown in Figure 3.4. This meter is capable of measuring 100 to 250v within plus or minus 1.5 percent accuracy and of logging at one-second intervals, and it provides a USB interface and PC software. This logging interface eliminates transcription errors and saves time. The downside of this tool is that the time stamps are not always reliable and that it has a limited memory. Even when set to overwrite, the meter will often log less than one reading per second to save space, and the information provided by the manufacturer is not adequate to avoid this or to estimate the missing readings accurately.

Figure 3.4 Watts up? Pro universal outlet meter

3.6 Conclusion

This chapter presented the research method for this dissertation. The equipment used during data collection as well as the methods employed to collect the data were also presented. Chapter 4 will present the data collected using the equipment and methods presented in this chapter.


CHAPTER 4 DATA COLLECTED

4.1 Introduction

Experimental data collected for this dissertation are presented in this chapter. The data include readings taken before the RAM and solid-state drive were installed in the micro server used for testing, followed by readings for the several operating systems that were installed and tested for energy consumption.

4.2 Server

Figure 4.1 provides a scatter chart of the server's energy consumption (see Table 3.2 Shuttle XS35v2 Barebones Mini PC) prior to adding the random access memory (RAM) and the solid-state drive (SSD). The mean energy consumed is 7.96 watts and the median is 8.70 watts. Figure 4.2 reflects the watts consumed by the server during a one-hour period after the 4GB of RAM was installed; the mean energy consumed is 15.36 watts and the median is 15 watts, an increase of 7.40 watts in the mean and 6.80 watts in the median. Figure 4.3 displays the watts consumed after the solid-state drive was installed.

This configuration represents the baseline for the server before the installation of the operating systems. The mean consumption is 17.42 watts and the median is 17.7 watts, an increase of 2.06 watts in the mean and 2.20 watts in the median.

Figure 4.1 Baseline Watt Consumption of Barebones Server

Figure 4.2 Server energy consumption after the addition of 4GB RAM

Figure 4.3 Server energy consumption after Solid State Drive installation

4.3 Server Operating Systems

The Figure 4.4 scatter chart displays the energy consumption of each operating system, and Table 4.1 breaks down the mean, median, minimum, and maximum watts consumed during the one-hour observation period for each installed operating system. The greatest overall variation appears in the Windows 2008 R2 Datacenter graphical user interface operating system, which varied from 15.30 watts to 24.90 watts, a difference of 9.60 watts.

Figure 4.4 Comparison of watts consumed by operating system

Table 4.1 Energy consumption by operating system
Operating System | Mean Watt consumption | Median Watt consumption | Watt Min | Watt Max | Difference between Min and Max
Ubuntu 9.10 | 17.65 | 17.50 | 16.90 | 22.80 | 5.90
Ubuntu 9.10 running Top | 17.56 | 17.60 | 16.90 | 17.80 | 0.90
Ubuntu 11.10 running Top | 17.48 | 17.50 | 17.20 | 17.90 | 0.70
Windows Server 2008 R2 Datacenter Core | 17.57 | 17.40 | 15.80 | 22.00 | 6.20
Windows Server 2008 R2 Datacenter GUI | 18.85 | 18.60 | 15.30 | 24.90 | 9.60
Windows Server 2003 GUI | 18.12 | 17.30 | 16.50 | 24.20 | 7.70

The data collected provide some insight into the relationship between operating system and power consumption, indicating a correlation between increased energy consumption and the presence of a graphical user interface.

Figure 4.4 and Table 4.1 both show that the operating systems that do not run a graphical user interface (GUI) use roughly 17.5 to 17.6 watts, while the two GUI based operating systems tested consumed roughly 18.1 to 18.9 watts. It could be extrapolated that not using a GUI would save 0.6 to 1.3 watts per server.
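To put that per-server difference in perspective, the savings can be extrapolated across a fleet. The sketch below uses the 0.6 to 1.3 watt range observed above; the fleet size, electricity price, and facility overhead factor are assumptions chosen only for illustration.

# Extrapolate the observed 0.6-1.3 W per-server GUI overhead across a fleet.
# Fleet size, overhead factor, and electricity price are illustrative assumptions.

SERVERS = 10_000          # assumed fleet size
OVERHEAD_FACTOR = 1.7     # assumed facility multiplier for cooling and distribution losses
HOURS_PER_YEAR = 8_760
PRICE_PER_KWH = 0.10      # assumed price in dollars per kWh

for watts_per_server in (0.6, 1.3):
    facility_watts = watts_per_server * SERVERS * OVERHEAD_FACTOR
    kwh_per_year = facility_watts * HOURS_PER_YEAR / 1_000
    print(f"{watts_per_server:.1f} W/server -> {kwh_per_year:,.0f} kWh/year "
          f"(~${kwh_per_year * PRICE_PER_KWH:,.0f}/year)")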

4.4 Preliminary data

Preliminary tests were run in a pre-existing, uncontrolled environment. These servers were scheduled for decommissioning; therefore, there should have been no activity beyond the operating system itself on any of the servers. However, because these data center servers were still technically in production, it was not possible to make unscheduled changes to them. The power consumption for these servers is expressed in kilowatts rather than watts because their consumption is so much higher than that of the micro server used for the previously presented data; using watts would not have allowed all the servers to appear in the same graph.

Figure 4.5 Data collected during preliminary testing

Table 4.2 Energy consumption by operating system
OS | Mean kW consumption
Red Hat 5.7 (1) | 27.59
BIOS (no OS) | 28.67
Windows Server 2003 (2) | 23.38
1. Kernel 2.6.18.274.7.1.e15
2. R2 x64 SP2

The Figure 4.5 scatter chart displays the energy consumption of two operating systems and one server sitting in BIOS. Table 4.2 breaks down the mean kilowatts consumed during the one-hour observation period for each configuration. The greatest overall variation in power consumption again appears with a graphical user interface operating system, in this case Windows Server 2003 R2. It is interesting that the Windows Server 2003 R2 operating system is nonetheless the most energy efficient of the three, while the server running no operating system, sitting in BIOS, is the least energy efficient. These data suggest that the age of the server and the legacy operating systems affected the sample data collected.

4.5 Conclusion

This chapter presents the data collected in the research conducted for this dissertation. Data from the preliminary exploratory testing provided some surprising results: the researcher had expected that a GUI based operating system would have the highest energy consumption, yet the Windows Server 2003 server used 4.21 fewer kilowatts than Red Hat 5.7 and 5.29 fewer kilowatts than the same server with no

operating system. The preliminary data suggested that the age of the server and the legacy operating systems greatly impacted the data collected. The custom micro server, in contrast, provided clear baseline energy data. Included in this chapter is a breakdown of the micro server energy consumption for the RAM, the solid-state drive, and the server hardware as a whole before the addition of the operating systems. The observations performed on the micro server showed little difference in the energy consumed by the various non-GUI operating systems; however, there was clearly higher energy consumption when using a graphical user interface based operating system. In the next chapter, the data are analyzed to understand the differences shown here.


CHAPTER 5 DATA ANALYSIS

5.1 Introduction

This chapter reviews and analyzes the data presented in Chapter 4. Section 5.2 Operating system data: GUI vs. Non-GUI analyzes the data from the Windows Server 2008 R2 Datacenter operating system. This operating system was chosen for analysis because it is available in a version, called Core, that does not run the graphical user interface of the standard installation. This provides a direct method of measuring the differences in energy consumption between a GUI and a non-GUI operating system.

5.2 Operating system data: GUI vs. Non-GUI

The data collected confirmed the expected difference between GUI based and non-GUI operating systems. One would assume that there is more energy overhead in a system that is processing and rendering graphics than in one that is not. Table 5.1 and Table 5.2 compare the Windows 2008 R2 Datacenter thread counts and watt consumption for a period of ten minutes. The mean number of threads run by the GUI version of the OS is 365; the non-GUI mean was 256. This difference of approximately 109 threads may account for the overall difference in energy consumption, and it indicates that a reduction of 100 threads can save roughly one watt at the server level.

Table 5.1 OS: Windows 2008 R2 Datacenter (GUI)
Time (Minutes) | Threads (Mean Number) | Watts (Mean)
9:42 | 373 | 18.65
9:43 | 361 | 18.60
9:44 | 362 | 18.65
9:45 | 360 | 18.65
9:46 | 360 | 18.70
9:47 | 368 | 18.70
9:48 | 364 | 18.60
9:49 | 363 | 18.70
9:50 | 362 | 18.65
9:51 | 364 | 18.70
9:52 | 376 | 18.65

Table 5.2 OS: Windows 2008 R2 Datacenter Core (Non-GUI)
Time (Minutes) | Threads (Mean Number) | Watts (Mean)
3:16 | 263 | 17.00
3:17 | 263 | 17.00
3:18 | 260 | 17.05
3:19 | 256 | 17.05
3:20 | 259 | 17.15
3:21 | 255 | 17.25
3:22 | 255 | 17.10
3:23 | 252 | 17.20
3:24 | 250 | 17.20
3:25 | 254 | 17.25
3:26 | 250 | 17.10

The server hardware as configured was found to have a mean energy consumption of 17.42 watts. As shown in Figure 5.1 Windows 2008 R2 versions vs. baseline hardware power consumption, the Core (non-GUI) version of Windows 2008 R2 Datacenter increases the power load by 0.15 watts over the baseline hardware load, while the standard Windows 2008 R2 Datacenter installation increases the power load by 1.43 watts. This may not seem like much, but it represents nearly ten times the load added by the non-GUI version. This is broken down in Table 5.1 and Table 5.2. Barebones refers to the server in Table 3.2 Shuttle XS35v2 Barebones Mini PC minus the solid-state drive and the RAM.
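The relationship between thread count and power can be made concrete with a short calculation using the means reported above. This is a simple worked example, not a rigorous model of per-thread energy cost.

# Worked example: approximate watts per thread from the measured means above.

baseline_watts = 17.42                    # bare hardware with SSD and RAM, no operating system
core_watts, core_threads = 17.57, 256     # Windows 2008 R2 Datacenter Core (non-GUI)
gui_watts, gui_threads = 18.85, 365       # Windows 2008 R2 Datacenter (GUI)

extra_watts = gui_watts - core_watts          # 1.28 W
extra_threads = gui_threads - core_threads    # 109 threads
print(f"GUI overhead: {extra_watts:.2f} W over {extra_threads} extra threads")
print(f"~{extra_watts / extra_threads * 100:.1f} W per 100 threads")
print(f"OS load over bare hardware: Core +{core_watts - baseline_watts:.2f} W, "
      f"GUI +{gui_watts - baseline_watts:.2f} W")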

Figure 5.1 Windows 2008 R2 versions vs. baseline hardware power consumption (Windows GUI: 18.85 watts; Windows Core: 17.57 watts; hardware baseline: 17.42 watts)

5.3 Preliminary data

The data collected from the Dell 1950 servers presented in Section 4.4 provided some insight into energy consumption in legacy systems. These data, collected from the servers configured as listed in Table 3.1 Configuration of Dell PowerEdge 1950 Servers, show more clearly the impact of the hardware and the effect of power management enabled by the operating system. From the data, it is evident that a server running with no operating system is less energy efficient than one that has an operating system to enforce some power management. The lesson is that it is important not to leave legacy systems powered on with no operating system. It is also evident that the power management features in Windows Server 2003 are superior to those packaged with Red Hat 5.7. Older Linux systems often required the

addition of custom packages to implement power management. This dissertation is concerned more with the current state of the art than with the past. As a result of the preliminary exploratory data, it was determined that hardware that could provide a more direct comparison between the operating systems should be used. As previously mentioned, the variable of interest was the impact of the graphical user interface on energy consumption, not the difference in power management features among operating systems. Nonetheless, these data provide important insights. First, just because it seems obvious that a non-GUI interface would be more energy efficient does not mean that the overall operating system will be more efficient, due to intervening variables such as power management. Second, always energy benchmark every configuration before deploying it on a large scale. Lastly, do not leave machines running while they are doing nothing; this is especially important for legacy machines.

5.4 Conclusion

This chapter analyzed the data presented in Chapter 4. Section 5.3 analyzed the preliminary data, which did not provide clear results on the variable of interest; therefore, the experiment was later repeated with updated hardware and operating systems. The preliminary data were nonetheless useful to the overall topic of data center energy efficiency, providing insights on legacy hardware and software. Legacy systems require careful management to avoid excess energy consumption and are by nature less efficient; these systems should be replaced with newer hardware if possible. The watt readings from the micro server tests combined with the Typeperf data provided insight into why the GUI operating system consumes more energy. The GUI version of Windows Server 2008 R2 Datacenter runs over 100 threads more than the

non-GUI Core version of Windows Server 2008 R2 Datacenter. This 100-thread overhead represents more than one watt of additional energy consumption at the server level; allowing for the cascade effect, this would be approximately three watts per server. The following chapter presents case studies of innovative data centers. It reviews the factors of data center efficiency and attempts to discover the commonalities that contribute to their energy efficiency. Each subject was chosen for energy efficiency leadership as expressed via power usage effectiveness (PUE).


CHAPTER 6 CASE STUDIES: INNOVATIVE DATA CENTERS

6.1 Introduction

This chapter presents case studies of innovative data centers. The subjects discussed in this dissertation are Other World Computing, Google, Microsoft, and Facebook. Each subject is reviewed based on the factors that contribute to energy efficiency as discussed in the literature review. This chapter also presents discussion and findings based on triangulation performed on the case studies.

6.2 Other World Computing

Other World Computing (OWC) is an Illinois based hardware and Internet service provider founded in 1988. In 2006, OWC announced plans to build a new LEED Platinum certified campus during the company's Christmas party. As of 2006, OWC had 78 employees and expected revenue of $40 million. (Other World Computing, n.d.) Other World Computing's founder was 14 when he established the company in his hometown, so site selection in another part of the country was not a consideration for this project. However, founder Larry O'Connor is committed to sustainability and to preserving the countryside of his community. (Chadha, 2011)

6.2.1 Building

The building project was designed to include “the company’s headquarters and a data center supporting its web hosting and ISP services.” (Miller, 2009, p. ¶ 1)

"Conservation is really at the core of our business," according to O'Connor. (Foresman, 2011, p. ¶ 4) OWC has long considered itself in the business of extending the life of computers and thereby keeping them out of the landfill. This commitment to sustainability resulted in the decision to build to LEED Platinum standards rather than to industry standard specifications. Building to LEED Platinum specifications meant that the bank would not finance the more expensive project, and as a result the project was primarily self-funded. The payback period is also longer, 10-14 years rather than the industry standard 5-6 years. (OWC, n.d.) There are fewer than 400 LEED Platinum certified buildings in the United States. The building features low volatile organic compound materials such as carpeting and adhesives. It also includes pervasive skylights and fiber-optic solar lighting, and all lights shut off when sensors detect adequate natural lighting. The geothermal HVAC system "includes highly efficient HEPA filters as well as UV filtering that eliminate nearly all dust, pollen, bacteria, and other pathogens from the air." (Foresman, 2011, p. ¶ 11) Other features include reverse-osmosis filtration for all inbound water, a parking "lot paved with permeable interlocking paving bricks," and "landscaping featuring all-native plants." (Foresman, 2011, p. ¶ 12) OWC recycles all recyclable material, uses all natural and biodegradable cleaners, and saves "approximately 80,000 gallons of water per year" through the use of waterless urinals. (Foresman, 2011, p. ¶ 13) "OWC received LEED Platinum certification in March 2010." (Foresman, 2011, p. ¶ 16) The entire mixed-use facility is zero emission.

6.2.2 Power

“OWC's wind turbine came online in 2009. With the flick of a switch, OWC became the first technology manufacturer and distributor in the US to be fully powered

by on-site wind power." (Foresman, 2011, p. ¶ 14) OWC advertises four levels of power redundancy: in addition to the wind turbine, the company is tied into the municipal power grid and has natural gas powered generators and UPS battery backup. (Other World Computing, n.d.) For the 37,000 square foot LEED Platinum certified data center, OWC installed a 500 kilowatt Vestas V39 wind turbine that produces an estimated 1,200,000 kWh of electricity per year. The turbine is located on the west side of the building on a 131 foot tower and operates in wind speeds from 9 to 150 miles per hour. "The turbine can rotate a full 360 degrees to catch maximum wind flow." (Foresman, 2011, p. ¶ 15) OWC sells about half of the energy generated by the wind turbine to the local utility as surplus energy (Other World Computing, 2012). "According to OWC's estimates, the excess energy produced by the turbine is enough to power a small suburban neighborhood." (Foresman, 2011, p. ¶ 16) The data center and wind turbine were designed to work together, and the 100% wind powered facility earned a U.S. Environmental Protection Agency (EPA) ENERGY STAR® rating. (PR Newswire, 2013) OWC serves as a model for a grid independent data center.

6.2.3 Hardware

True to its commitment to extending the life of hardware, OWC uses vintage hardware in its operations; a "PowerMac 6100 runs component tests." (Foresman, 2011, p. ¶ 12) There is no other published information regarding the server hardware that comprises this data center. Commodity hardware is the industry standard, and based on the company's commitment to extending the usable life of hardware, OWC could be using commodity hardware in the data center as well, upgrading or replacing parts as necessary.

6.3 Google data centers

Google data centers vary in building design, but Google pioneered the containerized data center, placing containers in use as early as 2005 (Shankland, 2009). These containers house 1,160 servers each and provide up to 250 kilowatts of power (Shankland, 2009). Google has 43 LEED certified projects. (Google)

6.3.1 Building

This case study focuses on the Lenoir, North Carolina data center. Lenoir has been the home of manufacturing facilities that are now in decline; as a result, the power and space required for data centers were readily available. This 1.2 billion dollar facility was located in Caldwell County because it had the "right combination of energy infrastructure, developable land, and available workforce for the data center." (Google, n.d., p. ¶ 3) Google's goal is to use as much renewable energy as possible. However, Google has found that "the places with the best clean power potential are not typically the same places where a data center can best serve its users." (Google, n.d., p. ¶ 4) Therefore, Google locates "data centers where they can be most efficient, and" often buys "renewable energy from where it's most efficiently produced." (Google, n.d., p. ¶ 4) The Lenoir data center is decorated in a NASCAR theme, and Google provides a virtual tour of the facility. In 2013, Google announced plans to expand the Lenoir data center. This expansion will take advantage of "a new Duke Energy program aimed at getting major power consumers to use renewable energy." (Frazier, Google announces $600M Lenoir data center expansion, 2013, p. ¶ 2) All of Google's "US and European data centers have received voluntary ISO 14001 and OHSAS 18001 certifications." (Google, n.d., p. ¶ 1) Google's Lenoir data

center is one of several Google data centers with ISO 50001 energy management certification. (Kava, 2013) ISO 50001 is a framework of requirements for "organizations in all sectors to use energy more efficiently." (ISO, n.d., p. ¶ 1)

6.3.2 Power distribution

Google also pioneered the elimination of the centralized UPS from the data center. The design attaches a 12-volt battery to the power supply of every server. The Google engineered distributed UPS increased UPS efficiency from the 92 to 95 percent efficiency of the traditional centralized solution to 99.9 percent (Shankland, 2009). This also increased the efficiency of the power distribution system through the reduction of "2 of the AC / DC conversion stages" (Google, n.d., p. ¶ 3). Increased efficiency in the power distribution system reduces the waste heat produced, which in turn results in cooling savings.
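The impact of that efficiency gain can be illustrated with simple arithmetic; the IT load below is an assumed figure, while the efficiency values come from the comparison above.

# Energy lost in UPS conversion at the cited efficiencies, for an assumed 1 MW IT load.

IT_LOAD_KW = 1_000        # assumed IT load
HOURS_PER_YEAR = 8_760

for label, efficiency in [("centralized UPS at 92%", 0.92),
                          ("centralized UPS at 95%", 0.95),
                          ("distributed 12 V battery at 99.9%", 0.999)]:
    input_kw = IT_LOAD_KW / efficiency
    lost_kwh = (input_kw - IT_LOAD_KW) * HOURS_PER_YEAR
    print(f"{label}: ~{lost_kwh:,.0f} kWh lost per year")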

6.3.3 Servers

"The computing platform of interest no longer resembles a pizza box or a refrigerator but a warehouse full of computers … We must treat the data center itself as one massive warehouse-scale computer." (Barroso & Hölzle, 2009, p. vi) There are about 49,923 servers operating at the Lenoir data center at a given time. (Levy, 2012) Google's servers are fully customized; not only was Google at the forefront of power supply efficiency, but it also introduced the notion of elegance into scale computing. Rather than opting for thousands of off the shelf servers designed to do a variety of tasks, Google designed its own servers, omitting parts that were not needed (Google, n.d.). Omitted parts cannot draw power and produce waste heat, which further reduces the cooling load, and there are direct savings from not purchasing unnecessary components.


Figure 6.1 Google Custom Server. Credit: Stephen Shankland/CNET (Google uncloaks once-secret server)

The server pictured above is 3.5 inches thick and contains "two processors, two hard drives, and eight memory slots mounted on a motherboard built by Gigabyte." (Shankland, 2009, p. ¶ 9) Google power supplies provide "only 12-volt power, with the necessary conversions taking place on the motherboard." (Shankland, 2009, p. ¶ 3) The server also features a UPS battery that is 99.9 percent efficient.

6.4 Microsoft Dublin data center

The Dublin Microsoft data center has a PUE of 1.25 (Microsoft, n.d.), while other Microsoft data centers have PUEs of 1.6. The Dublin data center therefore represents a significant step forward in efficiency both for the industry and for Microsoft. This is Microsoft's "largest data center outside the United States" (Microsoft, n.d., p. 1) and one of the five largest data centers in the world. (Dallas South News) The data center houses web conferencing tools. The Dublin data center "will only employ

artificial cooling on a few days per year (Microsoft, n.d., p. 2)." The remainder of the year, outside air is used for cooling. According to Microsoft, 38 percent of the average data center's energy consumption is used for artificial cooling (Microsoft, n.d.).

6.4.1 Building

Microsoft builds its data centers according to the LEED Green Building Rating System. (Microsoft) Microsoft's Dublin facility is designed to support a modularized data center structure comprising prebuilt units housed within pods, as seen in Figure 6.2. The prebuilt units allow for maximum flexibility and scalability while offering an inexpensive and efficient solution. Each pod can be independently customized for the task it is to serve, and pod units can be installed and upgraded without negatively impacting the rest of the data center. The Dublin data center was awarded three industry recognition awards for innovation and sustainability: Best European Enterprise Data Centre Facility by Data Centres Europe 2010, the Data Centre Leaders Award for Innovation, and the European Commission's Code of Conduct for Data Centre Sustainability Best Practice.

Figure 6.2 Microsoft Dublin data center (Anthony, 2013)

6.4.1.1 Location and Cooling

Dublin was chosen as the site for this data center largely because of the capacity to employ "free cooling." Dublin's cool air temperatures enable the data center to be operated without traditional mechanical cooling, such as CRAC units, more than 95 percent of the time. The cooling used at this data center consists of air handling units (AHU) mounted on the roof. (Data Center Knowledge) This represents a potential power savings of about 38 percent (Microsoft, n.d.). The data center is operated at 75 degrees Fahrenheit. (Bhandarkar, 2012) Hot aisle containment systems are used to separate the cool air from the hot exhaust air. (Data Center Knowledge)

Figure 6.3 Free Cooling Diagram (Data Center Knowledge)

6.4.1.2 Power and servers

The Dublin data center's power distribution was also built consistent with modularity concepts, and it has the capacity to provide up to 22.2 megawatts (Microsoft, n.d.). Microsoft uses traditional redundant diesel generators to avoid downtime in the case of power loss; primary power is provided by local utilities in Dublin. (Data Center Knowledge) Steve Ballmer announced that Microsoft has "something over a million servers in our datacenter infrastructure." (Ballmer, 2013, p. ¶ 26) He went on to say, "Google is bigger than we are. Amazon is a little bit smaller." (Ballmer, 2013, p. ¶ 26) Sebastian Anthony of Extreme Tech estimates there to be as many as 100,000 servers in a large data center such as the Dublin facility pictured above. Microsoft has not publicized the number of servers being used, but it did contribute a Cloud Server Specification based on commodity servers. (Vaid, 2014) It is therefore likely that commodity servers are being used in the Dublin data center.

6.4.1.3 Software

Microsoft pioneered power saving features in its operating systems. This was quite apparent in the preliminary study using the Dell PowerEdge 1950s conducted for this dissertation: the server that used the least energy was the Windows Server 2003 server. At that time, power saving features were new and not part of a basic Linux setup. Therefore, despite having the overhead of the GUI, the Windows server used less power due to power optimization. Windows Server 2008 increased these savings by 10%. (Microsoft) The use of Windows Server power optimization in combination with Microsoft's virtualization software, Hyper-V, results in major energy savings.

6.5 Facebook Prineville data center

Facebook invested two years in designing the Prineville, Oregon data center. (Higginbotham, 2011) Facebook combined and improved upon many efficient designs and principles pioneered by others, and the effort paid off, resulting in a state of the art data center with a PUE of 1.07 (Higginbotham, 2011). Everything was redesigned, from the building to the servers and all the components housed in the data center. Facebook open sourced its hardware designs on April 7th, 2011 and has since put much time and effort into the cultivation of an open source hardware community (Higginbotham, 2011).

6.5.1 Building

Facebook's data center was designed and located to maximize "free cooling." Facebook operates the facility at 85 degrees Fahrenheit, and the facility is designed to use the warm air generated by the servers to heat offices during cool seasons (Higginbotham, 2011).

6.5.1.1 Power distribution

The power distribution system runs at 277 volts rather than the standard 208. This allows for one less stage of conversion and therefore reduced energy loss (Higginbotham, 2011). The typical data center will lose 22 to 25 percent of its power through conversion, while the Oregon facility loses seven percent of the power entering the building to conversions (Higginbotham, 2011).

The facility also features a more efficient variation of the Google pioneered distributed UPS, and the servers accept AC and DC.

6.5.2 Data center equipment

6.5.2.1 Servers

Facebook's original Open Compute design included two servers that omit unnecessary parts and replace others with more efficient options. The chassis is 1.5U, allowing space for "larger, and more efficient heat sinks" (Higginbotham, 2011, p. ¶ 4). The web server design uses an Intel chipset. The chassis is designed for maximum airflow as well as quick repairs and upgrades; the server is designed for snap-together assembly, and it is reported that one can be built in as little as three minutes (Higginbotham, 2011). The AMD version, designed for I/O, supports up to 192GB of memory, ideal for supporting memcached applications (Higginbotham, 2011). These servers consume 38% less power than previous Facebook servers, a major step forward in sustainable web scale computing.

6.5.2.2 Software

Facebook has discovered that the language used to develop and execute code has a significant impact. Facebook's code was originally written in PHP, largely because this open source software was free and fit its founder's student budget (Facebook Opens Up Its Hardware Secrets, 2011). The Linux, Apache, MySQL, PHP (LAMP) server platform provides a cost effective, intuitive development environment. Unfortunately, while the PHP language is easy to learn and provides a medium for rapid development, it does not execute as fast as other languages. Slow execution translates into higher energy

consumption and, more to be feared, frustrated users. In fact, Google found "that a significant number of users got bored and moved on if they had to wait 0.9 seconds" (Stross, p. 55). Facebook found that "PHP is about 39 times slower than C++ code." (De Gelas, 11, p. ¶ 7) This resulted in the development of a tool named "HipHop" to translate PHP source code into C++ (De Gelas, Facebook's "Open Compute" Server tested, 11). Facebook also makes extensive use of memcached, a distributed RAM caching system that reduces the database load (De Gelas, Facebook's "Open Compute" Server tested, 11). Database calls tend to be slow for multiple reasons, including large database size. Facebook overhauled memcached in order to increase its fitness for use. According to AnandTech, Facebook became the "world's largest user of memcached and improved memcached vastly. They ported it to 64-bit, lowered TCP memory usage, distributed network processing over multiple cores" (De Gelas, Facebook's "Open Compute" Server tested, 11, p. ¶ 9) Haystack was designed by Facebook to manage the massive number of photos stored and shared by its customers. As of April 2009, Facebook hosted "over 15 billion photos," making "Facebook the biggest photo sharing website." (Vajgel, 2009, p. ¶ 1) Haystack is a custom storage architecture that was three times faster than the 30 million dollar NetApp storage Facebook purchased to manage its ever-increasing requirement for photo storage (Schonfeld, 2009). This architecture removed unnecessary metadata to speed access and presentation of photos, and the Haystack design also allowed Facebook to remove the requirement for expensive proprietary hardware (Schonfeld, 2009).
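The basic idea behind a RAM cache in front of the database can be sketched in a few lines. The example below is a generic, in-process illustration of the cache-aside pattern, not Facebook's memcached deployment or its modifications; in practice the cache would be a shared service such as memcached rather than a per-process dictionary.

# Generic cache-aside sketch: serve repeated reads from RAM instead of the database.

import time

cache = {}

def slow_database_lookup(key):
    time.sleep(0.05)              # stand-in for a slow, energy-hungry database call
    return f"value-for-{key}"

def get(key):
    if key in cache:              # cache hit: no database work at all
        return cache[key]
    value = slow_database_lookup(key)
    cache[key] = value            # populate the cache for subsequent requests
    return value

start = time.time()
for _ in range(100):
    get("profile:42")             # only the first call touches the database
print(f"100 reads served in {time.time() - start:.2f} s with {len(cache)} database call(s)")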

Facebook's investments in speeding up its code and reducing database calls have undoubtedly contributed to its success as a social media provider. It is also likely that these improvements contributed to its overall data center efficiency. It has been shown that slow processors, while more efficient than their faster contemporaries, do not always save energy because of the extended processing time required to finish jobs; it is reasonable to extrapolate that faster code execution provides energy efficiency gains as well.

6.6 Discussion/findings

Google introduced the idea of elegance into computing. Gold plating has been the industry standard; efficiency requires elegance instead: determine what is required and strip all else away. In computing, added components or processes result in wasted energy, wasted energy results in heat, and this heat often doubles the cost of operating data centers. Facebook has built upon the foundation laid by Google and achieved a PUE of 1.07, as seen in Figure 6.4 (Higginbotham, 2011).

Figure 6.4 Power Usage Effectiveness Comparisons: PUE (Microsoft 1.25; Google 1.11; Facebook 1.07)

Facebook has open sourced its designs and specifications in hopes of bringing together a community to work on energy efficiency. The tools are available to improve data center efficiency; the challenge now is to continue the improvement and make energy efficiency attainable by all data centers. Table 6.1 Aspects of Energy Efficiency Matrix displays the key areas as a matrix used to triangulate the commonalities between the data centers examined in this dissertation. Reviewing the data in Table 6.1, it appears that the most efficient data centers were built to suit: the form of the data centers and the fit of the components were designed around the function of the data center. The end result is a highly energy efficient data center that serves the needs of these high profile companies by design.

Table 6.1 Aspects of Energy Efficiency Matrix
Data Center | Building Certification | Building Type | Primary Energy Source | UPS | Servers | Cooling | Power Distribution | Data Center Temperature
Google, St. Ghislain, Belgium, PUE 1.11 | unknown | custom | unknown | distributed: 12v on power supply | custom | free cooling: water-side economization (Miller R., 2011) | unknown | 80F (Google, n.d.)
Microsoft, Dublin, Ireland, PUE 1.25 (Microsoft, n.d.) | unknown | pod design ITPAC (Clark, 12) | unknown | unknown | unknown | free cooling: air-side economization (Miller R., 2011) | unknown | Up to 95F (Miller C., 11)
Facebook, Prineville, Oregon, PUE 1.07 (Higginbotham, 2011) | Gold LEED (Facebook, 2011) | custom | Grid: coal and nuclear (Rogoway, 11) | distributed: rack | custom | free cooling | 480v (Higginbotham, 2011) | 85F (Rogoway, 11)
OWC | Platinum LEED | custom | wind turbine (OWC, n.d.) | unknown | unknown | geothermal | unknown | unknown

6.7 Conclusion

From the information collected for this dissertation, the most energy efficient data centers appear to be purpose designed and built. Those included in this dissertation had primary business functions that are computing based, and it is very likely that this focus provided the impetus to create energy efficient data centers. The energy savings would be very transparent as good investments for these companies: less money spent on energy equates to competitive advantage. These data centers were not the work of one person or even a single team of information technologists. Much of the success of these companies is the result of the fact that the data centers are the lifeblood of the organization, and resources were allocated accordingly. The servers, buildings, HVAC, electrical, and many other systems were designed to suit each data center's unique needs. These achievements involved electrical engineers, architects, and computer hardware engineers, to name a few. Most importantly, these people worked closely with business and IT units to ensure that all needs were met while avoiding waste. The lowest PUEs have been achieved by the industry elites that served as case study subjects. While several of the companies are "Titans" of modern web scale computing, it is worth mentioning that the only net-zero data center belonged to a much smaller provider of computing services. Not every data center has the luxury of being the focus of the organization's business; many function in retrofitted space that was adapted to serve as a data center, and most may never have purpose-built space designed and allocated. However, even retrofitted space can benefit from interdisciplinary

collaboration to plan the way forward to a more sustainable data center and therefore a more cost effective organization. The following chapter provides recommendations for increasing data center efficiency. This chapter has shown the importance of interdisciplinary collaboration; the section on educating for data center efficiency presented in Chapter 7 will provide possibilities for educating future information technology professionals trained to work in these teams. The grid independent data center framework will be presented as well, incorporating many of the energy saving concepts revealed in this dissertation.


CHAPTER 7 THE WAY FORWARD

7.1 Introduction

This chapter presents recommendations for increasing future data center efficiency. These recommendations include educating for data center efficiency and the grid independent data center framework. Educating for data center efficiency focuses on education for future data center efficiency stakeholders. The grid independent data center framework is based on the findings of this dissertation's research and contains a number of recommendations for employing those findings to create a grid independent data center.

7.2 Educating for data center efficiency

One of the major barriers to data center efficiency is that the culture of IT does not focus on ongoing management. Education is geared toward creation, such as writing code for an application or discovering faster algorithms. These are important and sometimes exciting aspects of IT. However, the operations and maintenance portion comprises approximately 80% of the life cycle, so institutions that fail to provide future technologists with the tools to perform operations and maintenance are failing to cover approximately 80% of information technology work. Most in IT do not aspire to work in infrastructure and tend to have only a vague idea of what server administrators do. Very few server administrators were formally

educated for the job, yet they make the decisions that determine the stability and sustainability of computing resources. They tend to be overly conservative in making hardware recommendations because they often bear the brunt of the unhappy customer and will be paged in the night to bring the system back online if there is an outage. Therefore, the hardware is often grossly oversized. However, if administrators were educated in the basic principles of data center efficiency, perhaps this tendency to oversize hardware could be overcome, thereby increasing data center efficiency through education and awareness. Those in infrastructure need to know the basics of data center design and energy efficiency. Unfortunately, there is little possibility of knowing the job descriptions of future IT graduates. Fortunately, even those who do not work in infrastructure will benefit from knowing the basics of data center design and efficiency; those who go on to become software programmers may think more carefully about the efficiency of their code once they have seen the cumulative effects. Therefore, an overview of this information should be presented as a core part of any computer information technology program. Coursework should build upon the principle that technologists need to know the basic components of a data center, how to calculate PUE, and the major factors impacting data center energy efficiency. It is also important to understand the cumulative effect of energy consumption at the server level. This information should be included in an overview course. Data center energy efficiency is such a critical portion of the information technology field that I would propose that, in addition to an overview course, at least two additional courses be offered: a hands-on, more in-depth data center efficiency lab and a course on data center management.
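As a concrete example of the kind of exercise such coursework could include, PUE is simply total facility energy divided by IT equipment energy. The monthly figures below are invented for illustration.

# PUE worked example with invented monthly energy figures (kWh).

it_equipment_kwh = 300_000            # servers, storage, network
cooling_kwh = 90_000
power_distribution_losses_kwh = 25_000
lighting_and_other_kwh = 5_000

total_facility_kwh = (it_equipment_kwh + cooling_kwh +
                      power_distribution_losses_kwh + lighting_and_other_kwh)
pue = total_facility_kwh / it_equipment_kwh
print(f"PUE = {total_facility_kwh:,} / {it_equipment_kwh:,} = {pue:.2f}")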

7.3 Introduction to grid independent inspiration

The space shuttles used computers less powerful than a modern calculator (Magee, 1999), and most calculators today are powered by solar cells that work even indoors. Yet the computing industry has not succeeded in creating comparably energy efficient servers. Perhaps server designers need to take a few steps back and look at server design from a different perspective. It cannot be impossible to design servers that could be solar powered; it would certainly take some reengineering, and perhaps solar computers would be very specialized, but working from such a perspective could result in major advances in efficiency. Another perplexing thought is that laptops can now run from batteries for more than five hours and weigh less than three pounds. In contrast, the standard central UPS requires specialized wiring, fills a large room, and only provides enough power for a proper shutdown. UPSs are arguably more complex than laptop batteries, providing power conditioning as well as emergency power, and today's UPSs are more energy efficient than their predecessors. However, UPS design seems to have stagnated and is possibly anachronistic. Perhaps UPS designers have been working from overly limited design constraints, crippling the possibility of creating a more useful UPS. Might computing be in a better situation if those who work on laptop power and battery management, as well as those who have done the same for powerful solar calculators, were put together to work on server power management? Certainly, if one took a data center and filled it with MacBook Airs, the machine cost would be very high. However, if the non-essential components were removed, the cost could be reduced and the resulting servers would work wonderfully with intermittent

power sources such as solar or wind. Is this possible? The work done in the Blink experiment suggests that it is very possible, at least for web servers (Sharma, Barker, Irwin, & Shenoy). This method may not be feasible for all server applications, but there is a large market just for web servers. If more server designs began from such a standpoint and were then adapted to fit the specific server function required, the importance of such a design would become evident. Adapting solar calculator design to a server format may be a long shot, but adapting the ultra-light laptop design is not.

7.4 Net-Zero data center framework

The primary concept behind the grid independent framework is one of leanness: if the server uses no more energy than is absolutely required, and the operating system and all application code are optimized for speedy and energy efficient execution, how much can waste heat be reduced? In addition, removing components that are not strictly required for computation would save energy and reduce waste heat. Could the requirement for complex cooling, and possibly for many fans, be eliminated? This design assumes that these things are possible.

7.4.1 Location

Location is a crucial factor for Net-Zero data center design. The lowest PUEs reported were achieved by taking advantage of free cooling. Free cooling can be used at any location, as ambient temperatures will be lower than the data center exhaust temperatures; however, the goal of Net-Zero energy consumption is far more attainable in climates that provide the maximum number of hours of free cooling. The other important factor is the availability of renewable energy sources. In this case, wind energy was chosen as the

primary energy source, because wind is plentiful and does not have the side effect of heat that would accompany the high availability of solar power. Figure 7.1 depicts two maps overlaid: one, from the Green Grid, displays hours of free cooling; the other, from the U.S. Department of Energy, displays the current installed wind power capacity in megawatts (MW). The colors

Numbers on the states represent installed wind power. Figure 7.1 U.S. Free cooling and installed wind power map represent the number of hours where the dry bulb temperature is less than 82 degrees Fahrenheit and the dew point is less than 60 degrees Fahrenheit. Washington and Oregon appear to be ideal choices based on the intersection of these maps. There are other factors to consider as well such as availability of the electricity. For example, Indiana is capable of producing four times the energy required by the state and therefore could be a good choice based on possible excess energy availability

72 (American Wind Energy Association, 2011). There would be the separate geographical locations that would house the proposed grid independent data centers as depicted in Figure 7.2 Proposed active/active cluster configuration. 7.4.2 Building The Net-Zero data center would be collocated with wind turbines in order to reduced energy loss through transmission lines allowing the data center to have the
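As a rough illustration of the free cooling criterion above, the sketch below counts the hours in a year of hourly weather observations that fall inside the 82 degree dry bulb / 60 degree dew point envelope. The file name and column names are placeholders for whatever hourly weather data (for example, a TMY-style file) is available for a candidate site; this is a minimal sketch, not part of the dissertation's experiments.

```python
import csv

# Thresholds from the Green Grid free-cooling map discussed above:
# dry bulb below 82 F and dew point below 60 F.
DRY_BULB_MAX_F = 82.0
DEW_POINT_MAX_F = 60.0

def free_cooling_hours(rows):
    """Count hourly records that fall inside the free-cooling envelope.

    `rows` is an iterable of dicts with 'dry_bulb_f' and 'dew_point_f' keys,
    one record per hour of the year.
    """
    hours = 0
    for row in rows:
        dry_bulb = float(row["dry_bulb_f"])
        dew_point = float(row["dew_point_f"])
        if dry_bulb < DRY_BULB_MAX_F and dew_point < DEW_POINT_MAX_F:
            hours += 1
    return hours

if __name__ == "__main__":
    # 'site_hourly_weather.csv' and its column names are assumed placeholders,
    # not a standard weather-file format.
    with open("site_hourly_weather.csv", newline="") as f:
        total = free_cooling_hours(csv.DictReader(f))
    print(f"Free-cooling hours per year: {total} of 8760")
```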

7.4.2 Building

The Net-Zero data center would be collocated with wind turbines in order to reduce energy loss through transmission lines, giving the data center the maximum benefit of the power generated by the wind turbine.

Figure 7.2 Proposed active/active cluster configuration

The data center would be housed in a shipping container. One reason is that a shipping container based data center takes less time to deploy than a custom-built data center. Mobility is an additional benefit: the data center could be moved if the powering turbine were damaged, reducing downtime. Shipping containers also allow roughly ten times the average server density (Microsoft, n.d.). Airside economizers are commercially available for shipping containers, and containerized data centers have been used successfully in industry. In fact, MareNostrum, with a current PUE of 1.3, would have an estimated PUE of 1.113 if the data center were containerized (Torres, 2011). Jordi Torres of the Barcelona Supercomputing Centre explains that the increase in energy efficiency is due to "more precise control of airflow within the container" (Torres, 2011, p. 26). Shipping containers also have the advantage of being very economical and readily available.

7.4.3 UPS

No centralized UPS would be used in the Net-Zero design. More research and development would be required to produce the UPS alternative: rather than a UPS, a custom rechargeable power storage array would be employed. This is the most radical part of the design. The design used in the "Blink" experiment could serve as a starting point for refinement. The "Blink" FAWN ten-node web cluster was configured by connecting:

(R)enewable energy sources in our deployment to a battery array that includes two rechargeable deep-cycle ResourcePower Marine batteries with an aggregate capacity of 1320 watt-hours at 12V, which is capable of powering our entire cluster continuously for over 14 hours. (Sharma, Barker, Irwin, & Shenoy, p. 4)

7.4.4 PDU

The Net-Zero data center design would not include a PDU. The design would follow the example of data centers outside of the United States, which do not require a PDU because they run at higher voltage (Pratt, Kumar, & Aldridge). This would be achieved by designing the data center's electrical system to work directly with the wind turbine, reducing as many conversion steps as possible and eliminating the requirement for a PDU.
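The battery array sizing quoted in section 7.4.3 can be sanity-checked with simple capacity arithmetic. The sketch below estimates runtime from nameplate watt-hours and an average load; the usable-capacity derating and the example loads are assumptions for illustration, not figures from the Blink deployment.

```python
def runtime_hours(capacity_wh, load_w, usable_fraction=0.8):
    """Estimate battery runtime for a steady load.

    capacity_wh     nameplate capacity in watt-hours
    load_w          average cluster draw in watts
    usable_fraction derating for depth-of-discharge and conversion losses
                    (0.8 is an assumed value, not from the cited experiment)
    """
    return capacity_wh * usable_fraction / load_w

# The Blink deployment quoted above: 1320 Wh feeding a low-power web cluster.
# A 14-hour runtime implies an average draw somewhere near 1320/14 = 94 W;
# the loads below are illustrative, not measured values.
for load in (75, 94, 150):
    print(f"{load:>4} W load -> {runtime_hours(1320, load):.1f} h estimated runtime")
```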

7.4.5 Power Supply

The power supply is an important component in eliminating the need for a PDU. Using an energy efficient power supply designed to accept 230Vac would eliminate the need for a PDU, assuming "building entrance voltage is 400Vac" (Pratt, Kumar, & Aldridge, p. 33). The power supply would then step the voltage down to 12V for distribution to most of the server components (Pratt, Kumar, & Aldridge). High efficiency voltage regulators would be used for the remaining conversions to avoid the energy waste caused by low quality voltage regulators.

7.4.6 Server

The Net-Zero server is designed for transactional jobs; in this case, the servers would be designed as web servers. The servers would follow the lead of the Google and Facebook designs and include only necessary components; the final servers would be tested to confirm that only absolutely necessary components are included. Components that would not be included are video cards, CD-ROM drives, and hard disk drives. Several designs would be configured and tested for inclusion in the Net-Zero data center. Hardware configuration and testing was outside the scope of this dissertation due to time, cost, and resource limitations, and is recommended for further studies aimed at bringing a physical design to fruition.
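To make the value of removing conversion stages concrete, the sketch below multiplies per-stage efficiencies along the distribution chain described in section 7.4.5. All of the stage efficiencies are assumed, illustrative values rather than measurements from the cited work; the point is only that eliminating a stage raises the end-to-end fraction of power that reaches the components.

```python
def delivered_fraction(stage_efficiencies):
    """Fraction of input power that survives a chain of conversion stages."""
    fraction = 1.0
    for eff in stage_efficiencies:
        fraction *= eff
    return fraction

# Illustrative (assumed) efficiencies: PDU transformer, power supply, and
# point-of-load voltage regulators. Swap in measured values where available.
chain_with_pdu = [0.96, 0.92, 0.87]   # extra PDU conversion step
chain_direct   = [0.94, 0.87]         # 230Vac power supply fed directly

for name, chain in (("with PDU", chain_with_pdu), ("PDU removed", chain_direct)):
    frac = delivered_fraction(chain)
    print(f"{name:12s}: {frac:.1%} of facility power reaches the components")
```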

7.4.7 Storage

Improving on standard hard disk drive storage appears to be an area that could provide some energy efficiency gains. Solid-state drives (SSD) provide a linear energy-to-usage relationship; unfortunately, there are drawbacks to this type of storage. The first is expense, and the second is that SSDs have limited write endurance compared to hard disk drives. These are substantial drawbacks in enterprise or web scale applications. Memcached is a fast option, but it has the drawback of using RAM, which is volatile and relatively expensive; however, for often-accessed information the performance boost could be worth the trade off. For the Net-Zero data center, a combination of methods would be employed to reduce storage inefficiency. The proposed solution, shown in Figure 7.3 Logical request-storage access flow, would use monitoring and policy to implement a combined solution employing memcached, SSD, and HDD to gain both energy efficiency and performance. Requests would be catalogued to maintain a list of the most accessed or "popular" queries. Those would be maintained in memcached whenever possible. Other "non-popular" query requests would be routed to SSD storage. Lastly, write requests would be routed to the HDD array.

Figure 7.3 Logical request-storage access flow
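A minimal sketch of the routing policy in Figure 7.3 is shown below. The three tiers are stand-ins implemented as plain dictionaries rather than real memcached, SSD, or HDD back ends, and the popularity threshold is an assumed value; only the catalogue-and-route logic of the proposed solution is illustrated.

```python
from collections import Counter

class TieredStore:
    """Toy model of the Figure 7.3 routing policy.

    The three "tiers" are plain dictionaries standing in for a memcached
    pool, an SSD array, and an HDD array; only the routing logic is real.
    """

    def __init__(self, popularity_threshold=3):
        self.popularity = Counter()             # request catalogue
        self.threshold = popularity_threshold   # assumed cut-off for "popular"
        self.cache, self.ssd, self.hdd = {}, {}, {}

    def write(self, key, value):
        # All writes land on the HDD array.
        self.hdd[key] = value

    def read(self, key):
        self.popularity[key] += 1
        if key in self.cache:                   # popular query, already cached
            return self.cache[key]
        value = self.ssd.get(key, self.hdd.get(key))
        if value is not None and self.popularity[key] >= self.threshold:
            self.cache[key] = value             # promote to memcached tier
        elif value is not None and key not in self.ssd:
            self.ssd[key] = value               # stage non-popular reads on SSD
        return value

store = TieredStore()
store.write("front_page", "<html>...</html>")
for _ in range(4):
    store.read("front_page")                    # becomes "popular", ends up cached
print("cached keys:", list(store.cache))
```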

7.4.8 Software and Policy

Policy would be established and software control mechanisms would implement it automatically. Batch jobs would be scheduled based on the availability of renewable energy sources. As previously mentioned, web services and data collection tend to need real-time processing; however, there are always batch jobs that need to be run, such as antivirus checks and many of the statistics used for business intelligence (BI) reports. These jobs, along with data deduplication, would be run under a policy that determines the most suitable time to run each application based on the availability of renewable power and on resource consumption.
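The scheduling policy described above could take a form like the following sketch, which places deferrable batch jobs into the forecast hours with the largest renewable surplus and defers anything that does not fit. The forecast figures, job power estimates, and headroom margin are all assumptions for illustration.

```python
def schedule_batch_jobs(jobs, wind_forecast_w, base_load_w, headroom_w=500):
    """Assign deferrable jobs to the hours with the largest renewable surplus.

    jobs            list of (name, estimated_draw_w) tuples
    wind_forecast_w dict of hour -> forecast wind output in watts
    base_load_w     dict of hour -> forecast real-time (web) load in watts
    headroom_w      assumed safety margin kept in reserve
    """
    surplus = {h: wind_forecast_w[h] - base_load_w[h] - headroom_w
               for h in wind_forecast_w}
    schedule = {}
    # Greedily place the largest jobs first, each in the hour with the most
    # remaining surplus; jobs that fit nowhere are deferred (None).
    for name, draw in sorted(jobs, key=lambda j: -j[1]):
        hour = max(surplus, key=surplus.get)
        if surplus[hour] < draw:
            schedule[name] = None
            continue
        schedule[name] = hour
        surplus[hour] -= draw
    return schedule

# Illustrative forecasts (all numbers are assumptions, not measured data).
wind = {0: 4000, 6: 2500, 12: 1500, 18: 3500}
load = {0: 1200, 6: 1800, 12: 2200, 18: 1600}
jobs = [("antivirus_scan", 600), ("bi_reports", 900), ("deduplication", 1200)]
print(schedule_batch_jobs(jobs, wind, load))
```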

7.5 Conclusion

This chapter discussed recommendations for improving data center efficiency. Specifically, it presented a section on educating for data center efficiency and a section on the grid independent data center framework. The grid independent data center framework was formulated from the findings of this dissertation's research and provides a cohesive logical design for a cluster of data centers powered by intermittent renewable energy sources. The following chapter summarizes the dissertation, providing an overview of its research, deliverables, and contributions.

77

CHAPTER 8 CONCLUSION

8.1 Introduction

This dissertation was designed to evaluate the feasibility of creating a net-zero energy, grid independent, high availability data center. The aim of this research was to reduce data center consumption of non-renewable power, to be achieved through the creation of a high-level net-zero data center design framework. Current literature was reviewed in order to evaluate the factors impacting data center efficiency. Research experimentation was conducted to evaluate the potential energy savings of employing non-GUI based operating systems. Innovative data centers were also reviewed and evaluated to identify methods and tools that improve data center energy efficiency. This chapter summarizes the dissertation, presenting an overview of its research, deliverables, and contributions, along with recommendations for future work and concluding remarks.

8.2 Research Overview

Valuable information was revealed during the course of the dissertation research. The case study analysis showed that the most energy efficient data centers were designed by interdisciplinary teams to fit the primary purpose of the data center. Some form of free cooling was used in all of the designs, and the location of each data center was chosen to support that free cooling choice. Not all of the cases provided detailed information on the servers; therefore, some experimentation was necessary to reveal energy consumption at the server level. Experimentation showed that non-GUI operating systems provide a significant energy advantage over graphical user interface based server operating systems. The literature review provided vital information about the efficiency of data center components, as well as direction on which components might be eliminated. For example, power distribution units are not required outside of the United States, because other countries run 220 volts as standard, eliminating the need to convert down to the 120 volts used in the United States. These conversions are wasteful and require the purchase of additional equipment; therefore, it would be logical to remove the requirement for the PDU in the data center of the future.

8.3 Experimentation, Evaluation and Limitation

Experimentation was performed to explore possible energy savings from the use of a non-graphical user interface operating system. This seemingly simple task was fraught with difficulties. First, the experimenter had to learn the basics of energy and energy measurement. Once this was accomplished, technical difficulties arose. The first set of hardware provided for experimentation was not designed for use with modern operating systems, and the first meter available would not interface with the experimenter's computers for data transfer. Fortunately, Purdue's Engineering Computing Network data center had identical servers available for energy benchmarking, already loaded with operating systems that had the potential to show a difference in the variable of interest. Unfortunately, the data collected, while valuable, did not provide insight into the variable of interest; it was valuable instead in understanding the type of server environment that would be needed to examine the possible difference between a GUI and non-GUI based server operating system. Finally, a micro server was designed and built for observation of the variable of interest. The Fluke 43b used in the previous experiment was on loan from the Purdue Electrical Engineering Technology department and was not available for the final experiment; the researcher instead found an adequate, easier to use, though less sophisticated device to measure operating system energy consumption. Ultimately the data collected provided information that could not be located in secondary research. Combined with the typeperf information, these data clearly showed the differences in energy consumption between a graphical user interface based operating system and a non-GUI operating system.
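For reference, a sketch of the kind of post-processing used to compare configurations is shown below: it reads a power meter export and a typeperf CSV, then reports average draw and integrated energy for a run. The file names, column names, and timestamp format are placeholders, not the actual logs from this experiment (the raw data are listed in Appendix B).

```python
import csv
from datetime import datetime

def load_series(path, time_col, value_col, time_fmt="%Y-%m-%d %H:%M:%S"):
    """Read (timestamp, float value) pairs from a CSV log and sort by time."""
    series = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            series.append((datetime.strptime(row[time_col], time_fmt),
                           float(row[value_col])))
    return sorted(series)

def energy_wh(power_series):
    """Trapezoidal integration of a (timestamp, watts) series into watt-hours."""
    total = 0.0
    for (t0, w0), (t1, w1) in zip(power_series, power_series[1:]):
        total += (w0 + w1) / 2 * (t1 - t0).total_seconds() / 3600
    return total

if __name__ == "__main__":
    # Placeholder file and column names for the meter export and typeperf CSV.
    power = load_series("meter_log.csv", "timestamp", "watts")
    cpu = load_series("typeperf_log.csv", "timestamp", "cpu_percent")
    avg_cpu = sum(v for _, v in cpu) / len(cpu)
    avg_w = sum(v for _, v in power) / len(power)
    print(f"Average CPU utilization: {avg_cpu:.1f}%")
    print(f"Average draw: {avg_w:.1f} W, energy for the run: {energy_wh(power):.1f} Wh")
```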

8.4 Deliverables of Dissertation

• Evidence supporting or refuting a relationship between operating system graphical user interface and power use
• Grid independent data center framework
• Case Studies
• Suggestions for future improvements

8.5 Contributions to the Body of Knowledge

This dissertation has contributed to the data center energy efficiency body of knowledge. Specifically, the experiments conducted and the observational data obtained demonstrate a clear energy efficiency advantage for the use of non-graphical user interface operating systems in data centers. This dissertation has also shown that, due to the cumulative effect, it is appropriate to focus first on watt reductions at the server level. Designing and creating a server optimized for energy efficiency is therefore the key element of an energy efficient data center: energy that is not wasted in computing operations produces no waste heat, which reduces the cooling load, and energy that is not pulled through the power system incurs no conversion losses, which would otherwise also end up as waste heat.

The literature review and case studies uncovered a number of areas for possible improvement. Some improvements are easy to implement, such as raising the server room temperature to 80-95 degrees Fahrenheit, provided that the data center's hardware warranties allow it. Others are more administration intensive, such as managing server resources to ensure optimum utilization. Some are most easily implemented together with a data center redesign, such as removing unnecessary components, especially those that perform energy conversions; this includes eliminating the central UPS and PDU conversion stages. Improvements in cooling efficiency are also important, considering that often half of the data center energy budget is used to cool the IT equipment. Within the data center, it is important to separate hot air for efficient removal, which can be done fairly inexpensively. Implementing free cooling is a more expensive project, but the improvement in data center PUE indicated in the case studies shows that the expense is warranted. Free cooling is based on the concept of bringing in outside air and conditioning it rather than recooling return air: cooling the hot air from the servers is more expensive because exhaust temperatures are around 115 degrees Fahrenheit, while outside air is generally lower than this in most of the world. It therefore makes more sense to exhaust hot air outside and bring cooler air inside to condition. Combined with data center air maintained at 80-95 degrees Fahrenheit, there would be little need for mechanical cooling. Finally, an effective means of increasing energy efficiency is ensuring that this knowledge is in the hands of decision makers. It will take time to implement these changes, but academia can help by making sure graduates have a clear understanding of basic data center energy efficiency principles.
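To illustrate the cumulative effect noted at the start of this section, the sketch below applies the common rule of thumb that every watt trimmed at the server avoids roughly PUE watts of facility power, since that watt no longer has to be delivered through the power chain or have its heat removed. The PUE values and the per-server savings are illustrative assumptions.

```python
def facility_savings_w(server_watts_saved, pue):
    """Facility-level power avoided per watt saved at the server.

    A watt not drawn by IT equipment also avoids its share of power
    distribution and cooling overhead, which PUE captures in aggregate.
    """
    return server_watts_saved * pue

# Illustrative PUEs: a typical enterprise room versus the best cases reviewed.
for pue in (2.0, 1.5, 1.1):
    saved = facility_savings_w(10, pue)   # 10 W trimmed from each server
    print(f"PUE {pue}: 10 W saved per server avoids {saved:.0f} W of facility load")
```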

8.6 Future Work and Research

There is much work to be performed in the field of data center energy efficiency. Educating the future information technology workforce is a strongly recommended way to improve data center efficiency. In addition, more research into software configuration is recommended, as this dissertation found indications that software configuration could play a role in trimming watts at the server level. Those managing data centers would be wise to perform energy benchmarking on hardware and software configurations before making major investments. It is also necessary to test the grid independent design framework in a real world setting; implementing a containerized data center on a wind farm would prove the design's worth on a small scale. Considerable research and design work is needed before implementation can begin; however, with an interdisciplinary team the details could be molded into an implementable design in a matter of months. Once the concepts are proven, little would stand in the way of large-scale adoption. Working through an open source hardware group such as Open Compute could prove invaluable in bringing such a design to fruition.

8.7 Conclusion

This chapter has reviewed the research findings, experiments, deliverables, contributions, and recommendations for future work and research. Through experimentation and research, it was found that creating a data center designed to operate primarily from intermittent renewable energy resources is feasible. By implementing the findings of this dissertation and extending the research, many efficiency gains are possible that would allow current technology to be used to create a grid independent data center prototype. This dissertation concludes that the best way to increase energy efficiency is through education in the principles of data center energy efficiency. Sufficient information is available to reduce data center energy consumption by nearly half; combining these measures with the use of renewable energy for data centers could reduce the world's non-renewable energy consumption by roughly one percent.


83

LIST OF REFERENCES

American Wind Energy Association. (2011, 5). Wind Energy Facts: Indiana. Retrieved 2 22, 2012, from American Wind Energy Association State Fact Sheets: http://www.awea.org/learnabout/publications/factsheets/factsheets_state.cfm Anthony, S. (2013, 7 19). Microsoft now has one million servers – less than Google, but more than Amazon, says Ballmer. Retrieved 4 24, 2014, from Extreme Tech: http://www.extremetech.com/extreme/161772-microsoft-now-has-one-millionservers-less-than-google-but-more-than-amazon-says-ballmer Ballmer, S. (2013, 7 8). Steve Ballmer: Worldwide Partner Conference 2013 Keynote. Retrieved 4 25, 2014, from Microsoft: http://www.microsoft.com/enus/news/speeches/2013/07-08wpcballmer.aspx Barielle, S. (2011, 11). Calculating TCO for Energy. Retrieved from IBM Systems Magazine: http://www.ibmsystemsmag.com/mainframe/BusinessStrategy/ROI/energy_estimating/ Barroso, L., & Hölzle, U. (2009). The Datacenter as a Computer An Introduction to the Design of Warehouse-Scale Machines. Synthesis Lectures on Computer Architecture. BCS. (n.d.). Programme Overview. Retrieved 11 9, 2011, from CEEDA: http://www.ceeda-award.org/overview.php

84 BCS. (n.d.). The CEEDA Assessment - What will be assessed? Gold. Retrieved 11 9, 2011, from CEEDA: http://www.ceeda-award.org/gold-assessed.php Beltra ̃o Costa, L., Al-Kiswany, S., Vigolvino Lopes, R., & Ripeanu, M. (n.d.). Assessing Data Deduplication Trade-offs from an Energy and Performance Perspective. Workshop on Energy Consumption and Reliability of Storage Systems (ERSS) (pp. 1-6). Orlando: IEEE. Retrieved from Matei Ripeanu: http://www.ece.ubc.ca/~matei/ExM.Internal/erss2011_submission_7.pdf Bhandarkar, D. (2012, 3 5). Microsoft’s Dublin Data Center Grows with Enhanced Efficiency and Sustainability. Retrieved 4 25, 2014, from http://www.globalfoundationservices.com/posts/2012/march/5/microsofts-dublindata-center-grows-with-enhanced-efficiency-and-sustainability.aspx Biancomano, V. (2010, 7 1). EnergySart or no, power supplies deliver. Retrieved 10 15, 2011, from Energy Efficiency & Technology: http://eetweb.com/powersupplies/energystar-power-supplies-going-dark-201007/ Central Intelligence Agency. (n.d.). Country Comparison :: Electricity - consumption. Retrieved 3 12, 12, from The World Fact Book: https://www.cia.gov/library/publications/the-worldfactbook/rankorder/2042rank.html Chadha, R. (2011, 5 13). Back on the Radar: Other World Computing and going green. Retrieved 4 12, 2014, from Crain's Chicago Business : http://www.chicagobusiness.com/article/20110513/BLOGS06/305139989

85 Clark, J. (12, 2 24). Microsoft expands vast Dublin datacentre. Retrieved 2 26, 12, from ZDNet: http://www.zdnet.co.uk/blogs/mapping-babel-10017967/microsoftexpands-vast-dublin-datacentre-10025482/ Dallas South News. (n.d.). The 5 Biggest Data Centers in the World. Retrieved 4 24, 2014, from Dallas South News: http://www.dallassouthnews.org/2013/03/22/the5-biggest-data-centers-in-the-world/ Data Center Knowledge. (n.d.). Dublin Data Center: Generators. Retrieved 4 25, 2014, from Data Center Knowledge: www.datacenterknowledge.com/inside-microsoftsdublin-mega-data-center/dublin-data-center-generators/ Data Center Knowledge. (n.d.). Dublin Data Center: Rooftop Air Handlers. Retrieved 4 24, 2014, from Data Center Knowledge: http://www.datacenterknowledge.com/inside-microsofts-dublin-mega-datacenter/dublin-data-center-rooftop-air-handlers/ Data Center Knowledge. (n.d.). Microsoft’s Dublin Data Center: Server Pods. Retrieved 4 25, 2014, from Data Center Knowledge: http://www.datacenterknowledge.com/inside-microsofts-dublin-mega-datacenter/microsofts-dublin-data-center-server-pods/ Data Center Knowledge. (n.d.). Wind-Powered Data Centers. Retrieved 4 12, 2014, from Data Center Knowledge: http://www.datacenterknowledge.com/wind-powereddata-centers/ De Gelas, J. (11, 3 11). Facebook's "Open Compute" Server tested. Retrieved 3 11, 11, from AnandTech: http://www.anandtech.com/print/4958

86 De Gelas, J. (2010, 1 18). Dynamic Power Management: A Quantitative Approach. Retrieved 16 11, 11, from AnandTech: http://www.anandtech.com/print/2919 Emerson. (n.d.). Energy Logic: Calculating and Prioritizing your data center IT Efficency Actions. Retrieved 3 13, 2012, from Efficient Data Centers.com: http://www.efficientdatacenters.com/edc/docs/EnergyLogicMetricPaper.pdf Energy Star. (2011, 11 10). Purchasing More Energy-Efficient Servers, UPSs, and PDUs. Retrieved from Energy Star: http://www.energystar.gov/index.cfm?c=power_mgt.datacenter_efficiency_purch asing Facebook. (2011, 11 18). Prineville Data Center Receives LEED Gold Certification. Retrieved 3 11, 12, from Facebook: https://www.facebook.com/note.php?note_id=10150367156038133 Facebook Opens Up Its Hardware Secrets. (2011, 4 7). Retrieved 10 11, 2011, from Technology Review: http://www.technologyreview.com/printer_friendly_article.aspx?id=37317 Fan, X., Weber, W., & Barroso, L. (2007). Power Provisioning for a Warehouse-sized Computer. ISCA '07 Proceedings of the 34th annual international symposium on Computer architecture (pp. 13-23). San Diego: ACM. FAWN. (n.d.). Introducing the FAWN. Retrieved 11 15, 2011, from FAWN: http://www.cs.cmu.edu/%7Efawnproj/ Fehrenbacher, K. (2010, 9 10). Yahoo’s Chicken Coop-Inspired Green Data Center . Retrieved 10 2, 2011, from Gigaom: http://gigaom.com/cleantech/now-onlineyahoos-chicken-coop-inspired-green-data-center/

87 Foresman, C. (2011, 10 25). Exploring Other World Computing’s super-green headquarters. Retrieved 4 12, 2014, from Ars Technica: http://arstechnica.com/apple/2011/10/exploring-other-world-computings-supergreen-headquarters/ Frazier, E. a. (2011, 7 31). Data Centers’ Power Use Less Than Was Expected. Retrieved 3 11, 2012, from New York Times: http://www.nytimes.com/2011/08/01/technology/data-centers-using-less-powerthan-forecast-report-says.html?_r=1 Frazier, E. a. (2013, 4 19). Google announces $600M Lenoir data center expansion. Retrieved 4 19, 2014, from Charlotte Observer: http://www.charlotteobserver.com/2013/04/19/3991902/google-announces-600million-lenoir.html#storylink=cpy Goiri, I., Julia, F., Nou, R., Berral, J., Guitart, J., & Torres, J. (2010). Energy-aware Scheduling in Virtualized Datacenters. 2010 IEEE International Conference on Cluster Computing (pp. 58-67). IEEE Xplore. Google. (n.d.). Cooling. Retrieved 2 12, 12, from Data center efficiency : http://www.google.com/about/datacenters/inside/efficiency/cooling.html Google. (n.d.). Data center best practices. Retrieved 9 11, 2011, from Google Data Centers: http://www.google.com/about/datacenters/best-practices.html Google. (n.d.). Data center best practices. Retrieved 9 11, 2011, from Google Data Centers: http://www.google.com/about/datacenters/best-practices.html Google. (n.d.). Data center efficiency. Retrieved 10 5, 2011, from Google.com: http://www.google.com/about/datacenters/inside/efficiency/servers.html

88 Google. (n.d.). Efficiency-measurements. Retrieved 7 1, 2011, from Google Data center: http://www.google.com/corporate/datacenter/efficiency-measurements.html. Google. (n.d.). Efficiency-measurements. Retrieved 7 1, 2011, from Google Data center: http://www.google.com/corporate/datacenter/efficiency-measurements.html. Google. (n.d.). Efficient computing. Retrieved 9 11, 2011, from Google Datacenters: http://www.google.com/corporate/data center/efficient-computing/index.html Google. (n.d.). Frequently asked questions. Retrieved 4 19, 2014, from Google Data Centers: http://www.google.com/about/datacenters/faq/ Google. (n.d.). Google Green. Retrieved 4 19, 2014, from Google: https://www.google.com/green/efficiency/oncampus/#building Google. (n.d.). Google Lenoir . Retrieved 4 19, 2014, from Google Careers: http://www.google.com/about/careers/locations/lenoir/ Google. (n.d.). Lenoir, North Carolina . Retrieved 4 19, 2014, from Google Data Centers: http://www.google.com/about/datacenters/inside/locations/lenoir/ Google. (n.d.). Temperature management . Retrieved 2 12, 12, from Data center efficiency : http://www.google.com/about/datacenters/inside/efficiency/temperature.html Grundberg, S., & Rolander, N. (11, 9 12). For Data Center, Google Goes for the Cold . Retrieved 2 12, 12, from Walstreet Journal Technology: http://online.wsj.com/article/SB1000142405311190483610457656055100557081 0.html

89 Hamilton, J. (2009, 4 1). Data Center Efficiency Best Practices. Retrieved from You Tube: http://www.youtube.com/watch?v=m03vdyCuWS0&feature=relmfu Hamilton, J. (2009, 4 1). Data Center Efficiency Best Practices. Retrieved 12 13, 2011, from You Tube: http://www.youtube.com/watch?v=m03vdyCuWS0&feature=relmfu Hewlett-Packard Corporation, Intel Corporation, Microsoft Corporation, Phoenix Technologies Ltd.,Toshiba Corporation . (2011, 12 11). Advanced Configuration and Power Interface Specification Revision 5.0. Retrieved 4 21, 2012, from ACPI: http://www.acpi.info/DOWNLOADS/ACPIspec50pdf.zip Higginbotham, S. (2011, 4 7). Facebook Open Sources Its Servers and Data Centers. Retrieved 2011, from Gigaom: Google engineered distributed UPS Hoelzle, U., & Weihl, B. (2006, 9). High-efficiency power supplies for home computers and servers. Retrieved 9 11, 2011, from Google: http://static.googleusercontent.com/external_content/untrusted_dlcp/services.goog le.com/en/us/blog_resources/PSU_white_paper.pdf Hoelzle, U., & Weihl, B. (2006, 9). High-efficiency power supplies for home computers and servers. Retrieved 9 11, 2011, from Google: http://static.googleusercontent.com/external_content/untrusted_dlcp/services.goog le.com/en/us/blog_resources/PSU_white_paper.pdf ISO. (n.d.). SO 50001 - Energy management. Retrieved 4 19, 2014, from ISO: http://www.iso.org/iso/home/standards/management-standards/iso50001.htm Jennings, R. (2011, 9 23). Cheap Data Center... Green Data Center. Retrieved 10 13, 2011, from Cheap Data Center... Green Data Center

90 Jennings, R. (2011, 9 23). Cheap Data Center... Green Data Center. Retrieved 10 13, 2011, from HP: h30565.www3.hp.com/t5/Feature-Articles/Cheap-Data-CenterGreen-Data-Center/ba-p/429 Kava, J. (2013, 7 24). Pushing our energy performance even higher with ISO 50001 certification. Retrieved 4 19, 2014, from Google Green Blog: http://googlegreenblog.blogspot.com/2013/07/pushing-our-energy-performanceeven.html Larabel, M. (12, 1 17). Intel Has 50 Patches For ACPI/Power In Linux 3.3. Retrieved 4 21, 12, from Phoronix: http://www.phoronix.com/scan.php?page=news_item&px=MTA0NDE Levy, S. (2012, 10 17). Google Throws Open Doors to Its Top-Secret Data Center . Retrieved 4 19, 2014, from Wired: http://www.wired.com/2012/10/ff-insidegoogle-data-center/all/ Magee, M. (1999, 12 16). Intel Coppermines won't go in rockets… phew But x86 technology does soar to the skies. Retrieved 2 28, 2012, from The register: http://www.theregister.co.uk/1999/12/16/intel_coppermines_wont_go/ Martin, J. (2011, 2 21). Building a Data Center | Part 1: Follow the Flow. Retrieved 11 9, 2011, from SoftLayer: http://blog.softlayer.com/2011/building-a-data-center-part1-follow-the-flow/ Metz, C. (12, 1 26). Google Reincarnates Dead Paper Mill as Data Center of Future. Retrieved 2 12, 12, from Wired Enterprize: http://www.wired.com/wiredenterprise/2012/01/google-finland/

91 Microsoft. (n.d.). Environmental Sustainability. Retrieved 4 24, 2014, from Microsoft: http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rj a&uact=8&ved=0CDEQFjAB&url=http%3A%2F%2Fdownload.microsoft.com% 2Fdownload%2F4%2F6%2F2%2F46234FAE-A9FA-4840-802E8FF7FA8E1C65%2FEnvironmental_sustainability.docx&ei=1pJZU63ACJSvyAS 69YDIAw&usg=AFQjCNHMpwt7KMbQEISDS349VVdM1gnnxg&sig2=PlDB7 ryW80s3BJfZwJC8Lg&bvm=bv.65397613,d.aWw Microsoft. (n.d.). Greening the data centre, the Dublin Data Centre case study. Retrieved 12 13, 2011, from EMEA Press Centre-A regional gateway to Microsoft: http://download.microsoft.com/download/6/8/F/68F6C057-7ED4-440C-81A9E289AACFB3DA/DublinDataCentreCasestudy_FINAL.pdf Microsoft. (n.d.). Greening the Dublin data center. Retrieved 2 22, 2012, from Microsoft: download.microsoft.com/.../DublinDataCentreCasestudy_FINAL.pdf Microsoft. (n.d.). Greening the Dublin data center . Retrieved 2 12, 12, from Microsoft: http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0 CD4QFjAB&url=http%3A%2F%2Fdownload.microsoft.com%2Fdownload%2F6 %2F8%2FF%2F68F6C057-7ED4-440C-81A9E289AACFB3DA%2FDublinDataCentreCasestudy_FINAL.pdf&ei=m9g3T6mEK2a1AWokdCLAg&usg=AFQjCNHXxQdj8Z1YQgxmaohod1W32LVCng&s ig2=mykQX52SKvWg1y98aQPhNA

92 Miller, C. (11, 3 10). Energy Efficiency Guide: Data Center Temperature. Retrieved 2 12, 12, from Data Center Knowledge: http://www.datacenterknowledge.com/archives/2011/03/10/energy-efficiencyguide-data-center-temperature/ Miller, R. (2009, 4 1). Google’s Custom Web Server, Revealed. Retrieved 9 11, 2011, from Data Center Knowledge: http://www.datacenterknowledge.com/archives/2009/04/01/googles-custom-webserver-revealed/ Miller, R. (2009, 4 1). Google’s Custom Web Server, Revealed. Retrieved 9 11, 2011, from Data Center Knowledge: http://www.datacenterknowledge.com/archives/2009/04/01/googles-custom-webserver-revealed/ Miller, R. (2009, 4 15). Telehouse to Heat Homes at Docklands. Retrieved 10 2, 2011, from Data Center Knowledge: http://www.datacenterknowledge.com/archives/2009/04/15/telehouse-to-heathomes-at-docklands/ Miller, R. (2010, 7 14). Data Centers With No UPS or Generator? Retrieved 10 19, 2010, from Data Center Knowledge: (http://www.datacenterknowledge.com/archives/2010/07/14/data-centers-with-noups-or-generator Miller, R. (2010, 4 29). Iron Mountain’s Energy Efficient Bunker. Retrieved 10 2, 2011, from Data Center Knowlwdge: http://www.datacenterknowledge.com/ironmountains-energy-efficient-bunker/

93 Miller, R. (2011, 5 26). Chiller-less Data Center is Google’s Top Performer. Retrieved 2 12, 12, from Data Center Knowledge: http://www.datacenterknowledge.com/archives/2011/05/26/chiller-less-datacenter-is-googles-top-performer/ National Renewable Energy Laboratory. (2011). Best practices guide for energy-efficient data center design. Office of Energy Efficiency and Renewable Energy, U.S. Department of Energy. Niccolai, J. (2010, 2 05). Energy Star certification for data centers coming in June. Retrieved 9 11, 2011, from InfoWorld: http://www.infoworld.com/d/greenit/energy-star-certification-data-centers-coming-in-june-372 Open Compute Project. (n.d.). Energy Efficiency. Retrieved 11 12, 2011, from Open Compute Project: http://opencompute.org/about/energy-efficiency/ Other World Computing. (2012, 1 5). About OWC.net. Retrieved from OWC.net: http://www.owc.net/about.php Other World Computing. (n.d.). About OWC.net. Retrieved 4 13, 2014, from OWC.net: http://owc.net/about.php Other World Computing. (n.d.). OWC Timeline. Retrieved 4 12, 2014, from Other World Computing: http://eshop.macsales.com/owcpages/timeline/ OWC. (n.d.). Windpower. Retrieved 2 12, 12, from Think Green: eshop.macsales.com/green/wind.html Park, J. (2011, 4 7). Specs & Designs. Retrieved 10 25, 2011, from http://opencompute.org/: http://opencompute.org/projects/mechanical/

94 Park, J. (2011, 4 7). Specs & Designs. Retrieved 10 25, 2011, from Open Compute: http://opencompute.org/projects/mechanical/ Pelley, S., Meisner, D., Zandevakili, P., Wenisch, T., & Underwood, J. (2010, 3). er Routing: Dynamic Power Provisioning in the Data Center. ASPLOS '10: Proceedings of the fifteenth edition of ASPLOS on Architectural support for programming languages and operating systems (pp. 231-242). New York: ACM. Poniatowski, M. (2010). Foundations of Greeen IT: consoliation, virtualization, efficency, and ROI in the data center. Boston: Prentice Hall. PR Newswire. (2013, 8 27). Other World Computing Named To Inc. 5000 "FastestGrowing Privately Owned Companies in America" List For Seventh Consecutive Year. Retrieved 4 12, 2014, from PR Newswire: http://www.prnewswire.com/news-releases/other-world-computing-named-to-inc5000-fastest-growing-privately-owned-companies-in-america-list-for-seventhconsecutive-year-221359611.html Pratt, A., Kumar, P., & Aldridge, T. (n.d.). Evaluation of 400V DC Distribution in Telco and Data Centers to Improve Energy Efficiency. Telecommunications Energy Conference, 2007. INTELEC 2007. 29th International (p. 33). IEEE Explore. Rasmussen, N. (n.d.). Allocating Data Center Energy Costs and Carbon to IT Users. Retrieved 9 28, 11, from APC by Schnider Electric: www.apcmedia.com/salestools/NRAN-7WVU54_R0_EN.pdf Rasmussen, N. (n.d.). Allocating Data Center Energy Costs and Carbon to IT Users. Retrieved 9 28, 11, from APC by Schnider Electric: www.apcmedia.com/salestools/NRAN-7WVU54_R0_EN.pdf

95 Rawson, A., Pfleuger, J., & Cader, T. (2008). WP#06-The Green Grid Data Center Power Efficiency Metrics: PUE and DCiE. Retrieved 21 4, 2012, from The Green Grid: http://www.thegreengrid.org/~/media/WhitePapers/White_Paper_6__PUE_and_DCiE_Eff_Metrics_30_December_2008.pdf?lang=en Rogoway, M. (11, 4 8). Facebook unveils Prineville data center design, pledges to collaborate on efficiency . Retrieved 10 11, 11, from OregonLive.com: http://blog.oregonlive.com/business_impact/print.html?entry=/2011/04/facebook_ post.html Sarti, P. (2011, 4 7). Battery Cabinet Hardware v1.0. Retrieved 10 11, 2011, from Battery Cabinet: http://opencompute.org/projects/battery-cabinet/ Sarti, P. (2011, 4 7). Battery Cabinet Hardware v1.0. Retrieved 10 11, 2011, from Open Compute: http://opencompute.org/projects/battery-cabinet/ Schonfeld, E. (2009, 4 30). Facebook Gets Three Times More Efficient At Finding Photos In Its Humungous Haystack . Retrieved 4 12, 12, from TechCrunch: http://techcrunch.com/2009/04/30/facebook-gets-three-times-more-efficient-atstoring-photos-with-haystack/ Shankland, S. (2009, 4 1). Google uncloaks once-secret server. Retrieved 10 4, 2011, from CNET News Business Tech: http://news.cnet.com/8301-1001_3-1020958092.html Shankland, S. (2009, 4 1). Google uncloaks once-secret server. Retrieved 1 5, 2012, from CNET: http://news.cnet.com/8301-1001_3-10209580-92.html

96 Shankland, S. (2009, 4 1). Google uncloaks once-secret server. Retrieved 10 4, 2011, from CNET News Business Tech: http://news.cnet.com/8301-1001_3-1020958092.html Sharma, N., Barker, S., Irwin, D., & Shenoy, P. (n.d.). Blink: Managing Server Clusters on Intermittent Power. ASPLOS '11 Proceedings of the sixteenth international conference on Architectural support for programming languages and operating systems (pp. 185-198). New York: ACM. Spangler, R. (2010, 6 2). Smart Approaches to Free-Cooling in Data Centers. Retrieved 10 2, 2011, from Data Center Knowledge: http://www.datacenterknowledge.com/archives/2010/06/02/smart-approaches-tofree-cooling-in-data-centers/ Stross, R. (n.d.). "Planet Google": One Company's Audacious Plan to Organize Everything. Free Press, p. 82. The Casmus Group, Inc. (2010). Energy Savings from Energy Star-Qualified Servers. Energy Star, U.S. Environmental Protection Agency. The Green Grid. (n.d.). The Green Grid Opportunity Decereasing Datacenter and Other IT Energy Usage Patterns. Retrieved from http://www.thegreengrid.org/~/media/WhitePapers/Green_Grid_Position_WP.pdf ?lang=en Torres, J. (2011, 1). The Unstoppable Transformation (revolution) of IT Sector . Retrieved 2 22, 2012, from Jordi Torres: http://www.jorditorres.org/2011/01/20/the-unstoppable-transformationrevolution-of-the-it-sector/

97 Tsirogiannis, D., Harizopoulos, S., & Shah, M. (2010). Analyzing the energy efficiency of a database server. Proceedings of the 2010 international conference on Management of data (SIGMOD '10) (pp. 231-242). New York: ACM. U.S. Green Building Council. (n.d.). What LEED Measures. Retrieved 9 11, 2011, from U.S. Green Building Council: http://www.usgbc.org/DisplayPage.aspx?CMSPageID=1989 U.S. Green Building Council. (n.d.). What LEED Measures. Retrieved 9 11, 2011, from U.S. Green Building Council: http://www.usgbc.org/DisplayPage.aspx?CMSPageID=1989 Vaid, K. (2014, 1 27). Microsoft Contributes Cloud Server Specification to Open Compute Project . Retrieved 4 24, 2014 Vajgel, P. (2009, 4 30). Needle in a haystack: efficient storage of billions of photos. Retrieved 4 12, 12, from Facebook Engineering: https://www.facebook.com/note.php?note_id=76191543919&ref=mf

APPENDICES

98 Appendix A: Dell hardware configuration Red Hat Linux 5.7 Service Tag Computer Model Shipping Date Country

Parts Number UR033 HP810 FG027 2260R 1212T 0R215 NP544 HM430 GG460 7260R 5R203 YC585 JJ379 WW126 4D175 WX072 GP879 F9541 MJ048 TX235

45BVYD1 PowerEdge 1950 10/22/2007 United States Appendix B: Quantity Description 1 Printed Wiring Assy, Planar Server, Server Chassis, Dell Computer Corporation, PE1950, G1 1 Processor, 80556K, Xeon Clovertown, X5365, LGA771, G0 1 Card, Backplane, Key, TOE, 2PORT Enterprise Systems Group 0 PREPARATION MATERIAL..., DEVIATION..., SERVICE CHARGE..., INCRS #1 0 INFORMATION..., TERMS AND CONDITIONS 1 Cord, Power, 15A, 125V, 10, 5-15/C13 1 Guide, Product, Information Poweredge/powervault, DAO/BCC 1 TECHNICAL SHEET..., NETWORK..., ETHERNET..., BROADCOM CORPORATION..., DOPP 1 Kit, Strain Relief, Cable, Power 0 PREPARATION MATERIAL..., DEVIATION..., SERVICE CHARGE..., INCRS #2 0 INFORMATION..., FLYER, DOCUMENTATION... 1 GUIDE..., GETTING STARTED..., P1950, DAO/BCC 1 Kit, Cable, Dell Remote Assistant Card, P29/1950 1 Printed Wiring Assy, ControllerDRAC5, PE, Mid-Life Kicker 1 Cord, Power, 125V, 10 Feet, 2TO1, SJT... 1 Assembly, Card(Circuit), PERC5I Serial Attached SCSI, 1950, 2950 2 Hard Drive, 146G, Serial Attached SCSI, 3, 10K, 3.5, SGT3 Timberland 10 2 ASSEMBLY..., CARRIER..., HARD DRIVE..., SERIAL ATTACHED SCSI..., UNIVERSAL..., 1IN 1 Kit, Documentation On Floppy Disk, Transmission Control Protocol Off-Load Engine 1 Kit, Documentation On Compact Disk, Document Object Model V5.2, World Wide

99 WY363 U7824

1 1

H7511 UN441 JH879

1 1 1

J7846

1

TX846

1

UM142

8

RY466

1

FC023 HY104 0R215 JC867 HP810

1 1 1 1 1

ASSEMBLY..., CHASSIS..., 3.5HD, 1U, PE1950, II Printed Wiring Assy, Backplane Server, Server Chassis DELL1950, 3.5SASX2 Assembly, Carrier, Blank, Hard Drive, Universal, 1IN, 2 KIT..., Rack Rail, Rapid/Versa Rail1U, Slide, P1950, V4 Assembly, Printed Wiring Assy Riser, Center, Multi Sheet Inserter, PEX950 Printed Wiring Assy, Riser Server, Dell Computer Corporation, PE1950, PERIPHERAL COMPONENT INTERCONNECT EXPRESS ..., LEFT... Assembly, Cable, Controller SAS, POWEREDGE EXPANDABLE RAID CONTROLLER NUMBER... Dual In-Line Memory Module, 2G 667M, 256X72, 8, 240, 2RX4 Assembly, Cdrw/dvd, 12.7MM Hitachi Lg Data Storage, Black Assembly, Bezel, 1U, PE1950 Power Supply, 670W, Redundant Artesyn Cord, Power, 15A, 125V, 10, 5-15/C13 Assembly, Heatsink, Central Processing Unit, PE1950 Processor, 80556K, Xeon Clovertown, X5365, LGA771, G0

Windows 2003 R2, x64, SP2 Service Tag Computer Model Shipping Date Country Parts Number UR033 HP810 FG027 2260R 1212T 0R215 NP544 HM430

D5BVYD1 PowerEdge 1950 10/22/2007 United States

Quantity Description 1 Printed Wiring Assy, Planar Server, Server Chassis, Dell Computer Corporation, PE1950, G1 1 Processor, 80556K, Xeon Clovertown, X5365, LGA771, G0 1 Card, Backplane, Key, TOE, 2PORT Enterprise Systems Group 0 PREPARATION MATERIAL..., DEVIATION..., SERVICE CHARGE..., INCRS #1 0 INFORMATION..., TERMS AND CONDITIONS 1 Cord, Power, 15A, 125V, 10, 5-15/C13 1 Guide, Product, Information Poweredge/powervault, DAO/BCC 1 TECHNICAL SHEET..., NETWORK..., ETHERNET..., BROADCOM CORPORATION..., DOPP

100 GG460 7260R

1 0

5R203 YC585 JJ379 WW126

0 1 1 1

4D175 WX072

1 1

GP879

2

F9541

2

MJ048

1

TX235

1

WY363 U7824

1 1

H7511 UN441 JH879

1 1 1

J7846

1

TX846

1

UM142

8

RY466

1

FC023 HY104 0R215 JC867 HP810

1 1 1 1 1

No operating system: BIOS

Kit, Strain Relief, Cable, Power PREPARATION MATERIAL..., DEVIATION..., SERVICE CHARGE..., INCRS #2 INFORMATION..., FLYER, DOCUMENTATION... GUIDE..., GETTING STARTED..., P1950, DAO/BCC Kit, Cable, Dell Remote Assistant Card, P29/1950 Printed Wiring Assy, ControllerDRAC5, PE, Mid-Life Kicker Cord, Power, 125V, 10 Feet, 2TO1, SJT... Assembly, Card(Circuit), PERC5I Serial Attached SCSI, 1950, 2950 Hard Drive, 146G, Serial Attached SCSI, 3, 10K, 3.5, SGT3 Timberland 10 ASSEMBLY..., CARRIER..., HARD DRIVE..., SERIAL ATTACHED SCSI..., UNIVERSAL..., 1IN Kit, Documentation On Floppy Disk, Transmission Control Protocol Off-Load Engine Kit, Documentation On Compact Disk, Document Object Model V5.2, World Wide ASSEMBLY..., CHASSIS..., 3.5HD, 1U, PE1950, II Printed Wiring Assy, Backplane Server, Server Chassis DELL1950, 3.5SASX2 Assembly, Carrier, Blank, Hard Drive, Universal, 1IN, 2 KIT..., Rack Rail, Rapid/Versa Rail1U, Slide, P1950, V4 Assembly, Printed Wiring Assy Riser, Center, Multi Sheet Inserter, PEX950 Printed Wiring Assy, Riser Server, Dell Computer Corporation, PE1950, PERIPHERAL COMPONENT INTERCONNECT EXPRESS ..., LEFT... Assembly, Cable, Controller SAS, POWEREDGE EXPANDABLE RAID CONTROLLER NUMBER... Dual In-Line Memory Module, 2G 667M, 256X72, 8, 240, 2RX4 Assembly, Cdrw/dvd, 12.7MM Hitachi Lg Data Storage, Black Assembly, Bezel, 1U, PE1950 Power Supply, 670W, Redundant Artesyn Cord, Power, 15A, 125V, 10, 5-15/C13 Assembly, Heatsink, Central Processing Unit, PE1950 Processor, 80556K, Xeon Clovertown, X5365, LGA771, G0

101 Service Tag Computer Model Shipping Date Country

Parts Number FG027 HP810 UR033 0R215 1212T 2260R 5R203 7260R GG460 HM430 NP544 YC585 JJ379 WW126 HP810 JC867 0R215 4D175 HY104 WX072 TX846 J7846 JH879 UN441 TX235

GR8VYD1 PowerEdge 1950 10/22/2007 United States

Quantity Description 1 Card, Backplane, Key, TOE, 2PORT Enterprise Systems Group 1 Processor, 80556K, Xeon Clovertown, X5365, LGA771, G0 1 Printed Wiring Assy, Planar Server, Server Chassis, Dell Computer Corporation, PE1950, G1 1 Cord, Power, 15A, 125V, 10, 5-15/C13 0 INFORMATION..., TERMS AND CONDITIONS 0 PREPARATION MATERIAL..., DEVIATION..., SERVICE CHARGE..., INCRS #1 0 INFORMATION..., FLYER, DOCUMENTATION... 0 PREPARATION MATERIAL..., DEVIATION..., SERVICE CHARGE..., INCRS #2 1 Kit, Strain Relief, Cable, Power 1 TECHNICAL SHEET..., NETWORK..., ETHERNET..., BROADCOM CORPORATION..., DOPP 1 Guide, Product, Information Poweredge/powervault, DAO/BCC 1 GUIDE..., GETTING STARTED..., P1950, DAO/BCC 1 Kit, Cable, Dell Remote Assistant Card, P29/1950 1 Printed Wiring Assy, ControllerDRAC5, PE, Mid-Life Kicker 1 Processor, 80556K, Xeon Clovertown, X5365, LGA771, G0 1 Assembly, Heatsink, Central Processing Unit, PE1950 1 Cord, Power, 15A, 125V, 10, 5-15/C13 1 Cord, Power, 125V, 10 Feet, 2TO1, SJT... 1 Power Supply, 670W, Redundant Artesyn 1 Assembly, Card(Circuit), PERC5I Serial Attached SCSI, 1950, 2950 1 Assembly, Cable, Controller SAS, POWEREDGE EXPANDABLE RAID CONTROLLER NUMBER... 1 Printed Wiring Assy, Riser Server, Dell Computer Corporation, PE1950, PERIPHERAL COMPONENT INTERCONNECT EXPRESS ..., LEFT... 1 Assembly, Printed Wiring Assy Riser, Center, Multi Sheet Inserter, PEX950 1 KIT..., Rack Rail, Rapid/Versa Rail1U, Slide, P1950, V4 1 Kit, Documentation On Compact Disk, Document Object

102

MJ048

1

F9541

2

GP879

2

H7511 U7824

1 1

WY363 FC023 RY466

1 1 1

UM142

8

Model V5.2, World Wide Kit, Documentation On Floppy Disk, Transmission Control Protocol Off-Load Engine ASSEMBLY..., CARRIER..., HARD DRIVE..., SERIAL ATTACHED SCSI..., UNIVERSAL..., 1IN Hard Drive, 146G, Serial Attached SCSI, 3, 10K, 3.5, SGT3 Timberland 10 Assembly, Carrier, Blank, Hard Drive, Universal, 1IN, 2 Printed Wiring Assy, Backplane Server, Server Chassis DELL1950, 3.5SASX2 ASSEMBLY..., CHASSIS..., 3.5HD, 1U, PE1950, II Assembly, Bezel, 1U, PE1950 Assembly, Cdrw/dvd, 12.7MM Hitachi Lg Data Storage, Black Dual In-Line Memory Module, 2G 667M, 256X72, 8, 240, 2RX4

Appendix B: Experiment Readings

Experimental data is available in electronic format at http://works.bepress.com/heatherbrotherton/9/.


104

VITA

Heather Brotherton Experience Web Systems Administrator, Purdue University May 2012 – Present West Lafayette, IN • Define projects and create articulate design rational to meet customer's business requirements • Project planning and management • Develop and update documentation of administration policies and procedures • Grant and implement development and deploy access to web developers • Migrate and create websites in our Apache, IIS, and ColdFusion environments • Build proof of concept prototype implementations • Perform ETL for internal data in MS SQL databases • Evaluate, architect, implement, and maintain weblog analytics Research Assistant, College of Computer and Information Technology, Purdue University September 2011 – Present West Lafayette • Initiated a relationship with Facebook that lead to the donation of over 100 servers and funding for the development of course to support the Open Compute Challenge team. • Led and managed the project to install over 100 servers donated by Facebook in an on campus data center. • Proposed and led initiative to create the Open Compute Challenge and corresponding course. • Proposed and designed energy efficient data center course being presented at Purdue University. • Designed energy efficient data center coursework to be presented at West Point. • Researching energy efficient data center design and management techniques • Researching "cloud" design to exclusively utilize intermittent renewable resources. • Mentor master's level students • Assisted on Department of Energy Smart Grid grant project and coursework

105 High Performance Compute Systems-Graduate Assistant, Purdue University October 2012 – January 2013 West Lafayette Implement proof of concept Intel data center management product. Web and Applications Administrator, Purdue University May 2009 – August 2011 • Design, development, and implementation of the Applications Administration website and forms using XHTML, JavaScript, CSS and PHP • Develop and update documentation of administration policies and procedures • Grant and implement development and deploy access to web developers • Migrate and create websites in our ColdFusion environment • Customize RightAnswers portals using, XML, JavaScript, HTML, CSS • Create Tivoli Storage Manager nodes • Meet with customers to define project requirements and create an articulate design rational that best meets requirements • Assisted with SharePoint training development • Build Tomcat web server • Research BMC Remedy web form development and implementation Social Insurance Specialist/Site LAN Coordinator, Social Security Administration April 1999 – January 2009 • Serve as Site LAN Coordinator for my office. My duties include: verifying systems updates, reporting systems problems, changing daily backup tapes, resolving systems issues by making necessary changes on site. • Prepare and perform presentations to special interest groups • Employ creativity and problem solving to deal effectively with situations of competing or conflicting priorities • Exercise professionalism and discretion in handling confidential information • Analyze, interpret, and implement policy and balance tasks in a fast paced work environment • Learn new policies, tools and technology on a daily basis to keep up with constantly changing workloads • Assume responsibility for maintaining quality standards in processing claims • Work both independently and as a team member to meet our office goals • Responded to congressional inquiries, succinctly explained public policy to attorneys and clients, prepared fraud cases for prosecution Education PhD, Technology, Purdue University 2011 – 2014 (expected) Research focus: Data center energy efficiency

106

MS, Computer Information Technology, Purdue University Research focus: data center disaster recovery best practices BA, Public Relations-Communication; Purdue University 1995 – 1998

2009 – 2011

Publications Energy overhead of the graphical user interface in server operating systems Series, ACM November 2013 Evidence of graphical user interface server operating system energy overhead is presented. It is posed that data centers would have substantial energy savings by eliminating graphical user interface operating systems. SIGUCCS '13 Proceedings of the 2013 ACM annual conference on Special interest group on university and college computing services; Pages 65-68 ISBN13: 978-1-4503-2318-5; DOI: 10.1145/2504776.2504781 Disaster Recovery and Business Continuity Planning: Business Justification, Journal of Emergency Management May 2010 The purpose of this article is to establish the need for disaster recovery and business continuity planning for information systems. Today’s infrastructure and economic dependence upon information technology is highlighted as a basis for the requirement of disaster recovery and business continuity planning. This planning is stressed as a basic business requirement for any reputable information systems operation. The unique needs of information systems as well as the general background for the subject of disaster recovery and business continuity planning considerations are discussed. Contingency contracts, failover locations, and testing are recommended along with communication protocols. Pages 57-60 DOI: 10.5055/jem.2010.0019 Data Center Business Continuity Best Practice, IEEE This qualitative multiple case study analysis reviews well documented past information technology disasters with a goal of identifying disaster recovery and business continuity best practices. The topic of cyber infrastructure resiliency is explored including barriers to cyber infrastructure resiliency. Factors explored include: adherence to established procedures, staff training in recovery procedures, chain of command structure, recovery time and cost, and mutual aid relationships. Each of these factors is analyzed in the four cases included in the study and recommendations are presented. Automated fail over and regular disaster recovery drills were found to be key factors for success in actual disaster situations. (Accepted for publication)