Experimental Investigation of Water Cooled Server Microprocessors and Memory Devices in an Energy Efficient Chiller-less Data Center

*Pritish R. Parida1, Milnes David2, Madhusudan Iyengar2, Mark Schultz1, Michael Gaynes1, Vinod Kamath3, Bejoy Kochuparambil3 and Timothy Chainer1

1 IBM T. J. Watson Research Center, Yorktown Heights, NY
2 IBM Systems & Technology Group, Poughkeepsie, NY
3 IBM Systems & Technology Group, Raleigh, NC
*[email protected]

Abstract
Understanding and improving the thermal management and energy efficiency of data center cooling systems is of growing importance from a cost and sustainability perspective. Toward this goal, warm liquid cooled servers were developed to enable highly energy efficient chiller-less data centers that utilize only “free” ambient environment cooling. This approach greatly reduces cooling energy use, and could reduce data center refrigerant and make up water usage. In one exemplary experiment, a rack having such liquid cooled servers was tested on a hot summer day (~32 ºC) with CPU exercisers and memory exercisers running on every server to provide steady heat dissipation from the processors and from the DIMMs, respectively. Compared to a typical air cooled rack, significantly lower DIMM temperatures and CPU thermal values were observed.

Keywords
Data-center, Chiller-less data centers, energy efficient, liquid cooled servers.

Nomenclature
BMC - Baseboard Management Controller
CPU - Central Processing Unit
DIMM - Dual Inline Memory Module
DTS - Digital Thermal/Temperature Sensor
GPM - Gallons per minute
HVAC - Heating Ventilation and Air Conditioning
IPMI - Intelligent Platform Management Interface
LPM - Liter per minute
MWU - Modular Water Unit
PECI - Platform Environment Control Interface
RPM - Revolutions per minute

1. Introduction
Exponential growth in the demand for data processing and storage continues to stimulate rapid growth in the U.S. data center industry. In 2005, server driven power usage amounted to 1.2% of total US energy consumption [1]. Over the past six years, energy use by these centers and their supporting infrastructure is estimated to have increased by nearly 100 percent [2]. In the face of growing global energy demand, uncertain energy supplies, and volatile energy prices, innovative solutions are needed to radically advance the energy efficiency of these data center systems. Information Technology (IT) equipment usually consumes about 45-55% of the total electricity in a data center, and total cooling energy consumption is roughly 25-30% of the total data center energy use [2, 3]. In addition to significant energy usage, traditional data center cooling also results in refrigerant and make up water consumption. Thus, understanding and improving the thermal management and energy efficiency of data center cooling systems is of growing importance from a cost and sustainability perspective. Only recently have efforts been made to identify the sources of energy inefficiency and to establish best practices for improving the efficiency of these capital and energy intensive data centers [3-8]. Efficiency improvements naturally center on four highly inter-dependent areas: (i) IT equipment and software, (ii) the power supply chain and/or back-up power, (iii) cooling and (iv) overall data center design, including the allowable operational temperature and humidity ranges for IT equipment [2-5]. Energy efficiency gains achieved in the IT equipment and software decrease the demand for power and, therefore, the demand for cooling. Also, a larger temperature and humidity operating range would enable longer hours of water/air side economizer operation with ambient “free cooling”, resulting in reduced cooling power consumption. In terms of cooling design improvement, using liquid at the rack or server level is a far more efficient method of transferring concentrated heat loads than using air, due to liquid's much higher volumetric specific heat and higher heat transfer coefficients.

For the present study, a warm-liquid cooling infrastructure for the server components was developed along with a dual enclosure air/liquid cooling system to allow “free” cooling from the outdoor ambient environment. A rack having 38 such liquid cooled servers was tested on a hot summer day (~32 ºC) with CPU exercisers and memory exercisers running on each server to provide steady heat dissipation from the processors and from the DIMMs, respectively. This paper discusses the server thermal data for that run. Moreover, a one-to-one comparison with a typical air cooled server was performed to quantify the benefit in terms of improvement in server electronics temperatures and server power consumption.
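To make the volumetric argument concrete, the sketch below compares the volumetric flow of air and of water needed to carry a fixed heat load at the same coolant temperature rise. This is a back-of-the-envelope illustration using nominal room-temperature property values and an assumed 10 K temperature rise, not data from this study.

```python
# Back-of-the-envelope comparison of air vs. water volumetric flow required
# to carry the same heat load at the same coolant temperature rise.
# Property values are nominal room-temperature figures, not measurements
# from this study.

RHO_CP_WATER = 998.0 * 4182.0   # J/(m^3*K), density * specific heat
RHO_CP_AIR   = 1.18 * 1005.0    # J/(m^3*K)

def volumetric_flow(q_watts, delta_t_kelvin, rho_cp):
    """Volumetric flow rate (m^3/s) from q = rho * cp * V_dot * dT."""
    return q_watts / (rho_cp * delta_t_kelvin)

q = 130.0   # W, typical maximum CPU module power cited in this paper
dT = 10.0   # K, assumed coolant temperature rise (illustrative)

v_water = volumetric_flow(q, dT, RHO_CP_WATER)
v_air = volumetric_flow(q, dT, RHO_CP_AIR)

print(f"water: {v_water*6e4:.3f} L/min, air: {v_air*6e4:.1f} L/min, "
      f"ratio ~ {v_air/v_water:.0f}x")
# -> water needs roughly 3,500x less volumetric flow at equal dT
```

At equal temperature rise, water needs roughly three orders of magnitude less volumetric flow than air, which is why concentrated heat loads can be handled with compact cold plates and cold rails rather than large air movers.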


Figure 1. (a) Schematic of the volume server with node liquid cooling loop and other server components. (b) Node liquid cooling loop, having liquid cooling components for both the processors (CPU 1 and CPU 2) and the 12 DIMMs (numbered 2 through 18), installed in an IBM System X volume server.

Figure 2. Schematic of the chiller-less data center liquid cooling design (indoor coolant loop: Rack Heat Extraction Exchanger, MWU liquid-to-liquid heat exchanger and indoor pump; outdoor coolant loop: outdoor pump and Outdoor Heat Rejection Exchanger; pre-MWU coolant, pre-rack coolant and outdoor air temperature sensors).

2. Data Center Liquid Cooling Design
Warm liquid cooled servers as shown in Figure 1 were developed to enable highly energy efficient chiller-less data centers that utilize only “free” ambient environment cooling. This approach greatly reduces cooling energy use, and could reduce data center refrigerant and make up water use.

Figure 3. Server component data showing (a) hottest core PECI/DTS numbers for CPU 1 and CPU 2, (b) DIMM temperatures for each of the 12 DIMMs and (c) system fan rpm for one of the servers, from a sample 22-hour run.

Figure 1(b) shows an IBM System X 3550 M3 1U volume server with a node liquid cooling loop having liquid cooling components for both the micro-processors and the Dual In-line Memory Modules (DIMMs). The microprocessor modules are cooled using cold plate structures, while the DIMMs are cooled by attaching them to a pair of conduction spreaders which are then bolted to a cold rail that has water flowing through it. The loops were designed, modeled and characterized using computational fluid dynamics modeling tools. Every server consisted of two six-core 3.3 GHz micro-processors and twelve 8 GB DDR3 DIMMs from two memory suppliers, herein referred to as Supplier 1 and Supplier 2. DIMMs were installed in slot numbers 2, 3, 5, 6, 8, 9, 11, 12, 14, 15, 17 and 18. The microprocessors and DIMMs have a typical maximum power of 130 W and 6 W, respectively. The modifications to the server included removing three of the six server fan packs, resulting in less IT power at elevated temperature operation. Each server fan pack consists of two coaxial fans, one counter clockwise-rotating and the other clockwise-rotating. Other server components such as the power supply, hard-disk drives and other miscellaneous components were air-cooled. These partially water cooled servers were designed to accept water as warm as 45 ºC and air as warm as 50 ºC into the node.

The modified servers were installed in a Rack Heat Extraction Exchanger (a server rack with liquid cooling manifolds and a Side Car air-to-liquid heat exchanger) to completely remove heat at the rack level, either by direct thermal conduction or by indirect air-to-liquid heat exchange. The air flow inside the rack enclosure was provided by the server fans. The liquid coolant was circulated between the Rack Heat Extraction Exchanger and an Outdoor Heat Rejection Exchanger to move the heat from the System X servers to the outdoor ambient air environment. A liquid-to-liquid heat exchanger was used to transfer the heat from the indoor Rack Heat Extraction Exchanger loop to the Outdoor Heat Rejection Exchanger loop. Further details on this concept data center design and the associated hardware build can be found in the reference articles [9-10]. Figure 2 shows the schematic of the chiller-less data center liquid cooling design that was developed as a part of this study. The Rack Heat Extraction Exchanger along with the Modular Water Unit or MWU (liquid-to-liquid heat exchanger and one of the pumps) was installed inside the building. The outdoor heat rejection unit and the other pump were installed outside the building. Since refrigeration based components are completely eliminated in the present approach, the air and water temperatures entering the server are closely related to the outdoor ambient conditions. Different workloads such as a CPU exerciser, a memory exerciser and Linpack were executed on the servers to provide continuous and steady heat dissipation from the processors and DIMMs and to characterize the system performance. Component information such as the processor PECI/DTS value (Platform Environment Control Interface / Digital Thermal Sensor value, which indicates the difference between the current processor core temperature and the maximum junction temperature) [11], DIMM temperatures, system fan rpm and other such information was collected using the IPMI (Intelligent Platform Management Interface) and BMC (Baseboard Management Controller) tools.

3. Day Long Operation of Data Center Test Facility
The data center test facility was run continuously for a day (~22 hours) with varying outdoor heat rejection exchanger fan speeds and with internal and external loop coolant flow rates set to 7.2 GPM (27.2 LPM) and 7.1 GPM (27 LPM), respectively. The outdoor heat exchanger fans were programmed to vary linearly in speed from 169 RPM to 500 RPM as the pre-MWU temperature varied from 30 ºC to 35 ºC; for pre-MWU temperatures below 30 ºC the fans ran at a constant speed of 169 RPM. Figure 3 shows (a) the hottest core PECI/DTS values for each CPU, (b) the maximum DIMM temperature for each of the 12 DIMMs and (c) the rpm of the system fans for one of the servers during the sample 22-hour run, which began and ended in the afternoons of successive days. Observations such as the 5-6 ºC variation in the DIMM temperatures and CPU 2 running relatively cooler than CPU 1 were as expected based on the computational fluid dynamic simulations of the cooling loop. The rpm of the server fans changed predominantly with the server inlet air temperature. The fan rpm increases normally driven by load-induced processor temperature rise were absent because, even under full power, the processors were running below the temperatures that would trigger processor-driven fan speed increases.
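The outdoor fan schedule described above lends itself to a simple set-point function. The sketch below is a hypothetical reconstruction of that schedule (constant 169 RPM below a 30 ºC pre-MWU temperature, linear ramp to 500 RPM at 35 ºC), not the actual controller code used in the test facility; behavior above 35 ºC is assumed to saturate at the maximum speed.

```python
def outdoor_fan_rpm(pre_mwu_temp_c: float,
                    t_low: float = 30.0, t_high: float = 35.0,
                    rpm_min: float = 169.0, rpm_max: float = 500.0) -> float:
    """Outdoor heat rejection exchanger fan set point (RPM).

    Hypothetical reconstruction of the schedule reported in the paper:
    constant rpm_min below t_low, linear ramp between t_low and t_high,
    and (assumed) saturation at rpm_max above t_high.
    """
    if pre_mwu_temp_c <= t_low:
        return rpm_min
    if pre_mwu_temp_c >= t_high:
        return rpm_max
    frac = (pre_mwu_temp_c - t_low) / (t_high - t_low)
    return rpm_min + frac * (rpm_max - rpm_min)

# Example: a 32.5 C pre-MWU coolant temperature lands mid-ramp.
print(outdoor_fan_rpm(32.5))   # -> 334.5
```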

Figure 4. Variation of temperature from the outdoor air to the server components.


Figure 5. Frequency distribution of CPU 1 and CPU 2 PECI/DTS numbers and maximum DIMM temperatures at t = 12.4 hrs (cooler) and t = 20.7 hrs (warmer) from the 22-hour test run.

Figure 4 shows the outdoor air temperature and the pre-MWU and pre-Rack coolant temperatures for the same 22-hour run. Note that, because the internal and external loop coolant flow rates are kept constant throughout the sample run, the temperature delta between the pre-MWU and pre-Rack temperatures remains constant. Also, when the pre-MWU temperature is less than 30 ºC, the outdoor heat exchanger fans run at constant rpm, causing the temperature delta between the outdoor ambient temperature and the pre-MWU temperature to remain constant.
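These fixed offsets follow from a steady-state energy balance: at constant flow, the loop temperature rise is dT = Q / (m_dot * cp), so a roughly constant rack heat load yields a roughly constant delta. The sketch below estimates that rise from the average IT power and internal loop flow rate reported later in this section, assuming water properties and that essentially all of the rack heat enters the indoor coolant loop; the value is an illustration, not a measurement from the paper.

```python
# Steady-state loop temperature rise dT = Q / (m_dot * cp).
# Uses the average IT power (13.14 kW) and internal loop flow rate
# (7.2 GPM ~= 27.2 LPM) reported in the paper; the water properties and the
# assumption that essentially all rack heat enters the indoor loop are ours.

CP_WATER = 4182.0    # J/(kg*K)
RHO_WATER = 998.0    # kg/m^3

def loop_delta_t(q_watts: float, flow_lpm: float) -> float:
    """Coolant temperature rise (K) across a loop carrying q_watts."""
    m_dot = RHO_WATER * flow_lpm / 1000.0 / 60.0   # kg/s
    return q_watts / (m_dot * CP_WATER)

print(f"{loop_delta_t(13_140, 27.2):.1f} K")   # ~7 K rise at constant flow
```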

However, when the pre-MWU temperature exceeds 30 ºC, the outdoor heat exchanger fans start to ramp up, causing a drop in the temperature delta between the outdoor air temperature and the pre-MWU temperature. Hence, over the duration where the pre-MWU temperature is less than 30 ºC, the temperatures at all locations of the cooling system and of the cooled electronics (that is, the pre-MWU, the pre-Rack, the microprocessor junction temperatures, the DIMM temperatures, etc.) follow the outdoor ambient temperature profile at an essentially fixed offset.

Figure 4 also shows the hottest DIMM temperature (DIMM 17 for this server) and the hottest core estimated temperature for each CPU. In the absence of a direct calibration between PECI/DTS values and absolute temperature, we chose to approximate the hottest CPU core temperature as 100 minus the absolute value of the PECI/DTS number. There were 38 servers in the rack with CPU exercisers and memory exercisers running on every server to provide steady heat dissipation from the processors and from the DIMMs. The average PECI/DTS for the hottest core in CPU 1 was -43.5, with max/min values of -36.7/-50.5. The average hottest DIMM (#17 for this server) temperature was 53 ºC, with max/min values of 55 ºC/50 ºC. All the other servers in the rack showed similar temperatures, PECI/DTS values and fan rpm profiles.

From Figure 4, it can also be seen that the minimum temperature occurs around 12.4 hours and the maximum temperature occurs around 20.7 hours. Frequency distributions of the CPU PECI/DTS numbers and maximum DIMM temperatures at these time instances were evaluated and are presented in Figure 5. The mean maximum CPU 1 core PECI/DTS number was -50 at t = 12.4 hrs and -42.1 at t = 20.7 hrs, with standard deviations of 1.92 and 1.74, respectively. The mean maximum CPU 2 core PECI/DTS number was -51.6 at 12.4 hrs and -43.7 at 20.7 hrs, with standard deviations of 1.53 and 1.36, respectively. The variability in the PECI/DTS numbers can be attributed to the general variability in the performance of each core in a micro-processor. The PECI/DTS numbers of each core of each processor were also recorded and evaluated to characterize this core-to-core and processor-to-processor variability. The mean maximum DIMM temperature was 47.2 ºC at 12.4 hrs and 53.4 ºC at 20.7 hrs. Note that the variability in the DIMM temperatures is mainly due to the different types of DIMMs: all the servers that reported relatively cooler DIMMs had 8GB DDR3 DIMMs from Supplier 1, while all the servers that reported relatively warmer DIMMs had 8GB DDR3 DIMMs from Supplier 2. This is consistent with the observation that Supplier 1 DIMMs dissipate less heat than Supplier 2 DIMMs for similar performance.
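A minimal sketch of the PECI/DTS conversion and summary statistics used in this discussion is shown below: the estimated core temperature is taken as 100 minus the magnitude of the (negative) PECI/DTS reading, as described above. The input values are illustrative placeholders, not measurements from the rack, and the 100 ºC reference is the stated approximation rather than a calibrated maximum junction temperature.

```python
import statistics

TJ_REF_C = 100.0   # assumed reference; the paper approximates Tcore = 100 - |PECI/DTS|

def estimated_core_temp_c(peci_dts: float) -> float:
    """Estimated hottest-core temperature from a (negative) PECI/DTS reading."""
    return TJ_REF_C - abs(peci_dts)

# Illustrative hottest-core readings for one CPU across several servers
# (hypothetical values, not data from the 38-server rack).
readings = [-43.5, -45.2, -41.8, -44.0, -42.7]

temps = [estimated_core_temp_c(r) for r in readings]
print(f"mean DTS = {statistics.mean(readings):.1f}, "
      f"stdev = {statistics.stdev(readings):.2f}, "
      f"mean est. temp = {statistics.mean(temps):.1f} C")
```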

Overall, the system was operated for 22 hours at an average outdoor temperature of 23.8 ºC and max/min temperatures of 32 ºC and 19 ºC. The average IT power was 13.14 kW and the average cooling power was 0.44 kW, or roughly 3.5% of the IT power. In comparison, HVAC air cooled data centers can have cooling powers of 60% of the IT power [3]. Typical outdoor air dry bulb temperature distributions for a number of US cities, including Poughkeepsie, NY, can be found on the NREL website [12]. According to this distribution, the outdoor air temperature in Poughkeepsie, NY stays below 25 ºC for more than 93% of the year. Extrapolating the current result from the 22-hour run, where the average outdoor temperature was 23.8 ºC, a data center in Poughkeepsie, NY based on the present approach could potentially be cooled using less than 3.5% of the IT power.
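The fraction of the year amenable to this kind of ambient cooling can be estimated directly from an hourly typical-meteorological-year record such as the TMY3 data cited above [12]. The sketch below shows one way to compute that fraction; the file name and the dry-bulb column label are assumptions about the downloaded CSV layout, not part of this study.

```python
import csv

def hours_below(csv_path: str, column: str, threshold_c: float) -> float:
    """Fraction of hourly records whose dry-bulb temperature is below threshold_c."""
    temps = []
    with open(csv_path, newline="") as f:
        next(f)                          # TMY3 files carry a station header line first
        for row in csv.DictReader(f):
            temps.append(float(row[column]))
    return sum(t < threshold_c for t in temps) / len(temps)

# Hypothetical usage; path and column name depend on the downloaded file.
# frac = hours_below("poughkeepsie_tmy3.csv", "Dry-bulb (C)", 25.0)
# print(f"{frac:.1%} of hours below 25 C")   # the paper cites >93% for Poughkeepsie
```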

4. Comparison with Typical Air Cooled Server
Although the above results confirm that liquid cooling at the server level is a far more efficient method of transferring concentrated heat loads than air, a one-to-one comparison with a typical air cooled server is required to quantify the benefit in terms of improvement in temperatures and server power consumption. For that purpose, the thermal data of one of the liquid cooled nodes was compared against the thermal data of its air cooled version. The water flow rate through the liquid cooled server was maintained at 0.7 LPM.

Figure 6 shows the comparison of the estimated junction temperatures of the two processors for a liquid cooled server against an air cooled server cooled by air at 22 ºC. For the liquid cooled server, two cases were considered: one with a 45 ºC server inlet water temperature (and 50 ºC server inlet air temperature) and the other with a 25 ºC server inlet water temperature (and 30 ºC server inlet air temperature). Figure 6(a) summarizes the estimated junction temperature comparison when each processor was exercised at 90%, while Figure 6(b) summarizes the comparison when the memory exerciser was executed. In both cases, the liquid cooled microprocessors showed much lower junction temperatures even with warm liquid coolant. Note that the 45 ºC water temperature might be an extreme condition for many parts of the world, and even for that condition the microprocessors were at least five PECI/DTS units cooler than in the typical air cooled server. For the memory exerciser case, although the heat dissipation from the processors is small, the difference in the estimated junction temperature is larger. This is because the server fans are running at a lower rpm and consuming less power (see Figure 8(b)).

Figure 7 shows the comparison of the DIMM temperatures for the liquid cooled server against an air cooled server cooled by air at 22 ºC. Here again, 25 ºC and 45 ºC server inlet water temperature cases were considered. Figure 7(a) summarizes the DIMM temperature comparison when each processor was exercised at 90%, while Figure 7(b) summarizes the comparison when the memory exerciser was executed. Note that the DIMMs in slots 2, 3, 5, 6, 8 and 9 are closer to the fans and are cooled by relatively cooler air, while the DIMMs in slots 11, 12, 14, 15, 17 and 18 are farther from the fans and are cooled by relatively warmer air due to preheat from the DIMMs in the front bank. When only the processors are exercised (Figure 7(a)), the heat dissipation from the DIMMs is very small and thus the DIMM temperatures are close to the server inlet air temperature (for the air cooled server) or the server inlet water temperature (for the liquid cooled server). In such cases, the benefit of liquid cooling the DIMMs is negligible. However, when the memory modules are exercised, the benefit of liquid cooling becomes prominent. In some cases, the DIMMs of a warm liquid cooled server can show lower temperatures than the DIMMs of a typical air cooled server.

Figure 8 shows the comparison of the server power and server fan power consumption for the liquid cooled server against an air cooled server cooled by air at 22 ºC. Here again, 25 ºC and 45 ºC server inlet water temperature cases were considered. Figure 8(a) summarizes the server power and server fan power comparison when each processor was exercised at 90%, while Figure 8(b) summarizes the comparison when the memory exerciser was executed. Figure 8(a) shows that the total server power goes up when the server is cooled with 45 ºC water and 50 ºC server inlet air. Most of this increase in power is due to the increased power consumption of the server fans as the server sees a 50 ºC inlet air temperature.

Figure 6. Comparison of estimated junction temperature (100 - |PECI/DTS|) for a liquid cooled server (water at 45 ºC and at 25 ºC) with a typical air cooled server (air at 22 ºC), (a) when the CPUs are exercised at 90% and (b) when the memory modules are exercised.

Figure 7. Comparison of DIMM temperatures (by DIMM slot number) for a liquid cooled server (water at 45 ºC and at 25 ºC) with a typical air cooled server (air at 22 ºC), (a) when the CPUs are exercised at 90% and (b) when the memory modules are exercised.

Figure 8. Comparison of server power (total power, server power minus fan power) and fan power consumption for a liquid cooled server (water at 45 ºC and at 25 ºC) with a typical air cooled server (air at 22 ºC), (a) when the CPUs are exercised at 90% and (b) when the memory modules are exercised.


If we subtract that fan power from the total power, we see that the power consumed by the server electronics is lower than that consumed by the server electronics of a typical air cooled server. This reduction in server electronics power consumption becomes more prominent for the 25 ºC water cooled server, where a reduction of more than 6% in power consumption was observed. This reduction in power could possibly be due to reduced leakage power, as the liquid cooled electronics were running at much lower temperatures. For the memory exerciser case, this reduction in power consumption was observed to be greater than 11%, where the improvement in estimated junction temperature was ~40 units. Between the 25 ºC and 45 ºC inlet water temperature cases, this reduction was greater than 5.5% and 2.5% for the CPU exerciser and memory exerciser cases, respectively. In summary, liquid cooling the servers provides a significant benefit in terms of lower server electronics temperatures as well as lower server electronics power consumption. Thus, by going to liquid cooling, IT power can be reduced along with the significant reduction in cooling power.

5. Conclusions
Warm liquid cooled servers were developed to enable highly energy efficient chiller-less data centers that utilize only “free” ambient environment cooling. This approach greatly reduces cooling energy use, and could reduce data center refrigerant and make up water usage. A rack having 38 such liquid cooled servers was tested on a hot summer day (~32 ºC) with CPU exercisers and memory exercisers running on every server to provide steady heat dissipation from the processors and from the DIMMs. Significantly lower processor PECI/DTS values of ~ -40 to -55 (that is, 40 to 55 units away from the maximum junction temperature) and DIMM temperatures of ~45-60 ºC were observed. Additionally, a one-to-one comparison with a typical air cooled server showed that, by liquid cooling the servers, IT power can be reduced along with the significant reduction in cooling power. The anticipated benefit of such energy-centric configurations is a significant (~25%) energy saving at the data center level, which represents a greater than 90% reduction in cooling energy usage compared to conventional refrigeration based systems. For a typical one megawatt data center this would represent a savings of roughly $90-$240k/year at an energy cost of $0.04 - $0.11 per kWh. The prototype dual-enclosure liquid-cooling (DELC) technology characterized in this program will be evaluated to determine how to incorporate these developments into a portfolio of leading edge energy efficient technologies.
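The savings range quoted above follows from simple arithmetic on the assumed ~25% facility-level saving; a minimal check using the paper's one megawatt example and the $0.04-$0.11 per kWh energy costs is sketched below.

```python
# Rough check of the annual savings range quoted in the conclusions:
# ~25% energy saving on a 1 MW data center at $0.04 - $0.11 per kWh.

FACILITY_KW = 1000.0       # 1 MW data center
SAVINGS_FRACTION = 0.25    # ~25% facility-level energy saving
HOURS_PER_YEAR = 8760

saved_kwh = FACILITY_KW * SAVINGS_FRACTION * HOURS_PER_YEAR

for rate in (0.04, 0.11):   # $/kWh
    print(f"${rate:.2f}/kWh -> ${saved_kwh * rate / 1000:.0f}k per year")
# -> roughly $88k to $241k per year, consistent with the $90-$240k estimate
```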

Acknowledgments
This project was supported in part by the U.S. Department of Energy's Industrial Technologies Program under the American Recovery and Reinvestment Act of 2009, award number DE-EE0002894. The authors would like to thank Robert Rainey from IBM STG Raleigh, NC for his help in extracting the thermal data from the servers. The authors would also like to thank the IBM Poughkeepsie Site and Facilities team for their engineering help in the construction of the data center facility. Technical contributions and insight from David Graybill, Daniel Simco, Robert Simons and George Manelski of IBM STG Poughkeepsie, NY, with respect to their work in the data center and the node and rack cooling hardware build, are very much appreciated. We also thank the IBM Research project managers, James Whatley and Brenda Horton, and DOE Project Officer Debo Aichbhaumik, DOE Project Monitors Darin Toronjo and Chap Sapp and DOE HQ Contact Gideon Varga for their support throughout the project.

References
1. Brown, et al., “Report to Congress on Server and Data Center Energy Efficiency”, Public Law 109-431, U.S. Environmental Protection Agency, ENERGY STAR Program, Aug 2, 2007.
2. “Vision and Roadmap: Routing Telecom and Data Centers toward Efficient Energy Use”, US Dept. of Energy, May 13, 2009.
3. Greenberg, et al., “Best Practices for Data Centers: Lessons Learned from Benchmarking 22 Data Centers”, ACEEE Summer Study on Energy Efficiency in Buildings, 2006.
4. ASHRAE, “Thermal Guidelines for Data Processing Environments – Expanded Data Center Classes and Usage Guidance”, TC 9.9 Mission Critical Facilities, 2011.
5. ASHRAE Datacom Series 6, “Best Practices for Datacom Facility Energy Efficiency”, 2nd Edition, Atlanta, 2009.
6. LBNL, “High-Performance Buildings for High-Tech Industries, Data Centers”, Berkeley, Calif.: Lawrence Berkeley National Laboratory, 2006, http://hightech.lbl.gov/datacenters.html.
7. Moss, et al., “Chiller-less Facilities: They May Be Closer Than You Think”, Dell technical white paper, 2011, http://content.dell.com/us/en/enterprise/d/business~solutions~whitepapers~en/Documents~chillerless-facilities-whitepaper.pdf.aspx.
8. W. Tschudi, “Best Practices Identified Through Benchmarking Data Centers”, Presentation at the ASHRAE Summer Conference, Quebec City, Canada, June 2006.
9. M. Iyengar, M. David, P. Parida, V. Kamath, B. Kochuparambil, D. Graybill, M. Schultz, M. Gaynes, R. Simons, R. Schmidt and T. Chainer, “Server Liquid Cooling with Chiller-less Data Center Design to Enable Significant Energy Savings”, IEEE SEMI-THERM Symposium, 2012.
10. M. David, M. Iyengar, P. Parida, R. Simons, M. Schultz, M. Gaynes, R. Schmidt and T. Chainer, “Experimental Characterization of an Energy Efficient Chiller-less Data Center Test Facility with Warm Water Cooled Servers”, IEEE SEMI-THERM Symposium, 2012.
11. M. Berktold and T. Tian, “CPU Monitoring with DTS/PECI”, Intel white paper, http://download.intel.com/design/intarch/papers/322683.pdf.
12. “National Solar Radiation Database: Typical Meteorological Year”, http://rredc.nrel.gov/solar/old_data/nsrdb/1991-2005/tmy3/.