
2011 IEEE International Conference on High Performance Computing and Communications

Green Power Management with Dynamic Resource Allocation for Cloud Virtual Machines

Chao-Tung Yang, Kuan-Chieh Wang, Hsiang-Yao Cheng, Cheng-Ta Kuo, and William Cheng C. Chu

Department of Computer Science, Tunghai University, Taichung 40704, Taiwan R.O.C.
e-mail: [email protected], [email protected], [email protected], [email protected], [email protected]

Abstract—With the growth of electronic government and electronic business, implementing these services creates an increasing demand for servers. Continued expansion of servers means a need for more space, power, air conditioning, network capacity, human resources, and other infrastructure. However powerful servers have become, their resources are often poorly utilized and wasted. In this paper, Green Power Management (GPM) is proposed for load balancing in virtual machine management on the cloud. It includes three main phases: (1) supporting a green power mechanism, (2) implementing a virtual machine resource monitor on OpenNebula with a web-based interface, and (3) integrating Dynamic Resource Allocation (DRA) with OpenNebula functions instead of traditionally booting physical machines in command mode. Keywords- Cloud Computing; Virtual Machine; Green Power Management (GPM); Dynamic Resource Allocation (DRA)

I. INTRODUCTION

Today's x86 computer hardware was designed to run a single operating system and a single application, leaving most machines vastly underutilized. Virtualization technology lets you run multiple virtual machines on a single physical machine, with each virtual machine sharing the resources of that one physical computer across multiple environments [1, 3, 5, 6, 8, 12, 13, 14, 15, 22]. Managing multiple virtualization platforms and migrating virtual machines across physical machines without disruption is a current issue, and we have to ensure load balance when multiple virtual machines run on multiple physical machines. Setting up a virtual machine cluster environment on physical machines can provide stable service [20], but such an environment often includes unpredictable workloads. Currently, most virtual machine systems are load-balanced statically. Because the load changes dynamically over time, it is inevitable that some physical hosts carry a higher load, so for the throughput and response time of a system to be maximized it is necessary for the load to be distributed to each part of the system in proportion to its computing/IO capacity [3]. In this paper, we present a system that implements optimization with Dynamic Resource Allocation (DRA) for virtual machines running on physical machines; in this work, the DRA method runs on Xen and OpenNebula [2, 4, 8, 15, 19, 21, 23, 24]. The results confirm that a virtual machine whose loading becomes too high is automatically migrated to another physical machine with low loading, without interrupting its services, so that the total loading of the physical machines reaches balance.

Cloud computing and virtualization not only accelerate data center construction but also bring the possibility of green energy. When data center applications are based on generic virtual machines, application workloads can be consolidated onto a smaller number of virtual machines, which helps use computing resources more efficiently. If workloads could be allocated to different resources depending on time and space, energy efficiency would improve and wasted resources could be avoided [25, 26, 27, 29]. The provision of these services is actually quite energy-intensive, especially when servers run at low utilization and idle resources are wasted, which hurts the energy efficiency of data centers. Even at a very low load of 10% CPU utilization, total power consumption is still more than 50% of the peak. Similarly, if a disk, network, or any such resource becomes a bottleneck, it increases the waste of other resources. "Green" has recently become a hot keyword, and we aim our work at it, proposing a power management approach based on virtualization technology.

This paper is organized as follows. Section II introduces the background and related works. Section III describes the system design and the details of the entire system. Section IV shows the experimental environment and results, and finally, Section V outlines the main conclusions and future work.


II. BACKGROUND REVIEW AND RELATED WORK

A. Virtualization
Virtualization is simply the logical separation of the request for some service from the physical resources that actually provide the service. In practical terms, virtualization provides the ability to run applications, operating systems, or system services in a logically distinct system environment that is independent of a specific physical computer system. Obviously, all of these still have to run on a real computer system at any given time, but virtualization provides a level of logical abstraction that liberates applications, system services, and even the operating system that supports them from being tied to a specific piece of hardware.

Virtualization, by focusing on logical operating environments rather than physical ones, makes applications, services, and instances of an operating system portable across different physical computer systems. Virtualization makes it possible to execute applications under many operating systems, to manage IT more efficiently, and to share computing resources with other computers [2]. Through a Virtual Machine Monitor, virtualization lets a single piece of hardware appear as many, and each virtual machine can be seen as a complete, individual unit. A virtual machine has its own memory, CPUs, and a complete set of hardware devices, and it can run any operating system, referred to as the guest OS, without affecting other virtual machines. In general, most virtualization strategies fall into one of two major categories. Full virtualization (also called native virtualization) is similar to emulation: as in emulation, unmodified operating systems and applications run inside a virtual machine. Full virtualization differs from emulation in that the operating systems and applications are designed to run on the same architecture as the underlying physical machine. This allows a full virtualization system to run many instructions directly on the raw hardware. The hypervisor in this case monitors access to the underlying hardware and gives each guest operating system the illusion of having its own copy; it no longer has to use software to simulate a different basic architecture, as shown in Figure 1.

Para-virtualization, like full virtualization, uses a hypervisor and virtual machines, and the term refers to its virtualized operating systems as well. However, unlike full virtualization, para-virtualization requires changes to the virtualized operating system. This allows the VM to coordinate with the hypervisor and reduces the use of the privileged instructions that are typically responsible for the major performance penalties in full virtualization. The advantage is that para-virtualized virtual machines typically outperform fully virtualized ones. The disadvantage, however, is the need to modify the para-virtualized virtual machine or operating system to be hypervisor-aware. The framework of para-virtualization is shown in Figure 2.

Figure 2. The architecture of para-virtualization

In para-virtualization, the hypervisor exports a modified version of the underlying physical hardware. The exported virtual machine is of the same architecture, which is not necessarily the case in emulation. Instead, targeted modifications are introduced to make it simpler and faster to support multiple guest operating systems. For example, the guest operating system might be modified to use a special hypercall application binary interface (ABI) instead of certain architectural features that it would normally use. This means that only small changes are typically required in the guest operating systems, but any change makes it difficult to support closed-source operating systems that are distributed only in binary form, such as Microsoft Windows. As in full virtualization, applications still run without modification.

Figure 1. The architecture of full virtualization

To evaluate the differences between virtualized and non-virtualized environments, the virtualization software used in this paper is Xen. Xen is a virtual machine monitor (hypervisor) that allows one physical computer to run many virtual computers; for example, a web server application and a test server can run on the same physical machine, or Linux and Windows can run simultaneously. Although Xen is not the only virtualization system available, it has a combination of features that make it uniquely well suited to many important applications. Xen runs on commodity hardware platforms and is open source as well. Xen is fast and scalable, and it provides server-class features such as live migration. Xen was chosen as our system's virtual machine monitor because it provides good efficiency, supports different operating systems running simultaneously, and gives each operating system an independent system environment.

B. OpenNebula
OpenNebula is a virtual infrastructure engine that enables the dynamic deployment and re-allocation of virtual machines in a pool of physical resources. The OpenNebula system extends the benefits of virtualization platforms from a single physical resource to a pool of resources, decoupling the server not only from the physical infrastructure but also from the physical location [4]. OpenNebula contains one front-end and multiple back-ends. The front-end provides users with access interfaces and management functions. The back-ends are installed on Xen servers, where the Xen hypervisors are started and virtual machines can be hosted.


Communications between the front-end and the back-ends employ SSH. OpenNebula gives users a single access point to deploy virtual machines on a locally distributed infrastructure. OpenNebula orchestrates storage, network, virtualization, monitoring, and security technologies to enable the dynamic placement of multi-tier services (groups of interconnected virtual machines) on distributed infrastructures, combining both data center resources and remote cloud resources according to allocation policies [4]. The architecture of OpenNebula is shown in Figure 3. Live migration is the movement of a virtual machine from one physical host to another while it remains continuously powered on. When this process is carried out properly, it takes place without any noticeable effect from the end user's point of view. Live migration allows an administrator to take a physical machine offline for maintenance or upgrading without subjecting the system's users to downtime. When resources are virtualized, additional management of VMs is needed to create, terminate, clone, or move VMs from host to host. Migration of VMs can be done off-line (the guest in the VM is powered off) or on-line (live migration of a running VM to another host). One of the most significant advantages of live migration is that it facilitates proactive maintenance: if an imminent failure is suspected, the potential problem can be resolved before it causes service disruption. Live migration can also be used for load balancing, in which work is shared among computers in order to optimize the utilization of available CPU resources.
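As a concrete illustration of how such a migration can be triggered, the short sketch below shells out to Xen's `xm` toolstack (`xm migrate --live`), the usual command-line interface in Xen deployments of this era. This is only a minimal sketch, not the paper's code; the domain and host names are placeholders, and an OpenNebula front-end would normally issue the equivalent request through its own `onevm` tooling instead.

```python
import subprocess

def live_migrate(domain: str, dest_host: str) -> None:
    """Live-migrate a running Xen guest to another physical host.

    Assumes the xm toolstack is available on the source host and that the
    destination's Xen daemon accepts relocation requests. Both arguments
    are illustrative placeholders.
    """
    # --live keeps the guest running while its memory pages are copied over.
    subprocess.run(["xm", "migrate", "--live", domain, dest_host], check=True)

if __name__ == "__main__":
    live_migrate("vm-web-01", "node02.example.org")
```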

However, OpenNebula lacks a GUI management tool. In previous work we built virtual machines on OpenNebula and implemented a Web-based management tool, so the system administrator can easily monitor and manage the entire OpenNebula system in our project. OpenNebula is composed of three main components: (1) the OpenNebula Core, a centralized component that manages the life cycle of a VM by performing basic VM operations and also provides a basic management and monitoring interface for the physical hosts; (2) the Capacity Manager, which governs the functionality provided by the OpenNebula core and adjusts the placement of VMs based on a set of predefined policies; and (3) the Virtualizer Access Drivers: to provide an abstraction of the underlying virtualization layer, OpenNebula uses pluggable drivers that expose the basic functionality of the hypervisor [5].

Figure 3. OpenNebula architecture

C. Eucalyptus
Eucalyptus [9, 23] is also a virtual machine management platform. It is an open-source cloud-computing framework that uses computational and storage infrastructure commonly available to academic research groups to provide a platform that is modular and open to experimental instrumentation and study [24]. The architecture of the Eucalyptus system is simple, flexible, and modular, with a hierarchical design reflecting common resource environments found in many academic settings. In essence, the system allows users to start, control, access, and terminate entire virtual machines using an emulation of Amazon EC2's SOAP and "Query" interfaces; that is, users of Eucalyptus interact with the system using the same tools and interfaces that they use to interact with Amazon EC2. As shown in Figure 4, Eucalyptus consists of four parts [24]:
• Node Controller: controls the execution, inspection, and termination of virtual machine instances on the host where it runs.
• Cluster Controller: gathers information about, and schedules, virtual machine execution on specific node controllers, and manages the virtual instance network.
• Storage Controller (Walrus): a put/get storage service that implements Amazon's S3 interface, providing a mechanism for storing and accessing virtual machine images and user data.
• Cloud Controller: the entry point into the cloud for users and administrators. It queries node managers for information about resources, makes high-level scheduling decisions, and implements them by making requests to cluster controllers.
The Eucalyptus system is built to give administrators and researchers the ability to deploy an infrastructure for user-controlled virtual machine creation and control atop existing resources. The system is highly modular, with each module represented by a well-defined API, enabling researchers to replace components for experimentation with new cloud computing solutions. OpenNebula and Eucalyptus are major open-source cloud computing software platforms. The overall function of these systems is to manage the provisioning of virtual machines for a cloud providing Infrastructure-as-a-Service. These open-source projects provide an important alternative for those who do not wish to use a commercially provided cloud. Table 1 compares and analyzes these systems [25].


Figure 4. Eucalyptus architecture

D. Related Works
Recently, the dramatic performance improvements in hypervisor technologies have made it possible to experiment with virtual machines (VMs) as basic building blocks for flexible computational platforms, and many research efforts have been introduced to reduce the overhead of networking in virtualized environments. Jae-Wan Jang et al. [31] note that virtualized parallel and distributed computing systems are rapidly becoming mainstream due to the significant benefits of high energy efficiency and low management cost, while processing network operations in a virtual machine incurs considerable overhead from the arbitration of network devices among virtual machines, which is inherent in the nature of the virtualized architecture. L. Wang et al. [33] propose a new methodology for Grid computing: using virtual machines as computing resources and providing Virtual Distributed Environments (VDE) for Grid users. Paul Willmann et al. [32] present hardware and software mechanisms to enable concurrent direct network access by operating systems running within a virtual machine monitor. It has been argued that employing virtual environments for Grid computing brings various advantages, for instance computing environment customization, quality-of-service guarantees, and easy management; a lightweight Grid middleware, the Grid Virtualization Engine, was developed accordingly to provide functions for building virtual environments for Grids. VMware DRS [30] achieves an on-demand resource scheduling scheme for virtual machine clusters by migrating virtual machines among physical machines. In our scheme, both measures are used simultaneously, and reallocating the resources of virtual machines within the same physical machine is the first choice for obtaining higher efficiency. Additionally, R. S. Montero et al. [7] proposed a performance model to characterize these variable-capacity (elastic) cluster environments. The model can be used to dynamically dimension the cluster using cloud resources according to a fixed budget, or to estimate the cost of completing a given workload in a target time.

Although Internet-based services have been operating for many years, service offerings have recently expanded into network-based storage and network-based computing. These new services are being offered to both corporate and individual end users. J. Baliga et al. considered both public and private clouds and included the energy consumption of switching and transmission as well as of data processing and data storage; their analysis found that the number of users per server is the most significant determinant of the energy efficiency of a cloud software service [6]. In the area of power management, Z. Wu and J. Wang presented a tree-distribution control framework for power management in cloud computing, so that the power budget can be better managed based on workload or service types [27]. In this paper, we focus on power management and allocation on physical machines running virtual machines, and we present a green power management mechanism for this purpose. More details are given in the next section.

III. SYSTEM DESIGN AND IMPLEMENTATION

A. Dynamic Resource Allocation
The purpose of our proposed Dynamic Resource Allocation (DRA) is to reach the best balance among the physical machines. To avoid computing resources being concentrated on a few specific physical machines, balancing the resources becomes the most important issue; to achieve maximum efficiency, the resources must be evenly distributed [1]. DRA manages resource allocation for a set of virtual machines running on a cluster of hosts, with the goal of fair and effective use of resources. Virtual machine placement and migration recommendations serve to enforce resource-based service level agreements, user-specified constraints, and load-balance maintenance across the cluster as workloads change, as shown in Figure 5. Load balancing seeks to improve the performance of a distributed system by allocating the workload among a set of cooperating hosts. Such a system may attempt to ensure that the workload on each host is within a small tolerance of the workload on all other physical hosts, or may attempt to avoid congestion of individual servers. Load balancing can be either centralized or distributed [29].
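As a minimal sketch of this balancing decision (not taken from the paper; the host and VM names, the load metric, the tolerance value, and the `migrate` callback are all illustrative assumptions), one DRA pass can be expressed as follows:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Host:
    name: str
    vm_loads: Dict[str, float] = field(default_factory=dict)  # VM name -> CPU load (0..1)

    @property
    def load(self) -> float:
        return sum(self.vm_loads.values())

def dra_step(hosts: List[Host], migrate: Callable[[str, str, str], None],
             tolerance: float = 0.10) -> None:
    """One balancing pass: move the hottest VM from the most loaded host
    to the least loaded host if the imbalance exceeds 'tolerance'."""
    busiest = max(hosts, key=lambda h: h.load)
    idlest = min(hosts, key=lambda h: h.load)
    if busiest.load - idlest.load <= tolerance or not busiest.vm_loads:
        return  # cluster is already balanced enough
    vm = max(busiest.vm_loads, key=busiest.vm_loads.get)
    migrate(vm, busiest.name, idlest.name)          # e.g. wraps a live migration request
    idlest.vm_loads[vm] = busiest.vm_loads.pop(vm)  # update the local view of placement

# Example usage with a dummy migration callback:
if __name__ == "__main__":
    cluster = [Host("host1", {"vm1": 0.7, "vm2": 0.6}), Host("host2", {"vm3": 0.1})]
    dra_step(cluster, lambda vm, src, dst: print(f"migrate {vm}: {src} -> {dst}"))
```

In a real deployment the loads would come from the resource monitor and the `migrate` callback would issue a live migration through the front-end.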

Figure 5. Global load balancing algorithm concept


B. Green Power Management
There are several different issues. First of all, consolidation must be considered carefully, since combining different workloads on a common physical host requires checking their suitability; therefore, in order to determine which critical workload components can be packed together, understanding the nature of the work is rather important. Second, there are performance and energy optimization problems, because consolidation can cause performance degradation and lead to increased execution time, which eats into the energy gained from lower idle energy consumption. In addition, several problems affect the integration, including the behavior of servers and workloads, the performance impact of implementing changes, and the optimal combination of different applications, where the optimal solution should not interrupt the workload in order to keep track of changes. All of the problems above are important for energy-efficient consolidation [5]. Our Green Power Management (GPM) saves power by dynamically right-sizing cluster capacity according to workload demands. It recommends evacuating and powering off hosts when CPU is lightly utilized. GPM recommends powering hosts back on when either CPU utilization increases appropriately or additional host resources are needed to meet user-specified constraints. GPM executes DRA in a what-if mode to ensure that its host power recommendations are consistent with the cluster constraints and objectives being managed by DRA. Hosts powered off by GPM are marked as being in standby mode, indicating that they are available to be powered on whenever needed. They can be awakened from the powered-off (ACPI S5) state via Wake-on-LAN (WOL) packets. WOL packets are sent by the front-end host in the cluster, so GPM keeps at least one host powered on at all times.
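Wake-on-LAN "magic packets" have a fixed layout: six 0xFF bytes followed by the target NIC's MAC address repeated sixteen times, typically sent as a UDP broadcast. The snippet below is a small illustration of how the front-end could send one; the MAC address is a placeholder, and the port and broadcast address are common defaults rather than values taken from the paper.

```python
import socket

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must contain 6 bytes")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

if __name__ == "__main__":
    send_wol("00:11:22:33:44:55")  # placeholder MAC of a standby host
```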

Figure 6. System architecture

C. System Architecture
Besides managing individual VMs' life cycles, we also designed the core to support service deployment; such services typically include a set of interrelated components (for example, a Web server and a database back end) requiring several VMs. Thus, we can treat a group of related VMs as a first-class entity in OpenNebula. Besides managing the VMs as a unit, the core also handles the delivery of context information (such as the Web server's IP address, digital certificates, and software licenses) to the VMs [8]. Figure 6 shows the system architecture. Following our previous work, we built a cluster system with OpenNebula and also provide a web interface to manage the virtual machines and physical machines. Our cluster system was built with four homogeneous computers, each equipped with an Intel i7 2.8 GHz CPU, four gigabytes of memory, a 500-gigabyte disk, and the Debian operating system, connected to a gigabit switch.

D. Implementation
First, DRA defines an ideal ratio: the average loading, which should be equal across hosts, is the total loading of the virtual machines divided by the number of booted hosts. Next, this average loading is compared with the loading of the virtual machines on each host. Finally, the virtual machine with the highest loading is migrated to the host with the lowest loading. DRA can thus be regarded as a virtual machine load balancer, and the GPM algorithm achieves energy saving on top of this load balance. The GPM algorithm uses the following quantities:

L_avg: the average loading ratio of the hosts available for allocation, calculated as L_avg = (Σ_i L_HOSTi) / N, where L_HOSTi is the loading ratio of HOSTi and N is the number of booted hosts.
λ: the maximum tolerated loading ratio.
β: the minimum critical loading ratio.
H_min: among the hosts available for allocation, the host whose CPU usage is the minimum.

Suppose there are n virtual machines. If L_avg is greater than λ, the loading on the physical machines is too high, and GPM wakes up a new host and applies DRA to do load balancing. If L_avg is smaller than β, the resources are idle most of the time, so one of the booted hosts needs to be turned off, and the GPM mechanism decides which one (H_min) should be shut down. Once the target host has been determined, the virtual machines on it are migrated evenly to the other hosts, and the target host is then shut down to attain the purpose of energy saving.
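A compact sketch of this thresholding policy is given below. It is an illustration under assumed names: `wake_host`, `evacuate_and_shutdown`, and `rebalance` stand in for the WOL, migration-plus-power-off, and DRA steps, and the default λ and β values are arbitrary.

```python
from typing import Callable, Dict, List

def gpm_step(host_loads: Dict[str, float], standby_hosts: List[str],
             wake_host: Callable[[str], None],
             evacuate_and_shutdown: Callable[[str], None],
             rebalance: Callable[[], None],
             lam: float = 0.80, beta: float = 0.20) -> None:
    """One GPM decision based on the average loading of the booted hosts.

    host_loads maps each booted host to its loading ratio in [0, 1];
    lam and beta correspond to the lambda and beta thresholds above.
    """
    if not host_loads:
        return
    avg = sum(host_loads.values()) / len(host_loads)   # L_avg over booted hosts
    if avg > lam and standby_hosts:
        wake_host(standby_hosts[0])                    # e.g. send a WOL packet
        rebalance()                                    # let DRA spread load onto the new host
    elif avg < beta and len(host_loads) > 1:
        target = min(host_loads, key=host_loads.get)   # least-loaded booted host (H_min)
        evacuate_and_shutdown(target)                  # migrate its VMs away, then power off

# Example with dummy callbacks:
if __name__ == "__main__":
    gpm_step({"host1": 0.05, "host2": 0.10}, ["host3"],
             wake_host=lambda h: print("wake", h),
             evacuate_and_shutdown=lambda h: print("power off", h),
             rebalance=lambda: print("rebalance"))
```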



E. Management Interface
We designed a useful web interface for end users; it is indeed the fastest and friendliest way to operate the virtualization environment. Figure 7 shows the authorization mechanism: through the core of the web-based management tool, it controls and manages both the physical machines and the VM life cycle.

Figure 7. Web-based Interface

The entire web-based management tool includes physical machine management, virtual machine management, and a performance monitor. In Figure 8, it sets the VM attributes such as memory size, IP address, root password, and VM name, and it includes the live migration function as well. Live migration means a VM can move to any working physical machine without suspending in-service programs. Live migration is one of the advantages of OpenNebula. Therefore, we can migrate any VM we want under any situation, and the DRA mechanism makes the migration function more meaningful.

Figure 8. Virtual Machines Manager

IV. EXPERIMENTAL RESULTS

First of all, we focus on the resource utilization of computing under the DRA model. Therefore, we used the HPCC [11, 16, 17, 20] software to verify that DRA achieves good performance and utilization on a virtualization cluster. HPCC is an abbreviation of High Performance Computing Challenge; the HPC Challenge Benchmark is a set of benchmarks that test multiple attributes that contribute substantially to the real-world performance of HPC systems [31].

There are three physical machines in our first experimental environment. We created six virtual machines and distributed them on different host machines. Each virtual machine used one virtual CPU and 512 MB of virtual memory. A virtual machine with a high workload on HOST 1 is migrated to a physical machine with lower resource cost when the DRA function is enabled. Figures 9 and 10 show the experimental results; the red line is DRA disabled and the blue line is DRA enabled. Figure 9 shows the HPCC computing time: the horizontal axis represents the HPCC problem size and the vertical axis represents the HPCC computing time. We noticed that as the HPCC problem size grows, the difference in HPCC completion time between DRA enabled and disabled becomes more obvious. In this experiment, we ran HPCC programs on six virtual machines and calculated the HPCC performance of these six virtual machines. This caused the CPU usage of the virtual machine cluster to jump and affected the host machine CPU usage accordingly. When the DRA function was disabled, virtual machines located on the same host machine performed the HPCC computation simultaneously, so the virtual machines competed with each other for physical resources. When the DRA function was enabled, it detected whether resource usage across all host machines was balanced, and virtual machines on the same host machine were therefore migrated to other hosts automatically. Figure 10 also shows the effectiveness of the DRA function. The vertical axis represents the virtual machines' floating-point performance. With the DRA function enabled, better performance is obtained, which also proves that our approach is workable under these circumstances. Figure 10 shows better performance when the virtual machines are centralized on the same host than when they are distributed across hosts, because the HPCC computation on the virtual machine cluster transfers computing data to each virtual machine, and these virtual machines exchange messages through the host's virtual switch. However, we observed that when the problem size reaches 6000, the DRA-enabled virtual machines distributed to different hosts deliver better HPCC performance than the DRA-disabled ones, because the problem size is too big for the virtual machine cluster on a single host to afford the computation.


Figure 9. Execution time for running HPCC on VM


Figure 10. The performance of Floating Point per second

Furthermore, we built application servers, including a computing service, a teaching website, and multimedia services for the compression and decompression of media files, in the virtual environment. All the services run on four physical machines, which are all connected to a power distribution unit (PDU). A PDU is a device fitted with multiple appliance outlets designed to distribute electric power; it continuously monitors the instantaneous wattage consumption of the four physical machines (Figure 11). We used RRDTool [10] to draw the graph, and we can observe the changes in wattage, which is at least over 400 W. The four physical machines, acting as OpenNebula clients, hosted a total of four VMs, each providing an application service. Figure 12 records, over one month, the average total CPU usage of the four VMs per hour. The X-axis is the time interval (hours), and the Y-axis is the total CPU usage of the four VMs. Here we use the SNMP protocol to record the VM CPU usage per hour, and we discovered that between 2 AM and 7 AM the CPU was at low utilization, while between 10 AM and 4 PM it was relatively high. Consequently, the VMs need more physical resources in the interval between 10 AM and 4 PM.

Figure 11. Power monitor information
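As an illustration of this kind of periodic polling (a sketch, not the paper's implementation: the VM addresses, community string, output file, and the use of the UCD-SNMP ssCpuIdle object are all assumptions), the front-end could record hourly CPU usage roughly as follows:

```python
import csv
import subprocess
import time

# UCD-SNMP-MIB ssCpuIdle (percentage of CPU time spent idle); assumes the
# net-snmp command-line tools and a readable "public" community on each VM.
SS_CPU_IDLE_OID = ".1.3.6.1.4.1.2021.11.11.0"

def poll_cpu_usage(host: str, community: str = "public") -> float:
    """Return an approximate CPU usage percentage for one VM via snmpget."""
    out = subprocess.run(
        ["snmpget", "-v2c", "-c", community, "-Oqv", host, SS_CPU_IDLE_OID],
        capture_output=True, text=True, check=True).stdout.strip()
    return 100.0 - float(out)  # usage = 100 - idle

if __name__ == "__main__":
    vms = ["vm1.example.org", "vm2.example.org"]  # placeholder VM addresses
    with open("vm_cpu_usage.csv", "a", newline="") as f:
        writer = csv.writer(f)
        for vm in vms:
            writer.writerow([time.strftime("%Y-%m-%d %H:%M"), vm, poll_cpu_usage(vm)])
```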

Figure 12. VM CPU usage

Figure 13 covers the same measuring period as Figure 11 and records the average total power consumption per hour. The X-axis is time (hours) and the Y-axis is the total power consumption of the four physical machines (watts). The figure illustrates the case in which GPM is turned off: the four machines are always powered on, and the power consumption stays above 400 W (diamond markers). However, if we look at 2 AM to 7 AM in Figure 12, the VM CPU demand in that period is relatively small. Based on this, GPM decides that the front-end should migrate the VMs onto the same physical machine, and the other physical machines are shut down to save energy. In the period from 10 AM to 4 PM, GPM becomes aware that the VM CPU demand exceeds what a single physical machine can supply; for this reason, the front-end wakes up another machine using WOL technology and load-balances automatically. Because the system is able to power on or shut down physical machines according to the computing demand, we effectively achieve the purpose of saving energy.

Figure 13. Power Consumption

V. CONCLUSIONS

In this work, we have presented a green power management model with a dynamic resource allocation method for a virtualization platform, which allows flexible management of cloud computing platforms. Our research work includes (1) supporting the GPM mechanism, (2) implementing a resource monitor with an OpenNebula web-based interface, and (3) building on DRA and OpenNebula functions instead of booting physical machines on a traditional schedule. Moreover, we expect to improve the handling of sudden, heavy CPU loads, since our current approach assumes smooth rather than dramatic changes in the virtual machines; for instance, sensitivity parameters can be set for the entire mechanism. Eventually, under our GPM approach, we certainly reach a more significant energy-saving goal than the traditional approach.






ACKNOWLEDGMENT
This work is supported in part by the National Science Council, Taiwan R.O.C., under grant numbers NSC 99-2622-E-029-001-CC2, NSC 99-2218-E-029-001, and NSC 100-2218-E-029-004.


REFERENCES
[1] Chao-Tung Yang, et al., "A Dynamic Resource Allocation Model for Virtual Machine Management on Cloud," in Symposium on Cloud and Service Computing 2011. https://sites.google.com/site/2011sc2/program
[2] W. Hagen, Professional Xen Virtualization, Wrox Press Ltd., Birmingham, UK, 2008.
[3] Chao-Tung Yang, Chien-Hsiang Tseng, Keng-Yi Chou, and Shyh-Chang Tsaur, "A Virtualized HPC Cluster Computing Environment on Xen with Web-based User Interface," The Second International Conference on High Performance Computing and Applications (HPCA 2009), Lecture Notes in Computer Science, vol. 5938, pp. 503-508, Springer, Shanghai, China, August 10-12, 2009.
[4] OpenNebula. http://www.opennebula.org
[5] R. Moreno-Vozmediano, et al., "Elastic management of cluster-based services in the cloud," in ACDC '09: Proceedings of the 1st Workshop on Automated Control for Datacenters and Clouds, Barcelona, Spain: ACM, 2009, pp. 19-24.
[6] J. Baliga, et al., "Green Cloud Computing: Balancing Energy in Processing, Storage, and Transport," Proceedings of the IEEE, vol. 99, pp. 149-167, Jan. 2011.
[7] Ruben S. Montero, et al., "An elasticity model for High Throughput Computing clusters," Journal of Parallel and Distributed Computing, vol. 71, no. 6, pp. 750-757, June 2011.
[8] Borja Sotomayor, Ruben S. Montero, Ignacio M. Llorente, and Ian Foster, "Virtual Infrastructure Management in Private and Hybrid Clouds," IEEE Internet Computing, vol. 13, no. 5, pp. 14-20, 2009.
[9] Eucalyptus. http://open.eucalyptus.com
[10] RRDtool. http://www.mrtg.org/rrdtool/
[11] HPCC. http://icl.cs.utk.edu/hpcc/
[12] S. Soltesz, H. Potzl, M. E. Fiuczynski, A. Bavier, and L. Peterson, "Container-based Operating System Virtualization: A Scalable, High-performance Alternative to Hypervisors," in Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on Computer Systems (EuroSys '07), 2007, pp. 275-287.
[13] H. Raj and K. Schwan, "High Performance and Scalable I/O Virtualization via Self-Virtualized Devices," in Proceedings of the 16th International Symposium on High Performance Distributed Computing (HPDC 2007), 2007, pp. 179-188.
[14] K. Adams and O. Agesen, "A Comparison of Software and Hardware Techniques for x86 Virtualization," in Proceedings of the 12th International Conference on Architectural Support for Programming Languages and Operating Systems, New York, NY, USA: ACM Press, 2006, pp. 2-13.
[15] W. Emeneker and D. Stanzione, "HPC Cluster Readiness of Xen and User Mode Linux," in Proceedings of the 2006 IEEE International Conference on Cluster Computing, 2006, pp. 1-8.
[16] C. Huang, G. Zheng, S. Kumar, and L. V. Kale, "Performance Evaluation of Adaptive MPI," in Proceedings of the ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming 2006, March 2006, pp. 12-21.
[17] F. Wong, R. Martin, R. Arpaci-Dusseau, and D. Culler, "Architectural Requirements and Scalability of the NAS Parallel Benchmarks," in Proceedings of the 1999 ACM/IEEE Conference on Supercomputing (CDROM), New York, NY, USA: ACM Press, 1999, p. 41.
[18] Yaozu Dong, Shaofan Li, Asit Mallick, Jun Nakajima, Kun Tian, Xuefei Xu, Fred Yang, and Wilfred Yu, "Extending Xen with Intel Virtualization Technology," Intel Technology Journal, vol. 10, no. 3, pp. 193-204, 2006.
[19] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield, "Xen and the Art of Virtualization," in Proceedings of the 19th ACM Symposium on Operating Systems Principles, New York, USA: ACM Press, 2003, pp. 164-177.
[20] Dave Turner and Xuehua Chen, "Protocol-Dependent Message-Passing Performance on Linux Clusters," in Proceedings of the Cluster 2002 Conference, Chicago, September 2002, pp. 187-194.
[21] Arun Babu Nagarajan, Frank Mueller, Christian Engelmann, and Stephen L. Scott, "Proactive fault tolerance for HPC with Xen virtualization," in Proceedings of the 21st Annual International Conference on Supercomputing, Seattle, Washington, June 17-21, 2007.
[22] Patrícia Takako Endo, Glauco Estácio Gonçalves, Judith Kelner, and Djamel Sadok, "A Survey on Open-source Cloud Computing Solutions," in Proceedings of the VIII Workshop em Clouds, Grids e Aplicações, 2010, pp. 3-16.
[23] Xiantao Zhang and Yaozu Dong, "Optimizing Xen VMM Based on Intel Virtualization Technology," in Proceedings of the 2008 International Conference on Internet Computing in Science and Engineering (ICICSE 2008), 2008, pp. 367-374.
[24] Hitoshi Oi and Fumio Nakajima, "Performance Analysis of Large Receive Offload in a Xen Virtualized System," in Proceedings of the 2009 International Conference on Computer Engineering and Technology (ICCET 2009), vol. 1, Singapore, January 2009, pp. 475-480.
[25] Z. Hai, et al., "An Approach to Optimized Resource Scheduling Algorithm for Open-Source Cloud Systems," in ChinaGrid Conference (ChinaGrid), 2010 Fifth Annual, 2010, pp. 124-129.
[26] Ruay-Shiung Chang and Chia-Ming Wu, "Green virtual networks for cloud computing," in Communications and Networking in China (CHINACOM), 2010 5th International ICST Conference on, 2010, pp. 1-7.
[27] Z. Wu and J. Wang, "Power Control by Distribution Tree with Classified Power Capping in Cloud Computing," in Green Computing and Communications (GreenCom), 2010 IEEE/ACM Int'l Conference on, and International Conference on Cyber, Physical and Social Computing (CPSCom), 2010, pp. 319-324.
[28] S. Figuerola, et al., "Converged Optical Network Infrastructures in Support of Future Internet and Grid Services Using IaaS to Reduce GHG Emissions," Journal of Lightwave Technology, vol. 27, pp. 1941-1946, 2009.
[29] S. Srikantaiah, et al., "Energy aware consolidation for cloud computing," in Proceedings of the 2008 Conference on Power Aware Computing and Systems, San Diego, California, 2008.
[30] Resource Management with VMware DRS [Online].
[31] Jae-Wan Jang, E. Seo, Heeseung Jo, and Jin-Soo Kim, "A low-overhead networking mechanism for virtualized high-performance computing systems," The Journal of Supercomputing, 2010. http://www.springerlink.com/content/ux0021p054xg1k04/
[32] Paul Willmann, J. Shafer, David Carr, Aravind Menon, Scott Rixner, Alan L. Cox, and Willy Zwaenepoel, "Concurrent Direct Network Access for Virtual Machine Monitors," in Proceedings of HPCA 2007, pp. 306-317.
[33] L. Wang, et al., "Provide Virtual Distributed Environments for Grid computing on demand," Advances in Engineering Software, vol. 41, no. 2, pp. 213-219, February 2010.