International Journal of Emerging Technology and Advanced Engineering Website: www.ijetae.com (ISSN 2250-2459, ISO 9001:2008 Certified Journal, Volume 3, Issue 7, July 2013)

A Review on Various Resource Allocation Strategies in Cloud Computing

N. Asha1, Dr. G. Raghavendra Rao2

1 Assistant Professor, Department of PG Studies, National Institute of Engineering, Mysore, Karnataka, India
2 Professor & Head, Department of Computer Science & Engineering, National Institute of Engineering, Mysore, Karnataka, India

Abstract— The computational world is becoming very large and complex. Cloud computing has emerged as a new paradigm of computing and has gained attention from both the academic and the business community. Its utility-based usage model allows users to pay per use, similar to other public utilities such as electricity, with relatively low investment in the end devices that access the cloud computing resources. Cloud users can request and rent resources as they become necessary, in a much more scalable and elastic way. Provision should be made so that all resources are made available to users to satisfy their needs. This paper reviews various resource allocation strategies in a cloud computing environment.

Keywords— Cloud computing, Infrastructure as a Service (IaaS), Resource allocation, Virtual machines

I. INTRODUCTION

Cloud computing has been established as the most recent and most flexible delivery model for supplying information technology. Cloud computing can be seen as an innovation in different ways. From a technological perspective it is an advancement of computing, whose history can be traced back to the construction of the calculating machine. While from a technical perspective cloud computing seems to pose manageable challenges, it incorporates a number of challenges at the business level, from both an operational and a strategic point of view. One fundamental advantage of the cloud paradigm is computation outsourcing, where the computational power of cloud customers is no longer limited by their resource-constrained devices. By outsourcing workloads to the cloud, customers can enjoy practically unlimited computing resources in a pay-per-use manner, without committing any large capital outlays to the purchase of hardware and software or the operational overhead therein. It enables customers with limited computational resources to outsource their large computation workloads to the cloud, and economically enjoy the massive computational power, bandwidth, storage, and even appropriate software that can be shared in a pay-per-use manner.

The main cloud computing attributes are pay per use, elastic self-provisioning through software, simple scalable services, and virtualized physical resources. Models such as cloud computing, based on virtualization technologies, enable users to access storage resources and be charged according to the resources accessed. Cloud computing platforms are based on a utility model that enhances reliability, scalability, performance and need-based configurability, and all these capabilities are provided at relatively low cost compared to dedicated infrastructures. This new model of infrastructure sharing is being widely adopted by industry. Industry experts predict that cloud computing has a bright future in spite of the significant challenges posed by changing technology. Cloud computing is a complex distributed environment, and it relies heavily on strong algorithms for properly allocating CPU, RAM and hard disk operations to end users and core processes in a mutual and shared system. Here the matter of resource accounting arises, and there are two distinct alternatives. The first one is strictly usage-oriented, where the user has a limited number of units. Such units can be tied to CPU and/or memory usage or time, or they can be a compound indicator. This generally covers the idea of utility computing. As a whole it gives some flexibility, but it is more expensive in the long term. The second alternative is capacity pre-allocation. In this case there are different plans with predefined constant resources, i.e., dedicated CPU and memory. This still gives the flexibility to upgrade resources on demand, but it also allows a lower price for higher resource usage in the long term. When talking about a cloud computing system, it is helpful to divide it into two sections: the front end and the back end. They connect to each other through a network, usually the Internet. Fig. 1 gives a pictorial representation of the various sections of a cloud computing system. The front end is the side the computer user, or client, sees. The back end is the "cloud" section of the system. The front end includes the client's computer (or computer network) and the application required to access the cloud computing system. Not all cloud computing systems have the same user interface.

Services like Web-based e-mail programs leverage existing Web browsers such as Internet Explorer or Firefox. Other systems have unique applications that provide network access to clients. On the back end of the system are the various computers, servers and data storage systems that create the "cloud" of computing services. In principle, a cloud computing system could include practically any computer program imaginable, from data processing to video games. Usually, each application has its own dedicated server. A central server administers the system, monitoring traffic and client demands to ensure everything runs smoothly. It follows a set of rules called protocols and uses a special kind of software called middleware. Middleware allows networked computers to communicate with each other. Most of the time, servers do not run at full capacity, which means there is unused processing power going to waste. It is possible to make a physical server appear to be multiple servers, each running its own independent operating system; this technique is called server virtualization. By maximizing the output of individual servers, server virtualization reduces the need for more physical machines. If a cloud computing company has a lot of clients, there is likely to be high demand for a lot of storage space; some companies require hundreds of digital storage devices. A cloud computing system needs at least twice the number of storage devices it would otherwise require to keep all its clients' information stored, because these devices, like all computers, occasionally break down. A cloud computing system must make a copy of all its clients' information and store it on other devices; the copies enable the central server to access backup machines to retrieve data that would otherwise be unreachable. Making copies of data as a backup is called redundancy.

Fig. 1 Cloud Computing System [14]

II. SIGNIFICANCE OF RESOURCE ALLOCATION

Resource allocation [13] is a subject that has been addressed in many computing areas, such as operating systems, grid computing, and datacenter management. A Resource Allocation System (RAS) in cloud computing can be seen as any mechanism that aims to guarantee that the applications' requirements are attended to correctly by the provider's infrastructure. Along with this guarantee to the developer, resource allocation mechanisms should also consider the current status of each resource in the cloud environment, in order to apply algorithms that better allocate physical and/or virtual resources to developers' applications, thus minimizing the operational cost of the cloud environment. Allocation of resources is an important component of cloud computing, and its efficiency directly influences the performance of the whole cloud environment. It requires knowing the type and amount of resources needed by each application in order to complete a user job. The two players in cloud computing environments, cloud providers and cloud users, pursue different goals: providers want to maximize revenue by achieving high resource utilization, while users want to minimize expenses while meeting their performance requirements. However, it is difficult to allocate resources in a mutually optimal way due to the lack of information sharing between them. Moreover, the ever-increasing heterogeneity and variability of the environment and the uncertainty of resources in the nodes, which cannot be handled by traditional resource allocation, pose harder challenges for both parties. Cloud resources can be seen as any resource (physical or virtual) that developers may request from the cloud. For example, developers can have network requirements, such as bandwidth and delay, and computational requirements, such as CPU, memory and storage. Generally, resources are located in a datacenter that is shared by multiple clients, and should be dynamically assigned and adjusted according to demand. It is important to note that clients and developers may see these finite resources as unlimited, and the tool that makes this possible is the RAS. The RAS should deal with these unpredictable requests in an elastic and transparent way. This elasticity should allow the dynamic use of physical resources, thus avoiding both the under-provisioning and over-provisioning of resources. Since cloud computing has its own features, an optimal RAS should avoid the following situations:

• Resource contention arises when two applications try to access the same resource at the same time.
• Scarcity of resources arises when resources are limited and the demand for them is high.
• Resource fragmentation arises when resources are isolated; there may be enough resources overall, but they cannot be allocated to the application that needs them because they are fragmented into small entities.
• Over-provisioning arises when an application receives more resources than it demanded.
• Under-provisioning occurs when an application is assigned fewer resources than it demanded.

Hardware and software resources are allocated to cloud applications on an on-demand basis. For scalable computing, virtual machines are rented [1].
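As a minimal illustration of the last two situations (not from the paper; the function and resource names are hypothetical), the following Python sketch classifies an allocation as over- or under-provisioned by comparing demanded and allocated amounts per resource type:

    # Hypothetical illustration: classify an allocation against an application's demand.
    def classify_allocation(demanded, allocated):
        """demanded/allocated: dicts such as {"cpu": 4, "ram_gb": 8}."""
        status = {}
        for resource, need in demanded.items():
            got = allocated.get(resource, 0)
            if got > need:
                status[resource] = "over-provisioned"
            elif got < need:
                status[resource] = "under-provisioned"
            else:
                status[resource] = "exact"
        return status

    print(classify_allocation({"cpu": 4, "ram_gb": 8}, {"cpu": 8, "ram_gb": 4}))
    # {'cpu': 'over-provisioned', 'ram_gb': 'under-provisioned'}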

III. RELATED WORK

Dynamic resource allocation in cloud computing has attracted the attention of the research community in the last few years. It is one of the most challenging resource management problems. Many researchers around the world have come up with new ways of facing this challenge. In [9] the authors propose a model and a utility function for location-aware dynamic resource allocation. A comprehensive comparison of resource allocation policies is covered in [10]. Hua's paper [12] proposed an ant colony optimization algorithm for resource allocation, in which the characteristics of the cloud are considered. It has been compared with a genetic algorithm and a simulated annealing algorithm, showing that it is suitable for computing resource search and allocation in a cloud computing environment. This paper is not intended to address any specific resource allocation strategy, but to provide a review of some of the existing resource allocation techniques. Not many papers that analyse various resource allocation strategies are available, cloud computing being a recent technology. The literature survey focuses on resource allocation strategies and their impact on cloud users and cloud providers. It is believed that this survey would greatly benefit cloud users.

IV. RESOURCE ALLOCATION STRATEGIES & ALGORITHMS

A Resource Allocation Strategy is all about integrating cloud provider activities for utilizing and allocating scarce resources within the limits of the cloud environment so as to meet the needs of cloud applications. A few of the strategies for resource allocation in cloud computing are covered here briefly.

A. Ant Colony Optimization Algorithm for Resource Allocation

Like the storage space allocation strategy in the memory or cache of a PC, the hardware infrastructure required by the client according to the agreed service contract should be allocated dynamically from the node pool. Since the specific condition of resources is unknown in a cloud setting and the network does not have a fixed topology, the structure and the resource allocation of the whole cloud environment are unpredictable. This means resources are allocated to the user according to the instantaneous need of the client program rather than the commitment described in the agreement; each unit in the cloud computing node cluster uses a Master/Slaves structure, as shown in Fig. 2. Using the ant colony algorithm we can discover computing resources in an unknown network topology and select the most appropriate one or more resources for the user's job until the user's requirements are met. There is a Master node responsible for controlling and supervising all the Slave nodes; a slave node only responds to the tasks the master node distributes to it. When the search begins, query messages are sent by slave nodes, and these messages play the role of "ants": an ant chooses the next point with a probability proportional to the pheromone there (more pheromone, more probability), and leaves a certain dose of pheromone on each point it passes. In order to reflect the change in pheromone, researchers adopt a local update strategy to modify the pheromone intensity on a node. When a user task arrives, the master job tracker node is responsible for all assignments, whose data resources may be contained in the user image slices spread across different storage nodes of its slave task tracker nodes. When a slave node receives assignments sent by the master tracker node, it begins to search for suitable computing nodes for its storage nodes. In the first step, the slave node scans the usable computing resources it already has.


If these can satisfy the user's needs as guaranteed by the provider, the slave node allocates them first to the assignments from the master node, since local computing resources have high priority. Otherwise, it discovers resources in the cloud environment. This discovery should be done within a certain scope to save network cost. Ultimately, if no resource has been found, the slave tracker informs the master tracker to move the user image slice to other slave trackers. The entire cloud environment is unknown, because the details of the resources cannot be observed and the network topology is mutable. In this situation, the location and quality of the computing resource points are unknown to the storage points. Hua's paper [12] provides a detailed description of the ant colony algorithm for resource allocation and uses GridSim to simulate a local domain of cloud computing in order to inspect the operating conditions of the algorithm in a cloud network environment.

Procedure: ACO Algorithm
Begin
  While (ACO has not been stopped) do
    Schedule activities
      Ant allocation and moving (ant distribution and movement)
      Local pheromone update
      Global pheromone update
    End schedule activities
  End While
End Procedure

Through the GridResource class and a series of helper classes in GridSim, researchers simulate the computation and network resources of cloud computing and construct a relatively realistic cloud layout.
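As a rough, illustrative sketch of the idea behind this procedure (not the authors' implementation; the node names, pheromone constants and scoring rule below are assumptions), the following Python code shows the core ACO mechanics described above: ants choose the next node with probability proportional to its pheromone, and local/global updates adjust pheromone intensity:

    import random

    # Hypothetical node pool: pheromone intensity and resource capacity per node.
    pheromone = {"node_a": 1.0, "node_b": 1.0, "node_c": 1.0}
    capacity = {"node_a": 2, "node_b": 8, "node_c": 4}   # assumed resource units per node
    RHO = 0.1      # local evaporation rate (assumed)
    ALPHA = 0.5    # global reinforcement rate (assumed)

    def choose_node(nodes):
        """Pick the next node with probability proportional to its pheromone."""
        total = sum(pheromone[n] for n in nodes)
        r, acc = random.uniform(0, total), 0.0
        for n in nodes:
            acc += pheromone[n]
            if r <= acc:
                return n
        return nodes[-1]

    def allocate(demand, n_ants=10):
        """Each ant walks nodes until the demanded units are gathered; the best (shortest) path is reinforced."""
        best_path = None
        for _ in range(n_ants):
            remaining, path = demand, []
            nodes = list(pheromone)
            while remaining > 0 and nodes:
                n = choose_node(nodes)
                nodes.remove(n)
                path.append(n)
                remaining -= capacity[n]
                # local pheromone update: slight evaporation on the visited node
                pheromone[n] = (1 - RHO) * pheromone[n] + RHO
            if remaining <= 0 and (best_path is None or len(path) < len(best_path)):
                best_path = path
        if best_path:
            # global pheromone update: reinforce the best path found
            for n in best_path:
                pheromone[n] += ALPHA / len(best_path)
        return best_path

    print(allocate(demand=6))   # e.g. ['node_b'] or ['node_c', 'node_a']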

After much experimentation, researchers found that the ant colony algorithm is more effective when there are more nodes and fewer resources, which is precisely the characteristic of a cloud environment. The ant colony algorithm targets the large-scale, shared, dynamic and other characteristics of the cloud environment. It searches for and allocates computation resources to the user's job dynamically, and it shows clear advantages in a cloud environment.

Fig. 2 Master/Slaves structure (a Master node running the Name Node and Job Tracker supervising Slave nodes, each running a Data Node and Task Tracker)

B. Dynamic Resource Assignment on the Basis of Credibility

To achieve dynamic configuration and shared use of computing resources, a core issue in the cloud field, researchers have proposed a variety of schemes. Dynamic resource provisioning and management are described in Irwin's paper [6] and Padala's paper [7]. Many scholars have put forward solutions for efficient resource allocation based on market mechanisms. CDA (Continuous Double Auction) is one of the common market mechanisms currently in use. It ensures high efficiency and effective coordination of resource allocation. Kenney's paper [8] argues that web resource allocation based on a CDA framework is effective, and Li's paper [3] presents a cloud resource assignment policy based on the CDA framework and Nash equilibrium to achieve effective resource allocation in a cloud environment. Requests for resources in a cloud environment typically exhibit strong volatility. To ensure the credibility of dynamic resources without affecting service efficiency, researchers have proposed a more credible dynamic resource provisioning strategy [5]. In the cloud, the resource allocation model based on the CDA mechanism mainly includes a cloud resource providing agent, a cloud resource requirement agent, and an information serving agent that balances the price and the amount of resources for a transaction. When entering or leaving a cloud resource system, both the owner of the resource and the cloud user need to be registered with the information serving agent. The owner sets a price and allocates resources through the resource providing agent, while the user allocates an appropriate amount of resources, through the resource requirement agent, to the jobs that need to be done. In the CDA mechanism, the resource providing agent, the resource requirement agent and the information serving agent correspond respectively to the seller, the buyer and the arbiter in an auction. The arbiter is responsible for organizing the auction and collecting market information. At any time unit during the auction, the seller and the buyer offer their own prices to the arbiter, and the arbiter matches the resource transactions based on both sides' price lists and gives an average price for both sides.


The buyers and the sellers confirm their quotes according to the market transaction environment and their own incentive mechanisms. Taking into account the pattern of node failures among cloud resources, and on the basis of the CDA mechanism and the credibility of nodes, researchers have proposed a dynamic resource allocation model (T-CDA). First, both the supply and demand sides confirm their own prices based upon their own pricing strategies. The auctioneer sorts the prices of the resource providers in descending order and the prices of the resource demanders in ascending order. Then it determines whether a resource transaction can be concluded based on the utility model of resource trading. Cheng's paper [11] presents a simulation experiment on this strategy in Matlab 7.1. Through the results of the simulation and the evaluation of successful execution ratio and deviation from fairness in resource allocation, this strategy is shown to be markedly superior to resource allocation strategies that do not consider the credibility of nodes.
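The following Python sketch is an illustrative approximation of the CDA-style matching step described above (it is not the T-CDA model from [11]; the prices, the matching rule and the omission of the credibility weighting are assumptions). Provider asks are sorted in descending order and demander bids in ascending order, as stated above, and a trade is concluded at the average of the two prices whenever a bid covers an ask:

    # Illustrative CDA-style matching (assumed rule: trade when a bid covers an ask,
    # cleared at the average of the two prices). Credibility weighting omitted.
    def match_cda(asks, bids):
        """asks: provider prices; bids: demander prices. Returns (ask, bid, clearing_price) tuples."""
        asks = sorted(asks, reverse=True)   # providers sorted in descending order
        bids = sorted(bids)                 # demanders sorted in ascending order
        trades = []
        while asks and bids and bids[-1] >= asks[-1]:
            ask = asks.pop()                # cheapest remaining provider
            bid = bids.pop()                # highest remaining demander
            trades.append((ask, bid, (ask + bid) / 2.0))   # arbiter announces the average price
        return trades

    print(match_cda(asks=[5.0, 3.0, 8.0], bids=[6.0, 2.0, 9.0]))
    # [(3.0, 9.0, 6.0), (5.0, 6.0, 5.5)]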

C. Topology Aware Resource Allocation (TARA)

Different kinds of resource allocation mechanisms have been proposed for the cloud. The one mentioned in [2] proposes an architecture for optimized resource allocation in Infrastructure-as-a-Service (IaaS) based cloud systems. Current IaaS systems are usually unaware of the hosted application's requirements and therefore allocate resources independently of its needs, which can significantly impact performance for distributed data-intensive applications. To address this resource allocation problem, an architecture that adopts a "what if" methodology to guide allocation decisions taken by the IaaS is proposed. The architecture uses a prediction engine with a lightweight simulator to estimate the performance of a given resource allocation and a genetic algorithm to find an optimized solution in the large search space. Results showed that TARA reduced the job completion time of these applications by up to 59% when compared to application-independent allocation policies.

1) Architecture of TARA: TARA [2] is composed of two major components: a prediction engine and a fast genetic algorithm-based search technique. The prediction engine is the entity responsible for optimizing resource allocation. When it receives a resource request, the prediction engine iterates through the possible subsets of available resources (each distinct subset is known as a candidate) and identifies an allocation that optimizes estimated job completion time. However, even with a lightweight prediction engine, exhaustively iterating through all possible candidates is infeasible due to the scale of IaaS systems. Therefore a genetic algorithm-based search technique is used, which allows TARA to guide the prediction engine through the search space intelligently.

Fig. 3 Basic Architecture of TARA [2]

2) Prediction Engine: The prediction engine maps resource allocation candidates to scores that measure their "fitness" with respect to a given objective function, so that TARA can compare and rank different candidates. The inputs used in the scoring process are shown in Fig. 3.


3) Objective Function: The objective function defines the metric that TARA should optimize. For example, given the increasing cost and scarcity of power in the data center, an objective function might measure the increase in power usage due to a particular allocation.

4) Application Description: The application description consists of three parts: i) the framework type that identifies the framework model to use, ii) workload-specific parameters that describe the particular application's resource usage, and iii) a request for resources including the number of VMs, storage, etc.

5) Available Resources: The final input required by the prediction engine is a resource snapshot of the IaaS data centre. This includes information derived from both the virtualization layer and the IaaS monitoring service. The information gathered ranges from a list of available servers, current load and available capacity on individual servers, to data centre topology and a recent measurement of available bandwidth on each network link.
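As a rough illustration of TARA's search loop (not the actual implementation from [2]; the candidate encoding, the scoring stub and the GA parameters below are assumptions), a genetic algorithm can evolve candidate server subsets and rank them with a prediction/fitness function:

    import random

    SERVERS = list(range(20))   # assumed pool of 20 servers in the IaaS snapshot
    NUM_VMS = 5                 # VMs requested by the application

    def predicted_completion_time(candidate):
        """Stub for the prediction engine: lower is better.
        Here we simply penalize spreading VMs across 'distant' server IDs."""
        return max(candidate) - min(candidate) + random.random()

    def fitness(candidate):
        return -predicted_completion_time(candidate)   # the GA maximizes fitness

    def crossover(a, b):
        cut = random.randint(1, NUM_VMS - 1)
        return list(dict.fromkeys(a[:cut] + b))[:NUM_VMS]   # keep unique servers

    def mutate(candidate):
        if random.random() < 0.2:
            new = random.choice([s for s in SERVERS if s not in candidate])
            candidate[random.randrange(NUM_VMS)] = new
        return candidate

    def tara_search(generations=50, pop_size=30):
        population = [random.sample(SERVERS, NUM_VMS) for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]           # keep the fittest candidates
            children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    print(tara_search())   # e.g. a set of 5 "close" server IDs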

D. Dynamic Resource Allocation for Parallel Data Processing

Dynamic resource allocation for efficient parallel data processing [4] introduces a new processing framework explicitly designed for cloud environments, called Nephele. Most notably, Nephele is the first data processing framework to include the possibility of dynamically allocating/de-allocating different compute resources from a cloud in its scheduling and during job execution. Particular tasks of a processing job can be assigned to different types of virtual machines which are automatically instantiated and terminated during job execution.

1) Architecture: Nephele's architecture [4] follows a classic master-worker pattern, as illustrated in Fig. 4. Before submitting a Nephele compute job, a user must start a VM in the cloud which runs the so-called Job Manager (JM). The Job Manager receives the client's jobs, is responsible for scheduling them, and coordinates their execution. It is capable of communicating with the interface the cloud operator provides to control the instantiation of VMs. We call this interface the Cloud Controller. By means of the Cloud Controller the Job Manager can allocate or de-allocate VMs according to the current job execution phase. The actual execution of the tasks which a Nephele job consists of is carried out by a set of instances. Each instance runs a so-called Task Manager (TM). A Task Manager receives one or more tasks from the Job Manager at a time, executes them, and afterwards informs the Job Manager about their completion or possible errors.

Fig. 4 Design Architecture of Nephele Framework [4]

2) Job Description: Jobs in Nephele are expressed as a directed acyclic graph (DAG). Each vertex in the graph represents a task of the overall processing job. The graph's edges define the communication flow between these tasks. Job description parameters are based on the number of subtasks, data sharing between instances of a task, the instance type, and the number of subtasks per instance.

3) Job Graph: Once the Job Graph is specified, the user submits it to the Job Manager, together with the credentials obtained from the cloud operator. The credentials are required since the Job Manager must allocate/de-allocate instances during job execution on behalf of the user.
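A minimal sketch of such a job description (illustrative only; Nephele's real API differs, and the task names and parameters below are assumptions) can represent the DAG as vertices with per-task parameters and edges for the communication flow:

    # Illustrative job-graph structure (hypothetical names; not Nephele's actual API).
    job_graph = {
        "vertices": {
            "read_input":   {"subtasks": 4, "instance_type": "m1.small",  "subtasks_per_instance": 2},
            "transform":    {"subtasks": 8, "instance_type": "c1.medium", "subtasks_per_instance": 4},
            "write_output": {"subtasks": 2, "instance_type": "m1.small",  "subtasks_per_instance": 2},
        },
        # edges define the communication flow between tasks (must form a DAG)
        "edges": [("read_input", "transform"), ("transform", "write_output")],
    }

    def instances_needed(graph):
        """Number of VM instances the Job Manager would have to allocate per task."""
        return {task: -(-p["subtasks"] // p["subtasks_per_instance"])   # ceiling division
                for task, p in graph["vertices"].items()}

    print(instances_needed(job_graph))
    # {'read_input': 2, 'transform': 2, 'write_output': 1}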

V. ADVANTAGES AND LIMITATIONS

Let us have a comparative look at the advantages and limitations of resource allocation in the cloud.

A. Advantages

A major advantage of resource allocation is that the user has to install neither software nor hardware to access applications, develop applications or host applications over the Internet. There is also no limitation of place or medium: we can reach our applications and data anywhere in the world, on any system. Cloud providers can share their resources over the Internet during resource scarcity.

B. Limitations

Since users rent resources from remote servers for their purposes, they do not have control over those resources. Migration problems occur when a user wants to switch to some other provider for better storage of their data.


It is not easy to transfer huge amounts of data from one provider to another. In a public cloud, clients' data can be susceptible to hacking or phishing attacks. Since the servers in the cloud are interconnected, it is easy for malware to spread. Peripheral devices such as printers or scanners might not work with the cloud, as many of them require software to be installed locally; networked peripherals have fewer problems. Deeper knowledge is required for allocating and managing resources in the cloud, since all knowledge about the working of the cloud mainly depends upon the cloud service provider.

VI. CONCLUSION

In the cloud paradigm, an effective resource allocation strategy is required for achieving user satisfaction and maximizing the profit of cloud service providers. This paper summarized the classification of resource allocation strategies and their impact on the cloud system. Some of the strategies discussed above mainly focus on CPU and memory resources but are lacking in other factors.

REFERENCES

[1] Dorian Minarolli and Bernd Freisleben, Utility-based resource allocation for virtual machines in cloud computing, IEEE, 2011.
[2] Gunho Lee, Niraj Tolia, Parthasarathy Ranganathan, and Randy H. Katz, Topology-aware resource allocation for data-intensive workloads, ACM SIGCOMM Computer Communication Review, 41(1):120-124, 2011.
[3] Li Li, Niu Ben. Particle Swarm Optimization [M]. Beijing: Metallurgical Industry Press, 2009.
[4] Daniel Warneke and Odej Kao, Exploiting dynamic resource allocation for efficient parallel data processing in the cloud, IEEE Transactions on Parallel and Distributed Systems, 2011.
[5] Valkenhoef G, Ramchurn S D, Vytelingum P, et al. Continuous Double Auctions with Execution Uncertainty [C]. Proc. of Workshop on Trading Agent Design and Analysis, Pasadena, California, USA, 2009.
[6] Irwin D, Chase J S, Grit L, et al. Sharing networked resources with brokered leases [C]. Proceedings of the USENIX Technical Conference, Boston, MA, USA, 2006, pp. 199-212.
[7] Padala P, Shin K G, Zhu Xiao Yun, et al. Adaptive control of virtualized resources in utility computing environments [C]. Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on Computer Systems, Lisbon, Portugal, 2007, pp. 289-302.
[8] Kennedy J, Eberhart R. Particle Swarm Optimization [C]. Proc. of IEEE International Conf. on Neural Networks, Perth, Australia, 1995.
[9] Gihun Jung and Kwang Mong Sim, Location-Aware Dynamic Resource Allocation Model for Cloud Computing Environment, International Conference on Information and Computer Applications (ICICA), IACSIT Press, Singapore, 2012.
[10] Chandrashekhar S. Pawar and R. B. Wagh, A review of resource allocation policies in cloud computing, World Journal of Science and Technology, 2(3):165-167, 2012.
[11] Cheng Shiwei, Pan Yu. Credibility-based dynamic resource distribution strategy under cloud computing environment [J]. Computer Engineering, 2011, 6(37):45-48.
[12] Hua Xiayu, Zheng Jun, Hu Wenxin. Ant colony optimization algorithm for computing resource allocation based on cloud computing environment [J]. Journal of East China Normal University (Natural Science), 2010.
[13] Glauco Estácio Gonçalves, Patrícia Takako Endo, Thiago Damasceno, Resource allocation in clouds: Concepts, tools and research challenges.
[14] Cloud computing architecture: http://blog.infizeal.com/2011/12/cloud-computing-architecture.html