2011 Third IEEE International Conference on Cloud Computing Technology and Science

Scalable Service Containers*
Sami Yangui1,2, Mohamed Mohamed1,2, Samir Tata1 and Samir Moalla2
[email protected], [email protected], [email protected], [email protected]
1 Institut TELECOM, TELECOM SudParis, UMR CNRS Samovar, Evry, France
2 Faculty of Sciences of Tunis, University Tunis El Manar, Tunisia

Abstract—Cloud Computing is a new supplement, consumption, and delivery model for IT services based on Internet protocols. It typically involves provisioning of dynamically scalable and often virtualized resources. In this environment, there are several issues related to the inadequacies of hosting platforms and mechanisms to ensure the smooth running of service-based applications (communication protocols, ESB, service containers, etc.). In particular, the architectures and implementations of service containers are not adapted to Cloud environments. In this paper, we present a new service container, dedicated to a single deployed service, that avoids the processing limits of classical service containers. Our approach addresses scalability by reducing memory consumption and response time. The proposed service container is evaluated in several situations against well-known service containers within a real Cloud Computing network.
Keywords—service container, Cloud Computing, scalability.

I. INTRODUCTION

Web services can be seen as a pillar for achieving electronic B2B transactions. For reasons of competitiveness and profitability, more and more companies are using Web services or even adopting new economic models based on concepts such as Enterprise 2.0 to adapt to new demands and expectations [1]. These new communication and information processing paradigms rely heavily on the use of, and interaction between, services. To make their services available online, companies can set up their own infrastructure or adopt the new economic model offered by Cloud Computing. Cloud Computing describes a new supplement, consumption, and delivery model for IT services based on Internet protocols. It typically involves provisioning of dynamically scalable and often virtualized resources. There is no consensus on a definition of Cloud Computing [2]. In [3], the authors define Cloud Computing as a large-scale distributed computing paradigm that is driven by economies of scale, in which a pool of abstracted, virtualized, dynamically scalable, managed computing power, storage, platforms, and services is delivered on demand to external customers over the Internet. In [24], the authors present an academic definition of Cloud Computing. Briefly, the NIST defines Cloud Computing as a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Although there is no consensus on a definition of the Cloud, these definitions share a few key points. Cloud Computing is a specialized distributed computing paradigm [3]. It differs from traditional ones in that (1) it is massively scalable, (2) it can be encapsulated as an abstract entity that delivers different levels of service to customers outside the Cloud, (3) it is driven by economies of scale [17] and (4) it can be dynamically configured (via virtualization or other approaches) and delivered on demand.

We are motivated by the following finding: classical service containers such as Axis are not adequate for elasticity and scalability, so they are not in line with the characteristics of Cloud environments. For example, the memory occupied by these classical containers is limited by the size of the memory of the physical node on which they are deployed, even when virtualization techniques are used. We have designed and implemented a new service micro-container to make the tasks previously performed by classical service containers possible in a Cloud environment [4]. In this paper, we propose a new model for service containers in Cloud environments (called service micro-containers) for application-based service deployment in the Cloud. In addition, we present its architecture, implementation and experimentation in a real Cloud Computing network to highlight our contribution and demonstrate its scalability. This paper does not discuss all the aspects that are relevant to service deployment, execution and optimization in Cloud environments. For example, we do not deal with access control, reliability and security. While we believe that these issues are important, the contributions discussed here are complex enough in themselves to deserve separate treatment.

This paper is organized as follows: Section 2 presents a state of the art of Cloud Computing environments and the motivations of our work. In Section 3, we present the model of our service micro-container, its architecture and its implementation. Section 4 presents the experimentation of our service micro-container. Finally, we conclude our paper and present our future work in Section 5.

II. STATE OF THE ART AND MOTIVATION

In this section, we present a state of the art of Cloud Computing and of service deployment in such environments, as well as the motivations of our work. We first review the APIs proposed for service deployment in the Cloud, then present service containers, and conclude with the motivation of our work.

*The work presented in this paper was partially supported by the French CompatibleOne project.

978-0-7695-4622-3/11 $26.00 © 2011 IEEE DOI 10.1109/CloudCom.2011.54


A. APIs for service deployment in Cloud environments

Cloud providers offer different APIs to access their Cloud services. We can cite, among others, the following APIs: Amazon API [5], GoGrid's API [6], Sun's Cloud API [7] and VMware's vCloud [8]. The Service Oriented Architecture (SOA) is one of the principal architectures related to the "Everything as a Service" (XaaS) paradigm [12], which covers IaaS for Infrastructure as a Service, PaaS for Platform as a Service, SaaS for Software as a Service and so on [9] [13]. A taxonomy of Cloud Computing systems shows that all the existing systems are tied to a programming framework, which makes those Clouds difficult to use, since Cloud clients need to adopt the related programming language before using the Cloud [13]. For example, Amazon imposes the Amazon Machine Image (AMI) and the Amazon MapReduce framework, Force.com imposes the Apex language for its database service, Azure imposes Microsoft .NET [10] [11], Google App Engine imposes the MapReduce programming framework [10] [14], and so on. Similarly, the same type of constraints is imposed on developers deploying service-based applications in the Cloud, which often complicates the task and raises portability and compatibility issues.

B. Service containers for service deployment in Cloud environments

Service-based application deployment in Cloud environments requires service containers to manage the life cycle of the services that make up the deployed applications. In [18], the author presents containers as mechanisms for managing the life cycle of the components that run in them. The container hosts and provides services that can be used by applications during their execution. To deploy an application in a container, one must mainly provide two elements: the application with all its components (compiled classes, resources, etc.) and a deployment descriptor that specifies the container options to run the application. For example, for the J2EE platform, there are several types of containers: Web containers for servlets and JSP, EJB containers for EJBs, and client containers for applications on standalone terminals using J2EE components. In line with the definition given in [19], we can define a Web container as an application that implements the communication contract between the different components of an application obeying a distributed architecture. This contract specifies a runtime environment for Web components including safety and concurrency management, lifecycle, transactions, deployment and other services. Web containers can generally use their own Web server or be used as a plug-in in a dedicated Web server (as is the case with Apache servers or Microsoft IIS). Examples of application containers are Tomcat (a J2EE container implementation) and Axis, both open source projects from Apache.

After studying different architectures of service containers, we realized that they are not able to scale across many physical machines. Each of those containers can be deployed on just one physical machine, so a Cloud using such containers reaches its limits when this physical machine exhausts its resources, even if the other machines are load free. We can say that the Cloud's limit is the limit of the physical machine on which the service container is deployed. This machine presents a bottleneck in every Cloud using such containers for service-based applications. Axis is one of the service containers that can handle a large number of services at the same time and respond to clients' queries in an acceptable time [15] [16]. A preliminary experimental study of the behavior of Axis allowed us to highlight its shortcomings for the deployment and management of a huge number of Web services. Fig. 1 and Fig. 2 below show, respectively, the behavior of the response time and of the memory consumption of Axis with respect to the number of deployed services. To perform these experiments, we used one machine with 0.5 GHz of CPU and 512 Mb of memory. We also developed a test collection generator to obtain thousands of generated Web service code archives and their WSDL files. All these Web services implement the same functionality: an arithmetic operation on two integers. At each iteration of the experiments, we deployed a number of services set in advance and then took the measures using a classic Java client.
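The paper does not publish the test collection generator itself, but its role — mass-producing trivial, identical Web services — can be sketched. The following is our own illustration (class and method names are assumptions, not the authors' code):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of a test collection generator: emits N trivial
// Web service sources that all compute an arithmetic operation
// (here the sum) on two integers, as in the paper's experiments.
public class TestCollectionGenerator {

    // Produce the Java source of one generated service class.
    static String serviceSource(int index) {
        return "public class CalcService" + index + " {\n"
             + "    public int compute(int a, int b) { return a + b; }\n"
             + "}\n";
    }

    // Generate sources for 'count' services; a real generator would also
    // write each source and its WSDL description to an archive on disk.
    public static List<String> generate(int count) {
        List<String> sources = new ArrayList<>();
        for (int i = 0; i < count; i++) {
            sources.add(serviceSource(i));
        }
        return sources;
    }
}
```

At each experiment iteration, a batch produced this way would be deployed before measurements are taken.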

Fig. 1. Axis response time

On the one hand, analyzing the curve shown in Fig. 1, we note that the response time of an Axis client request becomes too large beyond 600 deployed services. We also observed a total crash of Axis beyond 630 deployed services. On the other hand, regarding the memory consumption shown in Fig. 2, we noticed a significant increase in consumption, especially for high numbers of services. These two aspects of the behavior of Axis are characteristic of several classical Web containers we studied and represent the two major defects that prevent these containers from scaling.
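The response time criterion above can be probed from a plain Java client. As a rough illustration of such a timing probe (the actual measurement client is not published; the helper below is our own sketch):

```java
import java.util.function.Supplier;

// Hedged sketch of a response time probe: wraps a request (abstracted
// as a Supplier) and reports the elapsed wall-clock milliseconds.
public class ResponseTimer {

    public static <T> long timeMillis(Supplier<T> request) {
        long start = System.nanoTime();
        request.get();                       // issue the request
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

In the experiments, the supplier would issue a SOAP call to the container, and the elapsed time would be recorded for each deployed-service count.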

Fig. 2. Axis memory consumption

C. Analysis

These studies highlight the operational limits of classical structures for deploying and hosting service-based applications in Cloud environments. Firstly, to deploy services on a Cloud, one may use the proprietary PaaS framework of that Cloud (e.g. Amazon API, GoGrid's API, etc.), which imposes additional constraints on users related to the compatibility and portability of their applications. Another way to deploy services in the Cloud is to use classical service containers that are themselves deployed in the Cloud. Based on the measures obtained in the previous curves, it is clear that classical containers are not scalable and thus unsuitable for Cloud environments. Our objective is to propose a new service container that is scalable and independent of Cloud platform APIs. We think that if each service is deployed separately, the Cloud can really contain as many deployed services as the available physical resources of the Cloud allow. Each service can be deployed anywhere in the Cloud with minimal use of its resources. This led us to the idea of a service micro-container that contains just one service. This micro-container provides the minimal functionalities to manage the life cycle of the deployed service. We can deploy as many micro-containers as possible on any machine; if this machine reaches its limit, we can deploy on a second one, then on a third, and so on. With this idea, we will show that we use minimal resources, which supports the pay-as-you-go model of Cloud Computing [17], and that we reinforce the elasticity of the Cloud because we use just the resources needed.

III. SERVICE MICRO-CONTAINER

We focus on a particular type of application containers called service containers. Of course, service containers meet all the specifications of the application containers already mentioned and provide, in addition, support and management of services. As far as we know, Tomcat and Axis are among the most popular application containers used by developers. In this section, we present our proposed service micro-container. We begin with the description of the micro-container model. Then, we present the architecture of our micro-container. Finally, we describe the various processes and the implementation of the micro-container.

A. Model for service containers

As part of our research, we propose a meta-model of a platform (PaaS) for the Cloud. This PaaS is a service delivery model that allows customers, in particular, to deploy and invoke service-based applications. To do so, we focused first on modeling all the resources that a PaaS may include or offer. A PaaS resource is an elementary platform component able to offer or consume a particular service according to its purpose. Hence our idea of realizing the micro-container presented in this paper: micro-containers are suggested as the embryo of the PaaS platform to be designed and implemented. To model the PaaS resources, we opted to extend the Open Cloud Computing Interface (OCCI) specification, which is mainly dedicated to IaaS resources and defines a REST API for handling these resources [25]. This specification is already implemented in many Cloud infrastructure managers and consists of three parts:
• OCCI core, which defines a meta-model for Cloud resources (IaaS and PaaS levels) [26], as shown in the top layer of Fig. 3,
• OCCI infrastructure, which is an instantiation of the OCCI core meta-model to model infrastructure resources [27]. This specification, summarized by the middle layer in Fig. 3, defines three types of IaaS resources (Network, Compute and Storage) and two types of relationships between resources (StorageLink and NetworkLink),
• OCCI platform, which is our extension to the OCCI specification, represented by the bottom layer in Fig. 3. This model is directly inspired by OCCI infrastructure.

Following the OCCI platform model, we classified PaaS resources into three types:
• SE (Service Engine), a deployment container and an engine to run services. For each type of service, an appropriate SE needs to be deployed,
• BC (Binding Component), a resource that provides protocol-level message transport between PaaS components,
• MessageRouter, a transport channel that routes formatted messages. To connect a BC or an SE to a MessageRouter, the MessageRouterInterface link can be used.

The application servers and other service containers to be deployed in the PaaS must be modeled as specializations of the SE resource. In this context, we designed our proposed micro-container as a model of SE that may contain any resource to be deployed on a Cloud platform.

Fig. 3. OCCI Platform

B. Service micro-container architecture

For optimality and performance reasons, the features of our micro-containers are kept as minimal as possible. After studying the features provided by classical service containers and other container architectures, we drew up a list of basic features that our micro-container should satisfy, which directly reflects the different components that make up its architecture. These basic components ensure the minimal main process of our micro-container (service hosting, interaction with clients, etc.). For example, we deliberately did not incorporate a safety module for access management, since the micro-container is a prototype, nor a concurrency management module, as we assume a single service per container. However, the micro-container was designed so that these modules can be added as extensions or add-ons if necessary. We designed a system composed of two main parts: (1) the generated micro-container and (2) the generic platform that builds it. We believe that the following modules are needed:
• A transport module for receiving user requests and sending responses,
• A communication processing module for analyzing the elements of client messages and interpreting/generating response contents,
• A process module for invoking and processing deployed services,
• A deployment module for deploying a developed service in a micro-container.

The deployment module should contain not only processing modules ensuring the minimal micro-container generation process (WSDL parser, compiler, etc.) but also a set of generic elements for the submission and treatment of non-functional features to be included in these micro-containers (HTTP/RMI or other generic communicators, service administration tools, service billing, etc.). The global architecture of the resulting system is formed by several components, detailed in Fig. 4:
• A deployment platform that takes the service code and a deployment descriptor and generates a corresponding micro-container embedding the service to deploy,
• The micro-container, which hosts the deployed service and handles the various client requests,
• Thin clients that invoke services via micro-containers.

The deployment framework is responsible for the generation of the micro-container component. The processing module is the main component of this framework. To generate a micro-container with a service hosted in it, one must provide the service source and a deployment descriptor file that describes how to assemble and deploy the micro-container in a Cloud environment (Fig. 4, Actions 1 and 2). Specifically, the processing module sends the deployment descriptor directly to the assembly module (Fig. 4, Action 3) before analyzing the service code and generating the corresponding WSDL description [20]. The generated WSDL description is then transmitted to the WSDL parser (Fig. 4, Action 4), which notifies the processing module of the service binding types (Fig. 4, Action 5). The processing module then instantiates the communication packages implementing these bindings, available in the generic communication package (Fig. 4, Actions 6 and 7), associates them with the code and sends the resulting code to the assembly module (Fig. 4, Action 8). The latter compiles the code and generates the micro-container (Fig. 4, Action 9). The micro-container component is responsible for managing the communication with the client, holding the service and processing all messages entering or leaving the micro-container (Fig. 4, Actions 10 and 11). It is composed only of the modules necessary for the deployed service, no more, no less. The architecture of the obtained micro-container, shown in Fig. 4, comprises three main components, each ensuring one feature from the list introduced above:
• A communication module to establish communication and support connection protocols,
• A processing module to process data going into and out of the server (packing and unpacking data),
• A service module to store and invoke the requested service.

It is also useful to note that a client can ask to download a WSDL description from the micro-container (Fig. 4, Action 13).
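The three-module decomposition of a generated micro-container — communication, processing and service — can be sketched in a few lines of Java using the JDK's built-in HTTP server. This is our own minimal illustration, not the authors' code: the request format is simplified to a comma-separated pair instead of a SOAP envelope, and all names are assumptions.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Hedged sketch of a generated micro-container hosting exactly one service.
public class MicroContainerSketch {

    // Service module: the single deployed service (sum of two integers,
    // as in the paper's test collection).
    static int compute(int a, int b) { return a + b; }

    // Processing module: unpack a minimal "a,b" request body and pack the
    // result; a real micro-container would parse a SOAP envelope instead.
    static String process(String body) {
        String[] parts = body.trim().split(",");
        int result = compute(Integer.parseInt(parts[0]),
                             Integer.parseInt(parts[1]));
        return Integer.toString(result);
    }

    // Communication module: one lightweight HTTP endpoint per container.
    public static HttpServer start(int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/service", exchange -> {
            String response =
                process(new String(exchange.getRequestBody().readAllBytes()));
            exchange.sendResponseHeaders(200, response.length());
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(response.getBytes());
            }
        });
        server.start();
        return server;
    }
}
```

One such process per deployed service is what keeps each container's footprint small and independent of the others.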

C. Implementation

For the implementation, we chose to exploit a particular type of service, namely Java Web services with WSDL 2.0 descriptions. A Web service is an elementary service-based application. The implementation process took place in four phases. We first developed a minimal Java deployment framework, which allows developers to deploy


Fig. 4. Micro-container architecture

a Java Web service on a hardcoded micro-container before deploying both of them in the Cloud. After that, we developed the processing module for generating and deploying an optimal and minimal micro-container. All generation steps were then carried out exclusively by this module, before gradually implementing the other platform components. Distributing the workload across multiple components enhances the performance of the platform and facilitates updates and future changes. We also developed Java clients which send requests to the service micro-containers and display the results returned by the deployed Web services (Fig. 4, Actions 10 and 11).
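The generation phase ends with the assembly module compiling the assembled sources into class files before packaging them as an executable archive. A hedged sketch of such a compile step using the JDK's standard compiler API (paths and class names are illustrative, and a JDK rather than a bare JRE is assumed):

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative sketch of the assembly module's compile step: write a
// generated service source to disk and compile it with the JDK compiler.
public class AssemblySketch {

    public static boolean compileService(Path workDir, String className,
                                         String source) throws IOException {
        Path file = workDir.resolve(className + ".java");
        Files.writeString(file, source);
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        // run() returns 0 on success; a real assembly module would then
        // package the resulting .class files into a runnable jar.
        return compiler.run(null, null, null, file.toString()) == 0;
    }
}
```

Packaging the resulting class files into a Java archive then yields the deployable micro-container executable.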

At the first iteration of our implementation, we mainly focused on automating the generation and deployment process of a service micro-container from a WSDL description and the code provided by a developer (Fig. 4, Actions 1 to 9). Then, in the second iteration, our concern was to lighten the generated micro-container as much as possible for performance reasons and scalability constraints. In other terms, we had to refine the generation process. To do this, we defined a generic communication component in the deployment platform to identify and contain all the communication protocols that a service can support. The generation process is based firstly on the binding components detected by the WSDL parser (Fig. 4, Action 4) and secondly on the activation of the corresponding communication modules from the generic communication package (Fig. 4, Action 7). The interactions between these platform components are orchestrated by the processing module (Fig. 4, Actions 3 and 8).

In addition, before generating the communication module of the micro-container, we laid out the scenario that traces the sequence of events from the reception of a request until the sending of a response. For a SOAP over HTTP communication, the execution of this module takes place in four steps:
1) Receiving the client request,
2) Extracting the SOAP envelope from HTTP,
3) Invoking the requested Web service,
4) Building the response message and sending it to the client.

The generation module aims at supporting several communication protocols between the service micro-container and the clients. Later, during deployment, only the necessary communication protocol is encapsulated in the micro-container. The deployment framework components were also developed in Java. The assembly module essentially contains a Java compiler which compiles the resulting code sent by the processing module (Fig. 4, Actions 3 and 8) to obtain the corresponding class files before generating the micro-container as a Java archive file (Fig. 4, Action 9). The latter is an executable file which can be deployed and executed in a Cloud environment. The next section presents some experiments on our micro-container related to response time and memory consumption in a Cloud environment.

IV. EXPERIMENTATION

To perform experiments, we chose to evaluate the performance of our micro-container against Axis2. This choice is motivated by the performance of Axis2: as far as we know, Axis2 is one of the most efficient classical service containers. A set of tests had already been carried out on an older version of the micro-container [4]. In the latest experiments, we decided to perform the tests on one machine, even for the micro-container, since Axis2 can only be deployed on one machine. Nevertheless, our container can be deployed over more than one physical machine; since our container outperformed Axis2 using just one machine [4], we can expect it to defeat Axis2 even more easily in these new experiments at large scale, Cloud context included, because our micro-container can scale


without any interaction between the deployed services, unlike any other container. In addition, the former experiments allowed us to review and improve the micro-container architecture and therefore its performance. For these experiments, we considered two criteria:
• Response time: the time taken by a service container between the instant a request is received and the instant the response is sent,
• Memory consumption: the memory size necessary to load and process a deployed service in the service container after receiving a request.

We intend to further deepen these tests on a large scale in order to demonstrate the scalability and high performance of micro-containers against Axis in a Cloud context. To perform these tests, we used the NCF (Network and Cloud Federation) experimental platform deployed at Telecom SudParis, France. The NCF experimental platform aims at merging network and Cloud concepts, technologies and architectures into one common system. NCF users can acquire virtual networks (or "flash slices") and Cloud services (computing, storage, networking) to deploy and validate their own solutions and architectures. The hardware of the platform is in constant evolution and currently comprises 380 cores (Intel Xeon Nehalem), 1.17 TB of RAM and 100 TB of shared storage. Two Cloud managers allow managing this infrastructure and virtualizing resources, i.e. OpenNebula [21] and OpenStack [22]. For our experiments, we used OpenNebula. OpenNebula is a virtual infrastructure engine which provides the functionality needed to deploy, monitor and control virtual machines (VMs) on a pool of distributed physical resources. The OpenNebula architecture has been designed to be flexible and modular to allow its integration with different hypervisors and infrastructure configurations [23]. Indeed, it is responsible for finding available resources, creating VMs based on a master image and deploying the image to the physical host.
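VM definitions of this kind are expressed in OpenNebula as plain-text template files. As a hedged sketch, a template matching the T4 configuration used below (0.5 CPU, 512 Mb of memory) might look as follows; the NAME, IMAGE and NETWORK values are our own assumptions:

```
# Illustrative OpenNebula VM template for the T4 configuration
NAME   = "mc-experiment-t4"
CPU    = 0.5
MEMORY = 512
DISK   = [ IMAGE = "service-host-image" ]
NIC    = [ NETWORK = "experiment-vnet" ]
```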
Each VM gets contextualized in order to have a unique name, a unique MAC address, an IP and a virtual network (vNet) ID. For VM creation with OpenNebula, one can use predefined templates (Big, Medium or Small) for the CPU and memory characteristics, or define one's own templates. The characteristics of the templates we used for the experimentations are detailed in Table I below:

TABLE I
VM TEMPLATES

Template   CPU (MHz)   Memory (kb)
T1         1           1024
T2         1           512
T3         0.5         1024
T4         0.5         512
T5         0.25        512
T6         0.25        128
T7         0.25        64
Tp         3           3072

We also developed a test collection generator to obtain thousands of generated Web service code archives and their WSDL files. All these Web services implement the same functionality: the sum of two integers. To perform these tests, we first defined several scenarios with different alternatives. These scenarios reflect the objectives that we want to highlight in our experiments. The details of these experiments are as follows:
1) Compare the performance of the service containers on different VMs,
2) Confront the Axis container and the micro-container (MC) on VMs with low memory,
3) Confront Axis and the MC on VMs with a low CPU parameter,
4) Confront Axis and the MC on overpowering VMs,
5) Compare the number of VMs used by Axis and by the micro-container to deploy a huge number of services.

A. Axis versus Micro-container with various VMs

In the first series of tests, we deployed one and the same service on Axis and on the micro-container. At each step of the test, we deployed these containers on different virtual machines created from the various templates and then took measurements. The purpose of this experiment was to see the impact of the VM template choice on the performance of the two containers. Fig. 5 shows the values stored for Axis and the MC for the response time of one service deployed on these VMs, while Fig. 6 shows their memory consumption.

Fig. 5. Response time evolution with different VM templates (Axis2 vs. MC)

Fig. 6. Memory consumption evolution with different VM templates (Axis2 vs. MC)

These experiments show that, for the same deployed service and the same VM templates, the MC performs better than Axis and consumes less memory. However, for the same service container, we do not record any major change in performance or in memory consumption from one VM template to another.
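The memory consumption criterion can be sampled from within a JVM; the paper's actual harness is not published, so the following probe is only our approximation (a real experiment would measure the container process from outside rather than the measuring client):

```java
// Hedged sketch of a memory consumption probe: the heap currently used
// by this JVM, computed from the Runtime totals.
public class MemoryProbe {

    public static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }
}
```

Sampling such a value after each batch of deployments yields curves of the kind shown in Fig. 6.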

more services. Actually, Axis spilled into the virtual machine due to its excessive memory consumption. However, our micro-container reached more than 2000 deployed services using our defined approach with the same performances and without any problem. This interpretation is also verified by the memory consumption measures presented in Fig. 8.

B. Axis Versus Mico-container with less memory VM (T4 template) For the second experimentation scenario, we have chosen to use identical VMs to host the two containers and vary the number of deployed services. The template of these VMs is T4, a low memory template. Fig. 7 below shows the different stored values for response time experiments. During these experiments, we had to make a choice between (1) test by comparing Axis performance versus a single instance of the micro-container performance (2) or test by comparing total CPU time between all instances of deployed micro-containers running in parallel versus Axis. We opted for the first test plan because we chose to compare performance of the two application servers with the same test collection (deployed services).

C. Axis Versus Mico-container with less CPU VMs (T5 template) In the third experimentation scenario, we repeated the same tests by changing only the VMs template. Specifically, we increase the memory capacity of VMs by changing the last template by the T5 template when creating VMs (1Gb of memory instead of 512 Mb). The purpose of this operation is to avoid Axis crash observed in the previous experiment by providing more memory in VMs. Fig. 9 shows the different stored values for memory consumption experiments while Fig. 10 shows memory criteria consumption for the two containers.

Fig. 9.

Fig. 7.

Time response evolution-Axis2 Vs MC (T4 VM template)

Fig. 10.

Fig. 8.

Response time evolution- Axis2 Vs MC (T5 VM template)

Memory consumption evolution-Axis2 Vs MC (T3 VM template)

We notice that the memory usage is linear, increasing with the number of services in the two sides. Results show the savings of the micro-container against Axis in memory usage. That is due to the large number of files generated using Axis container for each deployed Web service (archives, indexation, temporary files, context files, etc.). Those files’ size is larger than a micro-container’s size. It should be also noted that Axis has exceeded its limits observed in the latest experience due to the lack of memory. Henceforth, Axis crashes after deploying 1230 services because of the overflow of the CPU virtual machine. However, for the same VMs, we arrived at deploying 2000 services on micro-containers without any problems and with steady time responses. All this inspired us to repeat these tests with overpowering templates. the detail of these tests is detailed in the next subsection.

Memory consumption evolution-Axis2 Vs MC (T4 VM template)

Axis time response increases proportionally with the number of deployed services. Specifically, these values represent the time needed for Axis to load a service, update indexations, contexts and execution. The interpretation time of request and the time for the building of a response from a result are always the same. For our micro-container, the response time is almost the same for all the experiences, because every instance of the micro-container is independent from the others, hence, we can deploy as many micro-containers as it is possible regarding the available resources in the virtual machine without affecting the response time. In this environment, Axis crashed when 630 services are deployed because there is no more memory resources on the virtual machine for deploying and processing


D. Axis Versus Micro-container with powerful VM (Tp template)

In the fourth test scenario, we created VMs from the Tp template (defined in Table 1) in order to eliminate all the physical limits that penalize Axis. We created two virtual machines using this template and repeated exactly the same test operations carried out in the second and third scenarios. Fig. 11 shows the obtained response time values.

Fig. 11. Response time evolution-Axis2 Vs MC (Tp VM template)

Fig. 12. Memory consumption evolution-Axis2 Vs MC (Tp VM template)

As the curves show, Axis goes beyond the limits observed in the previous scenarios. However, it still crashes, at 3860 deployed services, even though the host VM still has available CPU and memory resources, as Fig. 12 shows. This blockage lies in the design of Axis itself, which begins to fail beyond a certain number of services even when the host still has resources. It is an intrinsic problem of the Axis architecture, which is certainly powerful and convenient for industrial exploitation but unreliable and inadequate for such environments, just like the other classical service containers. We also note that in this scenario we were able to deploy more than 4,000 services on micro-containers, and that at every moment of the test the same invoked service gave a faster response time and a lower memory consumption with MC. With our approach, the number of services deployable on micro-containers is bounded only by the physical limits of the VM, whereas Axis reaches its logical hosting limit well before the micro-containers reach their physical one. In addition, by analyzing the Axis curves, we noticed a change in its service management behavior that depends mainly on the number of deployed services: beyond a certain level of services, Axis begins to generate much bigger context and temporary files, which further degrades its performance compared to MC. This led us to determine the range in which Axis is as efficient as possible and to compare the performance of the two containers in that range. The range can easily be determined by computing the ratio of the memory space used to the number of deployed services in Axis, based on the values of the curve in Fig. 12: the optimal range of Axis lies between 800 and 1000 services. We conclude from the curves of Fig. 12 and Fig. 13 that even in this optimal Axis2 operation range, MC beats Axis2.

E. Axis Versus Micro-container with multiple VMs usage (T6 template)

The last experiment scenario uses multiple VMs. The principle is simple: we deploy the same services on Axis and MC, respectively, until their VMs are saturated; then we create another VM, continue deploying services on it, and so on. This scenario simulates the operating procedure of a Cloud network and tests the scalability of the two containers. For these tests, we used the T6 VM template in order to reach the VM limits quickly. With each new deployment operation, we recorded the measures for each container. When multiple machines are used, the memory consumption values represent the sum of the memory spaces used on all allocated VMs.
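The deploy-until-saturation procedure can be sketched as a small simulation. The per-service memory footprints below are hypothetical round numbers chosen only to mirror the shape of the reported results; they are not the measured values, and `vmsNeeded` is an illustrative helper, not part of the actual test harness.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative simulation of the multi-VM deployment procedure: services are
// deployed on the current VM until its memory is exhausted, at which point a
// new VM is allocated, and so on.
public class MultiVmDeployment {
    /** Number of VMs needed to host `services` services, each consuming
     *  `perServiceMb` of memory, on VMs with `vmMb` of memory each. */
    public static int vmsNeeded(int services, double perServiceMb, double vmMb) {
        List<Double> vms = new ArrayList<>();
        double free = 0;
        for (int i = 0; i < services; i++) {
            if (free < perServiceMb) {  // current VM saturated: allocate a new one
                vms.add(vmMb);
                free = vmMb;
            }
            free -= perServiceMb;       // deploy one more service
        }
        return vms.size();
    }

    public static void main(String[] args) {
        // Hypothetical footprints: a heavyweight container vs a micro-container
        // on a 512 MB VM.
        System.out.println("heavy: " + vmsNeeded(1200, 1.6, 512));  // 4 VMs
        System.out.println("micro: " + vmsNeeded(1200, 0.28, 512)); // 1 VM
    }
}
```

With these assumed footprints the model reproduces the qualitative outcome of the experiment: the heavyweight container spreads 1200 services over four VMs while the micro-container fits them on one.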

Fig. 13. Response time evolution with multiple VMs-Axis2 Vs MC (T6 VM template)

Fig. 14. Memory consumption evolution with multiple VMs-Axis2 Vs MC (T6 VM template)

The results of these experiments are shown in Fig. 13 and Fig. 14. Since MC consumes much less memory than Axis, as demonstrated above, it needs fewer VMs to host the same number of services as Axis, while keeping its performance advantage.


Indeed, for 1200 deployed services, we needed four VMs to host Axis with these services, while only one was enough for MC with the same services deployed. A second VM is needed for MC only when more than 1830 services are deployed. Another scenario showing the superiority of MC is to use the same number of VMs as consumed by Axis and deploy as many services as possible on MCs: in this way, we successfully deployed more than 5,000 services, while Axis hosted only 1200.
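The memory-per-service ratio used earlier to locate Axis's optimal operating range can be sketched as follows. The sample points are invented for illustration, not the measured curve, and `bestSample` is a hypothetical helper name.

```java
// Illustrative sketch: from (deployed services, memory used) samples, compute
// the memory cost per service and report the sample where that ratio is
// lowest, i.e. where the container operates most efficiently.
public class OptimalRange {
    /** Index of the sample with the lowest MB-per-service ratio. */
    public static int bestSample(int[] services, double[] memoryMb) {
        int best = 0;
        for (int i = 1; i < services.length; i++) {
            if (memoryMb[i] / services[i] < memoryMb[best] / services[best]) {
                best = i;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Hypothetical curve samples (MB used at each deployment count).
        int[] services  = {200, 400, 600, 800, 1000, 1200};
        double[] memory = {80, 150, 216, 272, 345, 432};
        int i = bestSample(services, memory);
        System.out.println("lowest MB/service around " + services[i] + " services");
    }
}
```

Applied to the real measurements of Fig. 12, this kind of ratio scan is what places the optimal Axis range between 800 and 1000 services.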

V. CONCLUSION

In this paper, we have highlighted the limits of classical service containers in Cloud Computing environments and proposed a new prototype of micro-containers that overcomes these limits. The work presented in this paper is based on a simple idea: dedicating a micro-container to each deployed service in a Cloud environment. Only the necessary resources, such as communication protocols, are encapsulated in the micro-container to host the deployed service. We have also described the model for using our micro-container, presented its architecture and the deployment framework in charge of generating the micro-container, and presented its implementation process. Our realization was evaluated and compared against the Apache Axis container following different experimentation scenarios defined in advance to highlight our contribution and demonstrate its scalability. In the near future, we plan to integrate mobile agent technology into micro-containers to bring them the mobility needed in a Cloud context. We believe that mobility is crucial for the use of our micro-container model as an entity of an OCCI platform satisfying the constraints of such a Cloud context. We also plan to extend the micro-container experiments by comparing the performance of a platform using micro-containers as its basic element against other known platforms like Google App Engine or Windows Azure.

REFERENCES

[1] G. Cliquet: Method of innovation in the era of Web 2.0. Paris Institute of Technology (ParisTech) PhD thesis (2011)
[2] Twenty Experts Define Cloud Computing, SYS-CON Media Inc, http://Cloudcomputing.sys-con.com/read/612375_p.htm (2011)
[3] I. Foster, Y. Zhao, I. Raicu and S. Lu: Cloud Computing and Grid Computing 360-Degree Compared. In: The IEEE Grid Computing Environments (GCE'08), Austin, USA (Nov 2008)
[4] M. Mohamed, S. Yangui, S. Moalla and S. Tata: Web service micro-container for service-based applications in Cloud environments. In: The 20th IEEE International Conference on Collaboration Technologies and Infrastructures (WETICE'11), Paris (June 2011)
[5] J. Varia: Amazon white paper on Cloud architectures, http://aws.typepad.com/aws/2008/07/white-paper-on.html (2011)
[6] GoGrid Web site, http://www.gogrid.com (2011)
[7] The Sun Cloud API, http://kenai.com/projects/sunCloudapis (2011)
[8] vCloud API programming guide, VMware Inc. (2011)
[9] A. Goscinski, M. Brock: Toward dynamic and attribute based publication, discovery and selection for Cloud Computing. In: Future Generation Computer Systems, vol. 26, pp. 947-970 (2010)
[10] Microsoft Azure, http://www.microsoft.com/azure/default.mspx (2011)
[11] Google App Engine, http://code.google.com/appengine (2011)
[12] Gathering Clouds of XaaS!, http://www.ibm.com/developer (2011)
[13] B. Prasad Rimal, E. Choi, I. Lumb: A Taxonomy and Survey of Cloud Computing Systems. In: The Fifth International Joint Conference on INC, IMS and IDC (NCM'09), Seoul, Republic of Korea (Aug 2009)
[14] J. Dean, S. Ghemawat: MapReduce: Simplified Data Processing on Large Clusters. In: Sixth Symposium on Operating System Design and Implementation (OSDI'04), San Francisco, USA (March 2004)
[15] L.R. Merino, L.M. Vaquero, V. Gil, F. Galan, J. Fontan, R.S. Montero, I.M. Llorente: From infrastructure delivery to service management in Clouds. In: Future Generation Computer Systems, vol. 26, pp. 1226-1240 (2010)
[16] Axis2 Web site, http://axis.apache.org/axis2/java/core/ (2011)
[17] R.L. Grossman: The Case for Cloud Computing. In: IT Professional, vol. 11, no. 2, pp. 23-27 (2009)
[18] http://www.jmdoudoux.fr/java/dej/chap048.htm (2011)
[19] http://en.wikipedia.org/wiki/Web_container (2011)
[20] http://axis.apache.org/axis/java/ant/axis-java2wsdl.html (2011)
[21] http://opennebula.org/ (2011)
[22] http://www.openstack.org/ (2011)
[23] B. Sotomayor, R.S. Montero, I.M. Llorente and I. Foster: Capacity Leasing in Cloud Systems using the OpenNebula Engine. In: Workshop on Cloud Computing and its Applications (CCA'08), Chicago, USA (October 2008)
[24] P. Mell and T. Grance: The NIST Definition of Cloud Computing. In: National Institute of Standards and Technology, vol. 53, issue 6, p. 50 (2009)
[25] T. Metsch, A. Edmonds and V. Bayon: Using Cloud Standards for Interoperability of Cloud Frameworks. A technical RESERVOIR report (2010)
[26] T. Metsch, A. Edmonds and R. Nyren: Open Cloud Computing Interface - Core, http://www.gridforum.org/Public_Comment_Docs/Documents/2010-12/ogf_draft_occi_core.pdf (2010)
[27] T. Metsch and A. Edmonds: Open Cloud Computing Interface - Infrastructure, http://www.gridforum.org/Public_Comment_Docs/Documents/2010-12/ogf_draft_occi_infrastructure.pdf (2010)
