Cloud Computing for Cloud Manufacturing: Benefits and Limitations

Peng Wang
Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Cleveland, OH 44106
e-mail: [email protected]

Robert X. Gao1
Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Cleveland, OH 44106
e-mail: [email protected]

Zhaoyan Fan
Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06268
e-mail: [email protected]

Cloud computing, as a new paradigm for aggregating computing resources and delivering services over the Internet, is of considerable interest to both academia and industry. In this paper, the main characteristics of cloud computing are summarized in view of its application to the manufacturing industry. Analytic models, such as the analytic hierarchy process (AHP) method for selecting appropriate cloud services, are analyzed with respect to computational cost and network communication, which present a bottleneck for effective utilization of this new infrastructure. The review presented in this paper aims to assist academic researchers and manufacturing enterprises in obtaining an overview of the state-of-the-knowledge of cloud computing when exploring this emerging platform for service. [DOI: 10.1115/1.4030209]

Keywords: cloud computing, evaluation, virtualization, performance, cloud manufacturing

1 Introduction

As an emerging infrastructure for delivering on-demand services, cloud computing has enabled the transition of computation from owning, operating, and managing computing resources such as platforms, hardware, and software, to renting these resources according to quality and service requirements. Enabled by the Internet, cloud computing has already demonstrated a wide range of applications, from financial management [1], pharmaceutical development [2], and manufacturing [3] to academic research [4]. Enabled by cloud computing, owning a high-performance computer locally no longer presents a hard constraint for researchers who perform complex computations [5] or companies that need to promptly respond to remote service requests [6]. For such computational needs, cloud-based computing resources can benefit customers in three aspects [7]:

(1) Lower start-up and operating costs: by employing a "pay-as-you-go" model, cloud-based service helps reduce tenants' up-front investment and computing resource waste.

1 Corresponding author. Contributed by the Manufacturing Engineering Division of ASME for publication in the JOURNAL OF MANUFACTURING SCIENCE AND ENGINEERING. Manuscript received November 19, 2014; final manuscript received March 10, 2015; published online July 8, 2015. Assoc. Editor: Xun Xu.

(2) Scalability and ease of access: by aggregating distributed computing resources over the Internet, a cloud-based service can scale up or down per tenants' service demand.

(3) Lower risk in resource provisioning: by outsourcing computing resources to the cloud, tenants shift the risks resulting from misestimating the computing load or from equipment shut-downs to cloud providers, who have the expertise and experience to manage such risks.

For the manufacturing industry, cloud computing has the potential to introduce a new mechanism for more cost-effective equipment and process monitoring by relaying local monitoring data via cloud-based servers to remote analytic centers where expertise is available for advanced analysis and rational decision-making [8–10]. In addition, cloud computing can boost the efficiency of procedures in manufacturing that need a large amount of computational effort, such as modeling during the design process [11,12]. Toward this goal, this paper aims to answer the following questions that are relevant to introducing cloud computing to manufacturing:

(1) What benefits does cloud computing bring to manufacturing, such that cloud-based service should be considered as the major platform for data storage and analysis?

(2) What criteria should be considered in order to choose the most appropriate cloud service, given that there are multiple cloud providers offering different types of services?

(3) What are the specific challenges when adopting cloud computing for manufacturing, considering the specific requirements associated with the manufacturing environment, such as the large amount of sensing data acquired during manufacturing processes, the multiphysics nature of the data, real-time or quasi-real-time response requirements, etc.?

Motivated by cloud computing, the evolution of cloud manufacturing (CM) can be regarded as a manufacturing version of cloud computing, realized by aggregating distributed manufacturing resources, then extracting and virtualizing them as services.
Cloud computing provides the core technical support for CM and enables advanced concepts in manufacturing, such as networked and virtual manufacturing [13,14]. Data obtained during a manufacturing process, such as force [15], temperature [16], and acoustic emission [17], are uploaded via the Internet to the cloud, on which data storage and analysis are performed, providing the basis for data and information sharing [18,19]. Such sharing over interconnected manufacturing resources leverages the original procedures in the manufacturing life cycle, from product design, production, and condition and process monitoring, to maintenance decision-making and inventory management [20–23]. For example, maintenance decision-making can benefit from exchanged knowledge and experience by comparing results from condition monitoring analysis against an established information base, through techniques such as crowdsourcing [24]. Furthermore, processes involved in manufacturing can be extracted and virtualized as individual services, fulfilled by service brokers other than the manufacturers themselves, where the virtualization is enabled by the advanced Internet of things (IoT), such as MTConnect, together with high-performance computing and storage via cloud computing, as shown in Fig. 1.

A major feature of cloud computing is multiple-tenant-oriented service, meaning that different service users can run different platforms or software concurrently on the same infrastructure. For cloud service providers, upholding obligations to customers while maximizing utilization of the cloud infrastructure are dual requirements. However, it is in general very difficult to anticipate all possible service request scenarios [25]. Furthermore, dynamic provisioning of computing resources is required to adapt to changes in the service load, especially when unanticipated service requests surge. An efficient resource management scheme would require dynamic assignment of service requests from customers with

Journal of Manufacturing Science and Engineering. Copyright © 2015 by ASME

AUGUST 2015, Vol. 137 / 040901-1


minimal resources needed to satisfy acceptable fulfillment of the service-level agreements offered by cloud providers. This is enabled by two main supporting technologies: virtualization and data distribution.

Another important aspect of cloud computing is data storage and management. In cloud computing, data storage and data processing are not executed on the same device. Data uploaded by tenants are flexibly distributed across a wide range of resources, especially when a large volume of data is involved. As an example, real-time manufacturing process monitoring may incur data production at the level of gigabytes/hour, making it difficult to store on a single server. The solution to this problem is distributed data storage, for which data consistency poses a challenge when data are extracted from a broad range of replicated data sources at the data processing server. In addition, when replicating data across various data centers, the system needs to be aware of the exact data location while taking latencies and workload into consideration. This affects the quality of service (QoS), which needs to be considered when selecting cloud service providers [26].

Prior studies have reviewed the benefits and challenges of cloud computing. Armbrust et al. [7] summarized several critical obstacles to the advancement of cloud computing, in terms of adaptation, growth, and policy, and identified opportunities to overcome these obstacles. Studies in Refs. [25,27–30] highlighted the key concepts, architectural principles, state-of-the-art implementations, and research challenges associated with cloud computing. García-Valls et al. [6] identified challenges in supporting real-time applications in the cloud, and presented recent advancements in real-time virtualization and cloud computing. Manvi and Shyam [31] conducted a survey on infrastructure as a service (IaaS) in cloud computing, focusing on resource provisioning, allocation, mapping, and adaptation. Garg et al. [32] proposed a framework that allows customers to evaluate cloud offerings and rank them based on their ability to meet the user's QoS requirements.

Leveraging the previous work, this paper focuses on the perspective of applying cloud computing to benefit manufacturing. The main technologies supporting cloud computing for manufacturing are discussed and summarized, considering the specific challenges associated with a manufacturing environment. Moreover, this paper presents an example of choosing a cloud service provider with respect to different criteria, based on the specific demand from a customer.

The rest of the paper is organized as follows. In Sec. 2, the characteristics and benefits of cloud computing, together with its supporting technologies, are summarized. Section 3 discusses the selection of computing services with respect to cost and performance. In Sec. 4, limitations of the current cloud computing techniques when applied to manufacturing are summarized. Conclusions are drawn in Sec. 5.

Fig. 1 CM enabled by cloud computing

2 Benefit of Cloud Computing for Manufacturing

2.1 Definition and Architecture of Cloud Computing. To date, a variety of definitions and descriptions of cloud computing have been developed, with the definition by the National Institute of Standards and Technology (NIST) being widely accepted: "Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction" [33]. When applied to manufacturing, cloud computing refers to the execution of a specific application, with data transferred over the Internet, then stored and processed over a pool of distributed hardware, system, and software resources. Distinct from the storage and computing methods for operating computational clusters or grids, the computing resources provisioned by a cloud are: (1) aggregated as a pool of computing resources, such as central processing unit (CPU), memory, disk storage, and network bandwidth, over distributed devices; (2) monitored and controlled to provide real-time resource management for the provider; and (3) automatically and dynamically abstracted and assigned to multiple tenants without human intervention, and elastically provisioned and released. The first two features are controlled by cloud providers and hidden from the tenants, creating the perception that cloud resources are infinite when a service request is submitted. The third feature provides one of the most significant benefits for tenants: elasticity in resource allocation and access, which is discussed in detail in Sec. 2.2.

The architecture of a cloud computing environment can be divided into three layers (see Fig. 2), corresponding to three types of cloud services:

(1) IaaS aggregates computing resources (e.g., CPU, memory, disk storage, network I/O) at the infrastructure layer, then abstracts these resources as individual machines presented to tenants, based on a virtualization technique. The benefit is that the tenant has exclusive and complete control over the entire software stack. One example of IaaS is Amazon EC2 [34], which enables tenants to manage virtualized instances on aggregated servers to fulfill their computing needs, combined with storage capacity enabled by the Amazon simple storage service (Amazon S3) [35].

Fig. 2 Structure of cloud computing

Transactions of the ASME


(2) Platform as a service (PaaS) provides resources at the platform layer, such as the operating system (OS) and the software development environment, to allow tenants to develop applications and manage their configuration settings. Google App Engine [36] is one example, providing an application programming interface for implementing web applications.

(3) Software as a service (SaaS) provides computing resources (i.e., infrastructure and platform) as services to execute applications developed by tenants, achieving better performance and availability at lower operating cost.

For upper level service tenants, management or control of the resources provided by a lower service layer is not a concern. For instance, a tenant employing PaaS or SaaS does not need to be concerned about how computational resources are multiplexed and shared. In comparison, a tenant working with manufacturing processes that generate a large amount of process data and require real-time computing for feedback control of machine operations and part quality is best directed to IaaS providers, as IaaS is the service most suited for data transmission, storage, and processing.

2.2 Characteristics of Cloud Computing. The major characteristics of cloud computing include aggregated and shared resource pooling, worldwide distribution and ubiquitous network access, multitenancy, and service orientation. These unique features differentiate cloud computing and cloud-based services from traditional operational methods. For a cloud customer in the manufacturing industry, various benefits can be expected from using cloud-related services.

Elasticity. Traditional hosting services generally provide a fixed number of resources for a fixed amount of time, meaning that users have limited ability to respond when their usage is rapidly changing [34]. In other words, users are required to provision computing resources for the peak usage, and to allow the resources to remain idle at nonpeak times.
In comparison, cloud computing can instantly respond to changes (scaling up or down) in computing requirements, allowing tenants to control resource usage by releasing resources at nonpeak times and requesting more resources at peak times. As an example, Amazon EC2 can respond to a service request within minutes. To illustrate how elasticity helps reduce resource waste, assume a manufacturing plant contains five production lines, with all lines running in the daytime and only one running at night. Each production line requires 50 server-hours per day for real-time online condition monitoring and other data processing that provides information for control, maintenance, and decision-making. If management wants to establish a local data center, a total of 50 × 5 = 250 server-hours, or 250/24 ≈ 11 servers, are required to satisfy the request at peak time, although over half of the resource would stay idle for half of the time. Alternatively, if the plant chooses a cloud-based service, resources can be scaled up or down elastically according to the actual utilization during the day, resulting in 50/2 × 4 + 50 = 150 server-hours. Compared to the 250 server-hours for localized computing, this represents a 40% reduction in server-hours.

Economy. In the above example, if the plant management decides to build its own data center, an estimated cost of 11 × $4,000 (unit price of a regular server) = $44,000 would be incurred as up-front investment. Furthermore, such an equipment investment's life span would be around 3 yr, according to commonly used financial models in the United States [7]. If instead the plant management considers renting a cloud service, a unit price of less than 44,000/(36 × 365 × 150) = $0.022 per server-hour will make cloud-based computing an attractive option.
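The server-hour and break-even arithmetic above can be checked with a short script (all figures come from the example in the text; the $4,000 server price and the 36 × 365 depreciation horizon follow the text's cost model):

```python
import math

# Five production lines, 50 server-hours per line per day;
# four of the lines are idle at night (half of the day).
peak_server_hours = 50 * 5                          # sized for the daytime peak
servers_needed = math.ceil(peak_server_hours / 24)  # local servers required
cloud_server_hours = 50 // 2 * 4 + 50               # with elastic scaling
reduction = 1 - cloud_server_hours / peak_server_hours

# Up-front cost of a local data center, and the break-even cloud
# unit price over the equipment life span (the text's formula)
upfront_cost = servers_needed * 4000
break_even_price = upfront_cost / (36 * 365 * cloud_server_hours)
```

Running this reproduces the 250 vs. 150 server-hour comparison, the 40% reduction, and the roughly $0.022/server-hour break-even price quoted above.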
Additionally, this unit price would include charges for renting computing resources, data storage, and data transfer in/out of the cloud. For example, Amazon EC2 charges $0.10 per hour for one optimized computing CPU, $0.05 per GB-month for data storage, and around $0.10 per GB for data transfer out of EC2 [34]. If the hypothetical manufacturing plant only transfers its data into EC2 (data transfer-in is free) for computation, and transfers out or stores the analysis results, a cloud service would be more economical than owning a data center. Besides the economic benefits of computation discussed above, cloud-based services also help in the following aspects [37,38]: (1) reduce up-front investment in hardware/software by utilizing the pay-as-you-go service model; (2) avoid maintenance fees on data centers, which require a highly controlled environment; and (3) reduce other costs such as operating cost, staff salary, etc.

Risk Shift. The first risk in a local data center is closely related to misestimating the computing load. While overprovisioning of computing resources causes waste, underprovisioning may jeopardize the operation of a manufacturing process. Underprovisioning happens when new measurement capabilities are added to monitor the process, or when servers are shut down and no back-up is provided. Two cases of underprovisioning were reported in Ref. [7], with the conclusion that users will desert an underprovisioned service when the peak load reaches the data center's usable capacity [39]. With the elasticity and on-demand characteristics of cloud services, the workload risk is shifted from the manufacturing plant to cloud providers. Another common risk with computing equipment is data loss associated with hardware/software shutdown or damage. For this type of problem, cloud providers typically have a strategy for system recovery and data backup, which effectively reduces risk [40,41]. For a manufacturing plant, this translates into reduced cost and improved profit.

2.3 Supporting Technologies.
Cloud-based storage and computing are performed on distributed resources over the Internet in two configurations: either a single piece of physical equipment is shared by multiple tenants, or multiple pieces of equipment are aggregated to serve one tenant (the latter is typically the case for data storage). Accordingly, there are two main techniques serving these two functions: virtualization and distributed file storage.

Virtualization. Virtualization is a technique that multiplexes a single piece of physical equipment by means of a small, privileged kernel, commonly referred to as a hypervisor, providing tenants with separate environments in which to execute their applications. Different virtualization techniques are applied to different working environments, but all share some common characteristics. Virtualization for IaaS is generally called "machine virtualization," and allows a single physical machine to emulate the behavior of multiple machines, with the possibility of hosting multiple, heterogeneous OSs/platforms on the same hardware. The subdivision of a physical machine into several independent portions, called virtual machines (VMs), is defined and managed by a hypervisor based on tenants' service requests, while the VMs are assigned to implement different tasks. Extending machine virtualization, OS-level virtualization is designed for PaaS, emulating multiple OS instances on a single OS. Similarly, application-level virtualization in SaaS provides applications developed for a virtual instruction set. Another important technique, which links the physical host to outside tenants, is network virtualization [37,42]; it emulates network setups in software, allowing multiple VMs to run on the same physical host with their own Internet protocol (IP) addresses, connected together in a virtual bridge topology [4].
Besides improving the level of physical resource utilization and increasing the benefit to cloud service providers, virtualization provisioning, from a tenant's perspective, is highly specialized and customized [43]. The execution environment may contain a specific OS, development languages, or software. Also, virtualization


provides tenants with privileges within their VMs without compromising the isolation or host integrity [4]. It should be noted, however, that some types of hardware resources are nonpartitionable, for example, disk I/O, network I/O, and the L2 cache. This means that different VMs serving other tenants may compete for these nonsliceable resources; in other words, the application in a VM cannot be executed in a fully isolated computing environment. Moreover, service requests from tenants are generally difficult to predict, making it difficult for a VM hypervisor to judge and assign the corresponding resources and provide stable performance to each VM. Thus, a significant criterion for cloud service selection is the QoS. Current cloud providers guarantee only limited performance or QoS; e.g., Amazon EC2 offers a guarantee only on the availability of resources, not on the performance of its VMs [34].

Distribution and Interaction of Databases. The shift of data storage and computing away from desktops and local servers to distributed data centers across the Internet brings limitations as well as new opportunities. Unlike the traditional format, the new data storage type in cloud computing, known as a distributed file system, partitions the files uploaded by tenants into several portions (also known as chunks) and stores them on distributed servers. As an example, the Google file system (GFS) [36] is a representative distributed file system that is specially designed to provide efficient and reliable access to data using large clusters of commodity servers. The GFS architecture includes a hypervisor, multiple sliced servers (VMs), and multiple clients. The hypervisor running in the driver domain is responsible for assigning storage resources and managing uploaded data.
Each uploaded file is split into multiple subfiles, stored in different chunk servers (VMs) in the guest domain, while each subfile is identified by a subfile handle, which is a globally unique 64-bit number that is assigned by the hypervisor when the subfile is first created, as shown in Fig. 3.
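A minimal sketch of the chunking scheme just described is given below; the function name, chunk size, and random handle assignment are illustrative assumptions, not the actual GFS interface (in GFS, a master node issues handles and tracks chunk placement):

```python
import os

def split_into_chunks(data: bytes, chunk_size: int):
    """Split an uploaded file into fixed-size chunks, each tagged
    with a globally unique 64-bit handle (drawn at random here)."""
    chunks = {}
    for offset in range(0, len(data), chunk_size):
        handle = int.from_bytes(os.urandom(8), "big")  # 64-bit handle
        chunks[handle] = data[offset:offset + chunk_size]
    return chunks

# Toy usage: a 10-byte "file" split into 4-byte chunks
blob = b"0123456789"
table = split_into_chunks(blob, chunk_size=4)
assert len(table) == 3
assert b"".join(table.values()) == blob  # chunks reassemble the file
```

In a real deployment each handle would map to the set of chunk servers holding replicas of that chunk, rather than to the chunk bytes themselves.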

3 Selection of Cloud Service

Several cloud providers are presently in the market, providing services targeted at varying types of demands, from enterprises to individual tenants. As an example, Amazon EC2 provides ten families of services, with each family containing different types of services for each request load (see Table 1) [34]. In general, selecting the most appropriate service per tenant-specific requirements is a challenge to be faced. Previous efforts have tried to find a standardized method to measure and compare different cloud services for different application scenarios (e.g., manufacturing), based on a set of business-relevant key performance indicators, such as accountability, agility, assurance of service, cost, performance, security, privacy, and usability. This section provides a review of cloud service selection methods and classifies them into two groups:

Fig. 3 Structure of file distribution system


Table 1 Configuration and cost of EC2 instances

Family               Type        vCPU   Memory (GB)   Storage (GB)   Cost/h
General              m3.large    2      7.5           32             $0.140
Compute optimized    c3.large    2      7             32             $0.105
GPU instance         g2.2large   8      26            15             $0.650
Memory optimized     r3.large    2      15            32             $0.175
Storage optimized    i2.xlarge   4      14            800            $0.853

utilization cost-based and performance quality-based. After a discussion of cost calculation and optimization, QoS indicators are summarized, and common service selection approaches, such as the AHP and the utility function, are reviewed.

3.1 Utilization Cost Calculation and Optimization. Cloud-based service provides competent computing capability with the advantage of elasticity, and becomes more attractive if the cost is also competitive. Kondo et al. [44] compared and contrasted the performance and monetary cost-benefits of clouds for desktop grid applications ranging in computational size and storage. Assunção et al. [4] investigated the benefits that organizations can reap by using cloud computing providers to augment the computing capacity of their local infrastructure, and sought an optimal scheduling strategy for fully utilizing local and remote resources by minimizing cost and maximizing computing performance. Deelman et al. [45] researched the cost benefit of cloud computing through a case study of Montage calculations using Amazon EC2 and S3 as computational and storage resources, in the context of different execution situations (i.e., computing only, or joint computing and storage). It was found that cloud computing offers a cost-effective solution for data-intensive applications, since the storage costs were insignificant compared to the CPU costs for a data-intensive application with small computational granularity.

It is not easy to directly compare the prices of cloud services, due to the unique features associated with a specific QoS. Even the same provider offers different VMs (e.g., instances in EC2) when serving different potential markets. To tackle this problem, a common method is to normalize the cost according to the configuration volume, i.e., the cost of one unit of CPU, memory, storage, and network bandwidth. Garg et al. [32] indicate that the expression

    p / (CPU^a × RAM^b × DATA^c × NET^d)    (1)

can be applied to calculate the normalized price of a VM with price p, where the exponents a–d are weights for each resource attribute. The weights are initialized artificially and vary from application to application, according to the emphasis and ranking of the importance of the resource attributes. Generally, cloud providers offer two provisioning plans for computing resources: reservation and on-demand. A reservation plan works for the estimated or standard computing load, while on-demand serves unpredicted changes in the computing load. The cost of utilizing a reservation plan is usually less than that of an on-demand plan; for example, the unit price of m3.large is $0.140 on-demand and $0.043 under a 3 yr contract. Chaisiri et al. [46,47] proposed an optimal cloud resource provisioning algorithm by formulating a stochastic programming model that automatically seeks the best advance reservation of resources, taking service demand and price uncertainty into full consideration. The utilization cost is first generalized into a multiple-stage formulation, then expressed as a function with two parts representing the reserved/expected load and the uncertainty, and finally minimized by searching over a set of boundaries.

3.2 Performance-Based Service Ranking and Selection. Cloud-based service is performed over the Internet, making it


Table 2 Service performance indicators

Response time: Quantifies the time taken by the cloud provider to respond to a service request; e.g., the EC2 response time is several minutes.
Accuracy: Quantifies the difference between the user experience and the service level promised by the cloud provider.
Transparency: Qualitatively measures how usability is affected by changes in the service, in the context of the fast evolution of cloud-based services.
Interoperability: Qualitatively measures the ability of a service to interact with other services offered by the same or different providers.
Stability: Quantifies the variability in the performance of a service, which is largely and easily affected by the total number of VMs and the computing load.
Reliability: Quantifies the probability of a service operating without failure for the assigned computing load and promised service level.
Throughput: Quantifies the maximum number of tasks that can be handled and completed by a service per unit time.
Scalability: Qualitatively measures the ability to handle a large number of application requests simultaneously.

similar to a web service: SaaS is essentially a web-based application, while IaaS provides a virtual environment for platform deployment, monitored by web-based monitoring tools. As a result, the majority of current research on cloud service evaluation derives from web service evaluation methods. A common approach is to extract and select important service performance indicators, based on which an analytical model can be established to rank alternative services by quantifying their intrinsic attributes and comparing them against the overall goal. To establish the analytical model, the AHP method and the utility function are reviewed, considering different performance evaluation attributes.

Performance Indicators. For cloud computing, especially in the context of IaaS, performance can be evaluated based on both qualitative and quantitative indicators, with the most important ones summarized in Table 2 [32]. The problem becomes how to aggregate and fuse the information expressed by these attributes to quantitatively compare several service candidates against the goals and constraints specified by the tenants.

AHP-Based Selection. The AHP is a widespread structured technique for organizing and analyzing complex decisions, especially for group decision-making, based on mathematics and psychology. AHP provides a comprehensive and rational framework for structuring a decision-making problem, evaluating alternative solutions by quantifying intrinsic attributes and relating them to the overall goal, and providing a best-suited solution. An AHP-based rating process comprises three main steps: decomposition, comparative judgment, and synthesis [48]. In the decomposition stage, the decision problem is decomposed into a hierarchy of more easily comprehended subproblems. The purpose of this step is to determine the layers and the elements/attributes contained in each layer of the hierarchy.
An example AHP hierarchy for cloud service selection with respect to performance attributes is shown in Fig. 4, which contains three layers: the service alternatives form the bottom layer, and the attributes form the middle layer under the overall goal. In the comparative judgment stage, pairwise comparisons between elements are conducted with respect to their impact on the elements above them in the hierarchy. Two types of numerical priorities are assigned to connect the elements of the hierarchy: {wA1, wA2, …, wAn, …, wNn} represents the influence of the alternatives with respect to the attributes, and {w1, w2, …, wn} denotes the priorities of the attributes with respect to the overall selection. Finally, numerical factors are calculated for each of the alternatives, representing the alternatives' relative ability to achieve the decision goal. The priority of each alternative in Fig. 4 can be calculated as

\[
\begin{bmatrix} P_A \\ P_B \\ \vdots \\ P_N \end{bmatrix}
=
\begin{bmatrix}
w_{A1} & w_{A2} & w_{A3} & \cdots & w_{An} \\
w_{B1} & w_{B2} & w_{B3} & \cdots & w_{Bn} \\
\vdots &        &        &        & \vdots \\
w_{N1} & w_{N2} & w_{N3} & \cdots & w_{Nn}
\end{bmatrix}
\begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix}
\tag{2}
\]
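The synthesis step described above, in which each alternative's overall priority is the product of the attribute-weight matrix and the vector of attribute priorities, can be sketched in a few lines. All weight values below are illustrative placeholders, not taken from any cited study:

```python
# Sketch of the AHP synthesis step: P = W * w, where row i of W holds
# the influence of alternative i with respect to each attribute, and w
# holds the attribute priorities toward the overall goal.

def ahp_priorities(W, w):
    return [sum(w_in * w_n for w_in, w_n in zip(row, w)) for row in W]

# Three alternatives (A, B, C) against three attributes; each column
# of W and the vector w are normalized to sum to 1, as AHP requires.
W = [[0.5, 0.2, 0.4],
     [0.3, 0.5, 0.3],
     [0.2, 0.3, 0.3]]
w = [0.6, 0.3, 0.1]

P = ahp_priorities(W, w)                        # priorities sum to 1
best = max(range(len(P)), key=P.__getitem__)    # index of the top-ranked alternative
```

In practice, the rows of W and the vector w would themselves come from pairwise comparison matrices built in the comparative judgment stage.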
A framework for evaluating cloud offerings and ranking them based on their ability to meet tenants' QoS requirements is proposed in Refs. [32,49], comparing service alternatives via performance attributes and generating a global ranking of services based on AHP. This work also addresses the challenge posed by the differing dimensional units of the various QoS attributes, by providing a uniform way to evaluate the relative ranking of cloud services for each type of QoS attribute. Research in Ref. [50] explored techniques for aggregating and evaluating the multilevel QoS parameters of cloud services, which facilitate the ranking and selection of IaaS and SaaS services according to users' requirements. An AHP hierarchy of a cloud service weighting model is defined, in which QoS parameters of both IaaS and SaaS are considered as the ranking criteria and are layered and categorized based on their influence relations. Similar work can be found in Ref. [51], which introduced an AHP-based SaaS service selection approach to objectively score and rank services.

Utility Function Based Selection. Unlike AHP, which focuses on the relative importance of decision criteria through pairwise comparison, a utility function quantifies the preferences of a decision maker by aggregating the decision maker's degrees of satisfaction toward particular criteria. Zeng et al. [52] discussed cloud service selection based on the trade-off between maximized gain and minimized cost, through two steps: (1) searching all alternatives that satisfy the tenants' requirements; (2) finding, among these candidates, the optimal service that reaches the trade-off between performance and cost. Limam and Boutaba [53] proposed a reputation-aware service selection framework to rate SaaS services, with the aim of reducing the time and risk of selecting and utilizing software services. Reputation is derived from user feedback, formed by aggregating the customer's baseline satisfaction and the perceived disconfirmation; the utility is then calculated according to quality monitoring results. In Table 3, commonly used methods for service selection are summarized.
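The two-step selection discussed by Zeng et al. [52] can be sketched as follows; the service records, field names, and the linear utility form are illustrative assumptions, not the authors' actual formulation:

```python
# Sketch of two-step utility-based selection:
# (1) keep only candidates that satisfy the tenant's requirements,
# (2) pick the candidate maximizing a gain/cost trade-off.

def select_service(candidates, min_perf, max_cost, alpha=0.7):
    feasible = [c for c in candidates
                if c["perf"] >= min_perf and c["cost"] <= max_cost]
    if not feasible:
        return None
    # utility: weighted performance gain minus weighted cost
    return max(feasible,
               key=lambda c: alpha * c["perf"] - (1 - alpha) * c["cost"])

services = [
    {"name": "S1", "perf": 0.9, "cost": 0.8},
    {"name": "S2", "perf": 0.7, "cost": 0.3},
    {"name": "S3", "perf": 0.4, "cost": 0.1},
]
chosen = select_service(services, min_perf=0.5, max_cost=0.9)
```

Varying alpha shifts the trade-off between performance and cost, which is the tuning knob such utility-based methods expose to the tenant.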


Journal of Manufacturing Science and Engineering

Fig. 4 AHP hierarchy for performance-based cloud service selection

AUGUST 2015, Vol. 137 / 040901-5

Downloaded From: http://manufacturingscience.asmedigitalcollection.asme.org/ on 11/13/2015 Terms of Use: http://www.asme.org/about-asme/terms-of-use

Table 3 Comparison of service selection methods

Method            Input                          Output                    When to consider                                          References
AHP               Attribute and decision matrix  Weights of alternatives   When there are multiple attributes and limited explicit   [48,50,51]
                                                                          selection options
Utility function  Decision matrix                A subset of alternatives  When it is not easy to aggregate or compare attributes    [52,53]

4 Limitations of Cloud Computing

The IoT for CM allows for the collection of real-time process and condition data from machine equipment networked across the manufacturing enterprise. These data and information provide a holistic perspective of the operational state of the equipment for remote monitoring, diagnosis, and prognosis. A significant characteristic of such real-time monitoring is the large variety of data types and the large amount of data, due to the variety of high-sampling-rate measurement techniques employed for CM. While the processing of different data types can be facilitated by machine-to-machine communication techniques such as MTConnect [21], the massive data to be processed pose a major challenge to current cloud computing technologies when applied to manufacturing. A good understanding of the limitations of cloud computing on computational and network performance can help in:

• selecting computing services to ensure the quality of data analysis
• selecting sensor types to achieve an optimal trade-off between data resolution and data processing quality
• enabling intelligent data transmission, i.e., transmitting collected raw measurement data or features extracted by local agents, in view of energy efficiency [54–56].
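The last point, condensing a raw measurement record into a few features at the local agent before transmission, can be illustrated with a minimal sketch; the feature set (RMS and peak) and the byte accounting are illustrative assumptions:

```python
import math

# A local agent condenses a raw vibration record into a few features
# before transmission, shrinking the payload sent to the cloud.

def extract_features(samples):
    rms = math.sqrt(sum(x * x for x in samples) / len(samples))
    peak = max(abs(x) for x in samples)
    return {"rms": rms, "peak": peak}

def payload_bytes(n_values, bytes_per_value=4):
    return n_values * bytes_per_value

raw = [0.0, 3.0, -4.0, 0.0]            # toy record; real windows hold thousands of samples
feats = extract_features(raw)
raw_bytes = payload_bytes(len(raw))     # bytes needed to send the raw record
feat_bytes = payload_bytes(len(feats))  # bytes needed to send two features
saving = 1 - feat_bytes / raw_bytes     # fraction of transmission saved
```

For realistic window lengths the payload reduction, and hence the radio-energy saving, is correspondingly larger, at the cost of extra computation at the agent.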

For information and knowledge sharing through crowdsourcing for cloud-based design, monitoring, and decision-making, cybersecurity is a significant challenge. This refers to protecting IP, sensitive information, and the security of devices and assets networked in the IoT [57]. Existing infrastructure, such as supervisory control and data acquisition networks, can be a significant vulnerability, given its designed function [58]. In addition, a challenge lies in filtering out individual enterprise-sensitive information while maximizing the pooling of shared information.

4.1 Computational Performance. Virtualization brings a number of challenging issues to maintaining stable performance of each VM. The most popular option among current cloud services is QoS-based resource management, which trades off execution quality against the assigned resources via a load balancing or high availability mechanism [59]. There are a number of reported works on dynamic resource management, for instance, based on game theory [60] or k-means clustering [61]. However, these efforts only address the scaling problem of one resource or a single tier. Sotomayor et al. [62] found that resource provisioning encompasses three dimensions: hardware resources, software resources, and the time during which those resources must be guaranteed to be available. A complete resource provisioning model is also needed to allow resource consumers to specify requirements across these three dimensions. Manvi and Shyam [31] surveyed several resource provisioning models and evaluated them with proposed performance metrics including reliability, QoS, delay, and control overhead. Buyya and Ranjan [63] pointed out various challenges in enabling QoS-oriented resource management in distributed servers to satisfy competing applications' demand for computing services, including:


(1) How to ensure that requests finish their execution within estimated completion times in the presence of resource performance fluctuations, especially in large-scale, decentralized, and distributed systems?
(2) How to fulfill the movement or transfer of large volumes of data, in the scenario that data are stored in distributed devices?
(3) How to develop resource prediction models for facilitating proactive scaling in the cloud, so that hosted applications are able to withstand variations in workload with the least drop in performance and availability?

Iosup et al. [5] attempted to answer experimentally the question of whether the performance of clouds is sufficient for executing many-task scientific computing, from the perspective of performance metrics including resource acquisition/release time, computing performance, I/O performance, and reliability. The computational performance evaluation indicated that floating-point and double-precision operations are six to eight times slower than the theoretical maximum. A potential reason for this is the overrun or thrashing of the memory caches by the working sets of other applications sharing the same physical machines. The performance and cost of clouds were also compared, with workload traces taken from grids and parallel production infrastructure. The conclusion was that the performance of all the cloud environments investigated is low for high-demand usage, and that such environments should only be considered when resources are needed instantly and temporarily. Similar work is found in Refs. [64,65]. Huber et al. [64] executed several benchmarks to analyze the performance of native and virtualized systems. The results showed performance overheads of up to 5% for CPU virtualization and up to 40% for memory virtualization. Floating-point operations showed overall performance drops of 3–5%, with up to a 20% drop for some benchmarks.
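Overhead figures such as these compare virtualized runs against a native baseline, and the variance figures reported in such studies measure run-to-run variability. A minimal helper for both, with made-up timing samples for illustration:

```python
import statistics

def overhead(native, virtualized):
    """Relative slowdown of a virtualized benchmark vs. its native run."""
    return (virtualized - native) / native

def variation(samples):
    """Run-to-run variability as coefficient of variation (std / mean)."""
    return statistics.pstdev(samples) / statistics.fmean(samples)

native_s = 10.0                      # native runtime, seconds (hypothetical)
virt_runs = [10.4, 10.6, 10.5]       # repeated virtualized runs (hypothetical)
ovh = overhead(native_s, statistics.fmean(virt_runs))
var = variation(virt_runs)
```

Comparing such variation values between a cloud instance and a local physical cluster is how the stability gap discussed below is typically quantified.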
The main cause of the overall performance drop was suggested to stem from the allocation of large memory areas. Although different experiments delivered different results, it was found that the performance overhead of a VM increases, with the overhead determined by the specific test bed and resource allocation technique. Schad et al. [66] carried out a study of performance in terms of CPU efficiency and memory speed on Amazon EC2. Experiments were performed at different levels (e.g., single EC2 instance, multiple instances, and different locations), and different types of EC2 instances were taken into consideration. The results indicated that, for both CPU efficiency and memory speed, the performance is far less stable than one would expect. For example, the variance in memory speed is around 8–10%, while the variance on a local physical cluster is only 0.3%. A possible reason is the different system types used by virtual instances. The variance on the cloud was also compared with that on a local physical cluster, which indicated that the same MapReduce job suffered a significantly higher performance variance on EC2.

4.2 Network Bottleneck. Besides the sharing of resources such as CPU and storage facilities, which affects cloud computing performance, the sharing of I/O resources also affects network performance. Resources of network links and bandwidth are shared


by multiple VMs, enabled by network I/O virtualization technologies. A common way to achieve I/O virtualization is VM device queues, which relieve the host CPU from multiplexing and controlling the packets to and from VMs. An advantage of VM device queues is that they can reduce the cost of network devices and the port density requirements on switches. But I/O virtualization also introduces the following challenges:

(1) Complexity of packet multiplexing: when a data packet arrives, the server must determine, based on the packet header, which VM the packet should be delivered to. Moreover, the variety of network protocols makes it more difficult to identify the headers and look up the bridging table. For example, Barham et al. [67] pointed out that 30–40% of the execution time of a network transmit or receive operation was spent in the VM monitor (VMM), for remapping the addresses contained in the transmitted data packet.

(2) Increasing line rate and workloads: line rates of more than 10 Gbps are expected to appear in future data centers. The increased workload can create bottlenecks at either CPUs or memories due to the increased communication between the driver domain (VMM) and guest domains (VMs). References [68] and [69] demonstrated that CPU overhead and latency increase with the packet rate.

Shafer [70] evaluated the effect of I/O virtualization on storage and network performance, on a variety of configurations of private clouds based on Eucalyptus. Three different configurations were compared and discussed: a local disk accessed directly from the host domain, a local disk mapped directly into the guest domain, and a remote disk located across the network. For storage evaluation, it was found that the storage bandwidth is very poor compared to the nonvirtualized environment (110 MB/s bandwidth for both file reading and writing): write and read bandwidth decreased by 98% and 38%, respectively, when accessing local storage.
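The per-packet lookup described in challenge (1) can be sketched as a toy demultiplexer; the bridging table, frame layout, and addresses below are simplified assumptions, not a real virtual switch:

```python
from collections import defaultdict

# Toy packet demultiplexer: the driver domain inspects each frame's
# destination address and consults a bridging table to choose the
# target VM's queue -- the per-packet work that consumes VMM time.

bridge_table = {"aa:01": "vm1", "aa:02": "vm2"}   # MAC -> VM (simplified)
queues = defaultdict(list)

def demux(frame):
    vm = bridge_table.get(frame["dst"])   # table lookup on every packet
    if vm is None:
        return False                      # unknown destination: drop
    queues[vm].append(frame["data"])
    return True

demux({"dst": "aa:01", "data": b"hello"})
demux({"dst": "aa:99", "data": b"stray"})   # unknown MAC, dropped
```

Even in this toy form, the lookup and queue append run once per packet, which is why per-packet costs dominate at high line rates.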
When accessing remote storage, write and read bandwidth decreased by 83% and 54%, respectively. The authors found that the poor performance of the virtualized environment was due to the synchronous mechanism whereby the system sends a single disk request, waits for the reply, and then issues another request, as opposed to queuing requests. For network evaluation, it was indicated that the bandwidth to the driver domain is unexpectedly lower than the bandwidth to a different host, and the achieved bandwidth fails to saturate a gigabit Ethernet link. Bourguiba et al. [71] investigated system behavior under very high load to determine the maximum capacity and which component causes the bottleneck. One driver domain and four guest domains were instantiated using Xen, and the reception/forwarding throughput of the VMs was measured at different data packet rates. The experimental results indicated

that the VM throughput is almost ten times lower than that of the driver domain. The most likely reason for this finding is the costly communication between the driver domain and the VMs in the guest domains, which involves the redundant request by the driver domain for a grant to access the VM and the revoking of the grant by the VM. Meanwhile, memory latency was identified as the major issue causing the data transmission bottleneck. A similar study on I/O-virtualized network performance evaluation is found in Ref. [68], which presented detailed work on performance measurement and analysis of network I/O applications in a virtualized single host. The performance was evaluated with respect to different resource usage patterns (i.e., application/file size and workload) and different numbers of VMs. For both small-sized and large-sized file applications, the peak performance occurred under an applied workload of 50–60%. But for small-sized file applications such as 1 and 10 kB, obvious performance degradation occurred at workload rates higher than 60%. A reason given by the authors for this phenomenon is that the performance is bounded by CPU resources, because the CPU consumed much more time processing network requests and establishing transmission control protocol (TCP) connections to deal with the fast arrival of network packets when the workload rate is high. Experiments also demonstrated that the network performance bottleneck is more severe, and more allocation time is needed to handle an input, when a high system workload is executed on a Xen VMM platform [69]. Especially when dealing with small packets, the throughput is even lower, because the software stack does not have enough CPU resources to process them. The above experiments and analyses of network performance under I/O virtualization were performed on VMs instantiated on a single physical machine.
Experimental insights and measurements for understanding the effect of resource sharing on multiple VMs running on multicore platforms were discussed in Ref. [68]. Results indicated that multiple applications can share multicore virtualized nodes without undue performance effects, and the latency of a write/read operation does not change much as the number of VMs increases. This is because the VMs share the communication resource over an InfiniBand interconnect, which helps avoid frontend–backend communication overheads. However, in contrast with the performance demonstrated by VMs on a single machine, latencies increase exponentially with the packet size due to bandwidth saturation: as the link becomes saturated with increasing packet size, the average bandwidth attained by each VM decreases. As experienced by single-root VMs, the amount of CPU overhead has been demonstrated to be proportional to the amount of I/O processing performed [72,73]. In Table 4, representative research efforts on performance evaluation under the effect of virtualization are summarized. The network bottleneck discussed above refers to current cloud computing technologies, under the assumption that local

Table 4 Selected research and experiments related to computational and network performance evaluation under the effect of virtualization

Performance evaluation     Representative research  Description
Computational performance  [5,64,65]                Evaluates computing performance by comparing high-performance usage against local clusters
                           [66]                     Evaluates the performance variance of EC2 with respect to CPU performance and memory speed
Network performance        [70]                     Evaluates the effect of I/O virtualization on storage and network performance over Eucalyptus-based clouds
                           [71]                     Investigates the bottleneck of I/O-virtualized network performance and its root cause under high workload
                           [68,69]                  Investigates the degradation of network performance at high packet transmission rates in a virtualized single host
                           [72,73]                  Evaluates network performance on VMs running on multicore platforms by comparison with a single host


facilities (e.g., the data acquisition network employed on the shop floor) are connected to the cloud via up-to-date communication technologies or high-speed Internet. In practice, however, it is not uncommon that a high-speed Internet connection is still not available everywhere, and the read/write speed can be severely limited. This limits the data transfer speed, which is critical to in situ condition monitoring via the cloud, and affects subsequent data analysis performed in the cloud. A possible solution to this problem is to perform preliminary signal processing at local agents, extracting features with compact data size that are then transmitted to the cloud for further analysis and decision-making. Such a scenario will also alleviate this constraint for collaborative intelligence-based design in CM: the choice among design models, as opposed to the development of the models themselves, does not involve a large number of computation steps and thus can be efficiently performed in the cloud, whereas the transmission of models to local manufacturing facilities is not time critical and therefore does not require a high-speed network connection.

4.3 Other Challenges. Further challenges in cloud-based computing include information security [74,75] and data lock-in. Information security, as a major concern in the adoption of cloud services, is one of the most important issues being critically examined by individuals and enterprises who would like to use cloud computing as a service infrastructure [76]. As the cloud is an Internet-enabled service infrastructure, all the security concerns related to the Internet are also encountered by the cloud. As a result, sensitive information or secured data can be accessed through a hacker attack. This requires more advanced network traffic encryption techniques such as secure sockets layer (SSL) and transport layer security (TLS) [77].
Additionally, the special environment of cloud-based service increases its vulnerability. Since multitenancy is one of the major characteristics of cloud computing, data from various users may be allocated and stored at the same location or even on the same physical machine. This increases the risk of data being accessed by other users who attack loopholes at the physical or software level. To mitigate this problem, service providers should ensure that data are accessible only to their owners. However, security is a concern not only for cloud service providers but also for service consumers, as the security responsibilities vary with service type. Amazon EC2, as a typical representative of IaaS, includes vendor responsibility for security up to the hypervisor; that is, the vendor can only address security controls such as physical security, environmental security, and virtualization security. The consumer is responsible for the security controls related to the platform, applications, and data [78]. In addition to cybersecurity in the cloud computing environment, leakage of IP and sensitive information is of major concern when using crowdsourcing for collaborative design and decision-making. Anonymizing sensitive information would help reduce this concern and encourage the development of crowdsourcing for CM. However, there is not yet a standard to automatically determine which aspects of data, knowledge, and experience should be shared so as to minimize the negative effects of collaborative information sharing. This also highlights the need for standards for data management. In addition, the development of automated and intelligent data analysis methodologies can provide more efficient decision support by discovering meaningful data while avoiding sensitive information. Besides information security, other challenges in cloud computing have been summarized in Ref.
[7], and include data lock-in, which means that consumers cannot easily extract their data and programs from one site to run on another. This is mainly because computation platforms and programs, originally developed for a specific physical device environment, need to be modified when adapted to other physical devices [7]. Data integrity issues also arise from the special data storage and computing manner of cloud-based services, where multiple databases and multiple applications run on a distributed system. Data from one

service user may be stored at distributed physical devices and pulled into computation and analysis on different devices. Thus, transactions across multiple data sources need to be handled correctly in a fail-safe manner, in order to maintain data integrity.
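The fail-safe handling of a write that spans several data stores can be illustrated with a toy all-or-nothing sketch: completed writes are undone when a later one fails. This is an illustration of the idea, not a real distributed-transaction protocol:

```python
# Toy fail-safe multi-store write: either every store receives the
# value, or partial writes are rolled back and the operation reports
# failure, preserving integrity across the data sources.

def transactional_write(stores, key, value):
    done = []
    try:
        for s in stores:
            s.write(key, value)   # may raise on an unavailable store
            done.append(s)
    except IOError:
        for s in done:            # roll back partial writes
            s.delete(key)
        return False
    return True

class Store:
    def __init__(self, fail=False):
        self.data, self.fail = {}, fail
    def write(self, k, v):
        if self.fail:
            raise IOError("store unavailable")
        self.data[k] = v
    def delete(self, k):
        self.data.pop(k, None)

ok_pair = [Store(), Store()]
bad_pair = [Store(), Store(fail=True)]
ok = transactional_write(ok_pair, "cfg", 1)       # lands on both stores
failed = transactional_write(bad_pair, "cfg", 1)  # rolls back the first store
```

Production systems achieve the same guarantee with coordinated protocols (e.g., two-phase commit) rather than ad hoc rollback, but the integrity requirement is the same.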

5 Conclusions

Cloud computing has increasingly become a compelling paradigm for managing and delivering services over the Internet in various industrial and commercial fields. Distributed computing resources are aggregated and provisioned dynamically per service requests from consumers, freeing them from up-front investment and related issues. This development is quickly changing the landscape of information technology, turning utility computing into reality, and attracting more individuals and enterprises as participants. A comprehensive review of cloud computing is provided in this paper, with the purpose of presenting an informative summary for researchers and practitioners in the manufacturing industry who plan to adopt cloud computing as their computational platform. Cloud-based service is evaluated from the perspectives of resource provisioning manner and economy. Models to evaluate utilization cost and service performance are reviewed. It is concluded that virtualization, as a key technique for achieving multitenancy in cloud computing, inevitably affects computational and network communication performance, because multiple VMs share the same physical devices. Experiments conducted under different conditions are reviewed to give readers a quantitative impression of the limitations and bottlenecks of current cloud computing techniques.

References
[1] Buyya, R., Yeo, C. S., and Venugopal, S., 2008, "Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities," 10th IEEE International Conference on High Performance Computing and Communications, Dalian, China, pp. 5–13.
[2] Rosenthal, A., Mork, P., Li, M. H., Stanford, J., Koester, D., and Reynolds, P., 2010, "Cloud Computing: A New Business Paradigm for Biomedical Information Sharing," J. Biomed. Inf., 43(2), pp. 342–353.
[3] Xu, X., 2012, "From Cloud Computing to Cloud Manufacturing," Rob. Comput.-Integr. Manuf., 28(1), pp. 75–86.
[4] De Assunção, M. D., Di Costanzo, A., and Buyya, R., 2009, "Evaluating the Cost-Benefit of Using Cloud Computing to Extend the Capacity of Clusters," 18th ACM International Symposium on High Performance Distributed Computing, Munich, Germany, pp. 141–150.
[5] Iosup, A., Ostermann, S., Yigitbasi, M. N., Prodan, R., Fahringer, T., and Epema, D. H., 2011, "Performance Analysis of Cloud Computing Services for Many-Tasks Scientific Computing," IEEE Trans. Parallel Distrib. Syst., 22(6), pp. 931–945.
[6] García-Valls, M., Cucinotta, T., and Lu, C., 2014, "Challenges in Real-Time Virtualization and Predictable Cloud Computing," J. Syst. Architect., 60(9), pp. 726–740.
[7] Armbrust, M., Fox, A., Griffith, R., Joseph, A. D., Katz, R., Konwinski, A., Lee, G., Patterson, D., Rabkin, A., Stoica, I., and Zaharia, M., 2010, "A View of Cloud Computing," Commun. ACM, 53(4), pp. 50–58.
[8] Yang, Y., Gao, R., Fan, Z., Wang, J., and Wang, L., 2014, "Cloud-Based Prognosis: Perspective and Challenge," ASME Paper No. MSEC2014-4155.
[9] Wang, L., 2013, "Machine Availability Monitoring and Machining Process Planning Towards Cloud Manufacturing," CIRP J. Manuf. Sci. Technol., 6(4), pp. 263–273.
[10] Wu, D., Greer, M. J., Rosen, D. W., and Schaefer, D., 2013, "Cloud Manufacturing: Strategic Vision and State-of-the-Art," J. Manuf. Syst., 32(4), pp. 564–579.
[11] Asadi, M., and Goldak, J. A., 2014, "An Integrated Computational Welding Mechanics With Direct-Search Optimization for Mitigation of Distortion in an Aluminum Bar Using Side Heating," ASME J. Manuf. Sci. Eng., 136(1), p. 011007.
[12] Tutar, M., and Karakus, A., 2013, "Computational Modeling of the Effects of Viscous Dissipation on Polymer Melt Flow Behavior During Injection Molding Process in Plane Channels," ASME J. Manuf. Sci. Eng., 135(1), p. 011007.
[13] Ren, L., Zhang, L., Wang, L., Tao, F., and Chai, X., 2014, "Cloud Manufacturing: Key Characteristics and Applications," Int. J. Comput. Integr. Manuf., pp. 1–15.
[14] Wang, L., Holm, M., and Adamson, G., 2010, "Embedding a Process Plan in Function Blocks for Adaptive Machining," CIRP Ann.–Manuf. Technol., 59(1), pp. 433–436.
[15] Ganguly, V., Schmitz, T., Graziano, A., and Yamaguchi, H., 2013, "Force Measurement and Analysis for Magnetic Field–Assisted Finishing," ASME J. Manuf. Sci. Eng., 135(4), p. 041016.
[16] Shu, S., Cheng, K., Ding, H., and Chen, S., 2013, "An Innovative Method to Measure the Cutting Temperature in Process by Using an Internally Cooled Smart Cutting Tool," ASME J. Manuf. Sci. Eng., 135(6), p. 061018.
[17] Rao, P., Bukkapatnam, S., Beyca, O., Kong, Z. J., and Komanduri, R., 2014, "Real-Time Identification of Incipient Surface Morphology Variations in Ultraprecision Machining Process," ASME J. Manuf. Sci. Eng., 136(2), p. 021008.
[18] Ren, L., Zhang, L., Tao, F., Zhao, C., Chai, X., and Zhao, X., 2013, "Cloud Manufacturing: From Concept to Practice," Enterp. Inf. Syst., 9(2), pp. 1–24.
[19] Wang, L., Wang, X. V., Gao, L., and Vancza, J., 2014, "A Cloud-Based Approach for WEEE Remanufacturing," CIRP Ann.–Manuf. Technol., 63(1), pp. 409–412.
[20] Wang, X., and Xu, X., 2013, "ICMS: A Cloud-Based Manufacturing System," Cloud Manufacturing, Springer, London, pp. 1–22.
[21] Wu, D., Rosen, D. W., and Schaefer, D., 2014, "Cloud-Based Design and Manufacturing: Status and Promise," Cloud-Based Design and Manufacturing (CBDM): A Service-Oriented Product Development Paradigm for the 21st Century, D. Schaefer, ed., Springer, London, pp. 1–24.
[22] Wang, X., and Xu, X., 2013, "An Interoperable Solution for Cloud Manufacturing," Rob. Comput.-Integr. Manuf., 29(4), pp. 232–247.
[23] Wu, D., Thames, J. L., Rosen, D. W., and Schaefer, D., 2013, "Enhancing the Product Realization Process With Cloud-Based Design and Manufacturing Systems," ASME J. Comput. Inf. Sci. Eng., 13(4), pp. 1–12.
[24] Lee, J., Lapira, E., Bagheri, B., and Kao, H., 2013, "Recent Advances and Trends in Predictive Manufacturing Systems in Big Data Environment," Manuf. Lett., 1(1), pp. 38–41.
[25] Zhang, Q., Cheng, L., and Boutaba, R., 2010, "Cloud Computing: State-of-the-Art and Research Challenges," J. Internet Serv. Appl., 1(1), pp. 7–18.
[26] Jula, A., Sundararajan, E., and Othman, Z., 2014, "Cloud Computing Service Composition: A Systematic Literature Review," Expert Syst. Appl., 41(8), pp. 3809–3824.
[27] Dillon, T., Wu, C., and Chang, E., 2010, "Cloud Computing: Issues and Challenges," 24th IEEE International Conference on Advanced Information Networking and Applications (AINA), Perth, Australia, Apr. 20–23, pp. 27–33.
[28] Sakellari, G., and Loukas, G., 2013, "A Survey of Mathematical Models, Simulation Approaches and Testbeds Used for Research in Cloud Computing," Simul. Modell. Pract. Theory, 39, pp. 92–103.
[29] Agrawal, D., Das, S., and Abbadi, A. E., 2011, "Big Data and Cloud Computing: Current State and Future Opportunities," 14th International Conference on Extending Database Technology, Uppsala, Sweden, pp. 530–533.
[30] Nurmi, D., Wolski, R., Grzegorczyk, C., Obertelli, G., Soman, S., Youseff, L., and Zagorodnov, D., 2009, "The Eucalyptus Open-Source Cloud-Computing System," 9th IEEE/ACM International Symposium on Cluster Computing and the Grid, Shanghai, China, May 18–20, pp. 124–131.
[31] Manvi, S., and Shyam, G., 2014, "Resource Management for Infrastructure as a Service (IaaS) in Cloud Computing: A Survey," J. Network Comput. Appl., 41, pp. 414–440.
[32] Garg, S. K., Versteeg, S., and Buyya, R., 2013, "A Framework for Ranking of Cloud Computing Services," Future Gener. Comput. Syst., 29(4), pp. 1012–1023.
[33] Mell, P., and Grance, T., 2009, "The NIST Definition of Cloud Computing," Natl. Inst. Stand. Technol. Special Publication 800-145.
[34] Amazon EC2: (http://aws.Amazon.com/ec2/).
[35] Palankar, M. R., Iamnitchi, A., Ripeanu, M., and Garfinkel, S., 2008, "Amazon S3 for Science Grids: A Viable Solution?," 2008 International Workshop on Data-Aware Distributed Computing, Boston, MA, pp. 55–64.
[36] Google App Engine: (https://cloud.google.com/appengine/).
[37] Popa, L., Kumar, G., Chowdhury, M., Krishnamurthy, A., Ratnasamy, S., and Stoica, I., 2012, "FairCloud: Sharing the Network in Cloud Computing," ACM SIGCOMM 2012 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, Helsinki, Finland, pp. 187–198.
[38] Greenberg, A., Hamilton, J., Maltz, D. A., and Patel, P., 2008, "The Cost of a Cloud: Research Problems in Data Center Networks," ACM SIGCOMM Comput. Commun. Rev., 39(1), pp. 68–73.
[39] Kumar, K., and Lu, Y., 2010, "Cloud Computing for Mobile Users: Can Offloading Computation Save Energy," Computer, 43(4), pp. 51–56.
[40] Brender, N., and Markov, I., 2013, "Risk Perception and Risk Management in Cloud Computing: Results From a Case Study of Swiss Companies," Int. J. Inf. Manage., 33(5), pp. 726–733.
[41] Yu, S., Wang, C., Ren, K., and Lou, W., 2010, "Achieving Secure, Scalable, and Fine-Grained Data Access Control in Cloud Computing," 2010 IEEE INFOCOM, San Diego, CA, Mar. 14–19, pp. 1–9.
[42] Mergen, M., Uhlig, V., Krieger, O., and Xenidis, J., 2006, "Virtualization for High-Performance Computing," ACM SIGOPS Oper. Syst. Rev., 40(2), pp. 8–11.
[43] Ibrahim, S., He, B., and Jin, H., 2011, "Towards Pay-as-You-Consume Cloud Computing," 2011 IEEE International Conference on Services Computing, Washington, DC, July 4–9, pp. 370–377.
[44] Kondo, D., Javadi, B., Malecot, P., Cappello, F., and Anderson, D. P., 2009, "Cost-Benefit Analysis of Cloud Computing Versus Desktop Grids," 2009 IEEE International Symposium on Parallel and Distributed Processing, Rome, Italy, May 23–29, pp. 1–12.
[45] Deelman, E., Singh, G., Livny, M., Berriman, B., and Good, J., 2008, "The Cost of Doing Science on the Cloud: The Montage Example," 2008 ACM/IEEE Conference on Supercomputing, Austin, TX, Nov. 15–21, p. 50.
[46] Chaisiri, S., Lee, B. S., and Niyato, D., 2012, "Optimization of Resource Provisioning Cost in Cloud Computing," IEEE Trans. Serv. Comput., 5(2), pp. 164–177.
[47] Chaisiri, S., Lee, B. S., and Niyato, D., 2009, "Optimal Virtual Machine Placement Across Multiple Cloud Providers," IEEE Asia-Pacific Services Computing Conference, Kuala Lumpur, Malaysia, Dec. 7–11.
[48] Sun, L., Hussain, F. K., Hussain, O. K., and Chang, E., 2014, "Cloud Service Selection: State-of-the-Art and Future Research Directions," J. Network Comput. Appl., 45, pp. 134–150.

Journal of Manufacturing Science and Engineering

[49] Singh, R., Sharma, U., Cecchet, E., and Shenoy, P., 2010, "Autonomic Mix-Aware Provisioning for Non-Stationary Data Center Workloads," 7th International Conference on Autonomic Computing, Washington, DC, pp. 21–30.
[50] Karim, R., Ding, C., and Miri, A., 2013, "An End-to-End QoS Mapping Approach for Cloud Service Selection," IEEE 9th World Congress on Services, Santa Clara, CA, June 28–July 3.
[51] Godse, M., and Mulik, S., 2009, "An Approach for Selecting Software-as-a-Service (SaaS) Product," IEEE International Conference on Cloud Computing, Bangalore, India.
[52] Zeng, L., Zhao, Y., and Zeng, J., 2009, "Cloud Services and Service Selection Algorithm Research," First ACM/SIGEVO Summit on Genetic and Evolutionary Computation, Shanghai, China, pp. 1045–1048.
[53] Limam, N., and Boutaba, R., 2010, "Assessing Software Service Quality and Trustworthiness at Selection Time," IEEE Trans. Software Eng., 36(4), pp. 559–574.
[54] Yan, R., Sun, H., and Qian, Y., 2013, "Energy-Aware Sensor Node Design With Its Application in Wireless Sensor Networks," IEEE Trans. Instrum. Meas., 62(5), pp. 1183–1191.
[55] Yan, R., Fan, Z., Gao, R., and Sun, H., 2013, "Energy-Efficient Sensor Data Gathering in Wireless Sensor Networks," Sens. Mater., 25(1), pp. 31–44.
[56] Ball, D., Yan, R., Licht, T., Deshmukh, A., and Gao, R., 2008, "A Strategy for Decomposing Large-Scale Energy Constrained Sensor Networks for System Monitoring," Prod. Plann. Control, 19(4), pp. 435–447.
[57] Wells, L. J., Camelio, J. A., Williams, C. B., and White, J., 2014, "Cyber-Physical Security Challenges in Manufacturing Systems," Manuf. Lett., 2(2), pp. 74–77.
[58] Larkin, R. D., Lopez, J., Jr., Butts, J. W., and Grimaila, M., 2014, "Evaluation of Security Solutions in the SCADA Environment," ACM SIGMIS Database, 45(1), pp. 38–53.
[59] Huang, Q., Yang, C., Liu, K., Xia, J., Xu, C., Li, J., and Li, Z., 2013, "Evaluating Open-Source Cloud Computing Solutions for Geosciences," Comput. Geosci., 59, pp. 41–52.
[60] Teng, F., and Magoules, F., 2010, "A New Game Theoretical Resource Allocation Algorithm for Cloud Computing," Advances in Grid and Pervasive Computing, Springer, Berlin, Heidelberg, pp. 321–330.
[61] Quiroz, A., Kim, H., Parashar, M., Gnanasambandam, N., and Sharma, N., 2009, "Towards Autonomic Workload Provisioning for Enterprise Grids and Clouds," 10th IEEE/ACM International Conference on Grid Computing, Victoria, Australia, Oct. 13–15, pp. 50–57.
[62] Sotomayor, B., Montero, R. S., Llorente, I. M., and Foster, I., 2009, "An Open Source Solution for Virtual Infrastructure Management in Private and Hybrid Clouds," IEEE International Conference on Internet Computing, Vancouver, Canada, pp. 78–89.
[63] Buyya, R., and Ranjan, R., 2010, "Federated Resource Management in Grid and Cloud Computing Systems," Future Gener. Comput. Syst., 26(8), pp. 1189–1191.
[64] Huber, N., von Quast, M., Hauck, M., and Kounev, S., 2011, "Evaluating and Modeling Virtualization Performance Overhead for Cloud Environments," International Conference on Cloud Computing and Service Science, Noordwijkerhout, The Netherlands, pp. 563–573.
[65] Kousiouris, G., Cucinotta, T., and Varvarigou, T., 2011, "The Effects of Scheduling, Workload Type and Consolidation Scenarios on Virtual Machine Performance and Their Prediction Through Optimized Artificial Neural Networks," J. Syst. Software, 84(8), pp. 1270–1291.
[66] Schad, J., Dittrich, J., and Quiane-Ruiz, J. A., 2010, "Runtime Measurements in the Cloud: Observing, Analyzing, and Reducing Variance," Proc. VLDB Endowment, 3(1–2), pp. 460–471.
[67] Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I., and Warfield, A., 2003, "Xen and the Art of Virtualization," ACM SIGOPS Oper. Syst. Rev., 37(5), pp. 164–177.
[68] Mei, Y., Liu, L., Pu, X., Sivathanu, S., and Dong, X., 2013, "Performance Analysis of Network I/O Workloads in Virtualized Data Centers," IEEE Trans. Serv. Comput., 6(1), pp. 48–63.
[69] Guan, H., Ma, R., and Li, J., 2014, "Workload-Aware Credit Scheduler for Improving Network I/O Performance in Virtualization Environment," IEEE Trans. Cloud Comput., 2(2), pp. 130–142.
[70] Shafer, J., 2010, "I/O Virtualization Bottlenecks in Cloud Computing Today," 2nd Conference on I/O Virtualization, Berkeley, CA.
[71] Bourguiba, M., Haddadou, K., El Korbi, I., and Pujolle, G., 2014, "Improving Network I/O Virtualization for Cloud Computing," IEEE Trans. Parallel Distrib. Syst., 25(3), pp. 673–681.
[72] Ranadive, A., Kesavan, M., Gavrilovska, A., and Schwan, K., 2008, "Performance Implications of Virtualizing Multicore Cluster Machines," 2nd ACM Workshop on System-Level Virtualization for High Performance Computing, Glasgow, Scotland, pp. 1–8.
[73] Cherkasova, L., and Gardner, R., 2005, "Measuring CPU Overhead for I/O Processing in the Xen Virtual Machine Monitor," USENIX Annual Technical Conference, Anaheim, CA.
[74] Subashini, S., and Kavitha, V., 2011, "A Survey on Security Issues in Service Delivery Models of Cloud Computing," J. Network Comput. Appl., 34(1), pp. 1–11.
[75] Kaufman, L. M., 2009, "Data Security in the World of Cloud Computing," IEEE Secur. Privacy, 7(4), pp. 61–64.
[76] Brender, N., and Markov, I., 2013, "Risk Perception and Risk Management in Cloud Computing: Results From a Case Study of Swiss Companies," Int. J. Inf. Manage., 33(5), pp. 726–733.
[77] Subashini, S., and Kavitha, V., 2011, "A Survey on Security Issues in Service Delivery Models of Cloud Computing," J. Network Comput. Appl., 34(1), pp. 1–11.
[78] Archer, J., and Boehm, A., 2009, "Security Guidance for Critical Areas of Focus in Cloud Computing," Cloud Security Alliance, pp. 1–76.

AUGUST 2015, Vol. 137 / 040901-9
