ACCEPTED FROM OPEN CALL

Network Virtualization: A Hypervisor for the Internet? Ashiq Khan, Alf Zugenmaier, Dan Jurca, Wolfgang Kellerer, DOCOMO Communications Laboratories Europe GmbH

ABSTRACT

Network virtualization is a relatively new research topic. A number of articles propose that certain benefits can be realized by virtualizing links between network elements as well as adding virtualization on intermediate network elements. In this article we argue that network virtualization may bring nothing new in terms of technical capabilities and theoretical performance, but it provides a way of organizing networks such that it is possible to overcome some of the practical issues in today's Internet. We strengthen our case with an analogy between the concept of network virtualization as it is currently presented in research, and machine virtualization as proven useful in deployments in recent years. First we make an analogy between the functionality of an operating system and that of a network, and identify similar concepts and elements. Then we emphasize the practical benefits realized by machine virtualization, and we exploit the analogy to derive potential benefits brought by network virtualization. We map the established applications for machine virtualization to network virtualization, thus identifying possible use cases for network virtualization. We also use this analogy to structure the design space for network virtualization.

INTRODUCTION

Network virtualization has come into existence as a new technique to enable large-scale testbed deployments (GENI, AKARI, 4WARD, VIRTUOSITY [1]). More recently it has become a research topic on its own, embraced by academic research groups, and also gaining visibility in industry. Its expected impact would materialize in the de-ossification of the current Internet and the facilitation of new service deployment [2]. The current research status involves a number of publications mostly addressing architectural or functionality issues, and ongoing practical projects aiming at testbed validation of the proposed concepts [3]. An overview of the problems that should find a solution through network virtualization, and that were the originators of the research domain, is presented in [4]. This article tries to look at the big picture,


0163-6804/12/$25.00 © 2012 IEEE

and presents a potential research framework that can be used to structure the current research approaches in network virtualization and gain insight into the most promising directions. Our methodology is based on an analogy between the operating system and the network, and on mapping the already established use cases and benefits of machine virtualization to the new concept of network virtualization. Our line of thought first maps the functionality/concepts/properties of an operating system (OS) to those of a network, from the virtualization perspective. Then we identify the most promising use cases for machine virtualization and exploit the analogy to map these to use cases for network virtualization. In addition, by looking at the design space for virtualization on the operating system side, we can also stake out the design space for network virtualization. This analogy increases the understanding of network virtualization by proposing a potential framework in which the different approaches in network virtualization research can be categorized and evaluated based on the current experience in machine virtualization. The same framework also gives a reference system in which we can assess the suitability of different approaches to network virtualization for a given problem. The contribution of this article is twofold: first, we advocate the role of network virtualization in solving some practical issues of current networks, the same way machine virtualization does in the case of operating systems. Second, from our constructed analogy, we infer the design space for network virtualization and frame the most promising use cases. The starting point of our analysis is the assumed qualitative equivalence between a process in an operating system and a communication session in a network. From this perspective, the general envisioned architecture for both a machine and a network is the one presented in Fig. 1.
Here, the OS and the network represent the intermediary between the process and the required hardware resources, and between the communication session and the end-to-end path, respectively. Note that in Fig. 1 we clearly separate the logical network part from the physical infrastructure, the same way we separate the OS from the hardware. This clear separation is not necessarily common and might not be self-evident. However, from the virtualization perspective it is a logical step, as the hardware/physical infrastructure lies below the virtualization layer.

IEEE Communications Magazine • January 2012

AN ANALOGY BETWEEN THE OPERATING SYSTEM AND THE NETWORK

THE OPERATING SYSTEM

Computer systems usually consist of one or multiple processors, multiple levels of memory (level 1 and level 2 caches, RAM), persistent storage (hard disks, flash), and input/output (I/O) interfaces (keyboard, screen, NICs). The complexity and heterogeneity of these systems are undesirable for programmers wishing to write programs that run on a diversity of computer systems. To isolate the hardware complexity from the application programmer's point of view, a layer of software, the operating system (OS), is put on top of the hardware and offers an easier interface to the application above. The main functionality of the OS is to provide the application, i.e., the executing process, with a simple set of instructions. An application process accesses the desired hardware resources through the instruction set provided by the OS. The results of the hardware execution are reported back to the application through the OS, hence closing the access loop in which the OS functions as a simplifying intermediary. On a conceptual level, an OS achieves its functionality by providing an abstraction of the capabilities of the underlying hardware and by managing its resources, such as CPU, memory, and I/O. It also allows for multi-programming and time-sharing, which enable perceived parallel execution of multiple processes. These concepts are implemented by providing a library of system calls and interrupt routines. The system calls can be grouped into access to hardware and file systems, process management, and inter-process communication. An OS may allow different properties for the processes it manages: they may have different priorities (e.g., the nice value in UNIX), they may have guaranteed response times (e.g., in real-time operating systems), or their resource usage may be accounted for (e.g., in mainframes in which CPU usage is paid for).
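To make this access loop concrete, the following sketch (ours, not from the article) shows a process reaching hardware and process-management services exclusively through OS system calls; it assumes a UNIX-like host, since `os.fork` is POSIX-only, and the file path is an arbitrary example.

```python
# Sketch: a process touches hardware only through OS system calls.
# Assumes a UNIX-like OS (os.fork is POSIX-only); /tmp path is illustrative.
import os

def run():
    # File-system access: open/write/close wrap the corresponding syscalls.
    fd = os.open("/tmp/nv_demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC)
    os.write(fd, b"hello via syscalls\n")
    os.close(fd)

    # Process management: fork a child and wait for it -- the classic
    # multi-programming primitive discussed above.
    pid = os.fork()
    if pid == 0:          # child process
        os._exit(7)       # exit status observed by the parent
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

print(run())  # -> 7
```

The process never addresses the disk or the CPU scheduler directly; every interaction crosses the system-call interface, which is exactly the intermediary role the paragraph above describes.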

THE NETWORK

As its main functionality, a network provides connectivity between the end points of a session. The network provides a simple method to send information from one end point to the other end point(s). The information to be transmitted is sent down the network stack to the physical path that this information takes through the network, until it is delivered via the network stack of the receiving end points. The protocol stack hides the network distance between the end points and abstracts the various heterogeneous transmission media. The end-to-end physical path is represented by a number of transmission lines, whether wired or wireless links, interconnected with the help of hardware devices, e.g., switches and


routers. The end-to-end communication session accesses the physical path, i.e., the delivery infrastructure, through the network protocol stack, which defines the governing rules of this access in the form of addressing protocols, flow/error control mechanisms, multiplexing, and routing/switching. The application program that wishes to transmit some data to a remote end point passes this data to the protocol stack. The protocol stack opens and controls a communication session with the desired destination end point using the service primitives, e.g., listen, connect, receive, send, and disconnect. Similarly, the destination end host uses the service primitives to receive and acknowledge the data. Depending on the desired type of communication, the network provides end-to-end sessions with different characteristics, e.g., connection-oriented or connectionless sessions, quality of service (QoS) guarantees for the data transfer process, etc.

Figure 1. Conceptual analogy between an operating system and a network. [The figure shows two parallel stacks. Machine side: application level (one process executing); OS functionality (access to hardware, reached through system calls and the glibc library); hardware resources (CPU, memory, I/O devices). Network side: application level (two session end points communicating); network functionality (connectivity, reached through service primitives at the SAP); physical infrastructure (an end-to-end path of physical links, NICs, switches).]
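These service primitives map directly onto the BSD socket API. A minimal loopback sketch (ours, not from the article) shows listen, connect, send, receive, and disconnect as socket calls:

```python
# Sketch of the service primitives (listen, connect, receive, send,
# disconnect) as BSD socket calls, on the loopback interface.
import socket
import threading

def echo_server(srv):
    conn, _ = srv.accept()          # complete a connection on the listen side
    data = conn.recv(1024)          # receive
    conn.sendall(data.upper())      # send
    conn.close()                    # disconnect

def demo():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))      # ephemeral port on loopback
    srv.listen(1)                   # listen
    t = threading.Thread(target=echo_server, args=(srv,))
    t.start()

    cli = socket.create_connection(srv.getsockname())  # connect
    cli.sendall(b"ping")            # send
    reply = cli.recv(1024)          # receive
    cli.close()                     # disconnect
    t.join()
    srv.close()
    return reply

print(demo())  # -> b'PING'
```

Note that neither end point names the physical path; the stack below the socket interface chooses it, just as the OS chooses which hardware serves a system call.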

THE MAPPING BETWEEN THE OPERATING SYSTEM AND THE NETWORK

Starting from the equivalence assumed at the beginning of this article between an executing process and an end-to-end communication session, an analogy can be drawn between an operating system on a single host and a network, which is an information delivery infrastructure. A process is the unit of execution in an OS; it consumes hardware resources, and the execution output is given back to the application that generated the process. In a similar way, the end-to-end communication session consumes the network's physical resources in order to enable the data flow between, and offer an output to, the communication end points. From this point of view, the main functionality of an OS, i.e., providing access to hardware resources for an executing process, is similar to the functionality of a network, i.e., providing network physical resources to an end-to-end communication session. In this framework, the creation of child processes by an OS is similar to the session establishment and maintenance procedures performed by the



Figure 2. Analogy between the OS and network functionality. [The figure tabulates the mapping. OS side: functionality: access to hardware; functionality breakdown: abstraction of hardware capabilities, resource management, multi-programming, time-sharing; functionality implementation: system calls, inter-process communication, file system; functionality properties: nice values, guaranteed response time, accounting, etc. Network side: functionality: connectivity between the end points; functionality breakdown: protocol stack addressing, error control, flow control, multiplexing, routing, switching, forwarding; functionality implementation: service primitives (listen, connect, receive, send, disconnect); functionality properties: connection-oriented, connectionless, QoS, accounting, etc.]

network; inter-process communication techniques for data transfer and synchronization in an OS solve problems that also occur in multiparty conference calls and synchronization of multiple streams in the network; and the process scheduling (multi-programming and time-sharing) functionality in an OS is similar to the multiplexing operations inside the network. While the previous paragraph attempts an analogy between the OS and the network on a functionality and conceptual level, similarities can be drawn in terms of the execution of this functionality too. System calls generated by the OS and executed through the libc library are comparable to the service primitives offered at the interface between the application and the network through the service access point (SAP). Just as an executing process accesses the hardware resources through the OS system calls, the application end points access the physical network through the SAP and the service primitives it offers (Fig. 1). Schematically, the similarities between an OS and a network exploited in our analogy can be represented as in Fig. 2. The analogy is made as a preparation step for our understanding of network virtualization (NV) from the viewpoint of machine virtualization. We hope that the concepts, mechanisms, and use cases of machine virtualization can be exploited to derive the design space and potential use cases for network virtualization, and help us create a unified framework in which current research directions in NV can be positioned. The next section addresses this issue.

VIRTUALIZATION CONCEPTS AND BENEFITS OF VIRTUALIZATION

In computing, virtualization is a broad concept that refers to the abstraction of computer resources. It was first used in the 1960s, when the idea of time-sharing appeared. Around that same time, the IBM Watson Research Center started a project called the M44/44X Project. The work involved testing this "time-sharing" concept, where virtual machines (44X) were created to image the main machine, the IBM 7044 (M44). Soon after came the virtual machine monitor (VMM), giving the ability to create multiple virtual machines, each instance capable of running its own operating system. Now, over

138

four decades later, we are extrapolating on the fundamental technologies of virtualization, creating new, more efficient ways to manage resources and deliver services. The historical main advantage of virtualization is the implementation of time-sharing mechanisms, which in turn lead to increased efficiency in using the available physical resources. Furthermore, the concept of isolation is also inherent in the deployment of virtualization, perceived either from the point of view of easy deployment of bundled applications, or as run-time isolation of parallel incompatible applications running on the same physical infrastructure. Virtualization rendered, for the first time, the operating system independent of the underlying hardware. This feature opened the way for software portability from one hardware entity to another. The resulting isolation, if properly exploited, can reduce system downtime significantly. Malicious or poorly written programs are isolated in such a way that they cannot influence the rest of the system's functionality, hence increasing the robustness and protection of the overall system. These concepts have spawned a number of benefits exploited so far by service providers and everyday users.

Coexistence of operating systems on the same machine: A virtual machine monitor installed on a single host machine allows the installation and parallel operation of two or more separate OSs. In turn, this allows the user of the machine to run applications specific to different OSs, without requiring separate dedicated hardware for each OS.

Protection: Virtualization limits the communication between processes running on different virtual machines, hence isolating processes from each other. For example, the effect of malware can be contained without affecting the hardware or other applications running outside the specific virtual environment.

Operating system research: Virtualization allows for the development and debugging of a new instance of an OS on the same hardware.
This allows new research ideas to be implemented and tested.

Software testing and runtime debugging: Virtualization offers a stable and convenient way to create a reproducible environment for software testing. This provides an easy way to test and



debug new software before and during deployment into production environments.

Optimization of hardware utilization: Virtualizing hardware equipment makes it possible to accommodate multiple processing/computation environments; hence, it is possible to aggregate multiple servers on the same machine, up to capacity, and reduce hardware count and costs (e.g., cloud computing).

Job migration: Executing processes can be moved transparently from one hardware system to another. In case of system failure, machine virtualization enables the transparent migration of running processes to a different machine, which ensures service availability. Job migration can also be used for load balancing and for energy saving, by moving jobs off a lightly loaded machine and subsequently powering down its hardware.

Virtual storage: Storage becomes independent of the services it is used for, and the management functionality becomes easier. Addressing of individual storage devices, e.g., hard drives, becomes transparent to the service, its administrator, as well as the user.

Easy deployment of bundled applications: Deployment of specialized applications is not straightforward, as they require specific versions of operating system libraries and thus may not coexist well on the same machine. Virtual machines can alleviate this problem because incompatible applications can be put into separate virtual machines. Full ISO images containing the correct OS version together with the necessary applications can easily be deployed in one virtual machine, without changing the current configuration of the machine (avoiding, e.g., the "DLL hell" problem).

Separation of code license agreements: The isolation offered by a virtual machine enables proprietary code to be cleanly separated from GPL code and its viral license requirements. This facilitates the production of fully GPL-compliant products.

USE CASES FOR NETWORK VIRTUALIZATION

We assume a general architecture for network virtualization like the one presented in Fig. 3, where the network infrastructure resources are gathered under the virtualization layer and become transparent to the virtual network operators. Network virtualization is a method of combining the available resources in a network by splitting the available bandwidth into channels, each of which is independent of the others and each of which can be assigned (or reassigned) to a particular server or device in real time. The idea is that virtualization disguises the true complexity of the network by separating it into manageable parts. We consider access to the network via service access points, the exact location of which is irrelevant for our discussion, be it in the core network or a socket-like interface on a host. Starting from the benefits and concepts for virtualization as presented earlier, we define use cases for network virtualization. We replace machine virtualization by network virtualization and deduce example use cases from there:

Figure 3. Network virtualization general architecture. [The figure shows the role hierarchy: virtual network users and a service provider on top of the virtual network operator, which runs over a virtual network provider composed of one or more infrastructure providers.]

Coexistence of networks: Virtualization technologies, when applied to networking, can create multiple virtual/logical networks on the same physical substrate network. It is also possible to maintain isolation between these networks. This isolation would facilitate the deployment of multiple, sometimes conflicting networking techniques in the same physical network, e.g., different, incompatible routing protocols. It could also enable accommodating multiple generations of cellular networks, e.g., 4G and 5G, on the same physical network.

Protection: Isolation is a great weapon for protecting an entity from its surroundings. Different virtual networks may have different protection features, such as access control or mandatory traffic encryption. One such isolated, encrypted, access-controlled network can be used, for example, for banking applications. Adding access control also on the servers joining such a network can be used to create a safer Internet for children. At present, the most prominent way of realizing secured and thus protected sessions is by means of virtual private networks (VPNs). Regardless of whether PPTP, L2TP, or IPsec is used, the basic principle is tunneling. User data, including or excluding the IP header, is encrypted, and additional headers, e.g., PPP, GRE, or IP, are prepended to the encrypted user packets for authentication and routing purposes. The content of the user packet, including source and destination IP addresses, remains hidden from the core transport network. Although a VPN protects the secrecy of user data, it is a point-to-point protection rather than the multipoint-to-multipoint protection, i.e., network-wide isolation, that is envisioned in network virtualization.

Network research: The concept of network virtualization emerges from the need for a large-scale, general-purpose testbed for next-generation network research.
The isolation guarantees that the malfunctioning of one experiment cannot take the whole physical testbed down; thus, other experiments running in parallel stay unharmed. NSF GENI is pursuing such a testbed.

Network testing and debugging: Operators owning networks with nationwide reach lack



large-scale testing facilities where they can sufficiently test their new services before commercially deploying them on their servicing network. By using virtualization technologies, it is possible to create slices in operators' servicing nodes and create a testing facility as large as the commercial network. In such a testing environment, commercial operators can test their services under development and have a better opportunity to make a service robust enough before commercial deployment. Isolating this virtual network for testing from the commercial networks guarantees resiliency of the commercial slices in case of malfunctions in the test slices, thus ensuring service availability to the users. If the network nodes can be virtualized such that the complete network can be simulated on a smaller amount of hardware than the number of network nodes would stipulate, significant cost savings could be realized for testbeds. The same technology can be used to experiment with deployed applications in order to pinpoint hard-to-trace runtime problems. A proof of concept of this can be found in [6].

Optimization of hardware utilization: By dynamically allocating virtual network nodes to the physical substrate, the existing hardware can be utilized for multiple virtual networks up to capacity, thus minimizing cost for the infrastructure provider. The reason network virtualization might be needed on network nodes is that different administrative domains may want to administer their own policies on these nodes. The problem of resource mapping between the physical substrate and the virtual networks is currently one of the most actively pursued problems in academic research [7].

Job migration: In the case of network virtualization, slices take the role of jobs. A slice is a virtual machine inside a network node. Slices can migrate from one network node to another. This migration provides an easy way of reconfiguring the network topology without deploying much re-wiring.
Slice migration can create a completely different network topology on the fly, which can be used for topology optimization in commercial networks or for creating topologies for experimentation in academia. Such migration techniques can also be used to reduce downtime during network upgrades [8].

Virtual storage: Current network abstractions do not include storage of permanent state as a main feature of the network. However, if we change the boundary of the abstraction to include content delivery as a part of the core network service, e.g., CDNs and peer-to-peer networks, virtualization can help retrieve content from the same logical address, regardless of its physical location. In fact, virtualization can help adapt the physical location of content according to its access pattern, e.g., flash crowds and local news.

Easy deployment of bundled network services: The equivalent of an application in networks is a service. Many network services require a specific network architecture (e.g., WLAN, sensor networks, corporate intranets). If many such services are deployed simultaneously in a network, the resulting network architecture supporting all deployed services can become overly complex. This problem could be alleviated by

deploying services that do not rely on each other in separate virtual networks.

Separation of different agreements and licenses: Network virtualization not only facilitates separation of different classes of quality of service agreements, but would also allow different traffic routes due to differing peering agreements. For example, one could ensure that certain traffic does not get routed through networks the data should not stray into. The traffic in question could be data sensitive for national security, or media streams for which the distributor has rights only in certain countries. In this regard, the authors in [10] have shown service-based network virtualization by using multi-layer traffic engineering in GMPLS.

The expected improvements brought by network virtualization lie in the domain of increased flexibility and level of service, increased separation between network and service providers [9], and reduced management cost and energy consumption [8]. On the other hand, potential disadvantages of network virtualization technologies include the potential reduction of the statistical multiplexing advantage, the extra complexity of inter-slice management operations, and the cumbersome interaction between control and management entities functioning inside a network slice and those functioning at the hypervisor level.
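The resource mapping problem mentioned under optimization of hardware utilization can be illustrated with a toy greedy heuristic (our sketch, not any specific algorithm from [7]): place each virtual node, largest CPU demand first, on the substrate node with the most spare capacity.

```python
# Toy greedy heuristic for virtual-to-substrate resource mapping
# (ours, for illustration; not a published embedding algorithm).
def greedy_embed(substrate_cpu, virtual_demand):
    """substrate_cpu: {node: capacity}; virtual_demand: {vnode: cpu need}."""
    spare = dict(substrate_cpu)
    mapping = {}
    # Largest demand first reduces the chance of stranding capacity.
    for vnode, need in sorted(virtual_demand.items(), key=lambda kv: -kv[1]):
        host = max(spare, key=spare.get)   # node with most spare capacity
        if spare[host] < need:
            return None                    # embedding fails
        spare[host] -= need
        mapping[vnode] = host
    return mapping

print(greedy_embed({"A": 10, "B": 6}, {"v1": 7, "v2": 5, "v3": 3}))
# -> {'v1': 'A', 'v2': 'B', 'v3': 'A'}
```

Real embedding must also map virtual links onto substrate paths with bandwidth constraints, which is what makes the general problem hard and actively researched.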

CHARACTERISTICS AND LEVELS OF VIRTUALIZATION

The scenarios discussed above relate to hypervisors. The hypervisor exposes an interface that represents hardware and can be used to host full-fledged operating systems. On the other hand, there are virtual machines that run on top of operating systems and are execution environments for individual programs written in bytecode (e.g., Java virtual machines). Finally, there is a type of hypervisor that implements the concept of paravirtualization. Paravirtualization exposes a hardware-like interface, but requires the guest OS to be modified in order to explicitly call functions inside the hypervisor. This is done to achieve better performance than hypervisors that try to faithfully mimic the hardware. The hypervisor may either run stand-alone, like a micro-kernel, or be supported by a host operating system. However, most commercially available hypervisors make use of a host operating system that allows easy management and installation of guest operating systems. The hypervisor assigns each guest OS its own resources, thus isolating the guests from each other. Therefore, processes running in different guest OSs cannot communicate with each other using the usual OS primitives of semaphores, signals, pipes, or shared memory. In general, these processes will have to rely on network services to communicate with each other. Programs running in virtual machines representing a byte-code execution environment have a wider variety of communication mechanisms to choose from, as such features are usually an integral part of the execution environment (e.g., .NET remoting).

Below, we identify the three main types of



machine virtualization and their different implementations and usages. We will try to map presently available network virtualization concepts and implementations to these three machine virtualization concepts and implementations. Different network virtualization concepts have been summarized well in [11]. We will be aggressive in our analogy and sometimes forcefully map a virtual network implementation to one of the three machine virtualization concepts. The mapping will primarily be based on how OS processes and network sessions access the machine hardware and network infrastructure, respectively: whether the OS process or the network session needs to be modified in order to access the physical resources underneath. From the network perspective, we will consider a network as an information delivery infrastructure and concentrate on how a session is accommodated within it.

Full virtualization: In this case, a guest OS needs no modification and can be installed above a hypervisor (e.g., VMware ESX). Multiple guest OSs, along with the respective applications on top, run in multiple virtual machines, which are fully isolated. The hypervisor provides hardware resources to each guest OS. The resources are sometimes fully dedicated, when fine-grained hardware support is available. We map "Internet in a slice" (e.g., the GENI project [2]), although still in an evolution phase, to this full machine virtualization concept. As a design principle, the creators of GENI plan to impose zero overhead on the user of a particular slice, so that the user can run software/applications inside the slice without any modifications, e.g., a new network protocol stack or services (Fig. 4). However, as things evolve in many directions in real-world implementations, it is not unlikely that users will have to modify their code in order to use a slice. Such a case gets closer to the concept of paravirtualization, as explained below.
L3-VPN, when implemented as a provider edge (PE) solution [11], also falls into this category. Here, the end system, i.e., the customer system, needs no modification of its protocols. PE routers, acting like hypervisors, create and maintain the VPN connections and ensure the isolation of the VPN tunnels inside the network. MPLS and other forms of transport network tunneling fall into this category as well, since the end user/application is completely unaware of such a service from the underlying network.

Paravirtualization: This solution requires the guest OS to be modified for the hypervisor. We put L3-VPN customer edge (CE) solutions [11] in this category, as the customer system has to establish, maintain, and isolate its session. Hence, the user must adapt the VPN tunnels to the underlying provider network between two CE routers. L2-VPN is similar, as the L2 switch at the session-generating host side has to modify the session, i.e., apply tagging, to receive the VPN service from the L2 network (Fig. 5). We also consider overlay networks here. The application layer (layer 7) needs special addressing and flow control to receive desired services from


Figure 4. Full network virtualization: Internet in a slice. [The figure shows a management network running management applications and two virtual networks running their own applications, each over its own layer 2 technology, all on top of the network virtualization layer (Internet in a slice) and the shared physical infrastructure.]

the transport network, e.g., transparency, overlay maintenance, and other services the transport does not provide in general. The open signaling approach [11] falls into this category as well, as the user has to adapt its applications to the API provided by the underlying abstraction layer.

Execution environments for virtual machines over the host OS: The Java Virtual Machine and the Microsoft Common Language Runtime are two examples of such virtualization techniques. The virtual machine execution environment offers a hardware-independent instruction set and is responsible for translating Java byte code or Intermediate Language code to machine language. We argue that a network, historically and inherently, is a virtualized environment of this category. All application sessions are in general isolated, and receive different treatment according to the QoS requirements (best effort excluded) stated by the originator of the session, i.e., the end system. Avoiding here the philosophical argument that "the waist of the hourglass in the network protocol stack is not IP but HTTP," the virtual machine execution environment in this scenario is analogous to the waist of the hourglass in machine virtualization. It adapts the application to the hardware in a machine, whereas any application session in a network can use IP and need not be aware of the underlying transport technology. The network has always been a virtualized multi-user space. Cellular networks even isolate sessions on a per-user basis (Fig. 6).
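The tunneling principle underlying the VPN examples in the last two sections can be illustrated with a toy sketch (ours; XOR stands in for real encryption, and this is in no way a real VPN implementation): the inner packet, headers included, is hidden, while the core network routes only on the outer header.

```python
# Toy illustration of tunneling: an encrypted inner packet is wrapped
# in a cleartext outer header; the core routes on the outer header only.
# XOR with a fixed key stands in for real encryption (illustration only).
KEY = 0x5A

def xor(data: bytes) -> bytes:
    return bytes(b ^ KEY for b in data)

def encapsulate(inner_src, inner_dst, payload, outer_src, outer_dst):
    inner = f"{inner_src}>{inner_dst}|".encode() + payload
    # Outer header in the clear; inner packet (headers included) hidden.
    return f"{outer_src}>{outer_dst}|".encode() + xor(inner)

def core_router_view(packet: bytes) -> bytes:
    # The transport network reads only up to the outer header delimiter.
    return packet.split(b"|", 1)[0]

def decapsulate(packet: bytes) -> bytes:
    return xor(packet.split(b"|", 1)[1])

pkt = encapsulate("10.0.0.1", "10.0.0.2", b"secret", "a.example", "b.example")
print(core_router_view(pkt))   # -> b'a.example>b.example'
print(decapsulate(pkt))        # -> b'10.0.0.1>10.0.0.2|secret'
```

In the PE case the provider's edge routers perform encapsulate/decapsulate on the customer's behalf; in the CE case the customer's own equipment does, which is why the latter maps to paravirtualization.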

LIMITATIONS OF OUR ANALOGY

As with any analogy, it is easy to get carried away and forget that this one serves a single purpose: to structure existing work on network virtualization and to explore potential use cases and implementation architectures. The analogy should not be construed as anything more than a conceptual mapping within that scope.



Figure 5. Network paravirtualization: Layer 2 and 3 tunneling technologies. (The figure shows two virtual networks, each running its own applications over its own layer 3 technology, carried by tunneling (L3-VPN CE, L2-VPN) over a lower layer technology on the physical infrastructure.)
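The customer-side modification shown in Fig. 5 — an L2 switch tagging frames before they enter the provider network — can be sketched minimally. The snippet inserts a standard IEEE 802.1Q tag (TPID 0x8100 followed by a 16-bit tag control field) after the source MAC address of an untagged Ethernet frame; the helper name and the zeroed example addresses are our own.

```python
import struct

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an IEEE 802.1Q tag into an untagged Ethernet frame.
    The tag (TPID 0x8100 + TCI) goes between the source MAC and the
    original EtherType, as a CE-side L2 switch would do."""
    assert 0 <= vlan_id < 4096
    tci = (priority << 13) | vlan_id           # PCP(3) | DEI(1) | VID(12)
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]       # 12 bytes = dst MAC + src MAC

# Untagged frame: dst MAC, src MAC, EtherType IPv4, payload
untagged = bytes(6) + bytes(6) + struct.pack("!H", 0x0800) + b"payload"
tagged = tag_frame(untagged, vlan_id=100)
```

The session itself is unchanged; only four bytes of provider-facing state are added, which is exactly the paravirtualization flavor of the analogy: the guest adapts to the layer below.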

There are a number of issues in which the analogy does not hold up. The most important ones, in our opinion, are listed below.
• The concept of geography imposes an additional constraint in networking that has no equivalent in an operating system.
• Virtual memory concepts on the operating system side do not fully map, as the prevalent network abstraction does not consider saving permanent state a salient network feature. However, since storage becomes important in some architectures envisioned for future networks, the benefits brought by virtualization should be revisited.
• Back-up concepts from the operating system do not map in their common sense. In the network, memory is associated with end points; hence, the back-up functionality is taken outside the network. Network back-up would imply saving the configuration and management state of the network at a given time.
• Resources used by one process in an operating system usually belong to one administrative domain, while resources used by one session in a network usually belong to multiple administrative domains.
• I/O concepts from the operating system have no equivalent in networking, as sessions only establish communication between the end points.
Even though it is not possible to deduce architectural recommendations from this analogy, our approach still provides a way to structure the design space for network virtualization via comparison, as we have seen earlier. It also provides a way to map virtualization concepts and benefits from one domain to the other, and to structure the possible use cases for network


virtualization accordingly. At the same time, some of the limitations of this analogy give rise to new research questions.

OPEN ISSUES AND RESEARCH DIRECTIONS

The places in which our analogy breaks down can be exploited to define new research directions. Whenever a mapping fails because an aspect exists only in networks or only in OSs, we may have identified a fundamental issue.

The most striking difference between networks and OSs is the importance of geographic location. A concrete research issue directly influenced by this difference relates to the energy-consumption optimization of the system in question. The geographic location of network nodes must be taken into account when trying to optimize power consumption, in order to satisfy possible network QoS/coverage guarantees. Network topology and user geographic positioning are core issues in network virtualization, while their OS counterparts (data center architectures) play a major role only in the largest data centers.

A second issue we identify relates to the benefits of storage virtualization in future network architectures. While the concept of storage is not fully defined in present networks, it is a fundamental building block of new network paradigms and algorithms, e.g., CDNs, P2P networks, DTNs, or energy-saving buffer-and-burst networks. Virtualizing storage could improve the performance of such systems by making it independent of the address space and physical location. Once content is brought into the network, back-up becomes more important, as simply bootstrapping individual network elements from configuration files may not restore the full network functionality. Network virtualization could simplify network back-up by requiring distributed check-pointing only for one virtual network, which might make the problem more tractable.

Finally, another key difference between the network and the OS is that, even though OSs are multi-user, they remain under a single administrative domain. On the contrary, networks usually span multiple administrative domains.
OS virtualization potentially deals with multiple administrative domains on top of the virtualization layer, e.g., hosting in Amazon's EC2, while network virtualization would also have to deal with different administrative domains below the virtualization layer. How multiple virtualization techniques can coexist across the administrative domains of different infrastructure providers is a question future research will have to answer in order to realize the benefits of end-to-end virtualization.
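To make the geography constraint discussed above concrete, consider a toy version of the energy problem: power down as many nodes as possible while every user location keeps at least one node within reach. A greedy sketch follows; the node positions, user positions, and coverage model are entirely hypothetical, and real planning would also have to respect QoS guarantees and load.

```python
def nodes_to_keep(nodes, users, reach):
    """Greedy toy: keep powering on nodes until every user is covered.
    nodes/users are {name: (x, y)} dicts; reach is a coverage radius.
    Purely illustrative -- ignores QoS, load, and topology."""
    def covers(n, u):
        (nx, ny), (ux, uy) = nodes[n], users[u]
        return (nx - ux) ** 2 + (ny - uy) ** 2 <= reach ** 2

    uncovered = set(users)
    keep = []
    while uncovered:
        # Pick the node covering the most still-uncovered users.
        best = max(nodes, key=lambda n: sum(covers(n, u) for u in uncovered))
        gained = {u for u in uncovered if covers(best, u)}
        if not gained:
            break  # some users are unreachable by any node
        keep.append(best)
        uncovered -= gained
    return keep

nodes = {"a": (0, 0), "b": (10, 0), "c": (5, 0)}
users = {"u1": (1, 0), "u2": (9, 0)}
keep = nodes_to_keep(nodes, users, reach=2)
```

With these hypothetical positions, node "c" covers no user and can be powered down, which is precisely the kind of decision that has no analogue in an operating system: it only exists because nodes and users have locations.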

CONCLUSIONS

In this article we offer a specific way of approaching the network virtualization research domain, starting from a conceptual analogy between an operating system and a network. We derive a number of use cases for network virtualization



from use cases for virtual machine monitors. Some of these use cases are already known in the research literature; others are, to our knowledge, novel and lead to interesting research directions. At the same time, the points at which our analogy breaks down offer as many promising leads to be investigated in the future. Within this analogy, we argue that, with regard to the technology used to implement network virtualization, there are horses for courses: there may not be one single network virtualization technology superior to all others. Similar to virtual machines executing below or above an operating system, virtualization may take place within the network elements or entirely outside them. Thus, equating network virtualization to a hypervisor for the Internet falls short of describing the implementation options available to realize the desired benefits. Our analogy can also be used as a predictive model for network virtualization use cases. As an outlook, we take the liberty to predict, in analogy to OS virtualization, that the first network virtualization use case to see widespread introduction will be related to increased flexibility of network management and deployment.

Figure 6. User session isolation in cellular mobile networks. (The figure shows two user sessions, each carrying its own application over its own layer 3 technology, isolated by GTP tunneling over UDP/IP and a layer 2 technology on the physical infrastructure.)

REFERENCES

[1] D. Villela, A. T. Campbell, and J. Vicente, "Virtuosity: Programmable Resource Management for Spawning Networks," Computer Networks, special issue on Active Networks, vol. 36, no. 1, 2001, pp. 49–73.
[2] N. Feamster, L. Gao, and J. Rexford, "How to Lease the Internet in Your Spare Time," ACM SIGCOMM Comp. Commun. Review, vol. 37, no. 1, Jan. 2007, pp. 61–64.
[3] M. Handley, O. Hodson, and E. Kohler, "XORP: An Open Platform for Network Research," ACM SIGCOMM Hot Topics in Networking, 2002.
[4] T. Anderson et al., "Overcoming the Internet Impasse Through Virtualization," IEEE Computer, Apr. 2005, pp. 34–41.
[5] S. Rao, S. Malhotra, and S. Khavtasi, "Application of Virtualization Concepts to Collaborative ISP Enterprise," VISP Project Deliverables, 2007.
[6] A. Wundsam et al., "Network Troubleshooting with Mirror VNets," Wksp. Network Future (FutureNet-III), IEEE GLOBECOM 2010, Dec. 2010.
[7] A. Leon-Garcia and L. G. Mason, "Virtual Network Resource Management for Next-Generation Networks," IEEE Commun. Mag., vol. 41, no. 7, July 2003, pp. 102–09.
[8] Y. Wang et al., "Virtual Routers on the Move: Live Router Migration as a Network-Management Primitive," ACM SIGCOMM, Seattle, WA, Aug. 2008.
[9] S. Rao, S. Malhotra, and S. Khavtasi, "Application of Virtualization Concepts to Collaborative ISP Enterprise," VISP Project Deliverables, 2007.
[10] Y. Ohsita et al., "Gradually Reconfiguring Virtual Network Topologies Based on Estimated Traffic Matrices," IEEE Int'l Conf. Computer Commun. (INFOCOM 2007), Anchorage, AK, May 2007.
[11] N. M. Mosharaf Kabir Chowdhury and R. Boutaba, "Network Virtualization: State of the Art and Research Challenges," IEEE Commun. Mag., vol. 47, no. 7, July 2009, pp. 20–26.

BIOGRAPHIES

ASHIQ KHAN ([email protected]) received his M.S. degree from Tohoku University, Sendai, Japan, in 2004. He joined the Networking Research Laboratories of NTT DOCOMO, Inc., Tokyo, Japan, as a research engineer in 2004 and worked on autonomous mobile networks, QoS-based routing algorithms, the future service delivery platform, and next-generation network architecture. He is now a senior researcher at DOCOMO Communications Laboratories Europe GmbH, Munich, Germany. His research interests include future mobile network architecture, network virtualization, and control and management of 4G and beyond cellular networks.

ALF ZUGENMAIER ([email protected]) is a professor for mobile networks and security at Munich University of Applied Sciences. Prior to joining the university he worked at DOCOMO Euro-Labs in Munich, and at Microsoft Research, Cambridge, UK. He holds a Ph.D. in computer science and a Diplom in physics, both from the University of Freiburg, Germany. His research interests include network and systems security and privacy.

DAN JURCA ([email protected]) received his M.S. degree from the "Politehnica" University of Timisoara, Romania, in 2002. In 2007 he received his Ph.D. degree in electrical engineering from the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland, with his main research focused on real-time adaptive multimedia streaming. Between 2007 and 2008 he worked as a postdoctoral researcher at the Royal Institute of Technology (KTH), Stockholm, Sweden, on distributed network management. He joined DOCOMO Communications Laboratories Europe GmbH, Munich, Germany, in 2008 and worked on reconfigurable mobile networks, multimedia streaming, and quality-of-experience (QoE) applications. He is currently working for Huawei Technologies Duesseldorf GmbH in the area of QoS/QoE for mobile broadband networks.

WOLFGANG KELLERER ([email protected]) is director and head of the Network Research department at NTT DOCOMO's European research laboratories in Munich, Germany. His research interests include mobile networking, QoE-based resource management, service platforms, and overlay networks. Before joining DOCOMO Euro-Labs, he was a research staff member at Munich University of Technology (TUM). In 2001 he was a visiting researcher at the Information Systems Laboratory of Stanford University, California. He holds an M.Sc. and a Dr. degree in electrical engineering and information technology from TUM.