QoS-aware Network Operating System for Software Defined Networking with Generalized OpenFlows

Kwangtae Jeong, Jinwook Kim and Young-Tak Kim
Dept. of Information and Communication Engineering, Graduate School, Yeungnam University, Korea
Email: {kals25, jwkim}@ynu.ac.kr, [email protected]

Abstract— OpenFlow switching and Network Operating System (NOX) have been proposed to support new conceptual networking trials for fine-grained control and visibility. OpenFlow is expected to provide multi-layer networking with the switching capabilities of Ethernet, MPLS, and IP routing. NOX provides logically centralized access to high-level network abstractions and exerts control over the network by installing flow entries in OpenFlow-compatible switches. NOX, however, lacks the functions necessary for QoS-guaranteed software defined networking (SDN) service provisioning on a carrier-grade provider Internet, such as QoS-aware virtual network embedding, end-to-end network QoS assessment, and collaboration among control elements in other domain networks. In this paper, we propose a QoS-aware Network Operating System (QNOX) for SDN with Generalized OpenFlows. The functional modules and operations of QNOX for QoS-aware SDN service provisioning with the major components (i.e., service element (SE), control element (CE), management element (ME), and cognitive knowledge element (CKE)) are explained in detail. The current status of the prototype implementation and its performance are explained. The scalability of QNOX is also analyzed to confirm that the proposed framework can be applied to a carrier-grade large-scale provider Internet.

Keywords – Network Operating System, OpenFlow, GMPLS, Traffic Engineering, QoS

I. INTRODUCTION

(This research work was supported by Yeungnam University research grants in 2011 and the HiMang Project (No. 100035594) funded by MKE, Korea.)

A network operating system (NOX) enables management applications to be written as centralized programs over high-level abstractions of network resources, as opposed to distributed algorithms over low-level addresses [1, 2]. The network operating system does not manage the network itself; it provides a programming interface with high-level abstractions of network resources (e.g., CPU processing power, memory, disk storage volume, link capacity, etc.) that enable network application programs to carry out complicated tasks safely and efficiently over a wide heterogeneity of networking technologies [1]. NOX, however, fails to provide the functions necessary for QoS-guaranteed software defined networking (SDN) [3] service provisioning on a carrier-grade provider Internet, such as QoS-aware virtual network embedding, end-to-end network QoS assessment, and collaboration among control elements in other domain networks. SDN is a networking paradigm which allows network operators to manage networking elements using software

978-1-4673-0269-2/12/$31.00 © 2012 IEEE

running on an external server [3]. This is accomplished by a split in the networking architecture between the forwarding element and the control element. Separation of the control software from the numerous packet forwarding nodes into a few centralized controllers has been proposed to increase flexibility in the deployment of new services (e.g., overlay networking, virtual private networks, cloud computing, and content distribution), programmability with a standardized open API, and reliability in the converged IP network [3-6]. The installation of the control software in a few controller nodes, remote from the forwarding elements, reduces the software complexity of the numerous forwarding elements and increases the overall reliability of the network [4]. Two technologies which allow the split of the control element from the forwarding element are ForCES (Forwarding and Control Element Separation) [7, 8] and OpenFlow [3]. The ForCES working group in the IETF provides the framework and associated standardized protocol for information exchange between the control element and the forwarding element for open programmable networks [7-9]. Recently, the separation of the control plane from the data forwarding plane has also been adopted in OpenFlow, which supports flow-based forwarding and fine-grained per-flow control, and provides new flexibility in research on future Internet architectures [10-12]. Currently, OpenFlow features have been added on an experimental basis to the HP ProCurve 5400 and Cisco Catalyst 6500 series switches [12]. An OpenFlow switch implementation on the NetFPGA can provide IPv4 packet processing for four 1 GbE Ethernet ports at 1 Gbps line rate [12]. In order to be a practically applicable solution for a carrier-grade large-scale Internet, however, networking with separation of control and forwarding (including the OpenFlow architecture) should fulfill the following requirements: scalability, reliability, quality of service (QoS), and service management [3].
The scalability for carrier-grade networking should include the following aspects: i) scalability of the centralized control element that must handle routing and path calculation for the forwarding elements, ii) scalability of the flow table in the forwarding element (especially the transit core router), and iii) scalability of fault restoration at link or node failures in forwarding elements. In this paper, we propose a QoS-aware Network Operating System (QNOX) for Software Defined Networking with Generalized OpenFlows. The functional modules and operations for QoS-aware SDN service provisioning with the major components (e.g., service element (SE), control element


(CE), management element (ME), and cognitive knowledge element (CKE)) are explained in detail. The current status of the prototype implementation and its performance are explained. The scalability of QNOX is also analyzed to confirm that the proposed framework can be applied to a carrier-grade large-scale provider Internet. The remainder of this paper is organized as follows. Section II briefly introduces related work on SDN, the separation of the control plane and the forwarding plane, and OpenFlow. In Section III, we propose a QoS-aware Network Operating System (QNOX) for Software Defined Networking with Generalized OpenFlows. The functional modules and operations for QoS-aware SDN service provisioning with the major components are explained in detail. Section IV explains the current status of the implementation, and Section V provides a performance analysis. Finally, Section VI concludes this paper.

II. RELATED WORK

A. Software Defined Networking (SDN) and Network Operating System (NOX)

SDN allows network operators to manage networking elements using software running on an external server [3]. SDN provides abstractions in three areas of transport networks (distributed state, forwarding, and configuration) that are key to extracting simplicity. This is accomplished by a split in the networking architecture between the forwarding element (FE) and the control element (CE). Separation of the control software from the numerous packet forwarding nodes into a few centralized controllers has been proposed to increase the flexibility in the deployment of new services (e.g., overlay networking, virtual private networks, cloud computing, and content distribution), the programmability with a standardized open API, and the reliability in the converged IP network [3-6]. The installation of centralized control software in a few controller nodes, remote from the distributed forwarding elements, reduces the software complexity of the numerous forwarding elements and also increases the overall reliability of the network [4]. SDN makes the introduction of new vendor operating systems much easier. It allows users to create plug-ins for adding features to the control plane without having to change the underlying hardware, or to enhance the hardware without changing the control plane. The network operating system (NOX) does not manage the network itself; it provides a programming interface with high-level abstractions of network resources (e.g., CPU processing power, memory, disk storage volume, link capacity, etc.) that enable network application programs to carry out complicated tasks safely and efficiently over a wide heterogeneity of networking technologies.
NOX, however, fails to provide the functions necessary for QoS-guaranteed software defined networking (SDN) [3] service provisioning on a carrier-grade provider Internet, such as QoS-aware virtual network embedding, end-to-end network QoS assessment, and collaboration among control elements in other domain networks.

B. Separation of Control Plane and Forwarding Plane

The most important feature of future Internet routers is supporting open programmable networking that provides i)


flexibility in new service provisioning, ii) open and modular interfaces of the control plane, and iii) controllability and programmability of the network with high-level abstractions [4]. In order to be open programmable, the control plane and forwarding plane of current Internet routers should be systematically separated by standardized interfaces. The control plane functions include i) routing for reliable advertisement of the network topology and available resources (e.g., OSPF and BGP for IP networking), ii) path computation, iii) signaling (e.g., RSVP-TE for MPLS/GMPLS networking), and iv) the traffic engineering database (TEDB). The complexity of the control plane has continuously increased in order to support new features, such as virtual private networks (VPNs), cloud computing, content distribution networking, fast fault tolerance, and mobility. Separation of the control plane (e.g., the OSPF of an IP router) from the forwarding elements (such as a current IP router with packet classification and forwarding) has been proposed and analyzed in recent research work [1, 2]. The IETF ForCES (forwarding and control element separation) working group provides the framework and associated standardized protocol for information exchange between the control element and the forwarding element for open programmable networks [2, 3].

C. OpenFlow

An OpenFlow network consists of OpenFlow-compliant switches and OpenFlow controllers, with unmodified end-hosts [10-12]. Essentially, OpenFlow separates the datapath, over which packets flow, from the control path that manages the datapath elements. OpenFlow has been deployed at various academic institutions and research laboratories, and recently several research works have analyzed the performance of the OpenFlow framework as a viable solution for high-performance commercial networks, such as data center networks [2].
The control plane is responsible for the initial establishment of every fine-grained flow by configuring the related switches along the path in the domain. As a result, the fine-grained per-flow control and global visibility of the OpenFlow framework impose an excessive overhead on the forwarding element [16]. To mitigate this scalability problem of fine-grained per-flow control, efficient hierarchical flow aggregation and traffic engineering with multi-layer networking [17] should be implemented. Details of multi-layered networking with GMPLS are explained in the following subsection.

D. Multi-layered Networking with GMPLS

Future Internet forwarding elements will include various packet or frame switching capabilities for QoS-aware service provisioning and efficient traffic engineering with fast fault restoration. Multi-layered networking [17] defines a generic data plane with GMPLS (generalized multi-protocol label switching). GMPLS includes the switching capabilities of the following layers: layer 3 packet switching/forwarding, layer 2.5 MPLS switching, layer 2 Ethernet switching, layer 1.5 SONET/SDH cross-connect, layer 1 WDM lambda switching, and layer 0 fiber switching [17]. The Generalized OpenFlow defined in this paper specifies that the forwarding elements provide the GMPLS switching capabilities in a distributed manner. In particular, the

2012 IEEE/IFIP 4th Workshop on Management of the Future Internet (ManFI)

MPLS-TP is used for QoS-aware traffic engineering, while WDM lambda switching is used for long-distance wide-area networking. The Generalized OpenFlow does not require end-to-end homogeneous signaling, but supports various connection establishments in each domain network. The overall connectivity is controlled by the PCE (path computation element) in the control element [18-22].

III. QOS-AWARE NETWORK OPERATING SYSTEM FOR SDN WITH GENERALIZED OPENFLOWS

A. Generalized OpenFlow Networking

OpenFlow version 1.0 provides fine-grained per-flow control based on a 10-tuple (Ethernet switch input port, source/destination MAC address, Ethernet type, VLAN ID, IP source/destination address, IP protocol, and TCP/UDP source/destination port). In the recent OpenFlow specification, MPLS has been included, but GMPLS networking technologies based on WDM optical lambda switching are not. Since efficient and scalable traffic engineering is one of the most important factors in the future Internet, multi-layer networking with an optical transport network (i.e., ASON (Automatically Switched Optical Network)/WDM (Wavelength Division Multiplexing)) is essential. In this paper, we propose a Generalized OpenFlow that includes IP, MPLS-TP and WDM/ASON. The MPLS-TP layer is mostly used for traffic engineering for QoS-guaranteed differentiated service provisioning, while the WDM/ASON layer is used for long-distance transit networking. Differentiated MPLS-TP TE-LSPs (traffic engineering label switched paths) are pre-planned to configure virtual overlay networks for each DiffServ class type (i.e., NCT (network control traffic), EF (expedited forwarding), AF (assured forwarding) 4, AF3, AF2, and AF1). Multiple QoS-guaranteed IP packet flows are aggregated into a designated MPLS-TP TE-LSP for the required QoS-aware SDN services. Multiple MPLS-TP TE-LSPs are routed along optical lambda paths in WDM/ASON for long-distance transit networking. When fast fault restoration (e.g., less than 50 ms of protection switching time) is required, a working TE-LSP and an SRLG (shared risk link group)-disjoint backup TE-LSP are installed together with constraint-based shortest path first (CSPF) multi-domain routing. The Generalized OpenFlow switch provides actions for MPLS-TP and WDM/optical networking in the GMPLS networking concept. It provides label push, label pop and label swap operations at the ingress LER (label edge router), egress LER, and intermediate transit LSR (label switch router), respectively.

B. QoS-aware Network Operating System (QNOX) for SDN Service Provisioning with Generalized OpenFlows

Figure 1 depicts the overall framework of the QoS-aware Network Operating System (QNOX) for SDN service provisioning with Generalized OpenFlows. The major functional components of SDN/GOFN are the service element (SE), control element (CE), management element (ME), and cognitive knowledge element (CKE). The forwarding element (FE) is a Generalized OpenFlow (GOF)-compatible switching node. The transport domain networks (i.e., FEs) may be composed of i) Generalized OpenFlow compatible networks controlled by remote controllers, and ii) legacy IP/MPLS/WDM transport networks which are controlled and managed by their own subnetwork-dependent control elements (e.g., OSPF & BGP, RSVP-TE) and management elements (e.g., SNMP/IP, TMN for SONET/SDH). In addition to the usual switching nodes and links, the Generalized OpenFlow Network (GOFN) also includes cloud computing servers, data storage servers, multicast switches, and other server nodes with special features for content delivery network (CDN) services.
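The combination of 10-tuple flow matching with GMPLS-style label actions can be sketched as a data structure; the class and action names below are illustrative assumptions, not the actual Generalized OpenFlow table format.

```python
from dataclasses import dataclass
from typing import Optional, List, Tuple

# Hypothetical 10-tuple match used for OpenFlow 1.0-style classification;
# None means "wildcard" (match any value for that field).
@dataclass(frozen=True)
class Match10:
    in_port: Optional[int] = None
    src_mac: Optional[str] = None
    dst_mac: Optional[str] = None
    eth_type: Optional[int] = None
    vlan_id: Optional[int] = None
    src_ip: Optional[str] = None
    dst_ip: Optional[str] = None
    ip_proto: Optional[int] = None
    src_port: Optional[int] = None
    dst_port: Optional[int] = None

# A flow entry pairs the match with GMPLS-style label actions
# (push at the ingress LER, swap at a transit LSR, pop at the egress LER).
@dataclass
class FlowEntry:
    match: Match10
    actions: List[Tuple]  # e.g. ("PUSH_LABEL", 100), ("SWAP_LABEL", 200), ("POP_LABEL",), ("OUTPUT", 2)

def matches(entry: FlowEntry, pkt: dict) -> bool:
    """A field matches when the entry leaves it wildcarded or the values agree."""
    for name, want in vars(entry.match).items():
        if want is not None and pkt.get(name) != want:
            return False
    return True

# Ingress-LER entry: aggregate a QoS-guaranteed IP flow into a labeled path.
ingress = FlowEntry(Match10(src_ip="10.0.0.1", dst_ip="10.0.0.2"),
                    [("PUSH_LABEL", 100), ("OUTPUT", 2)])
pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "in_port": 1}
assert matches(ingress, pkt)
```

A transit LSR would carry an analogous entry whose action list swaps the incoming label instead of pushing one.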

Figure 1. QoS-aware Network Operating System (QNOX) for SDN Service Provisioning with Generalized OpenFlows


In the proposed QNOX for SDN/GOFN framework, clients request services using the QoS-aware Open Virtual Network Programming Interface (QOVNPI). The service element (SE) receives service requests with attributes of the required computing and storage capacity, the location of the users' access points, the required performance and QoS parameters, the required fault restoration performance, and the security level. The SLA (service level agreement) and SLS (service level specification) module checks and evaluates the availability of the network resources, and determines the guaranteed-QoS service provisioning. If the requested QoS level is not available, there may be some negotiation on the QoS level and performance parameters between the SE and the user. Currently, the QOVNPI is under development, based on XML and JSON (JavaScript Object Notation). The SE also includes service life-cycle management for the accepted services, and a QoE/QoS monitoring functional module. The control element (CE) handles the end-to-end session control with path establishment on each transport network for connection-oriented services, and flow table updates using CE-FE interactions along the route of the Generalized OpenFlow. The CE includes a SIP (session initiation protocol)/SDP (session description protocol) module for end-to-end QoS provisioning of real-time multimedia conversational services, such as VoIP and multimedia conference calls. In order to provide flexible interworking between the GOFN and legacy networks (such as IP/SONET, IP/WiFi, and IP/GMPLS), the control plane is composed of transport network-independent control (including SIP/SDP, the path computation element (PCE), and policy-based routing) and transport network-dependent control (including BGP/IP, OSPF, IS-IS, and RSVP-TE/MPLS). The Generalized OpenFlow control provides controller functions for Generalized OpenFlow compatible switches, such as creation or deletion of a GOF table entry.
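Since the QOVNPI is described as XML/JSON-based, the service-request attributes the SE receives (compute and storage capacity, access points, QoS, restoration, security level) might be encoded as in the following sketch; all field names are illustrative assumptions, not the actual QOVNPI schema, which the paper notes is still under development.

```python
import json

# Illustrative QoS-aware SDN service request over QOVNPI.
# Field names are assumptions for illustration only.
request = {
    "service_id": "sdn-svc-001",
    "compute": {"cpu_cores": 8, "memory_gb": 16, "storage_gb": 500},
    "access_points": ["pop-seoul-1", "pop-daegu-2"],
    "qos": {"bandwidth_mbps": 100, "max_delay_ms": 30,
            "max_jitter_ms": 5, "max_loss_rate": 1e-4},
    "restoration": {"protection": "1:1", "max_switchover_ms": 50},
    "security_level": "high",
}

# The SE would parse such a request before SLA/SLS admission checking.
encoded = json.dumps(request, indent=2)
decoded = json.loads(encoded)
assert decoded["qos"]["bandwidth_mbps"] == 100
```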
Since one subnetwork-dependent control node is allocated to each transport network domain, the subnetwork-independent control node (which handles end-to-end session connectivity) may interact with multiple subnetwork-dependent control elements and/or Generalized OpenFlow control elements. The management element (ME) performs network resource discovery, multi-layer/multi-domain QoS-aware virtual overlay networking, and virtual network topology management. The ME is also composed of a subnetwork-independent management function, subnetwork-dependent management functions (such as SNMP for IP networks, LMP/OAM for ASON, and TMN for SONET/SDH transmission networks), and a Generalized OpenFlow management function for GOF-compatible GMPLS switch nodes. The ME also provides network QoS performance monitoring. The cognitive knowledge element (CKE) maintains the link state database (LSDB) and traffic engineering database (TEDB) of the transport network topology, and decides how to map the virtual network topology requested by the SDN user onto the physical transport network topology. The CKE also supports traffic engineering for QoS-guaranteed service provisioning and network load balancing.

C. ForCES Protocol for CE-FE

In the proposed QNOX for SDN/GOFN, the CE and the FE are interconnected by the ForCES protocol [15], as shown in


Figure 2, which provides many more features than the OpenFlow SSL (secure sockets layer) secure channel. The ForCES interface is composed of two parts: the protocol layer (PL) and the transport mapping layer (TML). The PL is in fact the ForCES protocol that defines all the semantics and the message formats, while the TML is used to connect the two ForCES PL entities on the CE and the FE, respectively. We implemented an SCTP (Stream Control Transmission Protocol)-based TML that provides three priority levels, according to RFC 5811 [16]. The high-priority TML channel is used for association setup, association setup response, association teardown, configuration, configuration response, query, query response, and flow table entry update. The medium-priority channel is used for event notification, while the low-priority channel is used for packet redirect and heartbeat. Since one CE controls multiple FEs in the domain network, the CE that is connected with the FEs using ForCES may become a bottleneck in the calculation of updated routes and the downloading of FIB table entries to each FE. The performance of CE-FE message exchanges using ForCES is analyzed in Section V.

Figure 2. Functional Blocks in QNOX for SDN/GOFN (CE: user-defined control applications, multi-layer PCE, routing protocols (OSPF, BGP), signaling protocols (RSVP-TE, CR-LDP), session layer signaling (SIP/SDP), CE management agent, and ForCES interface (PL/TML); CKE: knowledge-based decision making on service and network mapping, network load balancing, TEDB and link state DB; ME: virtual network topology manager, network resource manager, CE/FE managers, and performance monitoring; FE: packet classification, longest prefix matching (LPM) forwarding, address translation, per-class-type queuing, metering and marking, traffic shaping, link/node performance monitoring, and link/node fault detection)
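The three-level SCTP-TML channel assignment described above can be sketched as a simple mapping from ForCES PL message type to priority channel; the message-type strings are illustrative names for the messages listed in the text, not RFC 5811 wire identifiers.

```python
# Three TML priority channels, as in the SCTP-based TML of RFC 5811.
HIGH, MEDIUM, LOW = 1, 2, 3

# Message-type names paraphrase the text; they are not protocol constants.
TML_PRIORITY = {
    "AssociationSetup": HIGH,
    "AssociationSetupResponse": HIGH,
    "AssociationTeardown": HIGH,
    "Config": HIGH,
    "ConfigResponse": HIGH,
    "Query": HIGH,
    "QueryResponse": HIGH,
    "FlowTableEntryUpdate": HIGH,
    "EventNotification": MEDIUM,
    "PacketRedirect": LOW,
    "Heartbeat": LOW,
}

def select_channel(message_type: str) -> int:
    """Pick the SCTP TML priority channel for a ForCES PL message."""
    try:
        return TML_PRIORITY[message_type]
    except KeyError:
        raise ValueError(f"unknown ForCES message type: {message_type}")

assert select_channel("FlowTableEntryUpdate") == HIGH
```

Keeping association and configuration traffic on the high-priority channel ensures that control-critical messages are not delayed behind redirected data packets on the low-priority channel.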

D. Network Resource Discovery by ME

When an FE is activated and initialized, it first connects to the ME, based on the designated ME's IP address for the subnetwork domain, via its neighbor FE node. We are implementing a UPnP-UP (universal plug and play with user profile) [21-22] based protocol in which the neighbor FE node(s) forward the association setup request message from an FE to the ME/CE. When the newly initialized FE is connected to the ME through its neighbor, the ME establishes a secure connection with the FE, discovers the available network resources of the FE (e.g., the number of interfaces and their capacities), and provides additional information for the initialization (including the address of the FE's CE). The FE then makes an association with the designated CE; after obtaining the information on the network interfaces, the CE updates the forwarding information base (FIB) for all existing FEs, and downloads the updated FIB to each FE, including the newly added one. The ME also collects the information on the links of the newly added FE, and updates the network topology and TEDB.
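The bootstrap sequence above (FE contacts ME, ME discovers resources and hands back the CE address, FE then associates with the CE) can be sketched with a simple in-memory message-passing model; class names, method names, and addresses are illustrative assumptions, since the actual exchange is UPnP-UP/SOAP-based.

```python
# Minimal sketch of the FE bootstrap sequence: ME-first discovery,
# then CE association. All names and addresses are illustrative.

class ManagementElement:
    def __init__(self, ce_address: str):
        self.ce_address = ce_address
        self.topology = {}            # fe_id -> discovered interfaces

    def on_association_setup(self, fe) -> dict:
        # Secure-connection setup is elided; discover FE resources,
        # record them in the topology, and return the designated CE.
        self.topology[fe.fe_id] = fe.describe_interfaces()
        return {"ce_address": self.ce_address}

class ForwardingElement:
    def __init__(self, fe_id: str, interfaces: dict):
        self.fe_id = fe_id
        self.interfaces = interfaces  # interface name -> capacity (Mbps)

    def describe_interfaces(self) -> dict:
        return dict(self.interfaces)

    def bootstrap(self, me: ManagementElement) -> str:
        info = me.on_association_setup(self)   # step 1: connect via the ME
        return info["ce_address"]              # step 2: associate with this CE

me = ManagementElement(ce_address="192.168.1.1")
fe = ForwardingElement("FE1_1", {"eth0": 1000, "eth1": 1000})
assert fe.bootstrap(me) == "192.168.1.1"
```

After this exchange, the CE would recompute and download the FIB to every FE, including the newcomer.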


E. Calculation of Inner Routes and Outer Routes for Each FE

The interfaces of an FE are connected to either an internal link or an external link, as shown in Figure 3. When the CE calculates and installs the forwarding information base (FIB) for each FE, there are five steps: i) calculate the inner routes among the FEs within the domain network controlled by the CE, ii) calculate the outer routes that connect with an outside external router or another domain network controlled by another CE, iii) merge the FIB table for each FE, iv) compare it with its old version to find any updated FIB table entries, and v) download and install only the updated FIB table entries on each individual FE. First, the CE calculates the inner routes for each active external interface of each FE by configuring a shortest-path-first spanning tree. When a link or network interface failure occurs at any FE, the CE also updates the spanning trees, excluding the failed interface and link. If the CE receives a link state advertisement (LSA) from a CE in another domain network or from an external router, the routes to the connecting external interface should be calculated or updated for every FE individually. The overall time for fault notification from FE to CE, route calculation and FIB update at the CE, selection and downloading of FIB entries for each individual FE from the CE, and installation of the updated FIB entries at the FE, should be limited to less than 50 ms in order to guarantee the QoS provisioning of carrier-grade networks.
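Steps iii)-v) above amount to a merge-diff-push cycle per FE, which can be sketched as follows; the dictionary-based FIB encoding (prefix to next-hop interface) is a simplifying assumption.

```python
# Sketch of the FIB update cycle: merge inner and outer routes per FE,
# diff against the previous FIB, and push only the changed entries.

def merge_fib(inner: dict, outer: dict) -> dict:
    """Step iii: merge inner-route and outer-route entries for one FE."""
    fib = dict(inner)
    fib.update(outer)        # outer routes take precedence on collision
    return fib

def fib_delta(old: dict, new: dict):
    """Step iv: find only the entries that changed or disappeared."""
    updates = {p: nh for p, nh in new.items() if old.get(p) != nh}
    removals = [p for p in old if p not in new]
    return updates, removals

def push_updates(fe_fib: dict, updates: dict, removals: list) -> None:
    """Step v: install only the delta on the FE (here, an in-memory dict)."""
    for p in removals:
        fe_fib.pop(p, None)
    fe_fib.update(updates)

old = {"10.1.0.0/16": "eth0", "10.2.0.0/16": "eth1"}
new = merge_fib({"10.1.0.0/16": "eth0"}, {"10.2.0.0/16": "eth2"})
updates, removals = fib_delta(old, new)
assert updates == {"10.2.0.0/16": "eth2"} and removals == []
```

Downloading only the delta, rather than the full FIB, is what keeps the per-failure update traffic small enough to fit the 50 ms restoration budget.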

Figure 3. Sample network topology with CE and FEs (domain network A with CE1 and FE1_1-FE1_4, domain network B with CE2 and FE2_1-FE2_3, and external routers R5 and R6)

When the virtual network topology of the requested service should be mapped across multiple domain networks which are controlled by different CEs, the path computation elements (PCEs) [18-22] of the domain networks collaborate to find the optimum routes from the ingress node to the egress node using backward recursive PCE-based path computation, as depicted in Figure 4. The CE of the source domain network sends the QoS-aware data path setup request toward the destination domain network, through all possible routes. The CE of the destination domain network then calculates constraint-based shortest paths from the entry boundary nodes (BNen) to the destination egress node. It then delivers the computed accumulated cost from each BNen to the PCEs of its upstream domain networks, using PCEP (PCE protocol) [19] response messages. The PCEs of the upstream domain networks calculate the accumulated cost of the constraint-based shortest path from their entry boundary nodes to the destination egress node, through the selected boundary entry nodes of the downstream domain networks. This procedure repeats in all domain networks, and the PCE of the source domain network can select the shortest path from the multiple candidate paths delivered by the PCEP response messages through different routes.

Figure 4. Backward Recursive PCE-based Path Computation across Multiple Domain Networks

F. Multi-layer Overlay Networking

For better scalability in QoS-aware traffic engineering and fault restoration, the proposed Generalized OpenFlow Network (GOFN) provides multi-layer virtual networking, as shown in Figure 5. Basically, two overlay networks are configured as pre-planned infrastructure: (i) the WDM/ASON layer network, and (ii) the MPLS-TP layer network. MPLS-TP virtual networks are mostly used for QoS-aware traffic engineering and fast fault restoration, while WDM/ASON virtual networks are used for long-distance transit networking with SRLG-disjoint working path and backup path configurations.

Figure 5. Multi-layer Virtual Networking
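The backward recursive computation depicted in Figure 4 can be sketched as follows; the per-domain graph encoding and additive cost model are simplifying assumptions, not the PCEP message format.

```python
import heapq

# BRPC sketch: starting from the destination domain, each domain computes
# constraint-based shortest paths from its entry boundary nodes toward the
# destination, then hands the accumulated costs to the upstream domain.

def dijkstra(graph: dict, src: str) -> dict:
    """Shortest-path costs from src within one domain graph."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def brpc(domains: list, egress: str) -> dict:
    """domains is ordered source -> destination; each domain is
    {"graph": adjacency, "entry_nodes": [...]}. Returns the accumulated
    cost from each entry boundary node of the source domain to egress."""
    acc = {egress: 0.0}          # seed: the egress node itself costs nothing
    for dom in reversed(domains):
        new_acc = {}
        for bn_en in dom["entry_nodes"]:
            dist = dijkstra(dom["graph"], bn_en)
            # cheapest way to reach any already-priced downstream node
            new_acc[bn_en] = min(
                (dist[x] + c for x, c in acc.items() if x in dist),
                default=float("inf"))
        acc = new_acc
    return acc

# Two domains: A (source) and B (destination), destination egress "D".
domB = {"graph": {"b1": [("D", 1.0)], "b2": [("D", 3.0)]},
        "entry_nodes": ["b1", "b2"]}
domA = {"graph": {"a1": [("b1", 2.0), ("b2", 1.0)]},
        "entry_nodes": ["a1"]}
assert brpc([domA, domB], egress="D") == {"a1": 3.0}
```

The source-domain PCE then simply picks the entry node with the minimum accumulated cost, mirroring the candidate-path selection described above.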

The three virtual layer networks shown in Figure 5 are configured in a client-server relationship. The CE of each domain network includes the functional modules of a PCE (path computation element) and a CC (connection controller) for each layer network. The CC is used to invoke any available signaling in the layer network, or to create a new Generalized OpenFlow in the GOF-compatible switching nodes. The CKE maintains the up-to-date TEDB (traffic engineering database) of each layer network, while the ME maintains the overall network


status with the VNTM (virtual network topology manager). The TEDB is used for policy-based routing in each overlay network, where the routing policy may be the usual shortest path first, load balancing, equal-cost multi-path (ECMP), or energy saving. The virtual overlay network for the requested SDN service is configured on the MPLS-TP virtual networks that provide the appropriate QoS and fault restoration. As shown in Figure 1, the mapping between the virtual network topology of the requested QoS-aware SDN service and the provider's GOFN topology is processed at the cognitive knowledge element (CKE).

G. Virtual Network Embedding

In a QoS-aware SDN service request, a virtual network topology with a collection of virtual nodes and virtual links is defined, and this virtual network should be mapped onto the substrate network (i.e., the provider's network). The virtual nodes are defined with attributes of the requested cloud computing CPU power, memory size, storage capacity, and any specific features of the node for content delivery network services. The virtual links are specified with attributes of the required bandwidth and the QoS (delay, jitter, packet loss, packet error) of the link and path. The fault restoration performance of each link and path is specified with the parameters of fault restoration time, SRLG-aware working path and backup path installation, and protection type (e.g., 1:1, 1+1, 1:N, or M:N). The most important SDN services in the future Internet are cloud computing and content delivery services; since they must be provided for a user group in a certain location, the allowed distance (or delay) between the users and the server nodes may be defined. In the proposed QoS-aware QNOX for SDN/GOFN architecture, the CKE handles the virtual network mapping for any SDN service request. Currently, a simple mapping algorithm is implemented in the CKE.
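A simple mapping algorithm in the spirit of the one described above can be sketched as a greedy embedding: place each virtual node on a substrate node with enough residual CPU, then verify link bandwidth. This is an illustrative simplification under stated assumptions (direct substrate links only, CPU as the sole node attribute), not the CKE's actual algorithm.

```python
# Greedy virtual-network-embedding sketch: node mapping by residual CPU,
# then a bandwidth check on direct substrate links.

def embed(virtual: dict, substrate: dict):
    """virtual: {"nodes": {v: cpu}, "links": {(v1, v2): bw}}
    substrate: {"nodes": {s: cpu}, "links": {(s1, s2): bw}}
    Returns {virtual_node: substrate_node}, or None if embedding fails."""
    cpu = dict(substrate["nodes"])
    mapping = {}
    for v, need in sorted(virtual["nodes"].items(),
                          key=lambda kv: -kv[1]):     # largest demand first
        cands = [s for s, c in cpu.items()
                 if c >= need and s not in mapping.values()]
        if not cands:
            return None                               # node mapping failed
        s = max(cands, key=lambda n: cpu[n])          # most residual CPU
        mapping[v] = s
        cpu[s] -= need
    # Link check: require a direct substrate link with enough bandwidth.
    for (v1, v2), bw in virtual["links"].items():
        s1, s2 = mapping[v1], mapping[v2]
        cap = (substrate["links"].get((s1, s2))
               or substrate["links"].get((s2, s1)))
        if cap is None or cap < bw:
            return None                               # link mapping failed
    return mapping

virtual = {"nodes": {"a": 4, "b": 2}, "links": {("a", "b"): 10}}
substrate = {"nodes": {"X": 8, "Y": 4}, "links": {("X", "Y"): 100}}
assert embed(virtual, substrate) == {"a": "X", "b": "Y"}
```

A production embedder would additionally route virtual links over multi-hop substrate paths and honor the delay, restoration, and location constraints listed above.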
A study of a more sophisticated and scalable mapping algorithm for realistic cloud computing and content delivery services with practical constraints is under way.

H. End-to-End QoS Monitoring and Management

Continuous monitoring of the end-to-end network QoS and assessment of the guaranteed QoS provisioning according to the SLA/SLS are very important in the future Internet. In the proposed framework of the QNOX for SDN/GOFN, the network QoS monitoring functions are implemented at the ingress node and the egress node of the QoS-aware path shown in Figure 4. The monitored QoS parameters include delay, jitter, packet loss, and packet error. Since MPLS-TP overlay networks are used for differentiated service provisioning, MPLS OAM (operation, administration, and maintenance) performance management functions are used to monitor the domain network QoS between the entry boundary node (BNen) and the exit boundary node (BNex), and the end-to-end network QoS between the ingress node and the egress node, as shown in Figure 4. The monitored results of the domain network QoS and the end-to-end QoS are sent to the CKE and stored in the


knowledge base and TEDB. The CKE analyzes the overall QoS provisioning of the GOFN for QoS-aware SDN services and adjusts the operational parameters of traffic engineering (e.g., the buffer management policy at each OpenFlow switch, the link utilization level, and the routing policy).

IV. IMPLEMENTATION OF QOS-AWARE NETWORK OPERATING SYSTEM FOR GENERALIZED OPENFLOW NETWORKS

A. Generalized OpenFlow Forwarding Element (FE) and Control Element (CE)

The Linux IP/MPLS router has been modified to emulate the proposed GOFN forwarding element (FE) and control element (CE). The OSPF routing module is separated from the IP/MPLS router packet forwarding, and implemented in the centralized CE. One CE is configured for a GOFN domain network which contains 2 ~ 114 FEs. Basic IP packet routing and MPLS frame switching functions are included in the FE. The Routing Information Base (RIB) configured by BGP/OSPF is maintained in the CE, while the Forwarding Information Base (FIB) is calculated for each FE individually by the CE and downloaded to each FE. Figure 6 depicts the functional architecture of the CE and FE based on the Linux IP/MPLS router. The proposed architecture was tested with various test network topologies with up to 114 FEs, which are controlled and managed by one CE and one ME. Groups of 19 FEs are installed on a PC server (Intel(R) Xeon(TM) 3.20 GHz, 140 GByte HDD, 12 GByte RAM, WinXP 64-bit), where each FE individually runs on a virtual machine provided by VMware on a Windows Server OS. One of the 19 FEs in each group provides a direct link to the CE that is shared by the 19 FEs. Each FE has a separate ForCES TML connection with the CE for control message exchange. The CE and the ME are each configured to run on a separate virtual machine.

Figure 6. Functional architecture of CE and FE based on the Linux IP/MPLS open-source router

2012 IEEE/IFIP 4th Workshop on Management of the Future Internet (ManFI)

B. Management Element (ME)

The key roles of the ME are i) providing initial configuration of FEs, ii) discovering the available network resources of a newly attached FE, iii) continuous monitoring of FE status, and iv) management of the virtual network topology. For automated initial configuration and resource discovery with a newly activated FE, we are developing a UPnP-UP (universal plug and play with user profile)-based FE initialization protocol with enhanced authentication and authorization features [21-22]. Once an FE is connected to the ME, the ME can obtain its device description in XML and configure basic operational parameters for the FE via SOAP (simple object access protocol)-based commands with specific actions and arguments. The information of the CE to which the FE should connect is provided by a SOAP control message. For the management of the virtual network topology, the ME continuously monitors the operational status and utilization of the network interfaces and links among FEs; in particular, the MPLS-TP overlay network for traffic engineering is monitored in detail. The collected link and traffic engineering data are provided to the CKE's Link State DB (LSDB) and traffic engineering DB (TEDB).

C. Cognitive Knowledge Element (CKE)

The CKE maintains the logical network topology of the multi-layer overlay networks, as shown in Figure 5. The CPU processing server nodes with memory capacity, large-scale disk storage nodes, and content delivery server nodes are also included in the topology. The link state DB and traffic engineering DB (TEDB) are maintained with the up-to-date status information collected by the ME. The CKE calculates the mapping of the virtual network required by a QoS-aware SDN service onto the provider's logical network. Currently, only a simple mapping algorithm is used that considers limited aspects, such as the bandwidth of virtual links, the delay between virtual nodes, and the delay between the QoS-aware SDN user group and the server.
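A simple mapping of this kind can be sketched as a greedy placement with constraint checks; the data structures and the greedy strategy below are illustrative assumptions, not the CKE's actual algorithm:

```python
def embed_virtual_network(vnodes, vlinks, substrate):
    """Greedy sketch: place each virtual node on the first substrate node
    with enough spare CPU capacity, then accept the embedding only if every
    virtual link's bandwidth and delay constraints are satisfied.
    `substrate` holds `nodes` {name: cpu} and `paths` {(a, b): (bw, delay)}."""
    placement, used = {}, {}
    for vn, cpu_req in vnodes.items():
        host = next((s for s, cap in substrate["nodes"].items()
                     if cap - used.get(s, 0) >= cpu_req), None)
        if host is None:
            return None  # no substrate node can host this virtual node
        placement[vn] = host
        used[host] = used.get(host, 0) + cpu_req
    for (a, b), (bw_req, delay_max) in vlinks.items():
        if placement[a] == placement[b]:
            continue  # co-located virtual nodes need no substrate path
        bw, delay = substrate["paths"][(placement[a], placement[b])]
        if bw < bw_req or delay > delay_max:
            return None  # bandwidth or delay constraint violated
    return placement
```

A rejected request (returning `None`) corresponds to the CKE declining a mapping that cannot meet the requested bandwidth or delay bounds.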
A more sophisticated and optimized mapping algorithm that considers additional QoS aspects is under study.

D. Service Element (SE)

The SE is implemented as a separate entity with a GUI-based user interface and an XML/JSON-based QoS-aware open virtual network programming interface (QOVNPI) to receive SDN service requests. A service request defines the SLA/SLS with the following attributes: requirements of virtual nodes (CPU processing power, memory capacity, storage capacity, and optional specific features), users' access points, average users' access rates, and requirements of virtual links (bandwidth, QoS (delay, jitter, packet error rate, packet loss rate), fault restoration performance, and security level). From the QoS-aware SDN service request, the SE configures the virtual network and asks the CKE to map it onto the substrate network resources. The SE also instantiates service life-cycle management for the accepted QoS-aware SDN service and allocates a QoE/QoS monitoring thread. The measured end-to-end network QoS parameters are compared with the SLA/SLS parameters, and the QoS assessment results are provided to the CKE for policy decision making on the operational parameters of overall traffic engineering.
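A JSON-based QOVNPI request built from the attributes listed above might look as follows; the field names and the validation logic are assumptions for illustration, not the actual QOVNPI schema:

```python
import json

# Illustrative QOVNPI-style service request; field names are assumptions
# derived from the SLA/SLS attributes listed above, not the real schema.
REQUEST = json.loads("""
{
  "virtual_nodes": [
    {"name": "vn1", "cpu_ghz": 2.0, "memory_gb": 4, "storage_gb": 100}
  ],
  "access_points": ["ap1", "ap2"],
  "avg_access_rate_per_user_mbps": 2,
  "virtual_links": [
    {"ends": ["vn1", "ap1"], "bandwidth_mbps": 100,
     "qos": {"delay_ms": 50, "jitter_ms": 10,
             "packet_error_rate": 1e-6, "packet_loss_rate": 1e-4},
     "restoration_ms": 50, "security_level": "high"}
  ]
}
""")

def validate_request(req):
    """Minimal admission check the SE might perform before asking the
    CKE for a mapping: all mandatory attribute groups must be present."""
    required = {"virtual_nodes", "access_points", "virtual_links"}
    missing = required - req.keys()
    if missing:
        raise ValueError(f"missing SLA/SLS attributes: {sorted(missing)}")
    for vl in req["virtual_links"]:
        if vl["qos"]["delay_ms"] <= 0:
            raise ValueError("delay bound must be positive")
    return True
```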

V. PERFORMANCE ANALYSIS

A. Network Resource Discovery

At initial activation, each FE registers with the ME using the UPnP-UP protocol; the ME plays the role of the UPnP provider function. In the test network topology shown in Fig. 8, the registration time is 5 ~ 6 ms, and service discovery of an individual FE takes 5 ~ 6 ms. Retrieval of the detailed information of network resources in the FE (e.g., number of interfaces in each FE, link capacity, protection capability, switching functions) takes 150 ~ 550 ms.

TABLE I. UPNP-UP BASED REGISTRATION AND RESOURCE DISCOVERY

Function Name | Processing Time
FE Registration to ME | 5 ~ 6 ms
Service Discovery | 5 ~ 6 ms
Get detailed information of the discovered resources | 150 ~ 550 ms

B. Calculation of Inner and Outer Routes for Each FE by Centralized OSPF

In order to update the FIB table entries of each FE, the CE first calculates a spanning tree for each active external network interface that is connected to a neighbor router. When an outer route is updated based on OSPF Link State Advertisement (LSA) message exchanges with neighbor routers, the connecting external interface becomes the root of the spanning tree, and the FIB table of each FE is updated accordingly. The time taken for the calculation of the spanning trees and the update of the FIB tables of all FEs depends on the number of FEs and the total number of links in the domain network. Table II shows the route calculation time according to the number of FEs and the number of links. After calculating the FIB table for each FE, the CE compares the updated entries with the previous entries, selects only the updated FIB table entries, and downloads them to the individual FE.

TABLE II. ROUTE CALCULATION TIME FOR FES

Number of FEs | Number of Links | Route Calculation Time [ms] | Downloading time for all updated FIB table entries [ms]
19 | 42 | 18.14 | 3.97
38 | 87 | 36.25 | 8.01
57 | 132 | 54.38 | 11.98
76 | 177 | 72.24 | 15.94
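The incremental download step, comparing the newly computed FIB against the previous one and shipping only the changes to each FE, can be sketched as a dictionary diff; the entry format here is an assumption for illustration:

```python
def fib_delta(old_fib, new_fib):
    """Return only the FIB changes the CE must push to an FE:
    entries that are new or modified, plus explicit withdrawals.
    FIBs are dicts mapping destination prefix -> next hop."""
    updates = {dst: hop for dst, hop in new_fib.items()
               if old_fib.get(dst) != hop}
    withdrawals = [dst for dst in old_fib if dst not in new_fib]
    return updates, withdrawals
```

Downloading only the delta is what keeps the per-update FIB download time in Table II small relative to the full route calculation time.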

C. Scalability of CE and ME for a Large Number of FEs

In a domain network, a limited number of MEs should initialize and manage all FEs, and a limited number of CEs should control all FEs. The resource discovery for a newly activated FE takes 150 ~ 550 ms, as shown in Table I. The resource discovery operation in the ME, however, is usually performed only when a network node is newly activated.



The CE must recalculate the FIB tables for all FEs whenever a link or network interface status changes, so the CE should be able to process the route calculations and FIB table downloading efficiently, especially when the number of FEs and links is large. From Table II, we can see that the route calculation and FIB download times increase with the number of FEs and links. In the case of 76 FEs and 177 links, the route calculation time is 72.24 ms and the total FIB downloading time is about 16 ms. Therefore, if a CE controls fewer than 100 FEs with 4 ~ 10 links each, the route calculation and FIB table entry downloading will take less than 100 ms.
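The roughly linear growth in Table II can be checked with a quick least-squares fit through the origin; the input numbers come from the table, and the 100-FE figure is only an extrapolation, not a measured result:

```python
def fit_slope(xs, ys):
    """Least-squares slope through the origin: sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

fes = [19, 38, 57, 76]
calc_ms = [18.14, 36.25, 54.38, 72.24]   # Table II, route calculation time

slope = fit_slope(fes, calc_ms)           # ~0.95 ms of calculation per FE
projected_100 = slope * 100               # ~95 ms, consistent with < 100 ms
```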

D. Fault Restoration at Link/Interface Failure

Fast fault restoration at a link or network interface failure is essential in QoS-guaranteed service provisioning. In the distributed virtual router with CE-FE separation, the event notification to the CE, the recalculation of routes avoiding the failed link, the update of FIB table entries for each FE, and the downloading and installation at each FE should be completed within a limited time. For example, a QoS-guaranteed real-time multimedia service should limit the end-to-end delay to within 150 ms, and the service disruption and jitter to less than 50 ms.

In the fault restoration experiments on the test network with 114 FEs in a two-dimensional mesh topology, the times taken for each step of the fault restoration process are shown in Table III. From Table III, we can see that the time for fault notification to the CE depends on the number of hops between the FE and the CE: in the case of 1 hop, the fault notification takes around 1.85 ms, while at a 2-hop distance it takes around 2 ms. The route recalculation time depends on the number of FEs and links. In the case of 76 FEs and 177 links, the route recalculation time is 50.27 ms, and the total fault restoration time is 51.29 ms. Therefore, if the number of FEs is beyond 50 and the number of links is more than 200, the total fault restoration time can exceed 50 ms, which is the required fault restoration performance of a carrier grade provider transport network.

TABLE III. FAULT RESTORATION TIME

Number of FEs | Fault notification to CE in 1 hop [ms] | Fault notification to CE in 2 hops [ms] | Route recalculation time [ms] | Total fault restoration time [ms]
19 | 1.83 | 2.04 | 34.49 | 34.95
38 | 1.84 | 2.01 | 39.73 | 40.37
57 | 1.86 | 2.01 | 44.84 | 45.55
76 | 1.84 | 2.02 | 50.27 | 51.29

VI. CONCLUSION

In this paper, we proposed a QoS-aware Network Operating System (QNOX) for Software Defined Networking with Generalized OpenFlows. The functional modules and operations for QoS-aware SDN service provisioning with the major components (e.g., service element (SE), control element (CE), management element (ME), and cognitive knowledge element (CKE)) were explained in detail. The current status of the prototype implementation and its performance were explained. The scalability of the QNOX was also analyzed through a series of systematic experiments. The QNOX provides network resource discovery in less than 1 sec, route calculation for a network of 100 FEs with 4 ~ 10 links in less than 100 ms, and fault notification and fault restoration in less than 60 ms. From the experiments, the proposed QNOX was verified to be applicable to a carrier grade large-scale provider transport network.

REFERENCES




[1] Natasha Gude et al., "NOX: Towards an Operating System for Networks," editorial note submitted to ACM SIGCOMM CCR.
[2] Arsalan Tavakoli et al., "Applying NOX to the Datacenter," in Proc. of ACM SIGCOMM HotNets 2009.
[3] Dimitri Staessens et al., "Software Defined Networking: Meeting Carrier Grade Requirements," in Proc. of IEEE Workshop on Local & Metropolitan Area Networks (LANMAN), 2011.
[4] Javier Rubio-Loyola et al., "Scalable Service Deployment on Software-Defined Networks," IEEE Communications Magazine, December 2011, pp. 84-93.
[5] R. Ramjee et al., "Separating Control Software from Routers," in Proc. of IEEE International Conference on Communication System Software and Middleware (COMSWARE), 2006.
[6] Ram Gopal, "Separation of control and forwarding plane inside a network element," in Proc. of IEEE, 2002, pp. 161-166.
[7] L. Yang et al., "Forwarding and Control Element Separation (ForCES) Framework," IETF RFC 3746, April 2004.
[8] J. Hadi Salim et al., "SCTP-based Transport Mapping Layer (TML) for the Forwarding and Control Element Separation (ForCES) Protocol," IETF RFC 5811, March 2010.
[9] Weiming Wang et al., "Design and Implementation of an Open Programmable Router Compliant to IETF ForCES Specifications," in Proc. of IEEE International Conference on Networking (ICN'07), 2007.
[10] Nick McKeown, Guru Parulkar, Tom Anderson, Larry Peterson, Hari Balakrishnan, Jennifer Rexford, "OpenFlow: Enabling Innovation in Campus Networks," http://www.OpenFlowswitch.org/documents/OpenFlow-wp-latest.pdf.
[11] Michael Jarschel et al., "Modeling and Performance Evaluation of an OpenFlow Architecture," in Proc. of ITC 2011.
[12] Jad Naous et al., "Implementing an OpenFlow Switch on the NetFPGA platform," in Proc. of ANCS'08.
[13] James Kempf et al., "OpenFlow MPLS and the Open Source Label Switched Router," in Proc. of 23rd International Teletraffic Congress (ITC), 2011, pp. 8-14.
[14] Saurav Das, Ali Reza Sharafat, Guru Parulkar, Nick McKeown, "MPLS with a Simple Open Control Plane," in Proc. of OSA/OFC/NFOEC 2011.
[15] Siamak Azodolmolky et al., "Integrated OpenFlow-GMPLS Control Plane: An Overlay Model for Software Defined Packet over Optical Networks," in Proc. of ECOC Technical Digest, Optical Society of America, 2011.
[16] Andrew R. Curtis et al., "DevoFlow: Scaling Flow Management for High-Performance Networks," in Proc. of SIGCOMM'11, 2011, pp. 254-265.
[17] Tom Lehman et al., "Multilayer networks: an architecture framework," IEEE Communications Magazine, May 2011, pp. 122-130.
[18] IETF RFC 4655, "A Path Computation Element (PCE)-Based Architecture," Aug. 2006.
[19] IETF RFC 5440, "Path Computation Element (PCE) Communication Protocol (PCEP)," Mar. 2009.
[20] IETF RFC 5441, "A Backward-Recursive PCE-Based Computation (BRPC) Procedure to Compute Shortest Constrained Inter-Domain Traffic Engineering Label Switched Paths," Apr. 2009.
[21] UPnP Forum, http://www.upnp.org.
[22] Open Source UPnP SDK, http://upnp.sourceforge.net.
