
DYNAMIC RESOURCE MANAGEMENT APPROACH IN QoS-AWARE IP NETWORKS USING FLATNESS BASED TRAJECTORY TRACKING CONTROL

Lynda Zitoune, Amel Hamdi, Hugues Mounier, Véronique Vèque
Institut d'Electronique Fondamentale, Bât 220, Université Paris Sud XI, 91405 Orsay cedex, France
[email protected]

ABSTRACT
In this paper, we present a reactive control policy which adapts the source bit rate to the reserved resources in order to ensure performance guarantees for multimedia applications. The proposed method, called flatness based trajectory tracking, deals with drastic traffic flow rate changes and limits the traffic in order to respect the time constraints. We show the contribution of reactive control and dynamic regulation using purely control-theoretic approaches, which stabilize the network and avoid undesirable oscillations for the transmission of such critical flows. We present a performance analysis of this rate control mechanism and illustrate its feasibility through its implementation in the MPLS-TE control plane of the SSFNet/Glass simulator.

Keywords: QoS, traffic engineering, rate regulation, trajectory tracking approach.

1 INTRODUCTION

The growth of multimedia applications over wide area networks has increased research interest in QoS (Quality of Service). Such applications need to find appropriate and available network resources and reserve them in order to support and guarantee their specific services. For instance, MPLS-TE (MultiProtocol Label Switching with Traffic Engineering) [1] aims to balance load over multiple paths in the network to minimize its congestion level. It relies on QoS routing algorithms to meet QoS constraints while optimizing some global, network-wide objective such as utilization [1], [2]. However, MPLS-TE provides no per-flow QoS guarantee [2], [3]. We claim that QoS routing is not enough in itself to guarantee accurate QoS requirements. QoS routing can provide a coarse level of performance when the traffic load is stationary, but in case of high throughput variation, data flows can suffer longer queuing delays or, at worst, losses. So, we must resolve the congestion and service degradation problems where they occur, i.e., at the router queues. We use packet flow control and queue management operations that control buffer occupancy, in the same spirit as the conditioning functions of the DiffServ (Differentiated Services) [4] token bucket approach [5]. However, most published works conclude that token bucket shaping can experience unpredictable delays, packet reordering, and even losses when congestion occurs [6], [7], [8]. Therefore, to improve the services offered to applications, we propose to add a flow control function that adjusts the traffic arrival according to the traffic specification so as to achieve the QoS requirements.

In this paper, we present a mathematical framework for rate regulation (adaptation) of multimedia applications, with the aim of matching QoS requirements and maximizing network resource utilization. We use a nonlinear control-theoretic approach, named flatness based trajectory tracking, and develop a reactive and adaptive control method which stabilizes the general network behavior, improves network resource utilization and ensures transfer delay requirements.

The rest of the paper is organized as follows. In Section 2, we give the main motivations for developing a reactive packet stream control. Section 3 explains the functionality of our trajectory tracking control. Section 4 is devoted to the flatness methodology and to the implementation of the approach in our target environment. In Section 5, we present the simulation results and their analysis, which illustrate feedback control by the trajectory tracking approach. Finally, Section 6 summarizes our rate analysis findings and concludes the paper.


2 MOTIVATIONS AND RELATED WORKS

2.1 Traffic Engineering Tools

Many architectures and mechanisms have been proposed by the IETF (Internet Engineering Task Force) for enabling QoS, such as IntServ (Integrated Services) [9], DiffServ (Differentiated Services) [4], [10] and MPLS-TE (MultiProtocol Label Switching with Traffic Engineering) [1], [11]. Two QoS issues are mainly addressed using Traffic Engineering (TE) functionalities: resource allocation and performance optimization. To achieve Traffic Engineering goals, the utilization of the network resources is optimized periodically by the Traffic Engineering process. As defined by the TEWG (Traffic Engineering Working Group) of the IETF, the Traffic Engineering process consists of measuring, characterizing, modelling and controlling the network resources. It encompasses the reliable and expeditious movement of traffic through the network, the efficient utilization of network resources, and the planning of network capacity [1], [11]. In other terms, the main objective of Traffic Engineering is an efficient mapping of traffic demands onto the network topology that maximizes resource utilization while meeting QoS constraints such as delay, jitter, packet loss rate and throughput.

Using QoS routing mechanisms such as multipath routing, MPLS-TE avoids link saturation and achieves some link optimization. The general scheme of a multipath routing algorithm consists of two steps: computation of multiple candidate paths and traffic splitting among these paths, as in Constraint-Based Routing (CBR) (see the surveys in [2], [3]). Each multipath routing mechanism in the literature is declared very efficient by its authors, but generally under restricted conditions [3]. A new routing paradigm that emphasizes searching for acceptable paths satisfying various QoS requirements is needed for integrated communication networks. The QoS routing of MPLS-TE is a coarse-grained solution that offers many advantages to service providers. However, it provides no per-flow QoS guarantee. So, we must resolve the congestion and service degradation problems where they occur, i.e., at the router queues.

The network optimization process is performed at three temporal resolution levels [11], [12]. At the long-term level, as the network evolves with traffic demand growth, capacity management and planning are required to meet the traffic demands. At the intermediate time resolution, QoS routing mechanisms are used as an important tool for resource control, as mentioned earlier. At the short time resolution level, traffic engineering methods such as queue management, conditioning and scheduling at switches and routers deal with the problems of congestion and service degradation.

The token bucket conditioning approach checks the packet flow compliance with respect to the negotiated contract, by smoothing the exceeding flow (policing and shaping) [5]. The token bucket is a source descriptor in terms of burst and mean rate. It gives a simplified model of the sources, though it is not faithful to actual behaviour. The most advanced works concern the definition and the adaptation of the token bucket parameters [5], [6], [7] in order to shape input traffic in accordance with negotiated traffic profiles. It has been shown that token bucket shaping can experience unpredictable delays, packet reordering and even losses when congestion occurs (as demonstrated in [5], [6], [8]). With token bucket control, no deterministic service can be provided in the network. In conclusion, we believe that the token bucket is not a service guarantee mechanism, because of two main drawbacks: 1) it is independent of the router buffer state, since token bucket parameters are specified based on a priori source models; 2) it is an open loop control which does not consider the input rate variations, i.e. the source bit rates during transmission.

On the other side, a number of researchers have developed solutions to dynamically control video bit rates depending on the available network bandwidth. Most of these solutions adopt the TCP protocol for transporting the video stream. They use its feedback loop control information to estimate the network state, and subsequently determine the bit rate to be used for converting and transmitting the video stream [13]. So, to provide end-to-end QoS support over the network, we propose to add packet flow control to the TE short-term process. Unlike the TCP feedback control used to adapt video streams, this study focuses on developing a dynamic bit-rate adaptation approach that enforces the network resource planning performed at the long-term resolution level. We use flatness based control theory to develop this new monitoring method.

2.2 Why Feedback Control
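For readers less familiar with the conditioning mechanism discussed above, the following is a minimal sketch of the classical token bucket policer (mean rate r, bucket depth b). It illustrates the standard algorithm only, not the adaptive variants of [5], [6], [7]; the rate, depth and packet size below are illustrative values (the 1032-byte packet size is borrowed from the simulation setup of Section 5).

```python
class TokenBucket:
    """Classical token bucket policer: refill at `rate` tokens/s, capacity `depth`.

    One token corresponds to one byte; a packet conforms if enough tokens
    have accumulated, otherwise it is out of profile (to be dropped or
    shaped, depending on policy).
    """

    def __init__(self, rate, depth):
        self.rate = rate          # mean rate in bytes/s
        self.depth = depth        # burst size in bytes
        self.tokens = depth       # bucket starts full
        self.last = 0.0           # time of last update (s)

    def conforms(self, t, pkt_size):
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + self.rate * (t - self.last))
        self.last = t
        if pkt_size <= self.tokens:
            self.tokens -= pkt_size
            return True           # in-profile packet
        return False              # out-of-profile packet


if __name__ == "__main__":
    tb = TokenBucket(rate=1_875_000, depth=10 * 1032)   # ~15 Mbit/s mean, 10-packet burst
    for i in range(40):
        t = i * 0.0002                                   # a packet every 0.2 ms (~41 Mbit/s offered)
        print(i, tb.conforms(t, 1032))
```

After the initial burst allowance is consumed, packets start being marked out of profile even though the long-term contract may still be respected, which is exactly the open loop limitation discussed above.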

Although steady-state characteristics can be well understood using queuing theory (e.g., as is done for capacity planning), queuing models say little about transient behaviour. Here, we use flatness based control theory to address the dynamics of resource management, especially changes in network workloads and configuration.


In other words, we are interested in designing a new network resource monitoring approach in order to ensure the performance characteristics of critical applications in MPLS-TE networks, especially transfer delay, bandwidth and queue lengths. We consider network metrics such as queue length as measured outputs, and the traffic requirements or profiles as inputs.


The control is performed by adjusting the network inputs, i.e. the applications' input rates, which affect the buffer occupancy with respect to the buffer planning. The need for regulatory control arises first to track the reserved capacity, and second to enforce the application service guarantees.

Classically, most papers consider the problem from the network dimensioning point of view: given an input stream and a scheduling policy, what is the worst-case buffer requirement and what is the nature of the output stream? In our case, however, the output stream and the buffer size are given and described by the trajectory. So, we shape the input stream to meet the buffer constraints and to maintain QoS. We use a nonlinear control-theoretic approach to solve this inverse problem, named flatness based trajectory tracking control [14], [15], [16].
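To make the inversion idea concrete, consider a single queue whose backlog must follow a prescribed trajectory q_ref(t) while being drained at a rate r(q(t)). The relation below is a simplified single-flow preview of the open loop control derived in Section 4, not an additional result:

% If the queue balance is \dot{q}(t) = u(t) - r(q(t)) and we impose q(t) = q_{ref}(t),
% the required input rate follows by differentiating the reference:
\dot{q}_{ref}(t) = u(t) - r(q_{ref}(t))
\quad\Longrightarrow\quad
u(t) = \dot{q}_{ref}(t) + r(q_{ref}(t)).

The buffer trajectory thus plays the role of a flat output: the admissible input rate is recovered from the reference and its first derivative, which is exactly what the open loop control of Section 4.1 does for the multi-flow case.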

Since the measured outputs are used to determine the control inputs, and the inputs then affect the outputs, the mechanism is called feedback or closed loop control. Feedback control based on the flatness notion is a powerful tool 1) to ensure that the measured output (buffer utilization) tracks very closely the reference input (buffer planning), even in the presence of disturbances, and 2) to deal with network stability and other aspects of control performance, especially when changes in network workloads and configuration occur.

2.3 Our Work Objective

Depending on network variations, we use a feedback mechanism to inform the sources when they exceed their profile and to regulate their input rates in order to match the reserved network resources and to meet their QoS requirements. As a result, dynamic adaptation is provided between clients and their service provider. The purpose of this work is to present this new Traffic Engineering approach, which aims to optimize network utilization and performance by intelligently handling the buffer reservations at the routers. We take benefit of traffic descriptors to model communication behaviours and QoS requirements by means of trajectories. Trajectories would be a new way of mapping traffic demands over networks. The trajectory establishment, i.e. the translation of the network resource planning into trajectories, is out of the scope of this paper. Here, we are mainly interested in source bit rate regulation to meet QoS requirements and bound the packet delay.

3 RATE REGULATION SCHEME

Our method, called flatness based trajectory tracking control, is a network-driven intra-domain and inter-domain layer 3 bandwidth provisioning approach. We aim to prove its efficiency when it is applied at hotspot nodes such as the ingress nodes of large scale networks like the Internet, and also at the nodes that aggregate the flows of several LSPs (Label Switched Paths) in an MPLS-TE domain (see Fig. 1). The network resources considered here are buffers and bandwidth. Our controller monitors the router queue state according to the traffic requirements and regulates the incoming bit rate in a smooth manner. Indeed, the delivered service becomes more reliable and predictable with regard to network performance.

In the following, we first state the problem definition, then briefly recall the flatness notion and expose the trajectory tracking control method with reference to the target environment of Fig. 1. The packet stream is processed and controlled at the node denoted Rc, and the router resources considered here are the buffer size and the bandwidth.

3.1 Trajectory Tracking Control Methodology

QoS control requires an understanding of the quantitative parameters at the application and network layers. For example, from a traffic descriptor which describes the requested service, we deduce what the node buffer variation of the corresponding LSP will be, and we define the buffer trajectory, denoted qref (see the example in Section 5.1). Recall that we assume an MPLS-TE architecture. For each incoming stream an LSP is established, and a buffer queue is created to support the corresponding packets. The critical router Rc of Fig. 1 collects the packets generated by n upstream routers in its buffer q(t). The incoming rates of these packets are denoted ui(t). The aggregated packets are served with some service bit rate r(q(t)) towards the next hop along their LSP. If Rc is not able to handle all the incoming packets, the packets are either buffered to wait for transmission service or rejected in case of buffer saturation. The lack of feedback between adjacent routers may cause excessive data loss and bad transmission service (delay violation) for these critical flows.


Figure 1: Target environment of the developed work

Our objective is to control the input streams' bit rates ui(t) in order to meet the buffers reserved in advance at the router Rc, which we have modeled by a trajectory denoted qref. Thus, the feedback control adjusts the ui(t) so that the packets sent by the clients are accepted at the router queue q. In other terms, the controller ensures that q(t) tracks qref(t), in order to maximize the utilization of the reserved resources, especially during transitions between lack and availability of buffers, to match QoS requirements like packet delay, and to avoid bit rate oscillations and excessive loss. QoS is of particular concern for the continuous transmission of high-bandwidth video and multimedia traffic demands. We have used a fluid flow model to represent such bulk data transfer.

3.2 Flatness Notion: Brief Recall

Let us briefly recall the flatness notion for systems with a state x and controls u [14], [15].

Definition 1. The system

\dot{x} = f(x, u), \quad x \in \mathbb{R}^n, \; u \in \mathbb{R}^m

is differentially flat if there exists a set of variables, called a flat output,

y = h(x, u, \ldots, u^{(r)}), \quad y \in \mathbb{R}^m, \; r \in \mathbb{N}

such that

x = A(y, \dot{y}, \ldots, y^{(\rho_x)})
u = B(y, \dot{y}, \ldots, y^{(\rho_u)})

with \rho_x, \rho_u integers, and such that the system equations

\frac{d}{dt} A(y, \dot{y}, \ldots, y^{(\rho)}) = f\big(A(y, \dot{y}, \ldots, y^{(\rho)}), B(y, \dot{y}, \ldots, y^{(\rho+1)})\big)

are identically satisfied.

The preceding notion will be used to obtain an open loop control, that is, control laws which ensure the tracking of the reference flat outputs when the model is assumed to be perfect and the initial state conditions are assumed to be exactly known. Since this is never the case in practice, one needs a feedback scheme that ensures asymptotic convergence to zero of the tracking errors. Our framework can thus be decomposed into two steps:

1. Design of the reference trajectory of the flat outputs and off-line computation of the open loop controls.
2. Online computation of the complementary closed loop controls in order to stabilize the system around the reference trajectory.

This two-step design is better than a classical stabilization scheme. The first step obtains a first order solution to the tracking problem, while following the model instead of forcing it (as in a usual pure stabilization scheme). The second step is a refinement, where the error between the actual values and the tracked references is much smaller than in the pure stabilization case (see [14], [15], [16]).

4. Trajectory Tracking Control Implementation

In the fluid flow paradigm, the physical evidence is that the rate of packet accumulation in the buffer is the difference between the packet inflow rate and the packet outflow rate. So, for the model depicted in Fig. 1, we obtain a differential equation describing the queue length variation of router Rc:

\dot{q}(t) = \sum_{i=1}^{n} u_i(t - h_i) - r(q(t))    (1)

where hi is the delay between Rc and the previous hop. The positivity of the buffer queue length as well as its maximum capacity are taken into account by describing the outflow rate r(q(t)) in terms of the contents of the buffer q(t) (see [6]). We take

r(q(t)) = \mu \, \frac{q(t)}{a + q(t)}

which is (as demonstrated in [17]) a positive, bounded and monotonically increasing function of the load q. The parameter \mu may be interpreted as the maximal processing capacity of the router.
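The following is a small numerical sketch of the fluid flow model of Eq. (1) with the saturating service rate r(q) = µq/(a + q). The ON-OFF input profile, µ, a and the delays are illustrative values chosen here, not parameters from the paper.

```python
# Fluid-flow queue of Eq. (1): dq/dt = sum_i u_i(t - h_i) - mu * q / (a + q)
MU, A = 3000.0, 1.0          # service capacity (pkts/s) and M/M/1-like constant a
H = [0.01, 0.02]             # propagation delays h_i (s) of two upstream hops
DT, T_END = 0.001, 2.0       # forward Euler step and horizon (s)

def u(i, t):
    """Illustrative inflow of hop i: an ON-OFF pattern (pkts/s)."""
    return (1500.0 + 300.0 * i) if int(t * 2) % 2 == 0 else 300.0

def r(q):
    """Saturating outflow rate: positive, bounded by MU, increasing in q."""
    return MU * q / (A + q)

q = 0.0
for k in range(int(T_END / DT)):
    t = k * DT
    inflow = sum(u(i, t - h) for i, h in enumerate(H) if t - h >= 0)
    q = max(0.0, q + DT * (inflow - r(q)))       # Euler update, queue kept non-negative
    if k % 500 == 0:
        print(f"t={t:4.2f}s  inflow={inflow:6.0f} pkt/s  backlog={q:7.1f} pkt")
```

During the ON phases the aggregate inflow exceeds the service capacity and the backlog builds up; during the OFF phases the saturating service rate drains it, which is the transient behaviour the controller of Section 4.2 is designed to shape.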


This relation is obtained by assuming a linear relation between the residence time (queuing delay) and the buffer queue length. In the case of an M/M/1 queue, a = 1 (see [17] for more details).

To determine the input controls ui(t) of Eq. (1), we proceed as follows. We consider that the router is composed of n virtual queues (q1(t), ..., qn(t)) which accumulate the packets coming from the n upstream hops, respectively. These stored packets are released into the real queue q (see Fig. 2). The router output service rate may be expressed as

r(q(t)) = \alpha_i \mu \, \frac{q_i(t)}{1 + \sum_{i=1}^{n} q_i(t)}

where the \alpha_i are weights for the service scheduling of the n flows, with \sum_{i=1}^{n} \alpha_i = 1. If the aggregated flows request the same QoS, we choose \alpha_i = 1/n to ensure fairness between their competing packets.

Figure 2: Virtual model at router side Rc

The virtual model corresponding to the physical model of Fig. 1 is treated as a composition of n differential equations describing the virtual queue variations (q1(t), ..., qn(t)), summarized as:

\dot{q}_i(t) = u_i(t - h_i) - \alpha_i \mu \, \frac{q_i(t)}{1 + \sum_{i=1}^{n} q_i(t)}    (2)

As stated earlier (Section 3.2), our framework is decomposed into two steps: 1) off-line computation of the open loop controls; 2) online computation of the complementary closed loop controls to stabilize the system around the reference trajectories.

4.1 Flatness based control: open loop control

The model described by Eq. (2) is flat with qi(t) as a flat output. In other words, we get a complete parameterization of the system in terms of qi(t) and a finite number of its derivatives. Thus, ui(t) is a nonlinear expression of qi(t) and its derivatives, as explicitly given in Eq. (3):

u_i(t) = \dot{q}_i(t + h_i) + \alpha_i \mu \, \frac{q_i(t + h_i)}{1 + \sum_{i=1}^{n} q_i(t + h_i)}    (3)

Thus, for given reference trajectories qiref(t) of the reserved buffers, the output bit rates of the upstream nodes are defined by Eq. (4):

u_i(t) = \dot{q}_{iref}(t + h_i) + \alpha_i \mu \, \frac{q_{iref}(t + h_i)}{1 + \sum_{i=1}^{n} q_{iref}(t + h_i)}    (4)

which ensures the open loop tracking of qiref(t).

4.2 Flatness based control: closed loop control

We now illustrate how to compute the closed loop control which ensures the tracking of the reference trajectory qiref(t) by the router buffer size qi(t) when the system becomes unstable. This is done by computing the tracking error ei(t) = qi(t) - qiref(t). The feedback control law is computed so as to impose stable closed loop error dynamics \dot{e}_i(t) = -K_i e_i(t), so that

\dot{q}_i(t) - \dot{q}_{iref}(t) = -K_i (q_i(t) - q_{iref}(t))    (5)

Replacing \dot{q}_i(t) from Eq. (2) in Eq. (5), we have

u_i(t - h_i) - \alpha_i \mu \, \frac{q_i(t)}{1 + \sum_{i=1}^{n} q_i(t)} - \dot{q}_{iref}(t) = -K_i (q_i(t) - q_{iref}(t))    (6)

Solving Eq. (6) for the control and shifting the time argument by hi yields

u_i(t) = -K_i e_i(t + h_i) + \alpha_i \mu \, \frac{q_i(t + h_i)}{1 + \sum_{i=1}^{n} q_i(t + h_i)} + \dot{q}_{iref}(t + h_i)    (7)

So, the closed loop control given by Eq. (7) ensures the tracking of qiref(t) when instabilities occur. This dynamic control approach has been extended to consider end-to-end QoS support over MPLS-TE networks (see [18] for more details).
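As an illustration of Eqs. (2) and (7), the sketch below simulates one virtual queue tracking a smooth step reference. The gain K, µ, the delay and the reference curve are illustrative values, and the delay is handled by simply evaluating the reference ahead of time rather than by the Euler predictor used in the actual implementation (Section 5).

```python
import math

MU, ALPHA, K, H = 3000.0, 1.0, 5.0, 0.02   # capacity, weight, feedback gain, delay (s)
DT, T_END = 0.001, 6.0

def q_ref(t):
    """Illustrative reference backlog (packets): a tanh-shaped step, as in Sec. 5."""
    return 300.0 + 150.0 * math.tanh(0.8 * (t - 3.0))

def dq_ref(t):
    """Time derivative of q_ref."""
    return 150.0 * 0.8 / math.cosh(0.8 * (t - 3.0)) ** 2

def service(q):
    # single-flow version of the scheduled service rate appearing in Eq. (2)
    return ALPHA * MU * q / (1.0 + q)

q = 250.0                                   # start off the reference on purpose
for k in range(int(T_END / DT)):
    t = k * DT
    e = q - q_ref(t + H)                    # tracking error, anticipated by the delay
    # closed loop control of Eq. (7): feedback on the error + feedforward of the reference
    u = -K * e + service(q) + dq_ref(t + H)
    q = max(0.0, q + DT * (u - service(q))) # plant update following Eq. (2)
    if k % 1000 == 0:
        print(f"t={t:4.1f}s  q={q:7.1f}  q_ref={q_ref(t):7.1f}")
```

Because the nonlinear service term is fed forward exactly, the closed loop error obeys the linear dynamics of Eq. (5) and decays at the rate set by K, which is the practical benefit of the flatness-based parameterization.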


5. SIMULATION AND RESULTS

5.1 Simulation Scenario

Here, we present our trajectory tracking controller as implemented in SSFNet (Scalable Simulation Framework) / Glass (GMPLS Lightwave Agile Switching Simulator). SSFNet [19] is a collection of Java components for modeling and simulation of Internet protocols and networks at and above the IP packet level. Link and physical layer modeling can be provided in separate components, like Glass [20], a simulation engine that allows the modeling and performance evaluation of routing, restoration, and signaling protocols for optical networks. We have made two major modifications. First, our controller is implemented as a new AQM (Active Queue Management) method at the interface level. The monitoring is done at intervals using timers. For each interval, the controller compares the queue occupancy, qsize, to the number of reserved buffers, qref. Based on the difference, it computes the new input bit rate (using the discrete version of Eq. (7)) that the sender must use to release its stream. We have used a predictor (standard Euler prediction) to estimate the network variation and take the transfer delays hi into account. Second, for signaling, we have added a new notification message to the CR-LDP (Constraint-based Label Distribution Protocol) implementation of the Glass simulator, to notify the LERs (Label Edge Routers) of the new input bit rate. The trajectory tracking controller is tested on the scenario depicted in Fig. 3.
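A sketch of what such a periodic AQM-style controller could look like is given below. It follows the description above (discrete version of Eq. (7) plus an Euler prediction of the queue over the transfer delay), but all class and method names, the sampling period and the way the rate is handed back to the LER are assumptions made for illustration; this is not the actual SSFNet/Glass Java code.

```python
class TrajectoryTrackingAQM:
    """Periodic controller: compares qsize to qref and returns the new sender rate.

    Discrete counterpart of Eq. (7) with a one-step Euler prediction of the
    queue occupancy over the transfer delay h (illustrative sketch only).
    """

    def __init__(self, mu, alpha, gain, h, period):
        self.mu, self.alpha = mu, alpha      # service capacity and scheduling weight
        self.gain = gain                     # feedback gain K_i
        self.h = h                           # estimated transfer delay h_i (s)
        self.period = period                 # monitoring interval (s)
        self.prev_qsize = 0.0                # memory for the Euler slope

    def service_rate(self, q, q_total):
        return self.alpha * self.mu * q / (1.0 + q_total)

    def on_timer(self, t, qsize, q_total, q_ref, dq_ref):
        """Called every `period`; returns the bit rate to notify to the LER."""
        # Euler prediction of the queue occupancy h seconds ahead.
        slope = (qsize - self.prev_qsize) / self.period
        q_pred = max(0.0, qsize + slope * self.h)
        self.prev_qsize = qsize

        # Discrete version of Eq. (7): feedback on the predicted error
        # plus feedforward of the reference trajectory.
        error = q_pred - q_ref(t + self.h)
        rate = (-self.gain * error
                + self.service_rate(q_pred, q_total)
                + dq_ref(t + self.h))
        return max(0.0, rate)                # a rate can never be negative
```

In the simulator, the returned value would be carried by the added CR-LDP notification message back to the LER, which then releases its stream at the new rate.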

Figure 3: Simulated network topology

Our network simulation implements an MPLS-TE architecture that configures the ingress routers (Label Edge Routers, LER), the core routers (Label Switch Routers, LSR), and the four corresponding LSPs that connect clients to servers (Si, Ci). The link connecting LSRs 220 and 221 is the critical one, because it must support the four LSPs. So, we have implemented four trajectory tracking controllers: two at LSR 221 to handle LSP1 and LSP3, connecting (S1, C1) and (S2, C2) respectively, and two at LSR 220 to control LSP4 and LSP6, connecting (S3, C3) and (S4, C4) respectively. These controllers regulate the output rates of LERs 211, 213 and 210, 212, respectively. For these controllers, we have defined four reference trajectories, one per LSP, i.e. qiref, i = 1, ..., 4, as follows:

q_{1,3ref}(t) = a_{1,3} + b_{1,3} \sum_{j} \tanh\big(c_{1,3}(t - (2j+1)T)\big)

and

q_{4,6ref}(t) = a_{4,6} - b_{4,6} \sum_{j} \tanh\big(-c_{4,6}(t - (2j+1)T)\big) + b_{4,6} \tanh(-c_{4,6} t)

with j, T \in \mathbb{N}. The parameters a and b determine the quantity of data to be transferred during the (2j+1)T period, and c is used to adjust the transition between these quantities. The parameter values are chosen to match the input traffic demands and, at the same time, to guarantee the service performance, mainly to reduce delay variation and packet loss. The simulation results are obtained with the parameter values summarized in Table 1, for a simulation time of 120 s and a packet size of 1032 bytes.

Table 1: Simulation parameters

      q1ref   q2ref   q3ref   q4ref   Rate
a     473     350     235     235     15 Mbps for LSR 220
b     -73     100     -35     -35
c     -0.5    0.6     -0.5    -0.5    30 Mbps for LSR 221
T     22

These values are chosen in order to bound the queuing delay to 200 ms. Also, we chose the hyperbolic tangent function because we think it represents ON-OFF traffic quite well.
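To visualize the staircase-like buffer planning produced by the tanh sums above, a small generator sketch follows. Since the exact number of tanh terms and the mapping of the Table 1 columns onto the four LSPs are hard to recover from the extracted table, the constants passed in the example call are illustrative rather than a reproduction of the Fig. 4 curves.

```python
import math

def q_ref(t, a, b, c, T, n_terms=3):
    """Reference buffer trajectory of the q_{1,3ref} form: a staircase of
    smooth steps of height ~2*b, with transitions at t = (2j+1)*T."""
    return a + b * sum(math.tanh(c * (t - (2 * j + 1) * T)) for j in range(n_terms))

# Illustrative parameters in the spirit of Table 1 (T = 22 s, 120 s horizon).
for t in range(0, 121, 20):
    print(f"t = {t:3d} s  ->  q_ref ~ {q_ref(t, a=350.0, b=100.0, c=0.6, T=22.0):6.1f} packets")
```

As a rough consistency check of the 200 ms delay target, under the common approximation that queuing delay ≈ backlog × packet size / link rate, a backlog of 550 packets of 1032 bytes served at 30 Mbit/s corresponds to roughly 150 ms, which is compatible with the 190 ms bound reported in Section 5.2.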

The reference trajectories are shown in Fig. 4. We depict four phases of buffer planning, defined by the parameter T. For example, in the [0, 22] s interval, 200 packets can be stored for LSP4 and LSP6, 550 packets for LSP3 and 550 packets for LSP1. During [22, 42] s, 270 packets for LSP4, 200 packets for LSP6, 550 packets for LSP1 and 350 packets for LSP3, and so on.

5.2 Simulation Results

The trajectory tracking control ensures the tracking of the reference flat output qiref (Fig. 4, Fig. 5). For example, for t ∈ [0, 22] s, the queue lengths of LSP4 and LSP6 are about 160 packets, compared to the reserved buffers for these paths, which are about 200 packets.


In other words, when a transition occurs, the flatness controllers increase or reduce the LER output rates (Fig. 6) in a smooth manner, without affecting the transmission service, as shown in Fig. 7 and Fig. 8. The queuing delays at the LSP queues qi are bounded by 190 ms, as well as the loss ratio by (