An Iterative Approach to Comprehensive Performance Evaluation of Integrated Services Networks

Ibrahim Matta, A. Udaya Shankar
Institute for Advanced Computer Studies and Department of Computer Science
University of Maryland, College Park, Maryland 20742

CS-TR-3235 / UMIACS-TR-94-28
March 1994

Abstract

Future networks are expected to integrate diverse services. For this purpose, new algorithms and protocols have been proposed for link scheduling, admission control, and routing. The interaction between these three components is crucial to the performance of the network. However, this interaction is difficult to model realistically using available techniques. In this paper, we present an iterative discrete-time approach that yields a realistic model which takes this interaction into account. The model applies to connection-oriented networks with different types of real-time connections. It describes network dynamics in terms of a system of difference equations. These equations can be solved numerically, allowing the investigation of various control schemes for both transient and steady-state performance. Preliminary results indicate that our approach is computationally much cheaper than discrete-event simulation, and yields sufficiently accurate performance measures. We use our model to compare the performance of different routing schemes on the NSFNET backbone topology with a weighted fair-queueing link scheduling discipline and admission control based on bandwidth reservation. We show that a routing scheme that routes connections on paths which are both under-utilized and short (in number of hops) gives the highest network throughput.

Categories and Subject Descriptors: C.2.1 [Computer-Communication Networks]: Network Architecture and Design - packet networks; C.2.2 [Computer-Communication Networks]: Network Protocols - protocol architecture; C.2.m [Routing Protocols]; C.4 [Performance of Systems]: modeling techniques; performance attributes; F.2.m [Computer Network Routing Protocols]

This work is supported in part by ONR and DARPA under contract N00014-91-C-0195 to Honeywell and the Computer Science Department at the University of Maryland, and by National Science Foundation Grant No. NCR 89-04590. The views, opinions, and/or findings contained in this report are those of the author(s) and should not be interpreted as representing the official policies, either expressed or implied, of the Defense Advanced Research Projects Agency, ONR, the U.S. Government, or Honeywell. Computer facilities were provided in part by NSF grant CCR-8811954.

Contents

1 Introduction
2 Model and Solution Procedure
  2.1 Calculating Blocking Probabilities
  2.2 Relaxing the Steady-State Assumption
3 Scheduling and Admission Control
4 Routing
5 Numerical Results for Single Link
6 Numerical Results for a Tandem Network
7 Numerical Results for NSFNET
8 Conclusions and Future Work

1 Introduction

Integrated services networks have new characteristics not present in traditional networks (e.g., telephone or data networks). These characteristics include increased transmission speeds, diversity of applications (e.g., multi-media, voice, mail), and support of heterogeneous quality of service (QoS) requirements. This has given rise to new algorithms in all aspects of network resource allocation, including link scheduling, admission control, and routing. Link scheduling defines how the link bandwidth is allocated among the different services. Admission control defines the criteria the network uses to decide whether to accept or reject a new incoming application. Routing concerns the selection of routes to be taken by application packets to reach their destination. However, the interaction between these three components, although crucial to the overall performance, is not well understood. Traditional approaches to evaluation appear ineffective in closing this gap.

In this paper, we present an iterative discrete-time approach that addresses this gap, making use of approximations to instantaneous blocking probabilities. Our approach integrates techniques from several areas, namely: (1) standard queueing theory techniques [26]; (2) the link decomposition technique widely used for packet delay analysis in packet-switched data networks [25], as well as for call blocking probability analysis in circuit-switched telephone networks [15]; (3) the dynamic flow technique used for approximating system dynamics and nonlinearity [12]; and (4) the technique of repeated substitutions used in numerical analysis to solve nonlinear equations [24].

Our approach yields a model that takes into account the interaction between link scheduling, admission control, and routing. In particular, our model permits the evaluation of a connection-oriented packet-switched network (e.g. ATM) that provides different types of real-time services (e.g. voice, video) making use of various link scheduling, admission control, and routing policies. Thus this model can be applied to achieve more comprehensive evaluation of existing strategies and to propose more effective network control schemes.

Outline of the model and solution method

We consider a network that provides real-time services between pairs of source and destination nodes. The connections of a service $i$ arrive at the service's source node according to a Poisson process of rate $\lambda_i$, each requiring an end-to-end QoS requirement $D_i$ (e.g. delay). Each connection, once it is successfully set up, has a lifetime of exponential duration with mean $1/\mu_i$. The source node uses its routing information to choose for the arriving connection a potential route to the service's destination node. The route and service together define the class of the connection. We use the index $k$ to range over classes. Let $R_k$ denote the route of a class-$k$ connection. The arrival rate of class-$k$ connections, denoted by $\lambda_k$, is a function of $\lambda_i$ and the routing algorithm.

Footnote 1: We use the terms route and path interchangeably.


A class-$k$ connection of service $i$ requests a local QoS requirement $D_k^j$ from each link $j \in R_k$, such that the aggregate of the $D_k^j$, $j \in R_k$, satisfies $D_i$, its end-to-end QoS requirement. An arriving class-$k$ connection that finds insufficient resources on any link $j \in R_k$ to satisfy $D_k^j$ is blocked and lost. Otherwise, the connection is accepted and resources are allocated to it on each link $j \in R_k$ for an exponential duration with mean $1/\mu_k = 1/\mu_i$.

We are mainly interested in calculating the end-to-end connection blocking probability of each service (or equivalently the service's throughput). An intermediate step in this calculation is to compute the end-to-end blocking probability of each of the service's classes. For this, we compute $\tilde{B}_k^j(t)$, an approximation to the instantaneous probability that a class-$k$ connection is blocked on link $j \in R_k$ (this computation is discussed below). Invoking the link independence assumption and using the fact that a class-$k$ connection can be successfully set up on $R_k$ if and only if it is not blocked on any link $j \in R_k$, we compute the class-$k$ end-to-end instantaneous blocking probability as $1 - \prod_{j \in R_k} (1 - \tilde{B}_k^j(t))$.

Computing $\tilde{B}_k^j(t)$ is the crux of the problem. We proceed as follows. Let $\Gamma^j$ be the set of all classes of connections using link $j$. Define the state of link $j$ by the number of connections in each class $k \in \Gamma^j$. For a given link scheduling algorithm, we find the set $S^j$ of link states at which $D_k^j$ is satisfied for every $k \in \Gamma^j$. $S^j$ is called the schedulability region of link $j$.

Next we compute $\tilde{B}_k^j(t)$ for any fixed $t$ by iterating on "instantaneous" versions of two steady-state results, where the actual traffic intensities $\lambda_k/\mu_k$ are replaced by fictitious traffic intensities $z_k^j(t)$ that we introduce. The first steady-state result is that the steady-state blocking probabilities $\tilde{B}_k^j$ can be obtained in terms of $\lambda_k/\mu_k$ by solving the $|\Gamma^j|$-dimensional Markov chain over $S^j$. The instantaneous version of this result yields $\tilde{B}_k^j(t)$ in terms of $z_k^j(t)$. The second steady-state result is that $x_k^j$, the steady-state average number of connections in class $k$, satisfies $x_k^j = (\lambda_k/\mu_k)[1 - \tilde{B}_k^j]$. The instantaneous version of this result is $x_k^j(t) = z_k^j(t)[1 - \tilde{B}_k^j(t)]$. Thus for any fixed $t$, we can use an iterative procedure on the two instantaneous results to obtain $\tilde{B}_k^j(t)$ and $z_k^j(t)$ in terms of $x_k^j(t)$.

Finally, to compute $\tilde{B}_k^j(t)$ for all $t$, we introduce difference equations which describe the dynamic behavior of the $x_k^j(\cdot)$'s. Each equation expresses $x_k^j(t + \tau)$ as a function of $x_k^j(t)$ and $\tilde{B}_k^j(t)$ for $j \in R_k$. These can be solved iteratively in conjunction with the previous solution (of $\tilde{B}_k^j(t)$ in terms of $x_k^j(t)$). We can thus obtain the dynamic behavior of the end-to-end connection blocking probabilities. This allows the investigation of both transient and steady-state performance of various control schemes.

Our preliminary results indicate that our approach is computationally much cheaper than discrete-event simulation, which requires the averaging of a large number of independent simulation runs. Furthermore, the performance measures it yields are very close to the exact values obtained by simulation. In this paper, we present results comparing the performance of different routing schemes on the NSFNET backbone topology with a weighted Fair-Queueing link scheduling discipline [34]. Our

results indicate that a routing scheme that selects paths which are both under-utilized and short (in number of hops) for routing new incoming connections gives the highest network throughput.

Organization of the paper

The rest of the paper is organized as follows. In Section 2, we formulate our discrete-time model and solution procedure. Section 3 discusses scheduling and admission control in more detail; specifically, we adopt a particular link scheduling algorithm to illustrate how its effect is incorporated in our model. Section 4 discusses routing and how our model can capture the effect of various schemes. Numerical results to validate our model are presented in Sections 5 and 6; Section 5 contains results for a single link, and Section 6 for a tandem network. Section 7 investigates three routing schemes on the NSFNET backbone topology. Section 8 concludes and identifies future work.

2 Model and Solution Procedure


We consider networks of arbitrary topology offering heterogeneous real-time services using a connection-oriented reservation scheme. Figure 1 shows an example network offering three services: one voice service from node 1 to node 2, one video service from node 3 to node 2, and one voice service (also) from node 3 to node 2.

[Figure 1 here: the example network, showing the three services and their six classes. Legend: $\lambda_i$ = arrival rate of setup requests; $1/\mu_i$ = average life time of connections; $D_i$ = requested maximum end-to-end delay.]

Figure 1: A network example.

A service is characterized by its source node, destination node, and traffic parameters and requirements. The traffic parameters and requirements of a service $i$ include the following:
- Arrival rate of requests for a connection setup ($\lambda_i$).
- Average life time of a connection, from the time it is successfully set up until it ends ($1/\mu_i$).
- QoS requirements of a connection, such as an upper bound on the end-to-end packet delay ($D_i$).
- Packet (or cell) generation characteristics of a connection, such as its mean and peak packet rate.

Throughout the paper, we assume Poissonian arrivals of setup requests and exponential connection life times. Also, for illustrative purposes, we only consider packet delay as the QoS requirement, and we assume that this delay does not include the propagation delay.

The connections of a service can potentially be set up along any of the possible routes between the service's source node and the service's destination node. For a service, we let the connections set up on each of its routes belong to a different class. For the network in Figure 1, we have six classes. Each of the three services has two routes for connection setup: a one-link route and a two-link route. The voice 1 service has routes (1, 2) and (1, 3, 2); hence for this service we define class 1 for route (1, 2) and class 2 for route (1, 3, 2). Similarly, we define classes 3 and 4 for the video 2 service, and classes 5 and 6 for the voice 3 service. We note that for a connection to be successfully set up on a multi-link route, the sum of the maximum (local) delays guaranteed by the links constituting the route should satisfy the end-to-end delay requirement of the connection.

Each link in the network is used by a different set of classes. In Figure 1, link (3, 2), for example, is used by three classes, namely classes 2, 3 and 5. We define the state of a link by the number of connections in each class using the link. Every link is constrained to be only in a state where the QoS requirements of all connections set up on the link are satisfied. We say that this set of states satisfies the "QoS constraints". This set of states is referred to as the schedulability region of the link [22].

For a connection setup request on a multi-link route, we divide the requested end-to-end delay equally among the links; each link should thus guarantee the same maximum delay value for the connection. This is called the "equal allocation" policy [32, 33]. For example, in Figure 1, link (3, 2) should guarantee $D_3$ for every connection in class 5, $D_2$ for every connection in class 3, and $D_1/2$ for every connection in class 2. With this policy, given any network and services, it is then easy to determine for each link the set of classes using it, along with their local QoS requirements.

Because a class is defined by the pair (service, route), it appears that we can have a large number of classes (i.e. routes). This may cause a computational bottleneck. To avoid this, we can restrict the set of routes from which the source node can choose to only shortest (in number of hops) and close-to-shortest paths.

Footnote 2: This QoS requirement is also referred to as packet jitter [11, 36].


This is acceptable because using a longer path for a connection ties up resources at more intermediate nodes, thereby decreasing network throughput. Furthermore, it also ties up more resources at each intermediate node, because satisfying the end-to-end QoS requirement now requires a more stringent local QoS requirement. Section 4 addresses the selection of routes in more detail.

We assume that the packet generation characteristics of a connection set up on a multi-link route do not change from link to link, i.e. they remain the same as the given external characteristics. This assumption is often made to reduce complexity (e.g., [17, 27]). Given the link scheduling algorithm and knowing the packet generation characteristics of connections, we should then be able to determine, subject to the local QoS constraints, the number of connections from each class that can be accepted (set up) on a link, using some packet-level analysis (e.g., [17, 7]). The result is thus a set of link states satisfying the local QoS constraints, i.e. the link's schedulability region. Note that each link will typically have a different schedulability region, because links have different capacities, are used by different sets of classes, etc. Section 3 explains this in more detail.

We assume that the network uses a link-state routing algorithm, where each node maintains a view of the whole network topology. This view is updated by periodic broadcasts by nodes of the status of their outgoing links. For ease of presentation, we assume that the broadcasts of all nodes are synchronized and that they reach other nodes instantaneously (it will be apparent later that we can easily model unsynchronized broadcasts). After each update, a node uses its new view to compute new routes to be used for incoming connections until the next broadcast. Routes are thus updated at discrete time instants $nT$, $n = 1, 2, \ldots$, where $T$ is the time interval between two successive broadcasts. Without loss of generality, we assume $T = 1$.

We assume a request for a connection setup on a multi-link route is sent to all links of the route simultaneously. For example, if during some period $[n, n+1)$ only route (1, 3, 2) is used for incoming connections of the voice 1 service, then the arrival rate of connection setup requests for class 2 on both links (1, 3) and (3, 2) during $[n, n+1)$ equals $\lambda_1$. Knowing the routes used in a period $[n, n+1)$, it is then easy to determine, for each link, the arrival rate of setup requests during $[n, n+1)$ for the different classes using the link.

Footnote 3: Experience with circuit-switched networks shows that this restriction results in simple and efficient routing schemes [3, 31].
Footnote 4: Link-state routing algorithms are often proposed for integrated services networks (e.g., [1, 4, 8]).

2.1 Calculating Blocking Probabilities

Let us assume for now that the network reaches steady state between two successive broadcasts (or routing updates). (This assumption is not realistic, since it requires $T \gg 1/\mu$; we relax it in Section 2.2.) Considering a Markov chain over the states of the link belonging to its schedulability region, we can compute the (steady-state) blocking probability during


$[n, n+1)$ for each class using the link, i.e., the probability that a connection of that class cannot be admitted on the link because otherwise the local QoS constraints, and hence the end-to-end QoS requirements, would be violated.

Consider, for example, the simple situation of a single link used by two classes $X$ and $Y$. In this case, the state of the link is a 2-tuple $(x, y)$ representing the number of connections of each class currently set up on the link. Let $S = \{(0,0), (0,1), (1,0), (1,1)\}$ be the link's schedulability region. Also, let the arrival rates of connection setup requests for the two classes be $\lambda_X$ and $\lambda_Y$, and let $1/\mu_X$ and $1/\mu_Y$ be the corresponding average life times of connections. Figure 2 shows the Markov chain on $S$. The blocking probability for class $X$, for instance, is equal to the sum of the probabilities of being in states $(1,0)$ and $(1,1)$.

[Figure 2 here: a four-state birth-death chain on $S = \{(0,0), (0,1), (1,0), (1,1)\}$, with arrival rates $\lambda_X$, $\lambda_Y$ and departure rates $\mu_X$, $\mu_Y$.]

Figure 2: A Markov chain example.

From the blocking probabilities on all links along the route of a class, we can determine the end-to-end blocking probability of the class (or equivalently its throughput) during $[n, n+1)$, assuming link independence and using the fact that a connection can be set up on the route if and only if it is not blocked on any link of the route. In particular, if $B_k^l$ is the class-$k$ blocking probability on link $l$, where link $l$ lies on route $R_k$ of class $k$, then the class-$k$ end-to-end blocking probability equals $1 - \prod_{l \in R_k} (1 - B_k^l)$.

We assume the following: a connection setup request is tried on the route selected for the connection by the source node. If the setup fails at any node along the route, the request is considered lost (rejected) and the connection blocked. The setup request is not attempted on another (alternate) route.

We assume that the information a node broadcasts to other nodes at time $n+1$ consists of the throughputs on its outgoing links during $[n, n+1)$ (and possibly other measures, as will be discussed in Section 4). It is based on this information that new routes are computed for the next interval $[n+1, n+2)$.

To summarize, our solution to evaluating the performance of the network is iterative: (i) given the routes to be used in one routing update period, compute blocking probabilities and throughputs; (ii) given the throughputs, compute the routes to be used in the next routing update period, as defined by the routing algorithm (route selection is discussed in Section 4 in more detail).
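To make the steady-state computation concrete, the following sketch solves the small Figure 2 example numerically. The rate values are hypothetical, chosen only for illustration; the paper does not prescribe them.

```python
import numpy as np

# Steady-state blocking for the two-class example of Figure 2.
# Rates below are illustrative only (not taken from the paper).
lam = {"X": 1.0, "Y": 0.8}   # arrival rates of setup requests
mu = {"X": 0.5, "Y": 1.0}    # departure rates (1 / average life time)

states = [(0, 0), (0, 1), (1, 0), (1, 1)]   # schedulability region S
idx = {s: i for i, s in enumerate(states)}

# Build the generator matrix: arrivals add one connection of a class,
# departures remove one; transitions leaving S are dropped (blocking).
Q = np.zeros((len(states), len(states)))
for (x, y), i in idx.items():
    moves = [((x + 1, y), lam["X"]), ((x, y + 1), lam["Y"]),
             ((x - 1, y), x * mu["X"]), ((x, y - 1), y * mu["Y"])]
    for s2, rate in moves:
        if s2 in idx and rate > 0:
            Q[i, idx[s2]] += rate
            Q[i, i] -= rate

# Solve pi Q = 0 together with the normalization sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(states))])
b = np.zeros(len(states) + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Class X is blocked in every state from which one more X connection
# would leave S -- here, states (1,0) and (1,1).
B_X = sum(pi[idx[s]] for s in states if (s[0] + 1, s[1]) not in idx)
print("Blocking probability of class X:", B_X)
```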

Recently, we have used a similar iterative discrete-time approach to study the interaction between link scheduling and routing in dynamic connectionless (datagram) networks [28, 29]. We point out that our approach takes into account the effect of the routing update period (i.e. the delayed feedback between route changes and link load changes). This is in contrast to iterative procedures used in circuit switching studies (e.g., [5, 6, 16, 30, 13]), which solve for the steady-state performance only. In these studies, the effect of the routing update period is ignored.

2.2 Relaxing the Steady-State Assumption

In our previous discussion, we assumed steady state between two successive routing updates in order to compute the link blocking probability for each class using the link. Indeed, this assumes that $T \gg 1/\mu$, which is not realistic. Typically, $T < 1/\mu$. Hence, we need to compute instantaneous link blocking probabilities instead of steady-state blocking probabilities.

The case of a single link with single class

We make use of a technique that was previously used in [13] in the context of single-rate circuit-switched networks. In particular, [13] uses the following differential equation for $x(t)$, the average number of calls carried on a single link:
\[
\dot{x}(t) = -\mu x(t) + \lambda [1 - B(t)] \tag{1}
\]
where $\lambda$ is the arrival rate of calls to the link, $1/\mu$ is the average life time of a call, and $B(t)$ is the instantaneous call blocking probability. Equation (1) can be viewed as a fluid equation, where $\mu x(t)$ is the rate of output flow and $\lambda [1 - B(t)]$ is the rate of input flow.

We do not know an expression for $B(t)$. We do know that at steady state it can be obtained from the Erlang-B formula [26], i.e., $B = E(\lambda/\mu, C)$, where $\lambda/\mu$ is the offered traffic intensity, $C$ is the maximum number of calls the link can support (i.e. the total number of channels), and
\[
E(\rho, C) = \frac{\rho^C / C!}{\sum_{n=0}^{C} \rho^n / n!}
\]
We also know that at steady state (when $\dot{x} = 0$), $x = (\lambda/\mu)[1 - B]$, where $x$ is the steady-state value of $x(t)$. An approximation of $B(t)$, denoted by $\tilde{B}(t)$, can then be obtained as follows:
\[
x(t) = z(t)[1 - E(z(t), C)], \qquad \tilde{B}(t) = E(z(t), C) \tag{2}
\]

Footnote 6: A similar technique was also used in [12, 14]. In single-rate circuit-switched networks, every connection or call requests one circuit or channel on each link along its route.


The above approximation introduces $z(t)$ as a fictitious instantaneous traffic intensity, so that the Erlang-B formula, which is known to hold in the steady-state case, is used in the transient case as well.

Footnote 7: The use of the Erlang-B formula in this technique reflects the inherent stochasticity of the system [14].

Knowing $x(t)$ at some fixed $t$, we can solve equation (2) for $\tilde{B}(t)$ using the iteration
\[
z^{(q+1)}(t) = \frac{x(t)}{1 - E(z^{(q)}(t), C)}, \qquad q = 0, 1, \ldots \tag{3}
\]
We start with some $z^{(0)}(t)$ and iterate until $z^{(q)}(t)$ stabilizes, converging to $z(t)$. Then we have $\tilde{B}(t) = E(z(t), C)$. Note that $\tilde{B}(t)$ is a function of $x(t)$. By numerically integrating equation (1), we can now obtain the time behavior of $x(\cdot)$, and hence of $\tilde{B}(\cdot)$.
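A minimal sketch of this computation follows, combining the repeated substitution of equation (3) with a forward-Euler integration of equation (1). The numeric parameters are assumptions for illustration, not values from the paper.

```python
def erlang_b(rho, C):
    """Erlang-B formula E(rho, C), via the standard stable recursion."""
    B = 1.0
    for n in range(1, C + 1):
        B = rho * B / (n + rho * B)
    return B

def blocking_estimate(x_t, C, z0=1.0, tol=1e-9, max_iter=1000):
    """Equation (3): iterate z <- x(t) / (1 - E(z, C)) until it stabilizes,
    then return B~(t) = E(z(t), C)."""
    z = z0
    for _ in range(max_iter):
        z_new = x_t / (1.0 - erlang_b(z, C))
        if abs(z_new - z) < tol:
            break
        z = z_new
    return erlang_b(z, C)

# Forward-Euler integration of equation (1) with assumed parameters:
lam, mu, C, dt = 8.0, 1.0, 10, 0.1   # arrival rate, 1/life time, channels, step
x = 0.0                              # empty link at t = 0
for step in range(100):              # integrate over t in [0, 10)
    B = blocking_estimate(x, C)
    x += dt * (-mu * x + lam * (1.0 - B))
print("x(10) ~", x, " B~(10) ~", blocking_estimate(x, C))
```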

The case of a single link with multiple classes

We can apply the same technique described above to our problem in order to obtain the instantaneous link blocking probabilities. First, consider the simple situation of a network consisting of only one link. Let $\Gamma$ be the set of classes using the link. Let $x_k(t)$ be the average number of connections in a class $k \in \Gamma$ at time $t$. Let $\mathbf{x}(t) = (x_1(t), \ldots, x_{|\Gamma|}(t))$. Let $\lambda_k$ be the rate of setup requests for class $k$, and $1/\mu_k$ be the average life time of a class-$k$ connection. Then we can write the following differential equation (similar to the differential equation (1)), for every $k \in \Gamma$:
\[
\dot{x}_k(t) = -\mu_k x_k(t) + \lambda_k [1 - \tilde{B}_k(t)] \tag{4}
\]
$\tilde{B}_k(t)$ represents the instantaneous blocking probability seen by class-$k$ connections. It is estimated using the following equations, where the $z_k(t)$'s are fictitious traffic intensities:
\[
x_k(t) = z_k(t)[1 - \tilde{B}_k(t)] \quad \text{for all } k \in \Gamma \tag{5}
\]
We can solve iteratively for the $\tilde{B}_k(t)$'s and the $z_k(t)$'s as in the single-class case. In a manner similar to the use of the Erlang-B formula in equation (2), we can compute $\tilde{B}_k(t)$, for every $k \in \Gamma$, in terms of the $z_k(t)$'s, by solving (for the steady-state blocking probability of each class $k$) the Markov chain over the link's schedulability region $S$. In particular, the probability $P(\sigma)$ of being in a state $\sigma = (\sigma_1, \sigma_2, \ldots, \sigma_{|\Gamma|}) \in S$ is equal to $P(0) \prod_{k=1}^{|\Gamma|} \frac{z_k^{\sigma_k}(t)}{\sigma_k!}$, where $P(0) = \left\{ \sum_{\sigma \in S} \prod_{k=1}^{|\Gamma|} \frac{z_k^{\sigma_k}(t)}{\sigma_k!} \right\}^{-1}$ is the normalization constant [26]. Then $\tilde{B}_k(t) = \sum_{\sigma \in S} \mathbf{1}(\sigma + \mathbf{1}_k \notin S)\, P(\sigma)$, where $\sigma + \mathbf{1}_k$ denotes state $\sigma$ with one more class-$k$ connection. This result can be used in equations (5) to iteratively solve for the $z_k(t)$'s, as in equation (3). This computation seems expensive, especially because of the size of $S$ and the dimension of the Markov chain, which is equal to the number of classes using the link. We introduce a simplification in Section 3 for the link scheduling algorithm we consider to reduce the computational complexity.
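For a small schedulability region, this product-form computation can be carried out directly. Here is a sketch; the region and intensities in the example are illustrative assumptions.

```python
from math import factorial

def product_form_blocking(z, S):
    """Blocking probabilities B~_k from the product-form distribution over a
    schedulability region S (a collection of |Gamma|-tuples), given the
    fictitious traffic intensities z = (z_1, ..., z_|Gamma|)."""
    def weight(sigma):               # unnormalized P(sigma) / P(0)
        w = 1.0
        for zk, sk in zip(z, sigma):
            w *= zk ** sk / factorial(sk)
        return w

    S_set = set(S)
    G = sum(weight(s) for s in S)    # 1 / P(0), the normalization constant
    B = []
    for k in range(len(z)):
        # Sum P(sigma) over states where one more class-k connection leaves S.
        blocked = sum(weight(s) for s in S
                      if tuple(sk + (i == k) for i, sk in enumerate(s)) not in S_set)
        B.append(blocked / G)
    return B

# Example: the two-class region of Figure 2 with assumed intensities.
S = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(product_form_blocking([2.0, 0.8], S))
```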


The case of multiple links, multiple classes

Now consider the situation of a general network with multiple links. We divide the interval $[n, n+1)$ between two successive routing updates into subintervals of length $\tau$ each. We write for every link $j$ a difference equation (similar to the differential equation (4)) using $\tau$ as the discrete-time step. Let $L$ be the set of network links, i.e., $j \in L$. Let $\Gamma^j$ denote the set of classes using link $j$. Let $x_k^j(\cdot)$ be the average number of connections in a class $k \in \Gamma^j$. Then we can write for every $k \in \Gamma^j$:
\[
x_k^j(n + (i+1)\tau) = (1 - \mu_k \tau)\, x_k^j(n + i\tau) + \lambda_k \tau \prod_{l \in R_k} [1 - \tilde{B}_k^l(n + i\tau)], \qquad i = 0, 1, \ldots, \tfrac{1}{\tau} - 1 \tag{6}
\]
The first term on the right-hand side of equation (6) represents the average number of class-$k$ connections set up on link $j$ which remain on link $j$ (i.e. do not terminate). The second term represents the average number of new class-$k$ connections set up on link $j$ during $[n + i\tau, n + (i+1)\tau)$. $\lambda_k$ is as defined before, i.e., it denotes the rate of setup requests for class $k$. In a general network, $\lambda_k$ depends on the routing algorithm, and here we use its value for the interval $[n, n+1)$. $\tilde{B}_k^l(\cdot)$ represents the blocking probability seen by class-$k$ connections on link $l$, where link $l$ lies on route $R_k$ of class $k$. (Obviously, link $j$ also lies on $R_k$.) Note that the product term reflects the fact that a connection can be set up on the route if and only if it is not blocked on any link of the route (this invokes the link independence assumption). For every link $l \in L$, $\tilde{B}_c^l(\cdot)$ can be estimated from the following equations:
\[
x_c^l(n + i\tau) = z_c^l(n + i\tau)[1 - \tilde{B}_c^l(n + i\tau)] \quad \text{for all } c \in \Gamma^l \tag{7}
\]
Again, equations (7) are solved iteratively as equations (5) were, i.e. involving a multi-dimensional Markov chain over $l$'s schedulability region. In Figure 5, Algorithm 1 summarizes our evaluation method as described to this point. Algorithm 2 shows details for step 9 of Algorithm 1, illustrating the computation of the instantaneous blocking probabilities through equations (7). We denote by $R(n)$ the routing pattern during $[n, n+1)$, i.e. the routes used for incoming connections during that period. We also denote by $\mathbf{x}^l(t)$ the vector representing the average number of connections in each class using link $l$ at time instant $t$.
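One update step of equation (6) can be written compactly. The sketch below uses dictionaries keyed by links and classes (a hypothetical data layout, not from the paper) and takes the per-link blocking estimates of equations (7) as input.

```python
def step_eq6(x, lam, mu, routes, B, tau):
    """Advance x[j][k], the average number of class-k connections on link j,
    by one discrete-time step tau according to equation (6). routes[k] lists
    the links of route R_k; B[l][k] is the current estimate of B~_k^l."""
    new_x = {}
    for j, per_class in x.items():
        new_x[j] = {}
        for k, xjk in per_class.items():
            # Probability that a new class-k connection is admitted end-to-end
            # (link independence assumption).
            admit = 1.0
            for l in routes[k]:
                admit *= 1.0 - B[l][k]
            new_x[j][k] = (1.0 - mu[k] * tau) * xjk + lam[k] * tau * admit
    return new_x

# Tiny two-link example: class 0 routed over links "a" and "b".
x = {"a": {0: 2.0}, "b": {0: 2.0}}
B = {"a": {0: 0.05}, "b": {0: 0.10}}
print(step_eq6(x, {0: 1.5}, {0: 0.5}, {0: ["a", "b"]}, B, 0.1))
```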

3 Scheduling and Admission Control

In Section 2 we assumed that a connection has an end-to-end delay requirement $D$, where $D$ is the maximum packet delay. This is called a deterministic delay bound in the literature [10, 11]. Henceforth, instead of the deterministic delay bound, we assume a connection requests an end-to-end statistical delay bound $(D, \epsilon)$, i.e., $\mathrm{Prob}[\text{packet delay} > D] < \epsilon$. This is typically required

by applications such as voice, since they can tolerate some packet loss (a packet is considered lost if its delay exceeds $D$) [10, 11]. If a connection with an end-to-end delay requirement $(D, \epsilon)$ is to be established on an $n$-link route, then we require that each link on the route guarantee $(\frac{D}{n}, \frac{\epsilon}{n})$ [32, 33], which now becomes the local QoS requirement.

We now illustrate how to find the schedulability region $S$ of a link under the scheduling algorithm we adopt, i.e. the set of all link states that satisfy the local QoS constraints, where a link state denotes the number of connections in each class using the link. We assume a "per-connection" link scheduling algorithm of the weighted round-robin type, where each connection is allocated (and guaranteed) a certain amount of link bandwidth required for its local delay requirement to be satisfied. This required bandwidth depends, of course, on the local delay requirement $(D', \epsilon')$ of the connection. It also depends on the packet generation characteristics of the connection. For a connection described by a two-state model, where the connection is either in a busy state sending packets back-to-back at peak rate or in an idle state sending no packets at all, the required bandwidth, denoted by $\hat{C}$, can be obtained from the following approximation derived in [2, 17, 9]:
\[
\hat{C} = R \cdot \frac{\beta - X + \sqrt{(\beta - X)^2 + 4 X \rho \beta}}{2\beta} \tag{8}
\]
where
- $R$ is the peak rate of the connection.
- $m$ is the mean rate of the connection.
- $b$ is the average duration of the busy period.
- $\beta = \ln(1/\epsilon')\, b\, (1 - \rho)\, R$.
- $\rho = m/R$ is the probability that the connection is active (in the busy state).
- $X = D' \cdot \hat{C}$ is the buffer space required by the connection.

Note that for a given connection, $\hat{C}$ can be computed from equation (8) iteratively. For each class using the link, we can then determine $\hat{C}$ and $X$ for a connection in this class. Knowing these $\hat{C}$'s and $X$'s, we can determine whether a state belongs to the link's schedulability region; it must satisfy the following two conditions:
- The sum of the $\hat{C}$'s at this state is no greater than the link capacity $C$.
- The sum of the $X$'s at this state is no greater than the total available buffer space of the link.

Assume that there is enough link buffer space that the second condition is always satisfied; then for a state to be admissible it suffices to satisfy only the first condition. This last assumption greatly simplifies the link model by allowing us to view the link state as belonging to the set $\{0, 1, 2, \ldots, C-1, C\}$, where the state indicates the current amount of capacity used.

Footnote 8: An example of this type of scheduling algorithm is weighted Fair-Queueing [34].
Footnote 9: $\hat{C}$ is often referred to as effective or equivalent capacity [9, 23, 2, 17, 1].

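Because $X = D' \cdot \hat{C}$ couples the buffer requirement to the bandwidth, equation (8) defines $\hat{C}$ only implicitly, and a repeated-substitution loop resolves it. The sketch below follows our reconstruction of the formula above; the numeric parameters are assumptions chosen for illustration.

```python
import math

def equivalent_capacity(R, m, b, D_local, eps_local, tol=1e-6, max_iter=1000):
    """Fixed-point computation of the equivalent capacity of equation (8)
    for a two-state (busy/idle) source with local delay bound (D', eps')."""
    rho = m / R                                       # fraction of time active
    beta = math.log(1.0 / eps_local) * b * (1.0 - rho) * R
    C_hat = R                                         # start from the peak rate
    for _ in range(max_iter):
        X = D_local * C_hat                           # buffer space required
        C_new = R * (beta - X + math.sqrt((beta - X) ** 2 + 4.0 * X * rho * beta)) / (2.0 * beta)
        if abs(C_new - C_hat) < tol:
            break
        C_hat = C_new
    return C_hat

# Example: a source with peak rate 30, mean rate 20, busy period 0.4,
# and a local requirement (D', eps') = (0.005, 5e-7).
print(equivalent_capacity(30.0, 20.0, 0.4, 0.005, 5e-7))
```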

An arrival of a setup request for a class using the link requests some $\hat{C}$ to occupy for its life time; we assume the $\hat{C}$'s are integers expressed in units of the link capacity. This link model has a simple solution in the multi-rate circuit switching literature [35]. In particular, let class-$k$ connections require an effective bandwidth of $\hat{C}_k$. Let $Q(i)$ denote the steady-state probability of being in state $i$. Then $Q(\cdot)$ satisfies the following recurrence relation [35]:
\[
i\, Q(i) = \sum_k \frac{\lambda_k}{\mu_k} \hat{C}_k\, Q(i - \hat{C}_k), \qquad i = 1, \ldots, C \tag{9}
\]
where $\sum_{i=0}^{C} Q(i) = 1$. The steady-state blocking probability for class-$k$ connections is equal to $\sum_{i=C-\hat{C}_k+1}^{C} Q(i)$. This one-dimensional link model can be used to compute the instantaneous class blocking probabilities $\tilde{B}_k^l(\cdot)$ in equations (6) and (7). In particular, we compute the $\tilde{B}_k^l(t)$'s using equations (9) after replacing $\lambda_k/\mu_k$ by the fictitious traffic intensities $z_k^l(t)$. As before, we can obtain the $z_k^l(t)$'s by solving equations (7) iteratively. Note that here the link schedulability region is implicitly defined by the constraint $0 \le i \le C$. This link model is usually referred to as the stochastic Knapsack model [5, 6].

Footnote 10: In a multi-rate circuit-switched network, each call may request a different number of channels. This number is the same on every link along any route the call might take. This is not the case in the networks we are considering, where the bandwidth required by a connection on a link depends on the number of links along the route taken by the connection.
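The recursion of equation (9) is cheap to evaluate. A sketch follows, where each class is given by its traffic intensity $\lambda_k/\mu_k$ and integer bandwidth requirement $\hat{C}_k$; the example values are taken from Table 1.

```python
def knapsack_blocking(C, classes):
    """Equation (9): steady-state class blocking probabilities for a link of
    integer capacity C. classes is a list of (rho_k, c_k) pairs, where rho_k
    is lambda_k/mu_k and c_k is the integer bandwidth requirement C^_k."""
    q = [0.0] * (C + 1)
    q[0] = 1.0
    for i in range(1, C + 1):
        q[i] = sum(rho * c * q[i - c] for rho, c in classes if c <= i) / i
    total = sum(q)
    Q = [v / total for v in q]      # normalize so that sum_i Q(i) = 1
    # Class k is blocked when fewer than c_k capacity units are free.
    return [sum(Q[C - c + 1:]) for _, c in classes]

# Classes 1 and 2 of Table 1 on a link of capacity 200;
# rho = arrival rate * average life time.
print(knapsack_blocking(200, [(0.125 * 5, 30), (0.5 * 1, 15)]))
```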

4 Routing

Recall that we assume each node periodically computes new routes based on its current view of the whole network. This view consists of the network topology and load during the last period. The load information consists of link/path measurements, which may include quantities such as the average reserved link capacity and the path blocking probability. These quantities should obviously be measurable in practice; indeed, a node can measure the average reserved capacity for each of its outgoing links from the connection setup/teardown procedure. Also, a source node can measure the blocking probability of a path if we assume that when a setup fails at an intermediate node, this node sends a "reject" message back to the source. We should also be able to obtain these quantities from our model. We can obtain the average reserved link capacity from the link state and the effective capacity of each of the link's classes, which we compute in our model. We can also obtain a path blocking probability from the classes' blocking probabilities, which we also compute in our model. Recall that if a connection setup request fails at any link on the route selected by the source node, then the request is considered lost (rejected) and the connection blocked. The setup request is not attempted on another (alternate) route. We are interested in route selection algorithms for networks of arbitrary topologies offering heterogeneous services.


We want algorithms that result in low blocking probabilities (a high successful-setup rate) and hence high network throughput. We next list some design choices that arise when developing such an algorithm.

- What is the set of candidate paths the source node would consider for connection routing? We do not want the source node to consider paths that are too long, since this would result in increased utilization and hence reduced throughput. So the set of candidate paths could consist of only minimum-hop paths, or it could consist of both minimum-hop paths and next-to-minimum-hop paths.

- From the set of candidate paths, which path should be used for routing the setup request message of a new incoming connection? A path $p$ could be selected at random from the set, or it could be selected probabilistically using path weights $W_p$, where
\[
W_p \propto \frac{(1 - B_p)\, F_p}{H_p\, L_p} \tag{10}
\]
where $B_p$ denotes the blocking probability of path $p$, $H_p$ is the number of hops of path $p$ (this gives preference to shortest paths), $L_p$ is a measure of the load on path $p$ (discussed below), and $F_p$ is either 1 or 0 depending on whether path $p$ is feasible or not; a path $p$ is said to be feasible if the source "expects" a successful setup on $p$. (A sketch of this weighted selection appears after this list.) Consider, for example, a service with arrival rate of setup requests $\lambda$. Assume the set of candidate paths contains $m$ paths (i.e., the service has $m$ classes) with corresponding selection probabilities $\alpha_1, \alpha_2, \ldots, \alpha_m$, computed according to (10). Then the arrival rates of setup requests to each path (or class) are $\lambda\alpha_1, \lambda\alpha_2, \ldots, \lambda\alpha_m$, respectively.
- How is $L_p$ defined? $L_p$ could be:

  - the sum of the utilizations of the links on path $p$, where the utilization of a link is the fraction of the link capacity reserved, or
  - the maximum link utilization over the links on path $p$, or
  - the sum of the delays of the links on path $p$, where the delay of a link can be estimated as $\frac{1}{C - C_R}$, where $C$ is the link capacity and $C_R$ is the average reserved link capacity. (Note that this delay estimation uses the M/M/1 delay formula.)

Footnote 11: By next-to-minimum-hop paths, we mean (minimum hop + 1) paths if at least one such path exists. If not, then we mean (minimum hop + 2) paths if at least one such path exists. And so on.
Footnote 12: Contrary to routing schemes designed for circuit-switched networks [15] and recently proposed for ATM networks [18, 21, 19, 20], we do not consider one-hop and two-hop paths only.
Footnote 13: We use here all possible factors (we can think of) upon which $W_p$ may depend. Some of these factors were considered in previous works (e.g., [4, 1]) when selecting routes for connections.
Footnote 14: The source would take into account the requirements of the new connection, in addition to the current load on the path, to test the feasibility of the path. This is in fact an admission control function.


- Do the path selection criteria depend on traffic type? For a particular traffic type with very stringent QoS requirements, we could restrict the set of candidate paths to only minimum-hop paths, while for other traffic types the set could also include next-to-minimum-hop paths.
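As mentioned in the second item above, the weights of equation (10) induce selection probabilities over the candidate set. A sketch of this selection follows; the field names B, H, L, F are hypothetical.

```python
import random

def path_weight(B_p, H_p, L_p, F_p):
    """Equation (10): W_p proportional to (1 - B_p) * F_p / (H_p * L_p)."""
    return (1.0 - B_p) * F_p / (H_p * L_p)

def select_path(candidates):
    """Pick a candidate path with probability proportional to its weight.
    Each candidate is a dict with (hypothetical) keys B, H, L, F."""
    weights = [path_weight(c["B"], c["H"], c["L"], c["F"]) for c in candidates]
    total = sum(weights)
    if total == 0.0:                 # no feasible path: block the request
        return None, []
    probs = [w / total for w in weights]
    choice = random.choices(range(len(candidates)), weights=probs)[0]
    return choice, probs

# Two candidate paths for a service; setup requests then arrive to path i
# at rate lambda * probs[i].
paths = [{"B": 0.05, "H": 2, "L": 0.6, "F": 1},
         {"B": 0.10, "H": 3, "L": 0.4, "F": 1}]
print(select_path(paths))
```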

In Section 7, we use our model to compare different route selection policies, assuming the link scheduling algorithm and admission control described in Section 3.

5 Numerical Results for Single Link

We consider the case of a single link used by multiple service classes. We consider 10 classes, whose parameters are shown in Table 1. We assume every connection in a class requires a fixed amount of bandwidth. We obtain instantaneous performance measures through equations (6) and (7). We examined the use of the two analytical models discussed in Sections 2 and 3 to compute the instantaneous blocking probabilities, namely:
- The multi-dimensional birth-death model, where the link state is given by the number of connections in each class using the link.
- The one-dimensional stochastic Knapsack model, where the link state is given by the total amount of capacity used.

Consistent with [35], both approaches yield the same results. In general, we found the Knapsack approach much less time-consuming than the multi-dimensional approach. This is mainly because the latter requires finding the 10-dimensional states that belong to the link's schedulability region, and also the blocking states for each class.

We compare the results obtained using our approach with those obtained using discrete-event simulation. Figures 6, 7 and 8 show the time behavior of the total number of connections currently set up on the link, the link utilization, and the total throughput, respectively, for the parameters given in Table 1. In our approach, we take the discrete-time step $\tau$ to be 0.1. In the discrete-event simulation, the performance measures are periodically computed at $t = 1, 2, \ldots$ as follows. The total number of connections currently set up on the link and the link utilization at time instant $t$ are simply the values of these measures as observed at $t$. The total throughput at time instant $t$ is defined to be the total number of connections admitted on the link in the interval $[t-1, t)$.

Our approach yields results very close to the exact values. In addition, we found our approach much less time-consuming than simulation, especially because the latter requires the averaging of a large number of independent simulation runs. To give an idea of the computational savings: for this experiment, on a DECstation 5000/133, our approach required around 6 seconds of execution time, while the 50-run simulation required around 25 seconds.

Class   Required bandwidth   Arrival rate   Average life time
1       30                   0.125          5
2       15                   0.5            1
3       50                   0.2            2
4       10                   0.1            2
5       40                   0.125          1
6       25                   0.5            0.5
7       30                   1.0            0.5
8       10                   0.0625         10
9       5                    1.0            0.2
10      50                   0.25           2

Table 1: Parameters of 10 classes using a link with total bandwidth of 200.

6 Numerical Results for a Tandem Network

Here, we validate our link independence assumption, manifested in equation (6) by the product term. We consider the simple 3-node tandem network depicted in Figure 3. Each link has a total bandwidth of 200. We assume that the 10 service classes, whose parameters are listed in Table 1, arrive at node 1. Node 3 is their destination node.


Figure 3: A 3-node tandem network.

Figure 9 shows the instantaneous network throughput obtained using our approach and using discrete-event simulation. Although the link dependency is strong in this situation, our approach only slightly underestimates the throughput. We expect more accurate results for general networks, where the link independence assumption is reasonable.

7 Numerical Results for NSFNET

In this section, we use our model to compare three route selection algorithms. We assume the use of the "per-connection" link scheduling algorithm and admission control described in Section 3. Under the assumption of adequate buffer space, we use the Knapsack equation (9) to compute the instantaneous class blocking probabilities. The required bandwidth of a connection is computed using equation (8); if the computed value is not an integer, it is rounded up to the next integer.

We consider the performance of the routing algorithms on the topology of the NSFNET backbone shown in Figure 4. All links have capacities of 600. For this experiment, the discrete-time step $\tau$ equals 0.1. The routing update period equals 5. We consider 39 services using the NSFNET backbone, with parameters as shown in Table 2. Services with the same traffic and end-to-end QoS parameters, but with different source/destination pairs, are grouped in the same row.


Figure 4: NSFNET backbone: 14 nodes, 21 bidirectional links, average degree 3.

(source, dest) pairs                               (R, m, b, D, eps)            (lambda, mu)   No. of services
(0,13),(1,13),(2,13),(3,13),(4,13),(5,13)          (30, 20, 0.4, 0.01, 1e-6)    (2, 1)         6
(0,13),(1,13),(2,13),(3,13),(4,13),(5,13)          (30, 20, 0.4, 0.01, 1e-6)    (2, 1)         6
(6,13)                                             (30, 10, 0.4, 0.01, 1e-6)    (2, 2)         1
(6,13)                                             (30, 10, 0.4, 0.01, 1e-6)    (2, 2)         1
(7,13),(8,13),(9,13),(10,13),(11,13)               (30, 10, 0.4, 0.01, 1e-6)    (1.8, 2)       5
(7,13),(8,13),(9,13),(10,13),(11,13)               (30, 10, 0.4, 0.01, 1e-6)    (1.8, 2)       5
(12,13)                                            (60, 20, 0.4, 0.01, 1e-6)    (0.3, 0.2)     1
(12,13)                                            (60, 20, 0.4, 0.01, 1e-6)    (0.3, 0.2)     1
(0,1)                                              (30, 20, 0.4, 0.01, 1e-6)    (2, 1)         1
(2,1),(3,1),(4,1),(5,1),(6,1)                      (30, 20, 0.4, 0.01, 1e-6)    (2, 1)         5
(7,1),(8,1),(9,1),(10,1),(11,1),(12,1),(13,1)      (30, 10, 0.4, 0.01, 1e-6)    (1.8, 2)       7

Total number of services = 39

Table 2: Parameters of the 39 services using the NSFNET backbone.

We assume a source node considers only the set of minimum-hop and minimum-hop+1 paths for connection routing. A path from the set is selected probabilistically according to path weights, as explained in Section 4. The first selection algorithm, referred to as SEL.HOP, defines the path weight as $1/H_p$. The second selection algorithm, referred to as SEL.UTIL, defines the path weight as $(1 - U_p)$, where $U_p$ is the maximum link utilization over the links on path $p$. The third selection

algorithm, referred to as SEL.UTIL_HOP, defines the path weight as $(1 - U_p)/H_p$. Figure 10 shows the instantaneous network throughput for the three routing algorithms. Figure 11 shows their instantaneous percentage of connections blocked. We observe that SEL.UTIL_HOP performs best, closely followed by SEL.UTIL, and then by SEL.HOP, which is much worse. Clearly, for this network configuration, choosing paths which are both under-utilized and short for routing new incoming connections is the best strategy. We note that this is consistent with the results in [4], where a route selection algorithm similar to SEL.UTIL_HOP was shown, using discrete-event simulation, to outperform other algorithms on a 5-node connection-oriented reservationless network.

8 Conclusions and Future Work

In this paper, we presented an iterative discrete-time formulation and numerical solution procedure that gives approximate, yet sufficiently accurate, performance measures for reservation-oriented networks. Our results indicate the computational advantages of our approach over discrete-event simulation. We have also applied our approach to compare different routing algorithms.

There are several areas for future work in using our approach to investigate various routing, scheduling, and admission control schemes on large networks. One area is to examine routing schemes that distinguish between different types of traffic (e.g., low-throughput voice and high-throughput video), computing a different set of routes for each type. We will examine the capability of the routing scheme to distribute connections of each type in a way that increases the network throughput, and also the responsiveness of the routing scheme to failures and repairs. Another area is to examine admission controls that block some connection setup requests even when their admission is feasible, possibly in order to reduce the chance of future blocking of connections of other types. In this case, blocking would occur at more states of the link schedulability region [37, 22]. Another area is to investigate policies for dividing the end-to-end QoS requirement among the links of a route other than the equal allocation policy we considered in this paper. These policies would take into account the current link loads as measured in the last routing update period. Other QoS requirements, such as packet loss, will also be considered.

References

[1] H. Ahmadi, J. Chen, and R. Guerin. Dynamic Routing and Call Control in High-Speed Integrated Networks. In Proc. Workshop on Systems Engineering and Traffic Engineering, ITC'13, pages 19-26, Copenhagen, Denmark, June 1991.
[2] D. Anick, D. Mitra, and M. Sondhi. Stochastic Theory of a Data Handling System with Multiple Sources. Bell Syst. Tech. J., 61:1871-1894, 1982.
[3] G. Ash, J. Chen, A. Frey, and B. Huang. Real-time Network Routing in a Dynamic Class-of-Service Network. In Proc. 13th ITC, Copenhagen, Denmark, 1991.


[4] S. Bahk and M. El Zarki. Dynamic Multi-path Routing and How it Compares with other Dynamic Routing Algorithms for High Speed Wide Area Networks. In Proc. SIGCOMM '92, pages 53-64, Baltimore, Maryland, August 1992.
[5] S-P. Chung, A. Kashper, and K. Ross. Computing Approximate Blocking Probabilities for Large Loss Networks with State-Dependent Routing. IEEE/ACM Transactions on Networking, 1(1):105-115, February 1993.
[6] S-P. Chung and K. Ross. Reduced Load Approximations for Multirate Loss Networks. IEEE Transactions on Communications, 41(8):1222-1231, August 1993.
[7] J. Cobb and M. Gouda. Flow Theory: Verification of Rate-Reservation Protocols. In Proc. IEEE International Conference on Network Protocols '93, San Francisco, California, October 1993.
[8] D. Comer and R. Yavatkar. FLOWS: Performance Guarantees in Best Effort Delivery Systems. In Proc. IEEE INFOCOM, Ottawa, Canada, pages 100-109, April 1989.
[9] A. Elwalid and D. Mitra. Effective Bandwidth of General Markovian Traffic Sources and Admission Control of High-Speed Networks. IEEE/ACM Transactions on Networking, 1(3):329-343, June 1993.
[10] D. Ferrari. Real-Time Communication in Packet-Switching Wide-Area Networks. Technical Report TR-89-022, International Computer Science Institute, Berkeley, California, May 1989.
[11] D. Ferrari. Client Requirements for Real-Time Communication Services. IEEE Communications Magazine, 28(11), November 1990.
[12] J. Filipiak. Modeling and Control of Dynamic Flows in Communication Networks. New York: Springer-Verlag, 1988.
[13] F. Le Gall and J. Bernussou. An Analytical Formulation for Grade of Service Determination in Telephone Networks. IEEE Transactions on Communications, COM-31(3):420-424, March 1983.
[14] M.R. Garzia and C.M. Lockhart. Nonhierarchical Communications Networks: An Application of Compartmental Modeling. IEEE Transactions on Communications, 37:555-564, June 1989.
[15] A. Girard. Routing and Dimensioning in Circuit-Switched Networks. Addison-Wesley Publishing Company, 1990.
[16] A. Girard and M. Bell. Blocking Evaluation for Networks with Residual Capacity Adaptive Routing. IEEE Transactions on Communications, COM-37:1372-1380, 1989.
[17] R. Guerin, H. Ahmadi, and M. Naghshineh. Equivalent Capacity and its Application to Bandwidth Allocation in High-Speed Networks. IEEE J. Select. Areas Commun., SAC-9(7):968-981, September 1991.
[18] S. Gupta. Performance Modeling and Management of High-Speed Networks. PhD thesis, University of Pennsylvania, Department of Systems, 1993.
[19] S. Gupta, K. Ross, and M. ElZarki. Routing in Virtual Path Based ATM Networks. In Proc. GLOBECOM '92, pages 571-575, 1992.
[20] S. Gupta, K. Ross, and M. ElZarki. On Routing in ATM Networks. In Proc. IFIP TC6 Task Group/WG6.4 Workshop on Modeling and Performance Evaluation of ATM Technology. H. Perros, G. Pujolle, and Y. Takahashi (Editors). Elsevier Science Publishers B.V., Amsterdam, The Netherlands, 1993.
[21] R.-H. Hwang. Routing in High-Speed Networks. PhD thesis, University of Massachusetts, Department of Computer Science, May 1993.
[22] J. Hyman, A. Lazar, and G. Pacifici. A Separation Principle Between Scheduling and Admission Control for Broadband Switching. IEEE J. Select. Areas Commun., SAC-11(4):605-616, May 1993.
[23] G. Kesidis, J. Walrand, and C.-S. Chang. Effective Bandwidths for Multiclass Markov Fluids and Other ATM Sources. IEEE/ACM Transactions on Networking, 1(4):424-428, August 1993.
[24] D. Kincaid and W. Cheney. Numerical Analysis: Mathematics of Scientific Computing. Brooks/Cole Publishing Company, 1991.
[25] L. Kleinrock. Communication Nets: Stochastic Message Flow and Delay. New York: McGraw Hill, 1964.

17

[26] L. Kleinrock. Queueing Systems, volume I and II. New York: Wiley, 1976. [27] W-C Lau and S-Q Li. Trac Analysis in Large-Scale High-Speed Integrated Networks: Validation of Nodal Decomposition Approach. In Proc. IEEE INFOCOM '93, San Francisco, California, 1993. [28] I. Matta and A.U. Shankar. On the Interaction between Gateway Scheduling and Routing. In Proc. International Workshop on Modeling, Analysis and Simulation of Computer and Telecommunications Systems - MASCOTS '94, Durham, North Carolina, January 1994. Available by anonymous ftp at ftp.cs.umd.edu:pub/MaRS. [29] I. Matta and A.U. Shankar. Type-of-Service Routing in Dynamic Datagram Networks. In Proc. IEEE INFOCOM '94, Toronto, Ontario, Canada, June 1994. To appear. Available by anonymous ftp at ftp.cs.umd.edu:pub/MaRS. [30] D. Mitra, R. Gibbens, and B. Huang. Analysis and Optimal Design of Aggregated-Least-BusyAlternative Routing on Symmetric Loss Networks with Trunk Reservation. In Proc. 13th ITC, Copenhagen, Denmark, 1991. [31] D. Mitra and J. Seery. Comparative Evaluations of Randomized and Dynamic Routing Strategies for Circuit-Switched Networks. IEEE Transactions on Communications, 39(1):102{116, January 1991. [32] R. Nagarajan, J. Kurose, and D. Towsley. Local Allocation of End-to-End Quality-of-Service in HighSpeed Networks. In Proc. IFIP TC6 Workshop on Modelling and Performance Evaluation of ATM Technology, page 2.2, January 1993. [33] Y. Ohba, M. Murata, and H. Miyahara. Analysis of Interdeparture Processes for Bursty Trac in ATM Networks. IEEE J. Select. Areas Commun., 9(3):468{476, April 1991. [34] A. Parekh. A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks. Technical Report LIDS-TR-2089, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, 1992. [35] J. Roberts. A Service System with Heterogeneous User Requirements. In Performance of Data Communications Systems and Their Applications, pages 423{431. G. Pujolle (Editor). North-Holland Publishing Company, 1981. [36] D.C. Verma, H. Zhang, and D. Ferrari. Delay Jitter Control for Real{Time Communication in a Packet{ Switching Network. In Proc. IEEE TRICOMM, Chapel Hill, North Carolina, pages 35{43, April 1991. [37] J. Wieselthier, C. Barnhart, and A. Ephremides. Optimal Admission Control in Circuit-Switched Multihop Radio Networks. In Proc. 31st Conference on Decision and Control, Tucson, Arizona, December 1992.

18

Algorithm 1: Proposed evaluation method.

begin
1.  Given the network topology and services, determine for each link l the set of classes using it, along with their local QoS requirements.
2.  For each link l, determine its schedulability region S^l (* as described in Section 3 *).
3.  Initialize x^l(0) for each link l (* x^l(0) = 0 for an empty initial network *).
4.  Initialize z_c^l(0) arbitrarily for every link l and c ∈ Γ^l.
5.  For n = 0, 1, 2, ...
    begin
6.      Using x^l(n) for all l, and possibly other information, compute the routing pattern R(n) as defined by the routing algorithm (* more details in Section 4 *).
7.      From R(n), determine the arrival rates of setup requests to every link l.
8.      For i = 0, 1, ..., 1/τ - 1
        begin
9.          For each link l, compute x^l(n + (i+1)τ) using equations (6) and (7).
        end
    end
end

Algorithm 2: Details of the instantaneous blocking probabilities computation (step 9 above).

begin
1.  For each link l
    begin
2.      Solve the multi-dimension Markov chain on S^l using the z_c^l(n + iτ) to obtain B~_c^l(n + iτ) for all c ∈ Γ^l.
3.      Compute, for every c ∈ Γ^l:  new_z_c^l(n + iτ) = x_c^l(n + iτ) / (1 - B~_c^l(n + iτ)).
4.      If, for some c ∈ Γ^l, new_z_c^l(n + iτ) is not close to z_c^l(n + iτ)
        begin
5.          z_c^l(n + iτ) <- new_z_c^l(n + iτ) for every c ∈ Γ^l.
6.          Go to step 2.
        end
    end
7.  For each link l, compute x^l(n + (i+1)τ) using equation (6).
end

Figure 5: Proposed evaluation method.
NO OF CONNECTIONS vs Time 4

No of connections

3.5

3

2.5

2

Simulation (50 runs) Simulation (1000 runs) Our approach

1.5 5

10

15

20

25 30 Time

35

40

45

50

Figure 6: Number of connections versus time. One link, 10 classes.

UTILIZATION vs Time 0.6 Simulation (50 runs) Simulation (1000 runs) Our approach

0.55

Utilization

0.5 0.45 0.4 0.35 0.3 0.25 0.2 5

10

15

20

25 30 Time

35

40

45

Figure 7: Utilization versus time. One link, 10 classes. 20

50

THROUGHPUT vs Time Simulation (50 runs) Simulation (1000 runs) Our approach

4.5

Throughput

4

3.5

3

2.5 5

10

15

20

25 30 Time

35

40

45

50

Figure 8: Throughput versus time. One link, 10 classes.

THROUGHPUT vs Time Simulation (1000 runs) Our approach

4.5

Throughput

4

3.5

3

2.5 5

10

15

20

25 30 Time

35

40

45

50

Figure 9: Throughput versus time for the 3-node tandem network. 21

THROUGHPUT vs Time 63 SEL.HOP SEL.UTIL SEL.UTIL_HOP

62.8 62.6

Throughput

62.4 62.2 62 61.8 61.6 61.4 61.2 61 5

10

15

20

25 30 Time

35

40

45

50

Figure 10: Throughput versus time for the NSFNET backbone.

PERCENT BLOCKING vs Time 12 SEL.HOP SEL.UTIL SEL.UTIL_HOP

11.8 11.6 Percent blocking

11.4 11.2 11 10.8 10.6 10.4 10.2 10 5

10

15

20

25 30 Time

35

40

45

50

Figure 11: Percent blocking versus time for the NSFNET backbone. 22