Application of artificial intelligence to improve quality of service in ...

Neural Comput & Applic (2012) 21:81–90 DOI 10.1007/s00521-011-0622-6

ORIGINAL ARTICLE

Application of artificial intelligence to improve quality of service in computer networks

Iftekhar Ahmad · Joarder Kamruzzaman · Daryoush Habibi



Received: 7 July 2009 / Accepted: 3 May 2011 / Published online: 22 May 2011
© Springer-Verlag London Limited 2011

Abstract Resource sharing between book-ahead (BA) and instantaneous request (IR) reservation often results in high preemption rates for ongoing IR calls in computer networks. High IR call preemption rates cause interruptions to service continuity, which is considered detrimental in a QoS-enabled network. A number of call admission control models have been proposed in the literature to reduce preemption rates for ongoing IR calls. Many of these models use a tuning parameter to achieve a certain preemption rate. This paper presents an artificial neural network (ANN) model to dynamically control the preemption rate of ongoing calls in a QoS-enabled network. The model maps the current network traffic parameters and the operating preemption rate desired by the network operator into an appropriate value of the tuning parameter. Once trained, the model automatically estimates the tuning parameter value necessary to achieve the desired operating preemption rate. Simulation results show that the preemption rate attained by the model closely matches the target rate.

Keywords Neural networks · Computer networks · Quality of service · Call preemption

I. Ahmad (✉) · D. Habibi
School of Engineering, Edith Cowan University, Joondalup, WA, Australia
e-mail: [email protected]

J. Kamruzzaman
School of Computing and Information Technology, Monash University, Melbourne, VIC, Australia

1 Introduction

For the past decades, quality of service (QoS) provisioning has been a major research topic, mainly because of the increasing demand for multimedia and distributed applications. Resource reservation is one of the most widely practiced techniques used to ensure guaranteed QoS for applications. Two types of reservation have been proposed by researchers: (i) book-ahead (BA) reservation and (ii) instantaneous request (IR) reservation. Multimedia and distributed applications with long durations, high bandwidth demands, and time-sensitive significance are good candidates for book-ahead reservation [1–5]. In BA reservation, resources are reserved well in advance, from the announced starting time over the declared duration, to ensure that the application will not experience resource scarcity at the point of its activation. In contrast, an IR call requests immediate reservation and use of resources. Resource sharing between BA and IR requests poses a number of challenges. One of them is to keep the preemption rate of ongoing IR calls very low, as preemption is considered an interruption to service continuity and is perceived as a serious issue from the users' QoS point of view [6, 7]. In this paper, a novel application of ANN is shown to maintain service continuity of ongoing IR calls in a QoS-enabled network with provision for BA reservation.

Artificial neural networks (ANNs) have been a useful tool for solving a number of complex problems related to quality of service (QoS) provisioning in communication research. Neural networks have been successfully used to solve problems such as routing, resource allocation, call admission control, traffic pattern estimation, packet loss estimation, and network parameter control. Chou and Wu [8] proposed a neural network–based model that adaptively


adjusts network parameters such as threshold, push-out probability, and the incremental bandwidth size of a virtual path to maintain guaranteed QoS in ATM networks. Kumar et al. [9] demonstrated that the reduction in assigned resources needed to maintain guaranteed QoS under network overload conditions could be carried out in real time with the assistance of neural networks. Rovithakis et al. [10] proposed a method that uses neural networks as a controller to map application-level QoS parameters for multimedia services into appropriate values of the media characteristics, in order to achieve the required user satisfaction without violating the available bandwidth constraints. Tong et al. [11] investigated the application of a multilayer perceptron (MLP) network and a radial basis function (RBF) network to estimate the packet loss rate, an important QoS parameter in a computer network. The application of a Hopfield neural network (HNN) to dynamic channel allocation (DCA) in cellular radio networks was shown by Lazaro et al. [12], who demonstrated that an HNN used for DCA improved both the call blocking and dropping rates. Cheng et al. [13] used an ANN to map domain knowledge on traffic control onto the parameters of a fuzzy logic system, and demonstrated that the ANN in conjunction with the fuzzy logic system performed better than other call admission controllers in a high-speed multimedia network. An MLP network with the Levenberg–Marquardt training algorithm was used by Benjapolakul et al. [14] to estimate the equivalent bandwidth demanded for QoS. Leng et al. [15] used neural networks for call admission control (CAC) and demonstrated that an ANN used for CAC produced better results than schemes such as the peak bandwidth allocation scheme and the cell loss ratio scheme. Lee et al. [16] proposed a scheme in which the cell loss rate was predicted more accurately using neural networks in real-time networks.
The predicted value was then used in source rate control to maintain QoS in an ATM network. In this work, we use an ANN model to estimate the value of the tuning parameter of a previously proposed call admission control scheme so as to achieve a target level of operating IR call preemption rate.

A number of researchers have proposed models to keep the IR call preemption rate low in a QoS-enabled network where resources are shared between IR and BA calls. Greenberg et al. [1] proposed an approximate interrupt probability–based admission control scheme and showed that resource sharing between IR and BA calls achieves better network performance than the strict partitioning of resources proposed in [17, 18]. Schelen and Pink [2] proposed a model to reduce the number of preemptions by introducing the concept of look-ahead time. Look-ahead time is defined as the preallocation time, i.e., the time at which to start setting aside resources for advance reservations so that there is no resource scarcity at the starting time of a BA


call. Ahmad et al. [3] proposed a dynamic look-ahead time (DLAT)-based call admission control model that considers network dynamicity and attains better performance than the constant look-ahead time (CLAT)-based CAC model. Lin et al. [4] proposed an application-aware look-ahead time–based admission control scheme that considers different look-ahead times for different applications.

Preemption rate is a key parameter in a QoS-enabled network. A high preemption rate causes service degradation on the users' side. From the network perspective, a high preemption rate causes extra overhead for re-routing, heavy signaling, and traffic congestion, while a very low preemption rate causes low utilization of network capacity. The desired operating preemption rate is thus not a fixed value for all operating conditions in a particular network. It is governed by the interests of both users and the network provider and is best treated as an optimization problem. If most of the ongoing applications are highly sensitive to service interruption, the preemption rate should be kept very low, and vice versa. As a result, the operating preemption rate should adapt to changing network conditions. The CLAT model uses the look-ahead time as the tuning parameter to achieve different values of preemption rate, while the DLAT model uses a tuning parameter ‘c’. The model proposed in [4] cannot adjust itself to achieve different values of preemption rate, as it uses no tuning parameter. A model that maps the desired operating preemption rate and the current network traffic parameters into the tuning parameter, which when fitted into a CAC model attains the desired preemption rate in a real-time network, would therefore be extremely useful. The relationship between the tuning parameter, the preemption rate, and the other network traffic parameters in the DLAT model is too complex to determine using a nonlinear regression approach, which is restricted to a limited number of network parameters. This makes a neural network a suitable candidate for this problem, because of its capability of realizing any complex input–output relationship to an arbitrary degree of accuracy. This paper proposes an artificial neural network model to map the desired operating preemption rate and traffic parameters to the tuning parameter of the DLAT-based CAC model. Simulation results show that the proposed model successfully achieves the target preemption rate under different network conditions.

The rest of the paper is organized as follows. In Sect. 2, we present the DLAT model of call admission control; DLAT is used as the standard CAC scheme in this paper because it gives the highest level of performance. In Sect. 3, we present the proposed ANN model. Simulation results are provided in Sect. 4. Section 5 presents concluding remarks and directions for further research.


2 Dynamic look-ahead time (DLAT)-based model

A BA call request is characterized by two additional parameters, starting time and call holding time, along with other QoS parameters. If the request is granted, the application is guaranteed the required resources at its starting time. In contrast, the starting time of an IR call is immediate and its holding time is open ended. A problem occurs when a BA call becomes active at some point during the lifetime of an ongoing IR call and there are not enough available resources to support the BA call (Fig. 1). Ongoing IR calls must then be preempted to meet the BA bandwidth demand. The IR call preemption rate, i.e., the ratio of preempted IR calls to accepted IR calls, should be reasonably small in a QoS-enabled network, and it is the responsibility of the CAC model to keep it low. The dynamic look-ahead time (DLAT)-based CAC model calculates the look-ahead time taking the dynamicity of the traffic pattern and network state into consideration. It considers the current IR load, future BA load, IR call release rate, variation in load, and arrival rate, and it dynamically updates the look-ahead time at regular time intervals. The look-ahead time is calculated by the following equation [3]:

LAT(t, s) = max(A(s) + R(t) + (1 + μ)σ(τ_IR) − C, 0) / [(1 − β) τ_IR λ_IR (1 + σ(λ_IR)/c)]    (1)

Here, LAT(t, s) is the look-ahead time with respect to the traffic condition at current time t and BA activation time s (t < s). A(s) is the aggregate bandwidth reserved for BA calls to be activated at time s, R(t) is the aggregate bandwidth used by IR calls at time t, τ_IR is the mean bandwidth demand of IR calls, λ_IR is the mean arrival rate of IR calls, μ is the normalized BA limit that determines the maximum allowable aggregate BA load, β is the call blocking probability for IR calls, σ(·) is the standard deviation, and c (>1) is the tuning parameter. The value of

Fig. 1 Preemption of IR calls in BA reservation (IR calls share link capacity with BA Calls 1–3; BA activations force preemption of ongoing IR calls)

‘c’ affects the look-ahead time, which in turn varies the preemption rate. The look-ahead time (LAT) is calculated at regular intervals of operating time, and IR calls are checked against the following rule at call admission time:

C > max_{s ∈ LAT} (r + R + A(s))    (2)

where r is the bandwidth demand of the requesting IR call.

The look-ahead time is computed at each interval for a number of entries in the book-ahead table. Only those entries are taken into the calculation for which the following rule is satisfied:

t > s_i − (A(s_i) − A(t)) / ((1 − β) τ_IR λ_IR)    (3)

The right-hand side is conservatively computed on the worst-case assumption that the network is completely utilized. In summary, the algorithm is as follows:

Step 1: At time t of a time interval, for all entries s_i in the book-ahead table that satisfy (3), follow Step 2.
Step 2: Calculate the look-ahead time LAT(t, s_i).
Step 3: Find the LAT(t, s_i) for which t > s_i − LAT(t, s_i) and t − (s_i − LAT(t, s_i)) is minimum.
Step 4: If no such LAT(t, s_i) is found, set LAT to zero and go to Step 6.
Step 5: Otherwise, set LAT equal to that LAT(t, s_i).
Step 6: Quit the algorithm and return to Step 1 when the next interval is due.

As mentioned earlier, the DLAT model can successfully achieve a low preemption rate in a QoS-enabled network. However, the desired operating level of preemption rate is subject to change depending on users' demand and network management. For example, if at a certain operating period 80% of the calls in the network are sensitive to service interruption, it is important for the network enterprise to keep the preemption rate below 0.2 to ensure that the QoS guarantee is properly maintained; otherwise, excessive interruption of guaranteed service will result in user dissatisfaction. Similarly, if the number of QoS-sensitive applications is small at a certain operating period, the network can operate at a reasonably higher preemption rate. To assess and reflect changes in network condition, interval-based monitoring is often used; it effectively determines the up-to-date traffic conditions and the desired operating preemption rate for the next operating interval.
To achieve the desired but changing preemption rate in a real-time network using a model like DLAT, it is important to devise an intelligent model that automatically responds to the changing network condition and sets the tuning parameter value accordingly, so that the desired level of operating preemption rate is duly maintained. This work devises such a model using a neural network, as described in the following section.
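The interval-based selection in Steps 1–6, together with Eq. (1), can be sketched in Python. This is a hedged illustration, not the authors' code: the helper names and toy values are ours, and the grouping of the denominator in Eq. (1) is an assumption made while repairing the extracted formula.

```python
import math

def lookahead_time(A_s, R_t, sigma_tau, tau_IR, lam_IR, sigma_lam,
                   mu, beta, C, c):
    """Eq. (1) as reconstructed here; the grouping of the denominator
    terms is an assumption made while repairing the extracted formula."""
    num = max(A_s + R_t + (1.0 + mu) * sigma_tau - C, 0.0)
    den = (1.0 - beta) * tau_IR * lam_IR * (1.0 + sigma_lam / c)
    return num / den

def select_lat(t, book_ahead, lat_of):
    """Steps 1-6: among book-ahead entries s_i whose pre-allocation point
    s_i - LAT(t, s_i) has already passed, pick the one it passed most
    recently; return 0 if there is none (Step 4)."""
    best, best_gap = 0.0, math.inf
    for s_i in book_ahead:
        lat = lat_of(s_i)                  # Step 2
        gap = t - (s_i - lat)              # Step 3: t - (s_i - LAT(t, s_i))
        if 0 < gap < best_gap:
            best, best_gap = lat, gap      # Step 5
    return best

# Toy check: the entry starting at s=100 with LAT 30 has passed its
# pre-allocation point (70) at t=80, so its LAT is selected.
print(select_lat(80.0, [100.0, 200.0], lambda s: 30.0))  # → 30.0
```

In a real implementation, `lat_of` would evaluate Eq. (1) against the current traffic state for each qualifying book-ahead entry.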


3 ANN-based model of look-ahead time

3.1 The proposed model

A network is characterized by parameters such as the mean bandwidth demand of BA calls τ_BA, mean bandwidth demand of IR calls τ_IR, mean arrival rate of BA calls λ_BA, mean arrival rate of IR calls λ_IR, mean call holding time of BA calls T_BA, mean call holding time of IR calls T_IR, BA limit μ, and the IR call preemption rate ρ. The nature of the distributions of bandwidth demand, arrival rate, and call holding time can be found from proper periodic traffic analysis. The preemption rate is a complex nonlinear function of the network parameters. The aim of this work is to find the value of the tuning parameter ‘c’ that, when fitted into (1), provides the desired operating preemption rate. The variable network parameters, as shown in Fig. 2, are the inputs to the ANN, which produces the tuning parameter ‘c’ as its output. Parameters μ, τ_IR, λ_IR, τ_BA, λ_BA, T_IR, T_BA, and ‘c’ are the inputs to the DLAT model, which in turn provides the look-ahead time LAT as its output. New calls are checked against the LAT according to (2). The ANN is trained with a wide range of network parameters for different network conditions. The training data set is created by simulating the DLAT model as follows: for a particular set of τ_IR, τ_BA, λ_IR, λ_BA, T_IR, T_BA, μ, and tuning parameter ‘c’, the simulation is run for 2.5 × 10^6 s, calculating the look-ahead time at 10-s intervals. At the end of each simulation, the number of preempted calls is counted and the preemption rate is determined. To cover a large spectrum of network-operating points, the parameters were varied over a wide range. For each combination of parameters, the value of ‘c’ was varied from 1 to 21 (c = 21 provides an acceptably low preemption rate for the different traffic parameters in our simulation), yielding different preemption rates. This procedure generates a data set in which each set of traffic parameters and preemption rate is associated with a ‘c’ value, as necessary to train an ANN model. Once the ANN model is trained with a sufficient amount of training data, the network is expected to estimate the appropriate value of ‘c’ in response to the network state.
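The data-set generation loop described above can be sketched as follows. This is an illustrative sketch only: `simulate_preemption_rate` is a hypothetical stand-in for the real DLAT simulator, and the parameter grid values are ours, not the paper's.

```python
import itertools
import random

def simulate_preemption_rate(params, c):
    """Hypothetical stand-in for the DLAT simulation run described above
    (2.5e6 s with LAT recomputed every 10 s); a toy monotone model replaces
    the real simulator purely for illustration."""
    load = params["lambda_IR"] * params["tau_IR"]
    return max(0.0, load / (2.0 * c) - 0.001 * random.random())

random.seed(1)
grid = {                          # illustrative values, not the paper's
    "tau_IR": [0.128, 0.256],     # mean IR bandwidth demand (Mbps)
    "lambda_IR": [0.2, 0.5],      # mean IR arrival rate (calls/s)
    "mu": [0.3, 0.6],             # BA limit
}
dataset = []
for values in itertools.product(*grid.values()):
    params = dict(zip(grid, values))
    for c in range(1, 22):        # c varied from 1 to 21 as in the text
        rho = simulate_preemption_rate(params, c)
        # ANN training pair: (traffic parameters + achieved rho) -> c
        dataset.append(({**params, "rho": rho}, c))

print(len(dataset))  # → 168 (8 parameter combinations x 21 c values)
```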

3.2 Learning algorithms

Neural networks are a class of nonlinear models that can approximate any nonlinear function to an arbitrary degree of accuracy and have been used to realize complex input–output mappings in different domains. The most commonly used neural network architecture is the multilayer feedforward network. It consists of an input layer, an output layer, and one or more intermediate layers called hidden layers. All the nodes at each layer are connected to each node at the upper layer by interconnection strengths called weights. A training algorithm is used to attain a set of weights that minimizes the difference between the target and the actual output produced by the network. For supervised learning, backpropagation [19] is the most commonly used algorithm to train multilayer feedforward networks. In this study, we experimented with two improved variants of the backpropagation algorithm: scaled conjugate gradient (SCG) and backpropagation with Bayesian regularization (BR).

3.2.1 Scaled conjugate gradient (SCG)

In multilayer feedforward training, weights are updated iteratively to map a set of input vectors (x_1, x_2, …, x_p) to a set of corresponding output vectors (y_1, y_2, …, y_p). The input x_p is presented to the network and multiplied by the weights. The weighted inputs to each unit of the upper layer are summed to produce the output, governed by the following equations:

y_p = f(W_o h_p + θ_o)    (4)
h_p = f(W_h x_p + θ_h)    (5)

where W_o and W_h are the output and hidden layer weight matrices, h_p is the vector denoting the response of the hidden layer for pattern p, θ_o and θ_h are the output and hidden layer bias vectors, respectively, and f(·) is the sigmoid

Fig. 2 Proposed ANN model (inputs: BA limit μ, mean bandwidth demands τ_BA and τ_IR, mean arrival rates λ_BA and λ_IR, mean call holding times T_BA and T_IR, and IR call preemption rate ρ; the ANN outputs the tuning parameter c, which the DLAT CAC model maps to the look-ahead time LAT)


activation function. The cost function to be minimized is the sum of squared errors, defined as

E = (1/2) Σ_p (t_p − y_p)^T (t_p − y_p)    (6)

where t_p is the target output vector for pattern p. The algorithm uses the gradient descent technique to adjust the connection weights between neurons. Denoting the fan-in weights to a single neuron by a weight vector w, its update in the t-th epoch is governed by

Δw_t = −η ∇E(w)|_{w=w_t} + α Δw_{t−1}    (7)
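Equations (4)–(6) amount to the following forward pass and per-pattern error. The layer sizes here are illustrative only (8 inputs and 1 output match the paper's model; the hidden size and random weights are our assumptions):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# One-pattern forward pass of Eqs. (4)-(5) and the error term of Eq. (6).
rng = np.random.default_rng(0)
W_h = rng.normal(scale=0.5, size=(5, 8))   # hidden-layer weights (5x8)
W_o = rng.normal(scale=0.5, size=(1, 5))   # output-layer weights (1x5)
theta_h = np.zeros(5)                       # hidden-layer biases
theta_o = np.zeros(1)                       # output-layer bias

x_p = rng.normal(size=8)                    # input pattern
h_p = sigmoid(W_h @ x_p + theta_h)          # Eq. (5)
y_p = sigmoid(W_o @ h_p + theta_o)          # Eq. (4)

t_p = np.array([0.7])                       # target for this pattern
E_p = 0.5 * (t_p - y_p) @ (t_p - y_p)       # Eq. (6), single-pattern term
print(y_p.shape)  # → (1,)
```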

The parameters η and α are the learning rate and the momentum factor, respectively. The error surface in such training usually contains long ravines with sharp curvature and a gently sloping floor, which cause slow convergence. In conjugate gradient methods, a search is performed along conjugate directions, which generally produces faster convergence than steepest descent directions [20]. In steepest descent search, each new direction is perpendicular to the old one; the approach to the minimum is a zigzag path, and one step can be mostly undone by the next. In conjugate gradient methods, a new search direction spoils as little as possible the minimization achieved by the previous one, and the step size is adjusted in each iteration. The general procedure for determining the new search direction is to combine the new steepest descent direction with the previous search direction so that the current and previous search directions are conjugate. Conjugate gradient techniques are based on the assumption that, for a general nonquadratic error function, the error in the neighborhood of a given point is locally quadratic. The weight changes in successive steps are given by

w_{t+1} = w_t + α_t d_t    (8)
d_t = −g_t + β_t d_{t−1}    (9)

with

g_t ≡ ∇E(w)|_{w=w_t}    (10)

β_t = (g_t^T g_t)/(g_{t−1}^T g_{t−1})  or  β_t = (Δg_{t−1}^T g_t)/(g_{t−1}^T d_{t−1})  or  β_t = (Δg_{t−1}^T g_t)/(g_{t−1}^T g_{t−1})    (11)

where d_t and d_{t−1} are the conjugate directions in successive iterations. The step size is governed by the coefficient α_t, and the search direction is determined by β_t. In scaled conjugate gradient, the step size α_t is calculated by

α_t = −(d_t^T g_t)/δ_t    (12)
δ_t = d_t^T H_t d_t + λ_t ||d_t||^2    (13)

where λ_t is the scaling coefficient and H_t is the Hessian matrix at iteration t. λ is introduced because, for a nonquadratic error function, the Hessian matrix need not be positive definite; in that case, without λ, δ may become negative and the weight update may increase the error function. With a sufficiently large λ, the modified Hessian is guaranteed to be positive (δ > 0). However, for large values of λ, the step size becomes small. If the error function is not quadratic or δ < 0, λ can be increased to make δ > 0. For the case δ < 0, Møller [21] suggested the appropriate scaled coefficient λ̄_t to be

λ̄_t = 2(λ_t − δ_t/||d_t||^2)    (14)

The rescaled value δ̄_t of δ_t is then expressed as

δ̄_t = δ_t + (λ̄_t − λ_t)||d_t||^2    (15)

The scaling coefficient also needs adjustment to validate the local quadratic approximation. The measure of quadratic approximation accuracy, Δ_t, is expressed by

Δ_t = 2{E(w_t) − E(w_t + α_t d_t)}/(α_t d_t^T g_t)    (16)
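Putting Eqs. (8)–(15) together, a simplified SCG-style loop might look as follows. This is an illustrative sketch on a quadratic test function, not Møller's full algorithm: the Δ_t-based adjustment of λ is omitted, the Fletcher–Reeves form of Eq. (11) is used, and the Hessian–vector product is obtained by finite differences of the gradient:

```python
import numpy as np

# Quadratic test problem E(w) = 0.5 w^T A w - b^T w, so grad E = A w - b.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, 2.0])

def grad(w):
    return A @ w - b

w = np.zeros(2)
g = grad(w)
d = -g                      # initial search direction (steepest descent)
lam = 1e-4                  # scaling coefficient lambda_t
for _ in range(50):
    # Hessian-vector product via a finite difference of the gradient,
    # then delta_t = d^T H d + lambda_t ||d||^2 as in Eq. (13)
    eps = 1e-6
    Hd = (grad(w + eps * d) - g) / eps
    delta = d @ Hd + lam * (d @ d)
    if delta <= 0:
        # make delta positive by rescaling lambda_t, Eqs. (14)-(15)
        lam_bar = 2 * (lam - delta / (d @ d))
        delta = delta + (lam_bar - lam) * (d @ d)
        lam = lam_bar
    alpha = -(d @ g) / delta             # step size, Eq. (12)
    w = w + alpha * d                    # weight update, Eq. (8)
    g_new = grad(w)
    beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves form of Eq. (11)
    d = -g_new + beta * d                # new conjugate direction, Eq. (9)
    g = g_new
    if np.linalg.norm(g) < 1e-10:
        break

print(np.allclose(A @ w, b, atol=1e-6))  # → True (A w = b at the minimum)
```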

If Δ_t is close to 1, the approximation is a good one and the value of λ_t can be decreased [22]. On the contrary, if Δ_t is small, the value of λ_t has to be increased. Some prescribed values suggested in [21] are as follows:

For Δ_t > 0.75, λ_{t+1} = λ_t/2
For Δ_t < 0.25, λ_{t+1} = 4λ_t
Otherwise, λ_{t+1} = λ_t

3.2.2 Bayesian regularization (BR)

A desired neural network model should produce small error not only on sample data but also on out-of-sample data. To produce a network with better generalization ability, MacKay [23] proposed a method that constrains the size of the network parameters by regularization. The regularization technique forces the network to settle to a set of weights and biases with smaller values, which makes the network response smoother and less likely to overfit [20] and capture noise. In the regularization technique, the cost function F is defined as

F = γ E_D + (1 − γ) E_W    (17)

where E_D is the same as E defined in (6), E_W = ||w||^2/2 is the sum of squares of the network parameters, and γ (<1.0) is the performance ratio parameter, whose magnitude dictates the emphasis of training on regularization. A large γ drives the error E_D to a small value, whereas a small γ emphasizes parameter size reduction at the


expense of error, yielding a smoother network response. One approach for determining the optimum regularization parameter automatically is the Bayesian framework [23]. It considers a probability distribution over the weight space, representing the relative degrees of belief in different values for the weights. The weight space is initially assigned some prior distribution. Let D = {x_m, t_m} be the data set of input–target pairs, m being a label running over the pairs, and M a particular NN model. After the data are taken, the posterior probability distribution of the weights p(w|D, γ, M) is given by Bayes' rule:

p(w|D, γ, M) = p(D|w, γ, M) p(w|γ, M) / p(D|γ, M)    (18)

where p(w|γ, M) is the prior distribution, p(D|w, γ, M) is the likelihood function, and p(D|γ, M) is a normalization factor. In the Bayesian framework, the optimal weights should maximize the posterior probability p(w|D, γ, M), which is equivalent to minimizing the regularized objective function F in (17). The performance ratio parameter is optimized by applying Bayes' rule:

p(γ|D, M) = p(D|γ, M) p(γ|M) / p(D|M)    (19)

If we assume a uniform prior distribution p(γ|M) for the regularization parameter γ, then maximizing the posterior probability is achieved by maximizing the likelihood function p(D|γ, M). Since all probabilities have a Gaussian form, it can be expressed as

p(D|γ, M) = (π/γ)^{N/2} [π/(1 − γ)]^{L/2} Z_F(γ)    (20)

where L is the total number of parameters in the NN. Supposing that F has a single minimum as a function of w at w* and has the shape of a quadratic function in a small area surrounding that point, Z_F is approximated as [23]

Z_F ≈ (2π)^{L/2} det^{−1/2}(H) exp(−F(w*))    (21)

where H = γ∇²E_D + (1 − γ)∇²E_W is the Hessian matrix of the objective function. Substituting (21) into (20), the optimum value of γ at the minimum point can be determined. Foresee and Hagan [24] proposed applying the Gauss–Newton approximation to the Hessian matrix, which can be conveniently implemented if the Levenberg–Marquardt optimization algorithm [25] is used to locate the minimum point. This minimizes the additional computation required for regularization.
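The effect of the performance ratio γ in Eq. (17) can be illustrated numerically. In this sketch a linear model stands in for the MLP so that the minimizer of F has a closed form; the synthetic data and the specific γ values are our assumptions, not the paper's experiment:

```python
import numpy as np

# F = gamma*E_D + (1-gamma)*E_W (Eq. 17) for a linear model y = X w:
#   grad F = -gamma X^T (t - X w) + (1 - gamma) w = 0
# gives the closed-form minimizer solved below.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
w_true = np.array([2.0, -1.0, 0.5])
t = X @ w_true + 0.1 * rng.normal(size=30)   # noisy synthetic targets

def fit(gamma):
    return np.linalg.solve(gamma * X.T @ X + (1 - gamma) * np.eye(3),
                           gamma * X.T @ t)

w_data = fit(0.99)   # large gamma: emphasis on data error E_D
w_reg = fit(0.01)    # small gamma: emphasis on weight decay E_W
print(np.linalg.norm(w_reg) < np.linalg.norm(w_data))  # → True
```

As the text states, a large γ drives E_D down (w_data stays close to the data-fitting solution), while a small γ shrinks the parameter vector toward zero.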

4 Simulation results

Both the training and testing data were collected from the simulations carried out by Ahmad et al. [3]. For the simulation, we used a network topology with 15 nodes and 26 links, each link having a capacity of 20 Mbps. Call arrivals and lifetimes were modeled by a Poisson distribution, and the bandwidth demand of connections followed a uniform distribution. A data set of 880 samples was collected and used for training the ANN model. Each sample comprises the parameter values for the mean bandwidth demand of BA calls τ_BA, mean bandwidth demand of IR calls τ_IR, mean arrival rate of BA calls λ_BA, mean arrival rate of IR calls λ_IR, mean call holding time of BA calls T_BA, mean call holding time of IR calls T_IR, BA limit μ, tuning parameter c, and the corresponding IR call preemption rate ρ. The data set covers a wide range of combinations of network and traffic parameters. Eighty-five percent of the data set was used for training, and the remaining 15% was used to test the ability of the model to produce the correct value of the tuning parameter ‘c’. The scaled conjugate gradient (SCG) and Bayesian regularization (BR) backpropagation algorithms were used for training. The network has 8 inputs and 1 output (Fig. 2). The experiment was conducted with different combinations of hidden layers and hidden units (e.g., 5 layers with 40, 25, 15, 10, 5 neurons; 5 layers with 60, 40, 25, 15, 10 neurons; 4 layers with 40, 25, 15, 10 neurons; 4 layers with 60, 30, 20 neurons; 3 layers with 40, 20, 10 neurons; and 3 layers with 25, 15, 10 neurons). Three hidden layers consisting of 40, 25, and 15 neurons, respectively, were found to provide the best match on the test data, measured in terms of the mean squared error calculated over the target and estimated ‘c’ values. Since the performance of an ANN depends on the initial weights and other learning parameters, a number of trials were conducted; each trial was continued for 40,000 epochs unless terminated at a predefined mean squared error. For the simulation of this model, we used the standard ANN tools in MATLAB version 6.1 (MathWorks, USA).

4.1 Accuracy of c parameter estimation in the proposed ANN model

Table 1 shows the mean squared error, the maximum deviation with respect to the target ‘c’, and the mean error on the test data for the best trial of the SCG and BR algorithms. The table indicates that the BR algorithm is more suitable for the estimation of ‘c’ values (3.4% less mean error) than the SCG algorithm.

Table 1 Accuracy of estimation in the ANN model

Training algorithm    Mean square error    Maximum deviation (%)    Mean % error
SCG                   0.0764               12.7                     2.51
BR                    0.0610               11.2                     1.88
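The 85/15 train/test split described above can be sketched as follows; the random placeholder values stand in for the collected samples, and the feature layout (8 inputs plus the target ‘c’) follows the text:

```python
import numpy as np

# 85/15 train/test split of the 880-sample data set described above.
rng = np.random.default_rng(42)
data = rng.normal(size=(880, 9))       # placeholder for the collected samples
idx = rng.permutation(880)
n_train = 880 * 85 // 100              # 748 training samples
train, test = data[idx[:n_train]], data[idx[n_train:]]
print(train.shape[0], test.shape[0])   # → 748 132
```

Note that 15% of 880 is exactly the 132 test samples mentioned later in the text.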


The results on the test data for the SCG and BR algorithms are also reported in Fig. 3. In almost 47% of the test data, the ‘c’ value estimated by the BR algorithm lies within a 1% error margin with respect to the target value, while for the SCG algorithm around 38% of the test data lie within 1% error. In 23% of the test data, the ‘c’ value estimated by the BR algorithm differs from the target by an error margin in the range 1.0–2.5%, while in almost 21% of cases the error lies within the range 2.5–5.0%. Around 5% of the outputs deviate by more than 5% from the target value. The data causing high deviation correspond mainly to higher target values of ‘c’ (c > 11.0), for which the preemption rate approaches very close to zero and larger ‘c’ values result in very small differences in preemption rate. In only 0.7% of the test data does the ‘c’ value estimated by the BR algorithm deviate 10% or more from its actual value. From Table 1 and Fig. 3, it can be concluded that the BR algorithm is best suited to the current problem; hence, the rest of the simulation results presented in this section are based on the BR algorithm. A potential problem in learning is unsmoothness of the trained weights, which may contribute to a network's poor generalization performance. The Bayesian regularization technique incorporates the magnitude of the weight values into the objective function to be minimized, leading to a network with less variation among the trained weights and thus better performance.

The ‘c’ value estimated by the ANN model with the BR algorithm is then fed to the DLAT model. Because of the deviation of the ANN-estimated value from the target value, network performance parameters like preemption rate, utilization, and call blocking rate also deviate from their desired values. Further investigation was done to assess this impact. Since the test set is quite large (132 samples), a certain number of test samples were chosen for presentation following the frequency distribution shown in Fig. 3. These samples, their target and predicted ‘c’ values, and their deviations are shown in Fig. 4: sample numbers 1–4 (<1%), 5–6 (1–2.5%), 7–8 (2.5–5%), 9 (5–7.5%), 10 (7.5–10%), and 11 (>10%), where the numbers in brackets indicate the error range.

4.2 Impact of c parameter estimation on network performance

In the previous section, we discussed the accuracy of c parameter estimation in the proposed ANN model. In this section, we show its impact on network performance parameters such as call preemption rate, network utilization, and call blocking rate. Figure 4 shows that the actual value of ‘c’ provided by the ANN model is very close to the target value. The impact of the actual and target values of ‘c’ on the preemption rate is reported in Fig. 5. In the worst case, for sample #11, which represents the ANN-estimated ‘c’ value with the maximum percentage error (11.2%), the difference in preemption rate is found to be 2.29%. For most of the data, the achieved preemption rate matches very closely the preemption rate corresponding to the target value of ‘c’. The resource utilization corresponding to the sample data is shown in Fig. 6; the difference in utilization achieved by the target and actual values of ‘c’ is less than 0.1% for all representative samples. For further investigation of the suitability of the proposed ANN model, a complete set of data with mean BA bandwidth demand 1.25 Mbps, mean IR bandwidth demand 128 kbps, mean call duration of BA and IR calls 300 s, mean IR arrival interval 2.78 s, mean BA arrival interval 70 s, and BA limit μ varying from 0.1 to 1.0 was separated from the training set and tested in the trained ANN model. The aim was to investigate the performance of the model over a whole set of BA limits. Figure 7 shows that, for all BA limits, the actual output of the ANN model closely matches the target value of ‘c’. The maximum difference in ‘c’ value was found for BA limit

Fig. 3 Frequency distribution of percentage error on test data

Fig. 4 Comparison of ‘c’ values on test data

Fig. 5 Comparison of preemption rate on test data

Fig. 7 Comparison of preemption rate on fixed data set
0.8, which amounts to 2% of the target value of 15. The impact of the deviation of the estimated ‘c’ value from its target, in terms of preemption rate, utilization, and call blocking probability, is reported in Figs. 8, 9 and 10. The average deviation (ρ_target − ρ_actual)/ρ_target in preemption rate is 1.01%, while the maximum deviation is 3.6%, at BA limit 0.8. The target and actual values of utilization and call blocking probability are found to match very closely: the average deviations observed in utilization and call blocking rate are 0.0154 and 0.018%, respectively.
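The deviation measure used above can be written as a small helper. The function names are illustrative; the arrays would hold the target and achieved preemption rates (or utilizations) over the ten BA limits:

```python
import numpy as np

def percent_deviation(target, actual):
    """Signed percentage deviation (target - actual) / target * 100,
    computed element-wise, e.g. per BA limit."""
    target = np.asarray(target, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return (target - actual) / target * 100.0

def summarize_deviation(target, actual):
    """Average and worst-case absolute percentage deviation."""
    dev = np.abs(percent_deviation(target, actual))
    return dev.mean(), dev.max()
```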

Fig. 6 Comparison of network utilization on test data

Fig. 8 Comparison of preemption rate on fixed data set

4.3 Result analysis of desired operating preemption rates

The motivation of this work is to devise a model that is capable of estimating the value of ‘c’ so that the target operating preemption rate is achieved under all possible network operating conditions. To assess whether the proposed ANN model is capable of achieving the target preemption rate closely, simulation was done with a random target value of preemption rate. The value of ‘c’ produced


Fig. 9 Comparison of network utilization on fixed data

by the model was then fed into the DLAT model, and the actual preemption rate attained was determined. A comparison of the target and achieved preemption rates is shown in Figs. 11 and 12. Two different target values were

Fig. 10 Comparison of call blocking rate on fixed data

Fig. 11 Comparison of preemption rate for l = 0.1–0.5

chosen: 0.002 for BA limit l = 0.1–0.5 and 0.03 for BA limit l = 0.6–1.0. From the figures, it is clearly evident that the ANN model achieves the target preemption rate closely. The maximum mismatch occurs at BA limit l = 0.7, where the deviation is 2.5%, which is small considering network dynamicity. At all other BA limits, the error lies well within the range of 0–2.5%.
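The evaluation loop just described — fix a desired operating preemption rate, let the trained ANN estimate the tuning parameter ‘c’, feed that ‘c’ to the DLAT-based CAC model, and compare the attained rate against the target — can be sketched as below. Both `ann_estimate_c` and `dlat_preemption_rate` are hypothetical placeholders standing in for the trained network’s inference and the DLAT simulation, respectively:

```python
def evaluate_target_rate(ann_estimate_c, dlat_preemption_rate,
                         target_rate, ba_limits):
    """For each BA limit, estimate 'c' for the desired preemption
    rate and measure the rate the DLAT model actually attains."""
    results = []
    for limit in ba_limits:
        c = ann_estimate_c(target_rate, limit)      # ANN inference
        achieved = dlat_preemption_rate(c, limit)   # DLAT simulation
        error_pct = abs(target_rate - achieved) / target_rate * 100.0
        results.append((limit, c, achieved, error_pct))
    return results
```

With a perfect estimator, the error column would be zero; in the experiments above, it stays within roughly 2.5%.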

5 Conclusion

This paper demonstrates the use of neural networks in modeling the look-ahead time in BA reservation for controlling the preemption rate of ongoing IR calls in a QoS-enabled network. The data set used to train the model was created by simulating the previously proposed DLAT-based CAC model with different traffic and tuning parameter values that govern the preemption rate. Once trained, the ANN model can estimate the value of the tuning parameter under all network operating conditions to achieve the desired operating preemption rate. Result analysis demonstrates a close match between the target and actual values of the desired preemption rate, utilization, and call blocking probability. In response to desired preemption rates, the proposed ANN model estimates c parameter values with a mean square error of 0.061 relative to the target c values, which, when fitted into the DLAT model, result in preemption rates that closely match the desired operating preemption rates (less than 1% deviation in over 90% of cases) set by the network operator. Other network performance parameters, such as network utilization and call blocking rate, do not deviate much from the target scenario because of the ability of the proposed ANN model to accurately estimate c parameter values. The proposed model is highly suitable for practical large-area networks where network load varies

Fig. 12 Comparison of preemption rate for l = 0.6–1.0

from time to time, yet QoS needs to be maintained. The ANN model, when used in conjunction with the DLAT model, can further improve the latter’s potential to maintain QoS by appropriately controlling the IR call preemption rate.
