A Nonlinear-Predictive QoS-Promoted Dynamic Bandwidth Allocation Scheme for Triple-play Services in Ethernet Passive Optical Networks

Jan-Wen Peng1,2, Chung-Ju Chang2, and Po-Ling Tien2

1 Chunghwa Telecom Laboratories, Taipei, 100, Taiwan.
2 Dept. of Electrical Engineering, National Chiao Tung University, Hsinchu, 300, Taiwan.
E-mail: [email protected]; [email protected]; [email protected]
Supported by NSC-95-2221-E-009-052-MY3.

Abstract—This paper proposes a nonlinear-predictive QoS-promoted dynamic bandwidth allocation (NQ-DBA) scheme for upstream transmission in Ethernet passive optical network (EPON) systems. The proposed NQ-DBA scheme divides incoming packets of voice, video, and data services into six priorities, and packets with little room left before their QoS requirements are violated are dynamically promoted to a higher priority. For each ONU, it predicts the packets arriving during a prediction interval using a nonlinear pipelined recurrent neural network (RNN) with an extended recursive least squares (ERLS) update, so that the bandwidth allocation is more up-to-date and therefore more accurate. Simulation results show that the proposed NQ-DBA scheme achieves about 4% higher system utilization than the DBAM scheme and fulfills the video packet dropping probability requirement that DBAM fails to meet.

Index Terms—Ethernet passive optical network (EPON), extended recursive least squares (ERLS), nonlinear-predictive QoS-promoted dynamic bandwidth allocation (NQ-DBA), recurrent neural network (RNN).

I. INTRODUCTION

The EPON system has to support triple-play services in today's optical access networks and must fulfill the quality-of-service (QoS) requirements of each service. Kramer, Mukherjee, and Pesavento [1] proposed the IPACT mechanism for upstream multiple access of ONUs; it assigned bandwidth by polling all ONUs' demands, but QoS criteria, such as voice/video packet delay and packet dropping probability, were not considered. A traffic-class burst-polling based delta dynamic bandwidth allocation (TCBP-DDBA) scheme was studied in [2]. This scheme not only reallocated extra bandwidth to heavily loaded ONUs to relieve under-utilization but also provided QoS guarantees to delay-sensitive services. However, it did not allocate bandwidth to each class individually, which resulted in the longest delay for non-delay-sensitive services when the traffic intensity was heavy and the ONU did not arrange the transmission well. Several upstream scheduling algorithms with predictors, as in [3]-[5], were proposed to address both QoS and efficiency. Luo and Ansari [3], [4] proposed a dynamic bandwidth allocation with multiple services (DBAM) scheme and a limited sharing with traffic prediction (LSTP) scheme. The DBAM scheme adopted limited bandwidth allocation (LBA) and employed a linear estimation credit to predict the traffic arriving at each ONU queue during the cycle time.

The ONUs applied priority queueing to buffer the packets, and a maximum transmission window was pre-assigned to each ONU. The LSTP scheme used the same upper-bounded maximum window for the ONUs as the DBAM, but employed a linear predictor with a least-mean-squares (LMS) update algorithm in the ONUs to predict the arriving data traffic. Yin et al. [5] proposed a nonlinear prediction-based dynamic bandwidth allocation (NLPDBA) scheme to improve upstream transmission efficiency and prediction accuracy. The NLPDBA used the same LMS update algorithm as the LSTP scheme and controlled the estimated result for stability. However, when the predictor underestimates or overestimates the traffic, packets may be dropped or bandwidth may be wasted.

In this paper, we propose a nonlinear-predictive QoS-promoted dynamic bandwidth allocation (NQ-DBA) scheme for upstream transmission in EPON. The NQ-DBA provides QoS enhancement to the ONUs, and the QoS can be guaranteed in advance, especially when the traffic intensity is larger than 0.8. The proposed NQ-DBA mechanism treats packet transmissions differently according to service type. Real-time service is delay sensitive, so the priority of its packets is raised if they would violate the QoS requirements at the beginning of the next cycle. Similarly, the non-real-time service raises its priority when its waiting bound is violated, since data packets should not experience a delay so long that the service falls into starvation. Besides, a nonlinear pipelined recurrent neural network (RNN) with an extended recursive least squares (ERLS) [6] update is adopted as the predictor to precisely predict the packets arriving at each ONU during the next prediction interval. The predictor has good nonlinear prediction capability and fast convergence, and we have successfully employed a nonlinear pipelined RNN to predict the interference variation of a DS-CDMA/PRMA system [7]. Simulation results show that the proposed NQ-DBA fulfills the video packet dropping probability requirement whereas the DBAM scheme fails.

The rest of the paper is organized as follows. Section II describes the system model. Section III introduces the NQ-DBA scheme, including the QoS-promoted operation, the nonlinear predictor, and the dynamic bandwidth assignment procedure. Simulation results and discussions are presented in Section IV. Finally, concluding remarks are given in Section V.

II. SYSTEM MODEL

Fig. 1 shows the upstream transmission between the NQ-DBA at the optical line terminal (OLT) and the optical network


units (ONUs) in an EPON system. Assume that there are one OLT and M ONUs connected through a 1:M splitter in the EPON system. The EPON system supports real-time voice, real-time video, and non-real-time data services. In ONUi, three queues, denoted by Q0,i, Q1,i, and Q2,i, 1≤i≤M, are provided to store voice, video, and data packets, respectively. Incoming packets from users are put into the corresponding queue at the ONU according to their service type. An arriving packet is dropped if its queue is full, and a queued packet is discarded if its QoS requirement is violated.

Fig. 1. Upstream transmission between OLT and ONUs in an EPON system.

A GATE (REPORT) message is sent from the OLT (ONU) to the ONU (OLT) to manage the transmission between the OLT and the ONUs. Assume that the EPON system is at the present cycle (n−1), which starts when the OLT sends the GATE message for ONU1 at cycle (n−1), denoted by G1(n−1), and ends when the OLT sends G1(n) for the next cycle n. The NQ-DBA assumes a prediction interval for ONUi at cycle (n−1), denoted by Ti(n−1), 1≤i≤M, which starts when the OLT receives the first packet from ONUi with the (n−1)th REPORT message (at time Ti in Fig. 1) and ends when the OLT receives the first packet of ONUi with the nth REPORT message (at time TM+i in Fig. 1), so that Ti(n−1) = TM+i − Ti. The Ti(n−1) is not necessarily equal to Tj(n−1), ∀n, when i ≠ j, 1≤i, j≤M. Assume that the NQ-DBA knows when TM+i will occur. The GATE message for ONUi at cycle (n−1), Gi(n−1), is a set {G0,i(n−1), G1,i(n−1), G2,i(n−1)}, where Gm,i(n−1) denotes the granted bandwidth for the type-m service traffic in ONUi, m ∈ {0, 1, 2}. When the ONU receives the GATE message from the OLT, it transmits packets in its assigned timeslot and piggybacks a REPORT message at the end of the transmission. The REPORT message from ONUi to the OLT at cycle (n−1), denoted by Ri(n−1), is a set {L0,i(n−1), L1,i(n−1), L2,i(n−1), Ldp,i(n−1), Ld,i(n−1), Lw,i(n−1)}. The Lm,i(n−1) is the occupancy of queue Qm,i at cycle (n−1), and Ldp,i(n−1), Ld,i(n−1), and Lw,i(n−1) are the numbers of bytes that had better be transmitted in the next cycle, otherwise these packets will be dropped and/or their QoS requirements will be violated.

These QoS-related quantities are set and derived as follows. The Ldp,i(n−1) indicates the total amount of video packets that will violate the video packet delay requirement, denoted by Td*, if they are not transmitted in the next cycle. Denote Td,k as the delay time of the kth packet in Q1,i of ONUi, where k = 1 is the first packet in Q1,i. Denote x as the index of the packet with the least delay time that will violate the delay requirement before the next prediction interval, such that any video packet

queued before the xth packet will be dropped. Then x and Ldp,i(n−1) can be calculated by

x = \arg\min_k \{\, T_{d,k} + T_i(n-1),\ \forall k,\ \text{and}\ T_{d,k} + T_i(n-1) > T_d^* \,\},   (1)

L_{dp,i}(n-1) = \sum_{k=1}^{x} S_{k,1},   (2)

where S_{k,1} is the size in bytes of the kth packet in Q1,i, 1≤i≤M.

The Ld,i(n−1) represents the total amount of video packets that should be transmitted in the next cycle, otherwise the video packet delay requirement will be violated and the requirement on the video packet dropping probability, denoted by Pd*, cannot be kept. An observation moving time window is adopted to calculate the video packet dropping probability. Assume it contains the latest N output video packets of ONUi, which have been dropped or transmitted, or are going to be dropped or transmitted in the next cycle. Also assume that Nd of these N video packets have been dropped so far. From (1), there are x video packets already waiting in the queue Q1,i that will violate the delay requirement. Thus, a number of packets among these x packets, denoted by y, must be transmitted, otherwise the video packet dropping probability requirement Pd* will be violated. Then y can be obtained by

y = \left\lceil \left( N_d + x - N \times P_d^* \right)^{+} \right\rceil,   (3)

where (a)^+ = a if a ≥ 0 and (a)^+ = 0 if a < 0, and \lceil b \rceil denotes the smallest integer not less than b. Then Ld,i(n−1) can be derived by

L_{d,i}(n-1) = \sum_{k=1}^{y} S_{k,1}.   (4)
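To make the REPORT-field derivation above concrete, the following Python sketch evaluates (1)-(4) for one ONU's video queue Q1,i; the list-of-tuples queue representation and all function and parameter names are illustrative assumptions, not part of the scheme's specification.

from math import ceil

def video_report_fields(q1, T_i, Td_star, Pd_star, N, Nd):
    """Evaluate (1)-(4) for one ONU's video queue Q1,i.

    q1      : list of (delay_s, size_bytes) tuples, head of queue first (k = 1)
    T_i     : prediction interval Ti(n-1) in seconds
    Td_star : video packet delay requirement Td*
    Pd_star : video packet dropping probability requirement Pd*
    N       : number of video packets in the observation window
    Nd      : video packets already dropped within the window
    """
    # (1): index of the last queued packet whose delay would exceed Td*
    # if it were not served before the next prediction interval.
    x = 0
    for k, (delay, _) in enumerate(q1, start=1):
        if delay + T_i > Td_star:
            x = k
    # (2): bytes of the x delay-violating packets, L_dp,i(n-1).
    L_dp = sum(size for _, size in q1[:x])
    # (3): y of those packets must be sent so that the window dropping
    # probability stays within Pd*; (a)+ = max(a, 0).
    y = ceil(max(Nd + x - N * Pd_star, 0.0))
    # (4): bytes of those y head-of-line packets, L_d,i(n-1).
    L_d = sum(size for _, size in q1[:y])
    return x, L_dp, y, L_d

The Lw,i(n−1) field of (5)-(6) below can be computed in the same way from the data-queue waiting times Tw,k and the threshold Tw*.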

The Lw,i(n−1) is the total amount of data packets whose waiting time will exceed a starvation-threshold time, denoted by Tw*, by the next cycle. It tells the OLT how much bandwidth is required for the data packets in Q2,i to prevent starvation. Denote Tw,k as the waiting time of the kth data packet in Q2,i of ONUi, 1≤i≤M. The packets whose waiting time exceeds the starvation-threshold time Tw* had better be served in the next cycle. Denote their number by z; then z and Lw,i(n−1) are given by

z = \arg\min_k \{\, T_{w,k} - T_w^*,\ \forall k,\ \text{and}\ T_{w,k} - T_w^* > 0 \,\},   (5)

L_{w,i}(n-1) = \sum_{k=1}^{z} S_{k,2},   (6)

where S_{k,2} is the size in bytes of the kth packet in Q2,i, 1≤i≤M.

III. NQ-DBA SCHEME

The NQ-DBA begins the prediction procedure for the ith ONU when it receives the ith ONU's REPORT message of the present cycle (n−1), Ri(n−1), 1≤i≤M. It adopts a nonlinear recurrent neural network with an extended recursive least squares learning algorithm for the prediction. When the predictions for all ONUs have been accomplished, the NQ-DBA performs the bandwidth allocation. The bandwidth allocation is according to


the REPORT messages and the predicted packet arrivals of the service traffic of all ONUs. Then the NQ-DBA at the OLT sends the GATE message for the next cycle, Gi(n), to ONUi, 1≤i≤M.

A. Nonlinear Predictor

Consider the prediction of the packets arriving during the prediction interval of ONUi at the present cycle (n−1), Ti(n−1), which is also an updating period. The nonlinear predictor has to estimate the packet arrival rate of the type-m service traffic of ONUi during Ti(n−1), defined as the ratio of the actual arriving packets to the updating period and denoted by δm,i(n−1). From Fig. 1, the actual packet arrival rate over the prediction interval Ti(n−2), λm,i(n−2), can be obtained from the reported queue occupancy and the granted bandwidth. Let Am,i(n−2) be the amount of packets actually arriving at Qm,i during the previous prediction interval Ti(n−2). At prediction interval Ti(n−1), the NQ-DBA can obtain Am,i(n−2) from the reported queue occupancies Lm,i(n−1) and Lm,i(n−2) and the granted bandwidth Gm,i(n−1). If Lm,i(n−1) > 0, the amount of actual arriving packets Am,i(n−2) can be determined exactly. If Lm,i(n−1) = 0, the actual arrivals are assumed to be uniformly distributed over the interval. Thus Am,i(n−2) is obtained by

A_{m,i}(n-2) = \begin{cases} G_{m,i}(n-1) - L_{m,i}(n-2) + L_{m,i}(n-1), & \text{if } L_{m,i}(n-1) > 0, \\ \left[ G_{m,i}(n-1) - L_{m,i}(n-2) \right] / 2, & \text{if } L_{m,i}(n-1) = 0. \end{cases}   (7)
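A minimal sketch of the arrival measurement in (7) and of the rate used in the next paragraph, assuming all quantities are kept in bytes and seconds; the function and argument names are illustrative.

def measured_arrival(G_prev, L_prev, L_curr, T_prev):
    """Estimate A_{m,i}(n-2) by (7) and the rate lambda_{m,i}(n-2).

    G_prev : granted bandwidth G_{m,i}(n-1) in bytes
    L_prev : reported queue occupancy L_{m,i}(n-2) in bytes
    L_curr : reported queue occupancy L_{m,i}(n-1) in bytes
    T_prev : previous prediction interval T_i(n-2) in seconds
    """
    if L_curr > 0:
        # Queue never emptied, so the arrivals are fully observable.
        A = G_prev - L_prev + L_curr
    else:
        # Queue emptied at some point; assume uniformly spread arrivals.
        A = (G_prev - L_prev) / 2.0
    lam = A / T_prev  # packet arrival rate over T_i(n-2)
    return A, lam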

Then the packet arrival rate λm,i(n−2) can be derived as Am,i(n−2)/Ti(n−2).

For the implementation of the nonlinear packet-arrival-rate predictor, a fully connected recurrent neural network (RNN) structure with R neurons and p + q + R input nodes could be adopted [6]. However, its computational complexity is rather high, so a pipelined structure is considered here for its computational efficiency. In our design, depicted in Fig. 2, the nonlinear predictor is the pipelined RNN predictor of [8]. The predicted value of δm,i(n−1) is determined from the p previously measured packet arrival rates λm,i(k), n−p−1 ≤ k ≤ n−2, and the q prediction errors em,k(j), n−q−1 ≤ j ≤ n−2, where em,k(j) = λm,i(j) − δm,i(j). The nonlinear predictor yields a one-step predicted value of the packet arrival rate, which can be expressed as

δ_{m,i}(n-1) = H\!\left( λ_{m,i}(n-2), \ldots, λ_{m,i}(n-p-1);\ δ_{m,i}(n-2), \ldots, δ_{m,i}(n-q-1) \right),   (8)

where H(·) is an unknown nonlinear function to be determined. The nonlinear predictor consists of q levels of processing units; each level has an RNN module with N neurons, (d+N+1) input nodes, and a comparator, where d = p−q+1 and q×N = R. The first d input nodes of the kth neural network module are the external inputs, i.e., the delayed signals λm,i(n−k−1) to λm,i(n−d−k); the (d+1)th input node is a constant bias, always set to unity; the (d+2)th input node is the output of the first neuron of the (k+1)th module, yk+1,1(n−2), if k ≠ q, or the feedback signal from the first neuron's output of module q in cycle (n−3), yq,1(n−3), if k = q; and the remaining (N−1) input nodes are fed back from the outputs of neurons 2 to N of the same module, yk,2(n−2) to yk,N(n−2). The weight of the connection from the jth input node to the kth neuron is denoted by wk,j(n−2), 1≤j≤d+N+1, 1≤k≤N.

Fig. 2. Nonlinear predictor structure.

The nonlinear predictor yields δm,i(n−1), which is the output of the first neuron of the first module, y1,1(n−2), given by the net sum of all its inputs:

δ_{m,i}(n-1) = \varphi\!\left( \sum_{j=1}^{d} w_{1,j}(n-2)\, λ_{m,i}(n-j-1) + w_{1,d+1} + w_{1,d+2}(n-2)\, y_{2,1}(n-2) + \sum_{j=d+3}^{d+N+1} w_{1,j}(n-2)\, y_{1,j-d-1}(n-3) \right),   (9)

where φ(·) is the monotonically increasing sigmoid activation function of each neuron, expressed as

\varphi(x) = \frac{1}{1 + \exp(-x)}.   (10)
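For concreteness, the following numpy sketch evaluates one module of the pipelined predictor using the input ordering described before (9) and the activation in (10); the weight-matrix layout and all names are illustrative assumptions. With the simulation parameters used later (N = 2, p = 4, q = 2), d = p − q + 1 = 3, so W is a 2 × 6 matrix.

import numpy as np

def sigmoid(x):
    # Neuron activation, eq. (10).
    return 1.0 / (1.0 + np.exp(-x))

def module_forward(W, lam_delayed, y_next_first, y_self_feedback):
    """Outputs of one RNN module of the pipelined predictor, cf. (9).

    W               : (N, d+N+1) synaptic weight matrix
    lam_delayed     : the d delayed external inputs lambda_{m,i}(.)
    y_next_first    : first-neuron output fed in from the (k+1)th module,
                      or the delayed y_{q,1}(n-3) for module q
    y_self_feedback : previous outputs of neurons 2..N of the same module
    Returns the N neuron outputs; for module 1, element 0 is delta_{m,i}(n-1).
    """
    x = np.concatenate([lam_delayed, [1.0], [y_next_first], y_self_feedback])
    return sigmoid(W @ x)

The q modules run in a pipeline, each operating on inputs delayed by one step relative to the next, and only the first neuron of the first module is taken as the prediction.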

The extended recursive least squares (ERLS) algorithm is applied as the learning algorithm of the nonlinear predictor. To reduce the complexity, all modules of the RNN are designed to have exactly the same synaptic weight matrix. Hence, each level of the nonlinear predictor acts as a sub-predictor, each RNN module has its own prediction error, and the prediction errors of all modules must be combined for the weight adjustment. The prediction error of each RNN module over the prediction interval Ti(n−2), em,k(n−2), 1≤k≤q, and the overall error of the nonlinear predictor, Em,i(n−2), are respectively defined as

e_{m,k}(n-2) = λ_{m,i}(n-k-1) - y_{k,1}(n-2),   (11)

E_{m,i}(n-2) = \sum_{k=1}^{q} \xi^{\,k-1} e_{m,k}^{2}(n-2),   (12)

where yk,1(n−2) is the output of the first neuron of the kth RNN module and ξ ∈ (0, 1] is the forgetting factor. The term ξ^(k−1) is an approximate measure of the memory of the individual modules in the predictor. The cost function of the ERLS is defined as

\varepsilon_{\mathrm{ERLS}}(n-2) = \sum_{k=1}^{n-2} \xi^{\,n-k-2} E_{m,i}(k).   (13)

The ERLS algorithm minimizes the cost function in (13) and then updates the weights of the neurons in the modules accordingly. When the nonlinear predictor has finished the prediction of the packet arrival rate during Ti(n−1), δm,i(n−1), the predicted queue occupancy of Qm,i at the end of Ti(n−1), denoted by Pm,i(n−1), can be obtained by

P_{m,i}(n-1) = L_{m,i}(n-1) + \left\lceil δ_{m,i}(n-1) \times T_i(n-1) \right\rceil.   (14)
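The error bookkeeping in (11)-(12) and the occupancy prediction in (14) can be assembled as in the sketch below; the ERLS weight update that minimizes (13) is omitted here, since its exact recursion follows [6]. Names and array conventions are assumptions for illustration.

import math
import numpy as np

def module_errors(lam_targets, y_first, xi=0.99):
    """Per-module errors (11) and the combined module error (12).

    lam_targets : measured rates lambda_{m,i}(n-k-1), k = 1..q
    y_first     : first-neuron outputs y_{k,1}(n-2),  k = 1..q
    xi          : forgetting factor in (0, 1]
    """
    e = np.asarray(lam_targets, dtype=float) - np.asarray(y_first, dtype=float)
    E = float(np.sum(xi ** np.arange(len(e)) * e ** 2))  # xi^(k-1) weighting
    return e, E

def predicted_occupancy(L_curr, delta, T_i):
    """Predicted occupancy P_{m,i}(n-1) of Q_{m,i} at the end of T_i(n-1), eq. (14)."""
    return L_curr + math.ceil(delta * T_i)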


B. QoS-Promoted DBA Procedure

The NQ-DBA scheme classifies the three types of service traffic from the ONUs into six priorities and provides a QoS promotion capability at the ONUs. The priorities, from highest to lowest, are: (1) voice packets; (2) video packets that would violate the packet dropping probability requirement; (3) video packets that would violate the delay requirement; (4) data packets that would violate the starvation-threshold requirement; (5) the remaining video packets; and (6) the remaining data packets. Packets are transmitted in accordance with their priorities, and packets that are about to violate their QoS requirement are promoted to a higher priority. The bandwidth granted to the service of any priority for ONUi is based on the predicted queue occupancy of Qm,i, Pm,i, given in (14), and on Ldp,i, Ld,i, and Lw,i given in (2), (4), and (6), respectively. As shown in Fig. 3, the bandwidth granted to the service of a given priority follows the ONUs' reported information for that priority if the residual bandwidth is no less than the sum of all ONUs' reported and predicted amounts for that priority. The NQ-DBA scheme allocates bandwidth in units of bytes to all ONUs, from the service of the highest priority to that of the lowest one successively, until all the bandwidth is used up.

Voice service has the highest priority because it is strictly delay sensitive; based on the predicted occupancy of Q0,i, P0,i, the bandwidth allocated to the voice packets of ONUi is set first. To guarantee the QoS requirement of video packets, the NQ-DBA secondly allocates bandwidth to the video packets that will be dropped if they are not transmitted in the next cycle. Based on the reported video packets with dropping probability and delay problems, Ld,i and Ldp,i, and on the residual bandwidth of the fiber link, the bandwidth allocated to the video packets of the second and third priorities can be decided. Then the NQ-DBA continues by allocating bandwidth to the fourth-priority data packets, whose waiting times exceed the waiting bound. Data packets are not dropped even if they wait in the queue for a long time, but data-packet starvation may occur; starvation means that the packets are still not transmitted after a long delay. To avoid starvation, the priority of data packets is raised when their waiting bound is violated. In the NQ-DBA scheme, such packets have a higher priority than video packets with non-violating delay times because the video packets that would be dropped have already been processed, so the QoS requirement of the remaining video packets is satisfied. Therefore, based on the reported amount of data packets facing starvation, Lw,i, and the residual bandwidth of the fiber link, the NQ-DBA allocates bandwidth to the data packets whose waiting time exceeds the starvation-threshold time Tw*, to ensure the QoS of the data service and avoid starvation. The NQ-DBA then allocates bandwidth to the unallocated fifth-priority video packets and the unallocated sixth-priority data packets, in that order. Finally, to make the best use of the bandwidth and further guarantee QoS, the NQ-DBA assigns the residual bandwidth to voice and video packets proportionally, based on their predicted queue occupancies, so as to use up

the bandwidth. When the final granted bandwidth for ONUi is determined, the OLT sends the GATE message, Gi(n), to ONUi, 1≤i≤M. Each ONU follows the information in the GATE message to transmit its own packets. Within the same ONU, if some bandwidth is left over at some queues, the remaining bandwidth can be reallocated to the other queues of that ONU.
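A much-simplified Python sketch of the priority-by-priority allocation described above and summarized in Fig. 3; per-ONU maximum windows, guard times, GATE/REPORT framing, and the final voice/video proportional top-up are omitted, and all names are illustrative assumptions.

def nq_dba_allocate(per_priority_requests, total_bw):
    """Grant assignment over the six priorities, highest first (simplified).

    per_priority_requests : list over priorities; each entry maps an ONU id to
                            the bytes requested for that priority
                            (reported amount plus prediction)
    total_bw              : upstream bytes available in the cycle
    Returns a dict of granted bytes per ONU.
    """
    grants = {}
    residual = total_bw
    for requests in per_priority_requests:
        demand = sum(requests.values())
        if demand == 0:
            continue
        if residual >= demand:
            # Enough room: every ONU gets its report plus prediction.
            for onu, req in requests.items():
                grants[onu] = grants.get(onu, 0) + req
            residual -= demand
        else:
            # Not enough room: share the residual proportionally and stop.
            for onu, req in requests.items():
                grants[onu] = grants.get(onu, 0) + residual * req // demand
            residual = 0
            break
    return grants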

Fig. 3. Nonlinear-predictive QoS-promoted DBA procedure: initialize the grants, collect the ONU information, run the pipelined RNN predictor, then allocate bandwidth step by step over the priorities, granting each priority its requirement plus prediction while the residual bandwidth suffices and otherwise sharing the residual bandwidth proportionally, and finally send the grants to the ONUs.

IV. SIMULATION RESULTS

An event-driven, packet-based simulation is conducted to compare the performance of the proposed NQ-DBA scheme, the NQ-DBA without prediction, and the DBAM scheme. In the simulations, one OLT is connected to 32 ONUs through a 1:32 splitter located 20 km from the OLT and 5 km from the ONUs. The transmission rate is 1 Gbps from the OLT to the ONUs and 100 Mbps from each ONU to the OLT. Each queue in an ONU is equipped with 1 Mbyte of buffer space. The guard time is set to 1 μs. For simulation convenience, the system cycle and the prediction interval of all ONUs are assumed to be fixed at 0.72 ms. Every simulation result includes 100 simulation cycles, each of which contains 1,000 updating periods. For the nonlinear pipelined RNN predictor, the parameters are selected as N = 2, p = 4, q = 2, and ξ = 0.99 [8].

The voice traffic source is modeled as a two-state Markov modulated deterministic process (MMDP) with transition rates α and β. The durations of talk spurts and silence periods are assumed to be exponentially distributed with means 1/α = 1 s and 1/β = 1.35 s, respectively. The voice traffic is packetized in the ONU into 70-byte frames, and voice packets are generated at a constant bit rate, one every 125 μs, during talk spurts (ON state). The highly bursty video and data packets are modeled by ON-OFF Pareto-distributed sources, with a typical mean ON period of 7.2 s and a "heaviness" of ρ = 1.4, and a typical mean OFF period of 10.5 s and a "heaviness" of ρ = 1.2. The packet sizes are uniformly distributed between 64 and 1,518 bytes. The traffic arrival rates for each ONU are set as follows: (i) voice: 4.48 Mbps; (ii) video: 0.55-15.55 Mbps; (iii) data: 0.28-7.67 Mbps.
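For reproducibility, the sketch below generates ON/OFF durations matching the source parameters above; interpreting the "heaviness" values as Pareto shape parameters and scaling each distribution to the stated mean is an assumption of this sketch rather than something the paper specifies, and numpy's Lomax-style pareto draws are shifted to the classical Pareto form.

import numpy as np

rng = np.random.default_rng()

def pareto_with_mean(shape, mean, size=None):
    """Classical Pareto samples with the given shape (> 1) and mean."""
    scale = mean * (shape - 1.0) / shape          # x_m such that E[X] = mean
    return (rng.pareto(shape, size) + 1.0) * scale

def video_on_off_periods(n):
    """ON/OFF durations (s) of the bursty video/data ON-OFF Pareto source."""
    on = pareto_with_mean(1.4, 7.2, n)            # mean ON 7.2 s, rho = 1.4
    off = pareto_with_mean(1.2, 10.5, n)          # mean OFF 10.5 s, rho = 1.2
    return on, off

def voice_on_off_periods(n):
    """Talk-spurt/silence durations (s) of the two-state MMDP voice source."""
    on = rng.exponential(1.0, n)                  # mean talk spurt 1/alpha = 1 s
    off = rng.exponential(1.35, n)                # mean silence 1/beta = 1.35 s
    return on, off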


Fig. 4. System utilization versus the traffic intensity.

The voice delay criterion is bounded by 1.5 ms of one-way transmission time in the access network, and the voice packet dropping probability is set to zero. The Td*, Pd*, and Tw* are set to 10 ms, 1%, and 500 ms, respectively. For the DBAM, the SLA is set to the same QoS requirements as in the NQ-DBA scheme.

Fig. 4 shows the system utilization versus the traffic intensity. Under the QoS constraints, the proposed NQ-DBA attains higher system utilization than the DBAM and the NQ-DBA without prediction, by about 4% and 2% on average, respectively. This is because the NQ-DBA and the NQ-DBA without prediction allocate bandwidth step by step over the six priorities, whereas the DBAM applies an LBA to each class in advance. Also, the NQ-DBA and the NQ-DBA without prediction dynamically promote some video packets cycle by cycle to avoid packet dropping due to delay-requirement violation, while the DBAM scheme just applies priority queueing. Moreover, the nonlinear predictor helps the NQ-DBA scheme further enhance the efficiency of the bandwidth allocation by providing a more precise estimation, and thus more accurately allocated bandwidth for the ONUs, than the NQ-DBA without prediction and the DBAM scheme with its linear prediction.

Fig. 5 illustrates the average video packet dropping probability versus the traffic intensity in the EPON. The video packet dropping probability requirement cannot be satisfied by the DBAM because it adopts an LBA, which sets an upper bound on the grants to video packets, and because of its inaccurate linear class-based traffic prediction. In contrast, the average video dropping probability of the NQ-DBA scheme and of the NQ-DBA without prediction is almost zero when the traffic intensity is below 0.8, because both schemes raise the priority of video packets that are about to violate the delay requirement. When the traffic intensity is larger than 0.8, the average video dropping probability exceeds the requirement due to the fiber-link capacity limitation.

Fig. 5. Average video packet dropping probability versus traffic intensity.

V. CONCLUSION

In this paper, a nonlinear-predictive QoS-promoted dynamic bandwidth allocation (NQ-DBA) algorithm is proposed for EPON upstream transmission. Specifically, the NQ-DBA promotes packets to higher priorities by QoS control and then performs the bandwidth allocation based on the priorities, from the highest to the lowest, which guarantees the QoS requirements of the real-time services while preserving the grade of service of the non-real-time packets. Moreover, thanks to the nonlinear predictor [7] with high accuracy and fast convergence, the NQ-DBA accurately predicts the bursty traffic and thus significantly improves the system efficiency. Simulation results show that the NQ-DBA scheme achieves the highest system throughput and guarantees the stringent QoS requirement of video packets, outperforming the DBAM and the NQ-DBA without prediction.

REFERENCES

[1] G. Kramer, B. Mukherjee, and G. Pesavento, "IPACT: A dynamic protocol for an Ethernet PON (EPON)," IEEE Commun. Mag., vol. 40, pp. 74-80, Feb. 2002.
[2] Y. Yang, J. Nho, N. P. Mahalik, K. Kim, and B. Ahn, "QoS provisioning in the EPON systems with traffic-class burst-polling based delta DBA," IEICE Trans. Commun., vol. 89-B, pp. 419-426, Jan. 2006.
[3] Y. Luo and N. Ansari, "Bandwidth allocation for multiservice access on EPONs," IEEE Commun. Mag., vol. 43, pp. S16-S21, Feb. 2005.
[4] Y. Luo and N. Ansari, "Limited sharing with traffic prediction for dynamic bandwidth allocation and QoS provisioning over Ethernet passive optical networks," OSA J. Opt. Netw., vol. 4, no. 9, pp. 561-572, Aug. 2005.
[5] S. Yin, Y. Luo, N. Ansari, and T. Wang, "Non-linear predictor-based dynamic bandwidth allocation over TDM-PONs: Stability analysis and controller design," in Proc. IEEE Int. Conf. Communications (ICC), May 2008, pp. 5186-5190.
[6] S. Haykin and L. Li, "Nonlinear adaptive prediction of nonstationary signals," IEEE Trans. Signal Process., vol. 43, pp. 526-535, Feb. 1995.
[7] C. J. Chang, B. W. Chen, T. Y. Liu, and F. C. Ren, "Fuzzy/neural congestion control for integrated voice and data DS-CDMA/FRMA cellular networks," IEEE J. Sel. Areas Commun., vol. 18, no. 2, pp. 183-293, Feb. 2000.
[8] J. Baltersee and J. A. Chambers, "Nonlinear adaptive prediction of speech using a pipelined recurrent neural network," IEEE Trans. Signal Process., vol. 46, no. 8, pp. 2207-2216, Aug. 1998.

978-1-4244-4148-8/09/$25.00 ©2009 This full text paper was peer reviewed at the direction of IEEE Communications Society subject matter experts for publication in the IEEE "GLOBECOM" 2009 proceedings.