A Predictive QoS Control Strategy for Wireless Sensor Networks

The 1st Workshop on Resource Provisioning and Management in Sensor Networks (RPMSN '05) in conjunction with the 2nd IEEE MASS, Washington, DC, Nov. 2005

A Predictive QoS Control Strategy for Wireless Sensor Networks

Biyu Liang1, Jeff Frolik2, and X. Sean Wang1
1 Department of Computer Science, University of Vermont, USA
2 Department of Electrical and Computer Engineering, University of Vermont, USA
{bliang, xywang}@cs.uvm.edu, [email protected]

Abstract

The number of active sensors in a wireless sensor network has been proposed as a measure, albeit limited, of quality of service (QoS), for it dictates the spatial resolution of the sensed parameters. In very large sensor network applications, the number of sensor nodes deployed may exceed the number required to provide the desired resolution. Herein we propose a method, dubbed predictive QoS control (PQC), to manage the number of active sensors in such an over-deployed network. The strategy is shown to obtain near-benchmark lifetime and variance performance in comparison to a Bernoulli benchmark, with the added benefit of not requiring the network to know the total number of sensors available. This benefit is especially relevant in networks where sensors are prone to failure due not only to energy exhaustion but also to environmental factors, and in networks where nodes are replenished over time. The method also has the advantage that only transmitting sensors need listen for QoS control information, enabling inactive sensors to operate at extremely low power levels.

Keywords: Wireless sensor networks, network spatial resolution, quality of service, distributed control

1. Introduction

Broadly distributed wireless sensor networks (WSNs) promise to provide environmental data with very high spatial and temporal resolution. Deployments are envisioned that may utilize tens or hundreds of thousands of sensor nodes for purposes such as monitoring high-risk forests [1], homeland security, and industrial systems. Much work to date has focused on adapting existing computer or wireless networking approaches to the energy-constrained realm in which WSNs must operate [2]. However, many of these architectures require all nodes to be able to perform routing, processing, etc., all of which increase node cost. Costs can be considered in terms of energy usage, weight, size, and reliability, all of which ultimately drive dollars per unit. In short, the overall system cost associated with making the nodes more capable and robust may become prohibitive for large-scale applications. Another consideration is that the environment itself may be harsh, causing sensors to fail at unpredictable rates. In either scenario, the networks may be initially over-deployed, in that the total number of sensors at the beginning of life exceeds the number requisite to provide the desired system performance. At the system level, network life is maximized when only the minimum number of sensors needed to achieve the application's data requirements are active. Thus, controlling the number of participating sensors can be equated to controlling the network's quality of service (QoS) (Iyer [4] and Frolik [5]). Herein, we consider a network architecture in which each node operates using local rules and minimal control information to achieve the overall system QoS requirement.

In this work, a Predictive QoS Control (PQC) strategy is proposed to manage the number of sensors active in large deployments. PQC is shown to approach a Bernoulli benchmark method proposed earlier [3] with respect to equality of participation, network life, and responsiveness to network dynamics. In the Bernoulli benchmark method, each sensor participates in the network with probability p = Q/N, where N is the total number of sensors available and Q is the QoS required. Explicit knowledge of N is requisite to implementation of the Bernoulli benchmark method. However, unless the network is continuously queried, N may be unknown due to individual nodes becoming depleted of energy or failing due to environmental or hardware causes. Moreover, querying a node requires that it be awake, which is energy inefficient, simply to learn whether the sensor could provide information if required.
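To make the benchmark concrete, the following is a minimal sketch (ours, not from [3]) of one epoch of the Bernoulli benchmark. Note that it needs the current N as an input, which is precisely the knowledge PQC avoids requiring.

```python
import random

def bernoulli_epoch(n_alive: int, q_desired: int) -> int:
    """One epoch of the Bernoulli benchmark: each of the n_alive sensors
    independently transmits with probability p = Q/N.  Returns the number
    of sensors that transmitted this epoch."""
    p = q_desired / n_alive          # requires explicit knowledge of N
    return sum(random.random() <= p for _ in range(n_alive))
```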

The introduced PQC methodology has neither constraint, giving it advantages over the Bernoulli benchmark method. The rest of this paper is organized as follows. Related work on QoS control is discussed in §2. We present the PQC protocol, algorithms, and analysis in §3. Results from a performance study and a discussion of those results are given in §4. We conclude the work in §5.

2. Related Work

Use of an automaton structure as a basis for a spatial resolution control strategy was first proposed in [4]. Employing a simple Goore game strategy, QoS was shown to be controllable up to Q0 ≤ N/2 without knowledge of N, but requiring all sensors to listen for control information. This constraint was removed by using a revised automaton in [5], such that only transmitting sensors were required to listen for control information. In addition, the new automaton was shown to enable QoS up to N. This latter automaton has been further analyzed, with the following results:
- The mean and variance of the achieved QoS can be controlled by selection of automaton parameters. There is a tradeoff between variance and network diversity [6].
- The mean and variance of QoS are scalable as network size increases. The technique is also expedient in comparison to an equal-participation (Bernoulli benchmark) method [3].
The shortcoming of these existing automaton structures is that participation among nodes in the network is less even than in the equal-participation Bernoulli method. As such, the work to date has not achieved lifetimes comparable to the Bernoulli benchmark. The key contribution of the work presented herein is to address this shortcoming while requiring neither knowledge of N nor that all sensors listen for control information.

3. Predictive QoS Control Strategy

For the discussion herein, we assume that the objective is to control the number of sensors reporting during each epoch in a single-hop cluster. Each sensor locally decides when (i.e., in which epoch) to transmit its next data packet based on a control value, p, provided by the clusterhead as part of the acknowledgement (ACK) for data received. The proposed strategy provides the same control value, p, only to the nodes active in the present epoch. As detailed in the following sections, the clusterhead determines the appropriate p based on the past, current, and desired QoS. Individual sensors then use this control parameter to locally determine a schedule for transmitting their next data packet. Throughout, we assume that the amount of data to be sent in the network is very low in comparison to the available bandwidth; i.e., the network is not bandwidth constrained. As such, sensors may employ simple contention-based protocols (e.g., ALOHA, slotted ALOHA, or CSMA) with high likelihood that their data packets will get through in the same epoch during which they are sent.

3.1 Mechanism to Control Data Transmission Rate of Individual Sensors

In the PQC approach, each sensor controls its own sending behavior based on a transmission probability, which is the control value it receives in the ACK from the clusterhead. We denote the transmission probability of sensor s as p_s, where 0 ≤ p_s ≤ 1. In a Bernoulli-trial scheme, an individual sensor generates a uniformly distributed random number on the interval [0, 1] at every epoch and transmits a data packet if it falls in the interval [0, p_s]. As noted, the disadvantage of this technique is that it requires explicit knowledge of N and that all nodes be updated. To resolve these problems, we propose a timer-based mechanism for controlling the data sending rate of individual sensors.

In the timer-based method, the key parameter to control is the interval (in epochs) between successive transmissions by a node, i.e., how long a node sleeps; the smaller the inter-transmission interval, the more often a sensor transmits. Explicitly, given the transmission probability p_s in the Bernoulli trial, the random variable κ, the length of the inter-transmission interval, has the distribution

P(κ = k) = (1 − p_s)^k · p_s,  k = 0, 1, 2, …   (1)

To introduce our timer-based mechanism, we first divide the range [0, 1] into sub-ranges ( P(κ < k), P(κ ≤ k) ], k = 0, 1, 2, …. A random number ξ is then generated uniformly within the range [0, 1]. If ξ falls in the kth sub-range, i.e., ξ ∈ ( P(κ < k), P(κ ≤ k) ], the sensor sets its timer value, TV, to k. The sensor then wakes up to send a data packet after sleeping for k epochs (i.e., there are k epochs between the epoch in which it sent its last data packet and the epoch in which it wakes up again). Setting the timer value in this way yields the same probability distribution for the inter-transmission interval as performing a Bernoulli trial at each epoch: the probability of ξ falling in the kth sub-range, and thus of sensor s getting timer value k, is the length of this sub-range, P(κ ≤ k) − P(κ < k), which exactly equals P(κ = k) of the Bernoulli trial.

In implementing the timer-based control, we impose an upper bound T_UB on the timer value (each sensor has its own T_UB, which depends on its p_s) to avoid any sensor having too low a response rate. An appropriate T_UB can be derived from the control value p as follows:

P(κ ≤ T_UB) = Σ_{k=0}^{T_UB} P(κ = k) = Σ_{k=0}^{T_UB} (1 − p)^k · p = 1 − (1 − p)^{T_UB+1} ≥ 1 − α   (2)

Here 1 − α is the confidence that a sensor with transmission probability p will transmit another data packet within no more than T_UB epochs. Thus, in setting this upper bound, we use

T_UB ≥ ln(α) / ln(1 − p) − 1.   (3)

Pseudo-code for the node control strategy is given in Algorithm 1. In this algorithm, the sensor also sends its last timer value to the clusterhead along with the current reading. This is needed to make the prediction and develop an updated control value p, as discussed in §3.2.

Loop {
  1. Wake up and send current reading with last timer value TV to clusterhead;
  2. Wait for ACK with control value p;
  3. Update local control value p_s := p and determine the probability distribution P(κ = k) from (1);
  4. Calculate T_UB from (3);
  5. Generate a uniform random number ξ within the range [0, 1];
  6. Find k that satisfies ξ ∈ ( P(κ < k), P(κ ≤ k) ];
  7. TV := min{k, T_UB};  // timer value
  8. Reset timer to TV and restart timer;
  9. Sleep for the coming TV epochs;
}
Algorithm 1. Sensor Transmission Rate Control
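As an illustration, here is a minimal sketch of steps 4–7 of Algorithm 1 (the function names are ours): the timer value is drawn by inverting the geometric CDF implied by (1), then capped with the bound (3).

```python
import math
import random

def t_ub(p: float, alpha: float = 0.05) -> int:
    """Upper bound (3): smallest T_UB with 1 - (1-p)^(T_UB+1) >= 1 - alpha.
    Assumes 0 < p < 1."""
    return max(0, math.ceil(math.log(alpha) / math.log(1.0 - p) - 1.0))

def draw_timer_value(p: float, alpha: float = 0.05) -> int:
    """Steps 5-7 of Algorithm 1: draw xi ~ U[0,1] and find the k with
    xi in ( P(kappa < k), P(kappa <= k) ], where the CDF of the geometric
    distribution (1) is P(kappa <= k) = 1 - (1-p)^(k+1)."""
    xi = random.random()
    # Inverting the CDF: smallest k with 1 - (1-p)^(k+1) >= xi.
    k = max(0, math.ceil(math.log(1.0 - xi) / math.log(1.0 - p)) - 1)
    return min(k, t_ub(p, alpha))    # TV := min{k, T_UB}
```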

3.2 Prediction Strategy for the Clusterhead to Determine the Control Value p

The most straightforward approach to determining the control value is to set p = Q/N. This is the Bernoulli benchmark method; however, it has the several disadvantages noted before. In the PQC strategy, the clusterhead instead keeps a record of network traffic information, i.e., the control values used, the number of sensors that have transmitted, etc., over the recent epochs. Since T_UB is the upper bound on the timer value, it is also the maximum memory size needed for this network traffic information. Figure 1 illustrates the prediction model employed by the clusterhead. Control information is sent out with each ACK, and thus the next epoch e′ is no longer controllable at the end of the current epoch. As such, we must determine the probability distribution of the number of data packets received during epoch e″, the second epoch following the current one.


Figure 1. Illustration of the prediction model (timeline showing recorded epochs i = M, …, 3, 2, 1 up to the current epoch, the next epoch e′, during which the K′ senders are adjusted to p, and the epoch e″, whose predicted count K = K″ + K_e′ is matched to the desired QoS)

As the distribution of the number of data packets that will be received during e″ depends largely on the control value p_e′ used for epoch e′, we must first determine an appropriate p_e′. We denote the number of packets received during epoch i and the control value then used as n_i and p_i, respectively. Let m_i be the number of sensors, among those n_i, that have not transmitted since epoch i. The value of m_i is initialized to n_i; upon receiving a data packet, the clusterhead checks the TV in the packet to see when this particular sensor sent its last data packet. If that last transmission was during epoch i, the corresponding m_i is decremented by 1. These values of n_i, p_i and m_i constitute the network traffic information recorded at the clusterhead for prediction purposes. Take any of these m_i sensors. Since it has not sent any packet until now (otherwise it would not be counted in m_i), by the discussion in §3.1 the probability that it sends a data packet during epoch e′ is

p_i′ = p_i   (4)

and the probability that it sends its next data packet during epoch e″ is

p_i″ = (1 − p_i) · p_i   (5)



Let the random variable K_i′ be the number of sensors among these m_i that will send a packet during epoch e′; then K_i′ follows a binomial distribution, K_i′ ~ Bi(m_i, p_i′). Thus z_i′(k), the pdf of K_i′, is

z_i′(k) := P(K_i′ = k) = C(m_i, k) · (p_i′)^k · (1 − p_i′)^{m_i − k}   (6)

where C(m_i, k) denotes the binomial coefficient. Likewise, the pdf of the random variable K_i″, the number of sensors among these m_i that will send their next data packet during epoch e″, is

z_i″(k) := P(K_i″ = k) = C(m_i, k) · (p_i″)^k · (1 − p_i″)^{m_i − k}   (7)

Let the random variables K′ and K″ be defined for e′ and e″ (see Figure 1), respectively, as

K′ = K_1′ + K_2′ + … + K_i′ + … + K_M′   (8)

and

K″ = K_1″ + K_2″ + … + K_i″ + … + K_M″   (9)

where M is the maximum memory size under consideration. The random variable K′ is the number of sensors that will send a data packet during epoch e′, while K″ is the number of sensors that will send their next data packet during epoch e″ but not in e′. As our mechanism for decrementing the m_i ensures independence among the random variables on the right-hand side of (8), the pdf of K′ can be determined as the convolution of the individual pdfs of the K_i′; that is, K′ ~ z′ = z_1′ ⊗ z_2′ ⊗ … ⊗ z_M′, where ⊗ is the convolution operator. So P(K′ = k) = z′(k). Likewise P(K″ = k) = z″(k), where z″ = z_1″ ⊗ z_2″ ⊗ … ⊗ z_M″. If the random variable K_e′ is the number of sensors that will send a data packet during epoch e″ among the K′ sensors that send a data packet at e′, and thus receive the control value update p_e′, then we have

P(K_e′ = k | K′ = u) = C(u, k) · (p_e′)^k · (1 − p_e′)^{u−k} for 0 ≤ k ≤ u, and 0 for k > u   (10)

Recall that p_e′ is the control value to be used during epoch e′. By averaging over all possible values of K′, we obtain the following distribution of K_e′:
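A sketch of how the clusterhead might assemble z′ (and, identically, z″ from the p_i″) as the convolution of the binomial pdfs (6); the helper names are ours, and numpy is assumed for the convolution.

```python
import math
import numpy as np

def binom_pmf(m: int, p: float) -> np.ndarray:
    """pmf of Bi(m, p) on support 0..m, as in (6)/(7)."""
    return np.array([math.comb(m, k) * p**k * (1.0 - p)**(m - k)
                     for k in range(m + 1)])

def convolve_pdfs(ms: list[int], ps: list[float]) -> np.ndarray:
    """pdf of K' = K'_1 + ... + K'_M with independent K'_i ~ Bi(m_i, p'_i),
    i.e., the convolution z'_1 (*) z'_2 (*) ... (*) z'_M."""
    z = np.array([1.0])                      # pdf of the constant 0
    for m, p in zip(ms, ps):
        z = np.convolve(z, binom_pmf(m, p))  # one convolution per entry
    return z                                 # z[k] = P(K' = k), k = 0..sum(ms)
```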

P(K_e′ = k) = Σ_{u=0}^{K̂} P(K′ = u) · P(K_e′ = k | K′ = u) = Σ_{u=k}^{K̂} z′(u) · C(u, k) · (p_e′)^k · (1 − p_e′)^{u−k}   (11)

where K̂ is the maximum possible K′. Finally, let the random variable K be the number of packets that will be received during epoch e″; then it can be seen from Figure 1 that K = K″ + K_e′. Thus

P(K = k) = Σ_{v=0}^{k} P(K_e′ = v) · P(K″ = k − v)
         = Σ_{v=0}^{k} P(K_e′ = v) · z″(k − v)
         = Σ_{v=0}^{k} ( Σ_{u=v}^{K̂} z′(u) · C(u, v) · (p_e′)^v · (1 − p_e′)^{u−v} ) · z″(k − v)   (12)

This gives the probability distribution of the number of packets that will be received during epoch e″. Based on this distribution, we can calculate the appropriate p_e′ to use in epoch e′. Ideally, we would maximize the probability that K equals the desired QoS. Given the desired QoS requirement as a number of data packets per epoch, we instead pose the optimization as maximizing the probability that K, the number of data packets received in an epoch, falls within a range [Q − HB, Q + HB] centered at Q (the desired QoS). One may consider HB as half the band (interval) about which the optimization occurs (HB = 1 or 2 has worked well in simulations). To determine the appropriate control value p to send to the transmitting nodes, the following expression must be iteratively maximized over p_e′:

P(Q − HB ≤ K ≤ Q + HB) = Σ_{k=Q−HB}^{Q+HB} Σ_{v=0}^{k} ( Σ_{u=v}^{K̂} z′(u) · C(u, v) · (p_e′)^v · (1 − p_e′)^{u−v} ) · z″(k − v)   (13)

Note that this is a single-variable optimization with respect to p_e′. In our simulations, the clusterhead solves this optimization problem by evaluating (13) at each of 100 Monte Carlo random points in the range [0, 1] and selecting the p_e′ value that maximizes (13).
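A sketch of that search, reusing binom_pmf and assuming z′ and z″ have been built as above (e.g., with convolve_pdfs); it scores candidate values of p_e′ by (11)–(13) and keeps the best.

```python
import random
import numpy as np

def prob_in_band(pe: float, z1: np.ndarray, z2: np.ndarray,
                 q: int, hb: int = 2) -> float:
    """Objective (13): P(Q - HB <= K <= Q + HB) for a candidate control
    value pe, where K = K'' + K_e' as in (12)."""
    k_hat = len(z1) - 1
    ke = np.zeros(k_hat + 1)                 # distribution (11) of K_e'
    for u in range(k_hat + 1):
        ke[:u + 1] += z1[u] * binom_pmf(u, pe)
    k_pdf = np.convolve(ke, z2)              # pdf of K = K_e' + K''  (12)
    lo, hi = max(0, q - hb), min(len(k_pdf) - 1, q + hb)
    return float(k_pdf[lo:hi + 1].sum())

def choose_control_value(z1, z2, q, hb=2, trials=100) -> float:
    """Evaluate (13) at 100 uniform random candidates in [0, 1] and
    return the one with the highest probability mass in the band."""
    return max((random.random() for _ in range(trials)),
               key=lambda pe: prob_in_band(pe, z1, z2, q, hb))
```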

3.3 Dealing with Sensor Failures

We call sensors that sent their last packets in epoch i "epoch i sensors". The number of epoch i sensors that have not yet shown up but are still alive (we call these the potential sensors from epoch i) can be less than m_i, since some may have already failed and will never send data again. We therefore need a means to detect sensor failures from the numbers of epoch i sensors that have shown up in the last two epochs, namely r_{i,1} (for the current epoch) and r_{i,2} (for the last epoch, i.e., the epoch immediately before the current one). Suppose there were no sensor failures, i.e., there really are m_i potential sensors from epoch i at the end of the current epoch, as indicated by the recorded network traffic information. Then the random variable R_{i,1}, the number of epoch i sensors that should have shown up in the current epoch, follows the distribution Bi(m_i + r_{i,1}, p_i); and the random variable R_{i,2}, the number of epoch i sensors that should have shown up in the last epoch, follows the distribution Bi(m_i + r_{i,1} + r_{i,2}, p_i). From these distributions we can derive confidence intervals at a chosen level, e.g., 95%. When r_{i,1} and r_{i,2} fall outside their respective confidence intervals, we interpret this as significant sensor failures among the m_i sensors, and we can no longer trust m_i as the count of potential sensors. In PQC, a two-step estimation is then used to estimate the number of potential sensors from epoch i based on r_{i,1} and r_{i,2}:

m_i = ( r_{i,2}/p_i + (r_{i,1}/p_i + r_{i,2}) ) / 2 − (r_{i,1} + r_{i,2})   (14)
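A sketch of the failure check and the two-step estimate (14). The confidence bound below uses a normal approximation to the binomial and tests only the low side (failures depress arrivals); both are our simplifications.

```python
import math

def failures_suspected(mi: int, ri1: int, ri2: int, pi: float,
                       z: float = 1.96) -> bool:
    """Flag significant failures when both r_{i,1} and r_{i,2} fall below
    the lower confidence bounds of Bi(m_i + r_{i,1}, p_i) and
    Bi(m_i + r_{i,1} + r_{i,2}, p_i), respectively."""
    def lower(n: int) -> float:              # normal-approx. lower bound
        return n * pi - z * math.sqrt(n * pi * (1.0 - pi))
    return ri1 < lower(mi + ri1) and ri2 < lower(mi + ri1 + ri2)

def two_step_estimate(ri1: int, ri2: int, pi: float) -> float:
    """Two-step estimate (14): average two estimates of the surviving
    epoch-i sensors, then subtract those already seen."""
    return (ri2 / pi + (ri1 / pi + ri2)) / 2.0 - (ri1 + ri2)
```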

Algorithm 2 summarizes the role of the clusterhead in our predictive QoS control scheme. Algorithm 2a is triggered by the arrival of a data packet at the clusterhead, while Algorithm 2b is triggered at the end of every epoch to compute the new control value for the next epoch and to maintain the network traffic information record.

event data_packet_received( ) {
  1. Send back an ACK with the control value p;
  2. Extract the timer value TV sent along in the data packet, and decrement by 1 the m_i of the entry for the epoch in which this sensor last transmitted (i.e., the epoch with TV epochs between it and the current one); also, increment the corresponding r_{i,1} by 1;
}
Algorithm 2a. Clusterhead Action on Data Packet Arrival

event epoch_ends( ) {
  1. Record network traffic: add a new entry recording the control value and the number of packets received in the epoch that just ended;
  2. Clean up stale network traffic information: remove expired entries, i.e., those whose lifetime (T_UB) has ended;
  3. If sensor failure is detected for a recorded entry, use the two-step estimation (14) of m_i based on r_{i,1} and r_{i,2} instead of the recorded value;
  4. Approximate the probability distribution of the number of packets to be received in epoch e″, the epoch after next, from the recorded network traffic information;
  5. Iteratively solve (13) to find its maximum, thus determining the appropriate control value p_e′; set p, the control value to be sent to nodes, to p_e′;
  6. For each i: r_{i,2} := r_{i,1}; r_{i,1} := 0;
}
Algorithm 2b. Clusterhead Action at Epoch End

Table 1 shows an example of the network traffic information recorded at the clusterhead for prediction (see §3.2), failure detection, and two-step estimation of the number of potential sensors. In this example, Q = 40 and N = 70, and the table shows the real-time network traffic information during the current epoch, #1. Looking back at epoch #3, a total of n_3 = 41 data packets were received. All 41 epoch #3 sensors received the same control value (0.58), which comes from the solution of (13) in §3.2, and m_3 = 12 of these sensors have not sent since; that is, there are at most 12 potential sensors from epoch #3 at the current time. The current epoch #1 is ongoing, and r_{3,1} = 8 data packets have so far been received from epoch #3 sensors; in the last epoch (#2), r_{3,2} = 21 data packets were received from epoch #3 sensors. The entry for epoch #3 has a lifetime of T_UB = 5. Thus the number of potential sensors is m_3 = 41 − 21 − 8 = 12. Note also that the column of r_{i,1} sums to n_1 = 34, and the column of r_{i,2} sums to n_2 = 42. The value of T_UB is derived from (3) in §3.1.

epoch # (i)   n_i   p_i     m_i   r_{i,2}   r_{i,1}   T_UB
6             38    0.675   0     3         0         4
5             40    0.52    0     12        3         6
4             39    0.585   2     6         3         5
3             41    0.58    12    21        8         5
2             42    0.575   22    0         20        5
1             34    0.57    ?     0         0         5

Table 1. Example Network Traffic Information
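For illustration, the rows of Table 1 could be held in a record like the following (a sketch; the field names are ours). The epoch #3 row reproduces m_3 = 41 − 21 − 8 = 12.

```python
from dataclasses import dataclass

@dataclass
class EpochRecord:
    """One row of the clusterhead's traffic table (cf. Table 1)."""
    n: int      # packets received in that epoch (n_i)
    p: float    # control value sent in the ACKs (p_i)
    m: int      # senders from that epoch not heard from since (m_i)
    r2: int     # how many of them reappeared last epoch (r_{i,2})
    r1: int     # how many reappeared in the current epoch (r_{i,1})
    t_ub: int   # entry lifetime, from (3)

# Epoch #3 of Table 1: of n_3 = 41 senders, 21 reappeared last epoch and
# 8 so far this epoch, leaving m_3 = 41 - 21 - 8 = 12 potential sensors.
row3 = EpochRecord(n=41, p=0.58, m=41 - 21 - 8, r2=21, r1=8, t_ub=5)
assert row3.m == 12
```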

4. Performance Experiments

Simulations were carried out to study the performance of the proposed PQC protocol and compare it with the equal-participation Bernoulli technique as the control benchmark [3]. In the first study, 70 sensors were deployed, each with a lifetime of 1000 (battery energy allows sending 1000 data packets). In these simulations, we assume a sensor dies if and only if it runs out of battery (no other failure mechanisms). We set the desired QoS to 40 data packets per epoch. Simulations were repeated 100 times, and the mean number of data packets received in each epoch is shown in Figure 2. It shows that PQC obtains the same mean QoS as the Bernoulli benchmark, and that the network life under PQC approaches that of the Bernoulli benchmark. Note, however, that in these simulations the Bernoulli benchmark was not charged for the energy consumed in acquiring N or in the continuous listening of all sensors for control messages; otherwise, the lifetime of the Bernoulli benchmark would be much shorter than shown.

Figure 2. Network lifetime compared with the Bernoulli benchmark (mean # of pkts received vs. epoch time)


Next, we varied the desired QoS and repeated the study under the assumption that every sensor has infinite power. The results are given in Figures 3 and 4, which show that the mean and variance of the QoS under PQC approach those of the Bernoulli benchmark under various QoS requests. As illustrated in Figure 5, if we initialize the transmission probability of all sensors to 1, they converge to the Bernoulli benchmark transmission probability, which is 40/70 ≈ 0.57, in approximately 10 epochs. Thus, PQC quickly achieves near-Bernoulli-benchmark equality in sensor participation.

Figure 3. Desired QoS vs. mean QoS obtained, compared with the Bernoulli benchmark

Figure 4. Desired QoS vs. variance in QoS (standard deviation shown for predictive control, Bernoulli control, and theoretical)


Figure 5. Transmission probability (control value p) of sensors under PQC, with 99% confidence interval


To measure the network life quantitatively, we define the k-β lifetime metric: we check whether the total number of data packets received in each epoch falls outside the (1 − β) confidence interval centered at Q; upon observing k consecutive fall-outs, we deem the network life to have ended at the first of them. Figure 6 shows that PQC achieves a k-β network lifetime (k = 10, β = 0.0455, i.e., the 95.45% confidence interval) similar to that of the Bernoulli benchmark under various QoS requests in energy simulations (each sensor has a lifetime of 1000 packets).


Figure 6. Network lifetime vs. desired QoS (k-β settings of k = 10 and β = 0.0455)
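A sketch of the k-β lifetime computation just defined; the confidence band here is a normal approximation around Q (our assumption), with z = 2 matching β = 0.0455.

```python
import math

def k_beta_lifetime(counts: list[int], q: int, var: float,
                    k: int = 10, z: float = 2.0) -> int:
    """Return the epoch at which the k-beta lifetime ends: the first of k
    consecutive epochs whose packet count falls outside the (1 - beta)
    band centered at Q (z = 2.0 gives the 95.45% band, beta = 0.0455)."""
    half = z * math.sqrt(var)        # half-width of the confidence band
    run_start, run_len = 0, 0
    for t, c in enumerate(counts):
        if abs(c - q) > half:
            if run_len == 0:
                run_start = t        # remember the first fall-out
            run_len += 1
            if run_len == k:
                return run_start     # life ended at the first fall-out
        else:
            run_len = 0
    return len(counts)               # lifetime not ended within the trace
```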


The following scenarios consider cases in which the QoS request changes or the number of available sensors swings widely. In these simulations, we assume sensors have infinite power; the goal is to study the response of the PQC protocol to system and network dynamics. We first varied the desired QoS while fixing the total number of sensors. Figure 7 shows that PQC adapts to changes in the QoS request very quickly and responds much faster than the scheme of [3]. Moreover, there is no panic drop spike in PQC when the QoS request is suddenly lowered. The Bernoulli benchmark protocol adapts to these changes immediately, since it is assumed that all of its sensors are updated at each epoch.

Figure 7. Responsiveness to change in desired QoS

Figure 8 shows the response of PQC when N, the total number of sensors, varies and the QoS request is fixed. Again, PQC responds to changes in N better than the scheme proposed in [3]: for example, when N drops from 140 to 70, the scheme of [3] takes around 40 epochs to recover, while PQC takes only around 10. Moreover, there is no panic drop spike in PQC when N rises suddenly from 50 back to 140, as there was in [3]. For this scenario, the achieved variance in QoS at each level of N, alongside the corresponding Bernoulli benchmark variance, is:

N     Var(Q)   Bernoulli Var
140   6.75     28
70    1.96     17
50    0.30     8

Figure 8. Responsiveness in reacting to change in total number of sensors deployed. Top: automaton from [3]; bottom: PQC and Bernoulli benchmark

5. Conclusion

Herein, we presented a predictive strategy for adjusting transmission intervals in a wireless sensor network. While the control parameter is determined centrally based on the actual versus desired QoS, individual nodes retain control over their particular activity. The methodology was shown to achieve performance similar to a Bernoulli benchmark in terms of network life, mean and variance in QoS, and responsiveness to network dynamics. The key advantages of the technique are that it requires neither explicit knowledge of the total number of network sensors nor that all sensors listen for control information at each epoch; both are shortcomings of the Bernoulli benchmark method.

References

[1] Meguerdichian, S., et al., "Coverage problems in wireless ad-hoc sensor networks," Proceedings of IEEE INFOCOM 2001, Vol. 3, pp. 1380-1387.
[2] Proceedings of IEEE SNPA 2003, ACM SenSys 2003/04, IEEE/ACM IPSN 2004/05, etc.
[3] Kay, J. and Frolik, J., "An expedient wireless sensor automaton with system scalability and efficiency benefits," submitted to IEEE Trans. Systems, Man and Cybernetics, Part A, February 2005.
[4] Iyer, R. and Kleinrock, L., "QoS control for sensor networks," IEEE International Communications Conference (ICC 2003), Anchorage, AK, May 2003.
[5] Frolik, J., "QoS control for wireless sensor networks," Wireless Communications and Networking Conference (WCNC 2004), Atlanta, GA, March 2004.
[6] Kay, J. and Frolik, J., "Quality of service analysis and control for wireless sensor networks," 1st IEEE International Conference on Mobile Ad-hoc and Sensor Systems (MASS 2004), Ft. Lauderdale, FL, Oct. 2004.