Stabilization and Disturbance Attenuation over a Gaussian Communication Channel

J. S. Freudenberg, R. H. Middleton, and V. Solo

Abstract: We consider the problem of stabilizing an unstable system driven by a Gaussian disturbance using a feedback signal transmitted over a memoryless Gaussian communication channel. By applying the concept of entropy power, we show that the mean square norm of the state vector must satisfy a lower bound that holds for any causal communication and control strategies. In addition, we show that use of nonlinear, time varying strategies does not allow stabilization over a channel with a lower signal-to-noise ratio than that achievable with linear time invariant state feedback. Finally, we show that for scalar systems the lower bound on the mean square norm of the state is tight, and achievable using linear time invariant communication and control.

I. INTRODUCTION

Consider the linear system

    xk+1 = Axk + Buk + vk,    (1)

with state xk ∈ Rⁿ, control uk ∈ R, and process disturbance vk ∈ Rⁿ. We assume that x0 and vk are realizations¹ of Gaussian random variables X0 and Vk, and that X0, V0, V1, . . . are mutually independent. Assume that X0 and Vk are zero mean with covariance matrices Σx and Σk, respectively.

J. S. Freudenberg is with the Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, Michigan, USA, [email protected]; R. H. Middleton is with the Hamilton Institute, The National University of Ireland Maynooth, Co Kildare, Ireland, [email protected]; V. Solo is with the School of Electrical Engineering and Telecommunications, The University of New South Wales, Sydney, Australia, [email protected]

¹We use upper case letters to denote random variables, lower case letters to denote realizations of these variables, subscripts to denote elements of a sequence, and superscripts to denote subsequences, e.g. x^k ≜ {x0, x1, . . . , xk}.

May 17, 2009                                                                DRAFT


Suppose, as depicted in Figure 1, that we wish to stabilize the system (1) over an additive Gaussian noise channel, rk = sk + nk, where the channel input must satisfy the instantaneous power constraint E{Sk²} ≤ P, and the Gaussian channel noise Nk is independent and identically distributed (i.i.d.) with zero mean and variance σn². The channel noise is also assumed to be independent of the initial state and process disturbance. The capacity of the channel is determined by its signal-to-noise ratio: C = (1/2) log(1 + P/σn²) bits/transmission [3]. The channel input at time k is given by sk ≜ fk(xk), and the control input by uk = gk(rk), where fk and gk are Borel measurable. The system in Figure 1 is said to be mean square stable if sup_k E{‖Xk‖²} < ∞, where ‖·‖ denotes the Euclidean vector norm. The problems of stabilization and disturbance attenuation involve choosing communication and control strategies fk and gk to achieve mean square stability and to minimize sup_k E{‖Xk‖²}. In this paper we shall derive a lower bound on the disturbance attenuation achievable for a given channel signal-to-noise ratio, and find the smallest signal-to-noise ratio that is compatible with stabilization.

Fig. 1. Stabilization over a Gaussian channel.
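As a concrete illustration of the loop in Figure 1, the following sketch simulates a scalar instance with linear strategies fk(x) = λx and gk(r) = −(A/λ)r. All numerical values and the gain choices are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not values from the paper).
A, B = 1.5, 1.0            # unstable scalar plant x_{k+1} = A x_k + B u_k + v_k
sigma_n2, sigma_v2 = 1.0, 0.1
P = 4.0                    # channel input power limit

lam = 0.8                  # hypothetical scaling for f_k(x) = lam * x
x, states = 0.0, []
for k in range(5000):
    s = lam * x                                   # channel input s_k = f_k(x_k)
    r = s + rng.normal(0.0, np.sqrt(sigma_n2))    # channel output r_k = s_k + n_k
    u = -(A / lam) * r                            # control u_k = g_k(r_k)
    v = rng.normal(0.0, np.sqrt(sigma_v2))
    x = A * x + B * u + v                         # plant update (1)
    states.append(x)

print(np.mean(np.square(states)))  # empirical E{x_k^2}: bounded despite A > 1
```

With this hypothetical gain pair the closed loop satisfies xk+1 = −(A/λ)nk + vk, so the state remains mean square bounded even though A = 1.5 is unstable, and the channel input power λ²E{xk²} stays below the limit P for these values.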

The authors of [2] consider the case in which the channel input is a constant state feedback, sk = −Kxk, the control input is equal to the channel output, uk = rk, and the process disturbance is absent, vk = 0. They seek a stabilizing state feedback gain for which the channel input asymptotically satisfies the power constraint. Such a gain exists if and only if the signal-to-noise ratio (SNR) of the channel satisfies the lower bound [2]

    P/σn² > (∏_{i=1}^{m} |φi|²) − 1,

where |φi| ≥ 1, i = 1, . . . , m, denote the unstable eigenvalues of A. It follows that stabilization with state feedback requires a channel with capacity

    C > ∑_{i=1}^{m} log|φi| bits/transmission.    (2)
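As a quick numerical aid (not code from the paper), the helper below evaluates the minimal SNR above and the capacity bound (2) for a hypothetical set of unstable eigenvalues; note that (1/2) log2(1 + SNR_min) recovers exactly ∑ log2|φi|.

```python
import math

def min_snr_and_capacity(unstable_eigs):
    """SNR bound P/sigma_n^2 > prod |phi_i|^2 - 1 and capacity bound (2)."""
    prod = 1.0
    for phi in unstable_eigs:
        prod *= abs(phi) ** 2
    snr_min = prod - 1.0
    c_min = sum(math.log2(abs(phi)) for phi in unstable_eigs)
    return snr_min, c_min

# Hypothetical example: unstable eigenvalues at 1.5 and 2.
snr, c = min_snr_and_capacity([1.5, 2.0])
print(snr)                                          # 8.0
assert abs(c - 0.5 * math.log2(1 + snr)) < 1e-12    # consistent with C
```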


If only a linear combination of states, yk = Cxk, is available, then dynamic output feedback may be used for stabilization. It is proven in [2] that the bound (2) remains necessary and sufficient for stabilization by linear time invariant output feedback provided that G(z) = C(zI − A)⁻¹B has relative degree one and is minimum phase; i.e., has no finite zeros outside the closed unit disk. If G(z) is not minimum phase or has relative degree greater than one, then it is shown in [2] that the signal-to-noise ratio required for stabilization with linear time-invariant control will be strictly greater than that achievable with state feedback.

In another line of work, Nair and Evans [12] consider a noise-free channel with data rate at most R bits/transmission and a delay of d time steps. Using the concept of entropy power, they show under mild hypotheses that mean square stabilization is possible if and only if the data rate satisfies R ≥ ∑_{i=1}^{m} log|φi| bits/transmission, where strict inequality is required if a persistent disturbance vk is present. Indeed, they prove in [12] that with such a disturbance, the mean square norm of the state becomes unbounded as the data rate approaches the minimum required for stabilization. The authors of [13] consider deterministic nonlinear plants with no disturbances, and derive a notion of information rate for such plants that is independent of the feedback channel over which they are to be stabilized. They show that this information rate is equal to the minimal data rate required for stabilization over a noiseless digital channel.

A noisy channel with capacity C can communicate reliably at any rate below C by use of appropriate encoding that introduces transmission delay. It is thus plausible that the minimal data rate derived in [12] is identical to the minimal channel capacity (2).
Nevertheless, the coding and control laws used in [12] are allowed to be nonlinear and time-varying, whereas the derivation of the minimal SNR used to compute (2) assumes linear time-invariant control [2]. We wish to determine whether use of nonlinear time-varying control will enable stabilization with a channel capacity smaller than (2). To do so, we derive a lower bound on the mean square norm of the state together with a necessary condition for this bound to be finite.

The remainder of this paper is outlined as follows. In Section II we develop some properties of conditional entropy and entropy power. These are used in Section III to show that the lower bound on channel capacity (2) remains necessary even if nonlinear time-varying communication and control strategies are allowed. If the process disturbance vk is present in (1), then the mean square norm of the state must satisfy a lower bound whose magnitude depends on the proximity of the channel capacity to the minimum required for stabilization. Attempts to stabilize over a


channel whose capacity is close to the minimum required by (2) will result in the mean square norm of the state becoming very large. In Section IV we show that for a scalar system the lower bound may be satisfied with equality by using linear time-invariant communication and control strategies. Conclusions and further directions are given in Section V. There is an extensive literature on feedback control over communication channels; a partial list includes [1], [2], [4], [10]–[13], [17], [18]. The results of the present paper are an application to Gaussian channels of the entropy power arguments that were applied in [12] to channels that are noise free but data rate limited. A preliminary version of the results appeared in [5]. We discuss the relation of our results to those of [1] and [17] at the close of Section IV.

II. PRELIMINARIES

Our work is motivated by that described in [12] for noise-free digital channels and thus, for ease of reference, we adapt our mathematical framework and notation from that paper. All random variables are assumed to exist on a common probability space with measure P. The probability density of a random variable X in Euclidean space with respect to Lebesgue measure λ on the space is denoted by pX, and the probability density of X conditioned on the σ-field generated by the event Y = y by pX|y. Random variables are allowed to be vector-valued, and the dimension will be stated explicitly when required. Let the expectation operator be denoted E, and expectation conditioned on the event Y = y by Ey. We use "log" to denote the logarithm base two, and "ln" to denote the natural logarithm.

The (differential) entropy of X is defined by H(X) ≜ −E{log pX(X)}. Denote the conditional entropy of X given the event Y = y by Hy(X) ≜ H(X|Y = y) = −Ey{log pX|y(X)}, and the random variable associated with Hy(X) by HY(X). The average conditional entropy of X given the event Y = y and averaged over Y is defined by H(X|Y) ≜ E{HY(X)}, and the average conditional entropy of X given the events Y = y and Z = z and averaged only over Y by Hz(X|Y) ≜ Ez{H_{Y,Z}(X)}.

Given an n-dimensional random variable X with entropy H(X), the entropy power [15] of X is defined by N(X) ≜ (1/2πe) e^{(2/n)H(X)}. Denote the conditional entropy power of X given the event Y = y by Ny(X) ≜ (1/2πe) e^{(2/n)Hy(X)}, and the random variable associated with Ny(X) by NY(X). The average conditional entropy power of X given the event Y = y and averaged over Y is defined by N(X|Y) ≜ E{NY(X)}, and the average conditional entropy power of X given the events Y = y and Z = z and averaged only over Y by Nz(X|Y) ≜ Ez{N_{Y,Z}(X)}.

The entropy power of a scalar random variable is a lower bound on its variance [3, Theorem 8.6.6], with equality if and only if the random variable is Gaussian. The following result generalizes this property to vector valued random variables, and is a tighter version of a bound stated in [12, eqn. (4.4)].

Proposition II.1 Let X be an n-dimensional random variable. Then

    Ny(X) ≤ (1/n) Ey{‖X‖²}.    (3)

Proof: By the definition of Hy(X) and the translation invariance property of differential entropy [3, Theorem 8.6.3], we have that Hy(X) = Hy(X − Ey{X}). Define X̃y = X − Ey{X}. Then it follows from [3, Theorem 8.6.5] that Hy(X) ≤ (1/2) ln((2πe)ⁿ det Ey{X̃y X̃yᵀ}). Substituting this bound on Hy(X) into the definition of Ny(X) and simplifying yields Ny(X) ≤ det Ey{X̃y X̃yᵀ}^{1/n}. The inequality [3, (17.120)] implies that det Ey{X̃y X̃yᵀ}^{1/n} ≤ (1/n) tr Ey{X̃y X̃yᵀ}, and the result follows from the facts that tr Ey{X̃y X̃yᵀ} = Ey{‖X̃y‖²} and [7, p. 97] Ey{‖X̃y‖²} ≤ Ey{‖X‖²}.

Lemma II.2 Let X be an n-dimensional random variable. Then the average conditional entropy power satisfies the lower bound

    Nz(X|Y) ≥ (1/2πe) e^{(2/n)Hz(X|Y)}.    (4)

Proof: By definition, we have that Nz(X|Y) = Ez{N_{Y,Z}(X)}, where N_{Y,Z}(X) = (1/2πe) e^{(2/n)H_{Y,Z}(X)}. Jensen's inequality (cf. [3, p. 25]) implies that Ez{e^{(2/n)H_{Y,Z}(X)}} ≥ e^{(2/n)Ez{H_{Y,Z}(X)}}, and (4) follows from the definition of Hz(X|Y).

Given random variables X and Y that are mutually independent when conditioned on the event Z = z, the conditional entropy power inequality [12, eqn. (4.8)] is given by

    Nz(X + Y) ≥ Nz(X) + Nz(Y).    (5)
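For Gaussian random variables the quantities above have closed forms, which gives a quick numerical sanity check of Proposition II.1 and inequality (5). The covariances below are arbitrary assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# For a zero-mean Gaussian X in R^n with covariance S we have
# H(X) = (1/2) ln((2*pi*e)^n det S), so the entropy power is N(X) = (det S)^(1/n).
# Proposition II.1 then reads (det S)^(1/n) <= tr(S)/n (AM-GM on eigenvalues),
# and inequality (5) for independent Gaussians is Minkowski's determinant inequality.
n = 4
M1 = rng.normal(size=(n, n))
M2 = rng.normal(size=(n, n))
Sx = M1 @ M1.T + 0.1 * np.eye(n)   # random positive definite covariances
Sy = M2 @ M2.T + 0.1 * np.eye(n)

ep_x = np.linalg.det(Sx) ** (1.0 / n)   # entropy power N(X)
ms_x = np.trace(Sx) / n                 # (1/n) E{||X||^2}
print(ep_x <= ms_x)                     # True: bound (3)

lhs = np.linalg.det(Sx + Sy) ** (1.0 / n)
rhs = np.linalg.det(Sx) ** (1.0 / n) + np.linalg.det(Sy) ** (1.0 / n)
print(lhs >= rhs)                       # True: inequality (5)
```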


The conditional mutual information between random variables X and Y given the event Z = z is defined by

    Iz(X; Y) = Hz(X) − Hz(X|Y).    (6)

By symmetry (6) holds with the roles of X and Y interchanged.

Lemma II.3 Let f(X) be a scalar-valued function of the potentially vector-valued random variable X, and let Y = f(X) + N, where N is a zero mean Gaussian random variable with variance σn². Assume that X and N are independent when conditioned on the event Z = z, and that N is independent of Z. Assume further that E{f(X)²} ≤ P. Then

    Iz(X; Y) ≤ (1/2) log(1 + P/σn²).    (7)

Proof: It follows from (6) that Iz(f(X); Y) = Hz(Y) − Hz(Y|f(X)). The definition of Y together with the translation invariance property [3, Theorem 8.6.3] implies that Hz(Y|f(X)) = Hz(f(X) + N|f(X)) = Hz(N). Independence of N and Z implies that Hz(N) = H(N), and the fact that N is Gaussian yields [3, eqn. (8.9)]

    Hz(Y|f(X)) = (1/2) log 2πeσn².    (8)

Translation invariance further implies that Hz(Y) = Hz(Y − Ez{Y}), and thus

    Hz(Y) ≤ (1/2) log 2πe E{(Y − Ez{Y})²} ≤ (1/2) log 2πe E{Y²},    (9)

where the first inequality follows from [3, Theorem 8.6.5], and the second from the fact that the conditional estimate minimizes the mean square estimation error [7, p. 97]. Conditional independence of X and N implies that E{Y²} = E{f(X)²} + E{N²}, and thus

    E{Y²} ≤ P + σn².    (10)

Combining (8)–(10) yields Iz(f(X); Y) ≤ (1/2) log(1 + P/σn²). Noting that the relation X → f(X) → Y is a Markov chain, the data processing inequality [3, Theorem 2.8.1] implies that Iz(X; Y) ≤ Iz(f(X); Y), and (7) follows.
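The bound (7) is attained when f(X) is itself Gaussian with variance P, since every inequality in the proof then holds with equality. A one-line numerical check, with values assumed for illustration:

```python
import math

# Illustrative check (values assumed): for zero-mean Gaussian X with
# E{X^2} = P and f(X) = X, the mutual information across the channel
# equals the capacity bound (7).
P, sigma_n2 = 3.0, 1.0
H_Y = 0.5 * math.log2(2 * math.pi * math.e * (P + sigma_n2))  # entropy of Y = X + N
H_N = 0.5 * math.log2(2 * math.pi * math.e * sigma_n2)        # entropy of the noise
I_XY = H_Y - H_N                      # I(X;Y) = H(Y) - H(Y|X) = H(Y) - H(N)
bound = 0.5 * math.log2(1 + P / sigma_n2)
print(abs(I_XY - bound) < 1e-9)       # True
```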


III. A LOWER BOUND ON DISTURBANCE RESPONSE

We now derive a lower bound on the mean square norm of the state when a disturbance is present. In doing so, we also derive a necessary condition on the channel capacity required for stabilization. Our results depend only upon the unstable eigenvalues of the system (1), and hence we consider the decomposition (in stacked block form)

    [xᵘk+1; xˢk+1] = [Au 0; 0 As] [xᵘk; xˢk] + [Bu; Bs] uk + [vᵘk; vˢk],    (11)

where the eigenvalues of Au are precisely the m unstable eigenvalues of A that satisfy |φi| ≥ 1. Let Σᵘx denote the covariance of the unstable initial state X0ᵘ and let Σᵘk denote the covariance of the disturbance Vkᵘ. The entropy of Vkᵘ is given by [3] H(Vkᵘ) = (m/2) ln 2πe + (1/2) ln det Σᵘk, and thus the entropy power of Vkᵘ satisfies

    N(Vkᵘ) = βk ≜ (det Σᵘk)^{1/m}.    (12)

We assume there exists β > 0 such that βk ≥ β for all k.

Proposition III.1 Assume that the feedback system in Figure 1 is mean square stable. Then necessarily the capacity of the additive noise channel must satisfy

    C > ∑_{i=1}^{m} log|φi| bits/transmission.    (13)

Furthermore, if the bound (13) is satisfied, the mean square norm of the state satisfies the lower bound

    sup_k E{‖Xkᵘ‖²} ≥ mβ (1 − (∏_{i=1}^{m} |φi|² / (1 + P/σn²))^{1/m})⁻¹.    (14)

It follows from Proposition III.1 that use of nonlinear or time-varying control does not allow one to stabilize over a channel with lower capacity, or SNR, than that required for linear time-invariant state feedback, or output feedback from a minimum phase relative degree one system. Moreover, a system stabilized over a channel whose capacity is close to the minimum (13) will be very sensitive to the presence of a process disturbance, in the sense that such a disturbance will cause the mean square norm of the state to become very large.
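The right-hand side of (14) is straightforward to evaluate; the sketch below (the example eigenvalue and noise figures are assumptions for illustration) also returns infinity when the capacity condition (13) fails, reflecting the blow-up described above.

```python
import math

def state_norm_lower_bound(unstable_eigs, P, sigma_n2, beta):
    """Evaluate the right-hand side of (14); returns inf when (13) fails."""
    m = len(unstable_eigs)
    prod = 1.0
    for phi in unstable_eigs:
        prod *= abs(phi) ** 2
    gamma = (prod / (1.0 + P / sigma_n2)) ** (1.0 / m)
    if gamma >= 1.0:            # capacity at or below the minimum in (13)
        return math.inf
    return m * beta / (1.0 - gamma)

# Hypothetical example: one unstable eigenvalue at 2 requires SNR > 3.
print(state_norm_lower_bound([2.0], 8.0, 1.0, 1.0))   # finite bound (= 9/5)
print(state_norm_lower_bound([2.0], 3.0, 1.0, 1.0))   # inf: SNR at the limit
```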


Proof of Proposition III.1: Denote the average conditional entropy power of the unstable state dynamics given the channel output sequences r^{k−1} and r^k by nk|k−1 = E{N_{R^{k−1}}(Xkᵘ)} and nk|k = E{N_{R^k}(Xkᵘ)}, respectively. We now derive a lower bound on the reduction in conditional entropy power following each channel output.

Lemma III.2

    nk|k ≥ (σn²/(P + σn²))^{1/m} nk|k−1.    (15)

Proof: The facts that E{X} = E{E_Y{X}} (cf. [16, p. 123]) and that R^k = {R^{k−1}, Rk} imply that nk|k = E{E_{R^{k−1}}{N_{R^{k−1},Rk}(Xkᵘ)}}, and thus the definition of Nz(X|Y) yields nk|k = E{N_{R^{k−1}}(Xkᵘ|Rk)}. Together, inequality (4) and the identity (6) imply that

    N_{r^{k−1}}(Xkᵘ|Rk) ≥ (1/2πe) e^{(2/m)(H_{r^{k−1}}(Xkᵘ) − I_{r^{k−1}}(Xkᵘ; Rk))}.

Lemma II.3 implies that I_{r^{k−1}}(Xkᵘ; Rk) ≤ C. This fact and the definition of N_{r^{k−1}}(Xkᵘ) yield N_{r^{k−1}}(Xkᵘ|Rk) ≥ N_{r^{k−1}}(Xkᵘ) e^{−(2/m)C}. Taking expectations yields

    E{N_{R^{k−1}}(Xkᵘ|Rk)} ≥ E{N_{R^{k−1}}(Xkᵘ)} (σn²/(P + σn²))^{1/m},    (16)

and (15) follows from the definitions of nk|k−1 and nk|k.

Next we derive a lower bound on the increase in conditional entropy power at each time step due to the unstable system dynamics and the process disturbance.

Lemma III.3

    nk+1|k ≥ (∏_{i=1}^{m} |φi|²)^{1/m} nk|k + N(Vkᵘ).    (17)

Proof: The state decomposition (11) together with the translation invariance property [3, Theorem 8.6.3] imply that N_{r^k}(Xᵘk+1) = N_{r^k}(Au Xkᵘ + Vkᵘ). Applying the conditional entropy power inequality (5) yields N_{r^k}(Xᵘk+1) ≥ N_{r^k}(Au Xkᵘ) + N_{r^k}(Vkᵘ). Finally, [3, eqn. (8.71)] and the fact that Vk, Vj are independent for k ≠ j gives N_{r^k}(Xᵘk+1) ≥ |det Au|^{2/m} N_{r^k}(Xkᵘ) + N(Vkᵘ). Hence E{N_{R^k}(Xᵘk+1)} ≥ |det Au|^{2/m} E{N_{R^k}(Xkᵘ)} + N(Vkᵘ), and (17) follows from the definitions of nk+1|k and nk|k, since |det Au|² = ∏_{i=1}^{m} |φi|².

To complete the proof, we combine (15) and (17) to yield

    nk+1|k ≥ γ nk|k−1 + βk,    (18)


where

    γ ≜ (∏_{i=1}^{m} |φi|²)^{1/m} (σn²/(P + σn²))^{1/m}.    (19)

It follows from the recursion (18) that

    nk+1|k ≥ γ^{k+1} n0|−1 + ∑_{j=0}^{k} βj γ^{k−j},    (20)

where n0|−1 = (det Σᵘx)^{1/m} and βk is given by (12). Since βk is bounded below by β > 0, the condition γ < 1 is necessary for the sequence nk+1|k to remain bounded as k → ∞. By definition, n0|−1 ≥ 0 and thus nk|k−1 ≥ β(1 − γ^k)/(1 − γ). The bound (3) implies that E{‖Xkᵘ‖²} ≥ mβ(1 − γ^k)/(1 − γ), and thus sup_k E{‖Xkᵘ‖²} ≥ mβ/(1 − γ). It follows that the condition γ < 1 is also necessary for mean square stability.

Remark on the Scalar Case: A careful inspection of the proof of Proposition III.1 shows that a version of the bound (14) may be derived that depends on all the eigenvalues of A, namely,

    sup_k E{‖Xk‖²} ≥ nβ (1 − (|det A|²/(1 + P/σn²))^{1/n})⁻¹.    (21)

For a system with both stable and unstable dynamics such a bound would be weaker than that obtained by considering only the unstable eigenvalues. Suppose, however, that the system (1) is scalar and of the form

    xk+1 = Axk + uk + vk,    (22)

where vk is zero mean Gaussian and i.i.d. with variance σv². Then (21) simplifies to

    sup_k E{Xk²} ≥ σv² / (1 − A²/(1 + P/σn²)),    (23)

and holds regardless of whether or not the system is unstable.

IV. LINEAR COMMUNICATION AND CONTROL SUFFICIENT FOR A SCALAR PLANT

In general, one would not expect the lower bound (14) to be tight, because results used in its derivation are known to be tight only in special cases. These include Proposition II.1, whose tightness is discussed in [12], and the entropy power inequality (5). The authors of [14, Section II.B] show that a bound applicable to data rate limited channels that is similar to (14) is tight for scalar systems. In this section we consider the scalar system (22), and show that the corresponding bound (23) is also tight, and may be achieved using linear time-invariant communication and control strategies.

We will suppose that the channel input is a constant scalar multiple of the state, sk = λxk, where λ is a parameter to be chosen so that the power constraint is satisfied with equality. For a specific value of λ, consider the problem of minimizing the cost function

    Jx(λ) = lim_{k0→−∞, k1→∞} (1/(k1 − k0)) ∑_{k=k0}^{k1−1} E{Xk²},    (24)

using a control law that depends on the channel output sequence. It is well known [8] that the optimal control is a stabilizing linear time-invariant state feedback uk = −Kc x̂k|k, where x̂k|k is obtained from the filtering version of the optimal estimator

    x̂k+1|k = A x̂k|k−1 + uk + Kp (rk − λ x̂k|k−1),
    x̂k|k = x̂k|k−1 + Kf (rk − λ x̂k|k−1),

with Kp = AKf, Kf = λΣλ/(λ²Σλ + σn²), and Σλ the nonnegative solution to the algebraic Riccati equation

    Σλ = A²Σλ − A²λ²Σλ² / (λ²Σλ + σn²) + σv².    (25)

The solution to the Riccati equation (25) is equal to the variance of the optimal prediction estimation error x̃k|k−1 = xk − x̂k|k−1, namely, E{X̃²k|k−1} = Σλ. Stability and time-invariance imply that all signal distributions are stationary, and thus (24) reduces to Jx(λ) = E{Xk²}. The fact that the control input is not penalized in the cost (24) implies that the state feedback gain has the special form [9, eqn. (16)] Kc = A. We now state a simple expression for the optimal value of the cost (24) as a function of the design parameter λ.

Lemma IV.1 The mean square value of the state under optimal control is equal to the variance of the optimal prediction estimation error: Jx(λ) = E{Xk²} = Σλ.

Proof: Combining the state equations for x̂k+1|k and x̂k|k, and using the fact that Kc = A yields² x̂k+1|k = 0, and thus xk+1 = x̃k+1|k.

Our next result shows that choosing λ to satisfy the power constraint with equality results in the lower bound (23) also being achieved with equality.

²The fact that the optimal prediction estimate is equal to zero appears in the literature on minimum variance control [6].
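The Riccati equation (25) and Lemma IV.1 can be checked numerically. The sketch below (all parameter values are assumed for illustration) solves (25) by fixed-point iteration and compares Σλ with the empirical mean square state from a simulation of the loop.

```python
import numpy as np

rng = np.random.default_rng(2)

# Numerical check of Lemma IV.1 (parameter values are illustrative assumptions).
A, sigma_n2, sigma_v2, lam = 1.5, 1.0, 0.5, 2.0

# Solve the Riccati equation (25) for Sigma_lambda by fixed-point iteration.
S = sigma_v2
for _ in range(200):
    S = A**2 * S - A**2 * lam**2 * S**2 / (lam**2 * S + sigma_n2) + sigma_v2

Kf = lam * S / (lam**2 * S + sigma_n2)   # filter gain; K_p = A*K_f, K_c = A

x, xhat_pred, sq = 0.0, 0.0, []
for k in range(100_000):
    r = lam * x + rng.normal(0.0, np.sqrt(sigma_n2))    # channel output
    xhat_filt = xhat_pred + Kf * (r - lam * xhat_pred)  # filtered estimate
    u = -A * xhat_filt                                  # optimal control, K_c = A
    x = A * x + u + rng.normal(0.0, np.sqrt(sigma_v2))  # scalar plant (22)
    xhat_pred = A * xhat_pred + u + A * Kf * (r - lam * xhat_pred)  # prediction
    sq.append(x * x)

print(S)            # Sigma_lambda from (25)
print(np.mean(sq))  # empirical E{x_k^2}; matches Sigma_lambda per Lemma IV.1
```

Consistent with the proof of Lemma IV.1, the prediction estimate xhat_pred remains identically zero in this simulation, so the state coincides with the prediction error.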


Proposition IV.2 Assume that

    λ² = P (1 − A²/(1 + P/σn²)) / σv².    (26)

Then under optimal control the channel input and plant state satisfy E{Sk²} = P and

    E{Xk²} = σv² / (1 − A²/(1 + P/σn²)).    (27)

Proof: For a given value of λ, the variance of the channel input satisfies E{Sk²} = λ²Σλ, where Σλ satisfies (25). Multiply both sides of (25) by λ², and define Σ̄λ ≜ λ²Σλ. Then

    Σ̄λ = A²Σ̄λ − A²Σ̄λ² / (Σ̄λ + σn²) + λ²σv².    (28)

The nonnegative solution to (28) has limiting values lim_{λ→0} Σ̄λ = (A² − 1)σn² and lim_{λ→∞} Σ̄λ = ∞. Hence, provided that the power limit satisfies the lower bound required for stabilization, it follows that there exists a value of λ for which the channel input satisfies the power limit with equality. Setting Σ̄λ = P in (28) and solving for λ² yields (26). The mean square value of the state is then given by Σλ = Σ̄λ/λ² = P/λ², which is (27).

We have shown that, for a scalar dynamical system, the theoretical lower bound (23) on the mean square norm of the state can be achieved using linear time-invariant communication and control strategies. There are connections between our results and others appearing in the literature. It is possible to obtain Proposition IV.2 by modifying results of Bansal and Başar, specifically by deriving an infinite horizon version of Theorem 2 from [1]. The authors of [17, Section V.B] derive an expression for the optimal cost identical to (27); however, they assume that the channel input is allowed to depend on more information than we do in our problem statement.

V. CONCLUSIONS AND FURTHER DIRECTIONS

We have used the concept of entropy power to derive a lower bound on the mean square norm of the state of an unstable linear system driven by a Gaussian disturbance and stabilized over a Gaussian communication channel. Our results show that use of nonlinear time-varying communication and control strategies does not allow stabilization with a lower signal-to-noise ratio than that required to stabilize with constant linear state feedback. We also show that, for a scalar system, the lower bound on the mean square state norm can be achieved using linear time-invariant communication and control strategies.


REFERENCES

[1] R. Bansal and T. Başar. Simultaneous design of measurement and control strategies for stochastic systems with feedback. Automatica, 25(5):679–694, 1989.
[2] J. H. Braslavsky, R. H. Middleton, and J. S. Freudenberg. Feedback stabilization over signal-to-noise ratio constrained channels. IEEE Transactions on Automatic Control, 52(8):1391–1403, August 2007.
[3] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley and Sons, New York, 2nd edition, 2006.
[4] N. Elia. When Bode meets Shannon: Control-oriented feedback communication schemes. IEEE Transactions on Automatic Control, 49(9):1477–1488, September 2004.
[5] J. S. Freudenberg, R. H. Middleton, and V. Solo. The minimal signal-to-noise ratio required to stabilize over a noisy channel. In Proceedings of the 2006 American Control Conference, pages 650–655, June 2006.
[6] G. C. Goodwin and K. S. Sin. Adaptive Filtering Prediction and Control. Prentice Hall, Englewood Cliffs, NJ, 1984.
[7] P. R. Kumar and P. Varaiya. Stochastic Systems: Estimation, Identification, and Adaptive Control. Prentice-Hall, 1986.
[8] H. Kwakernaak and R. Sivan. Linear Optimal Control Systems. Wiley-Interscience, 1972.
[9] J. M. Maciejowski. Asymptotic recovery for discrete-time systems. IEEE Transactions on Automatic Control, AC-30(6):602–605, June 1985.
[10] N. C. Martins and M. A. Dahleh. Feedback control in the presence of noisy channels: "Bode-like" fundamental limitations of performance. IEEE Transactions on Automatic Control, to appear, 2008.
[11] A. S. Matveev and A. V. Savkin. The problem of LQG optimal control via a limited capacity communication channel. Systems and Control Letters, 53(1):51–64, 2004.
[12] G. N. Nair and R. J. Evans. Stabilizability of stochastic linear systems with finite feedback data rates. SIAM Journal on Control and Optimization, 43(2):413–436, July 2004.
[13] G. N. Nair, R. J. Evans, I. M. Y. Mareels, and W. Moran. Topological feedback entropy and nonlinear stabilization. IEEE Transactions on Automatic Control, 49(9):1585–1597, September 2004.
[14] G. N. Nair, F. Fagnani, S. Zampieri, and R. J. Evans. Feedback control under data rate constraints: An overview. Proceedings of the IEEE, 95(1):108–137, January 2007.
[15] C. E. Shannon. A mathematical theory of communication. The Bell System Technical Journal, 27:379–423, 623–656, July, October 1948.
[16] H. Stark and J. W. Woods. Probability, Random Processes, and Estimation Theory for Engineers. Prentice Hall, Englewood Cliffs, NJ, 1986.
[17] S. Tatikonda, A. Sahai, and S. M. Mitter. Stochastic linear control over a communication channel. IEEE Transactions on Automatic Control, 49(9):1549–1561, September 2004.
[18] S. Yüksel and T. Başar. Optimal signaling policies for decentralized multicontroller stabilizability over communication channels. IEEE Transactions on Automatic Control, 52(10):1969–1974, October 2007.
