Trajectory Planning under Stochastic Uncertainty

Ulf T. Jönsson
Optimization and Systems Theory, Royal Institute of Technology, SE-100 44 Stockholm, Sweden

Clyde Martin
Department of Mathematics and Statistics, Texas Tech University, Lubbock, Texas, USA

Yishao Zhou
Department of Mathematics, Stockholm University, SE-106 91 Stockholm, Sweden

Abstract

A trajectory planning problem for linear stochastic differential equations is considered in this paper. The aim is to control the system such that the expected value of the output interpolates given points at given times, while the variance and an integral cost of the control effort are minimized. The solution is obtained using the dynamic programming technique, which allows for several possible generalizations that are also considered. The results of this paper can be used for control of systems with a stochastic growth rate, as frequently arises in investment problems and in biomathematics.

1 Introduction

We are concerned with controlling the state, or some measured quantity of the state, to given values at discrete times. That is, given a system of the form ẋ = f(x) + u_1 g_1(x) + ··· + u_k g_k(x) and y = Cx, where x ∈ R^n and y ∈ R^p, we look for a system of controls that drive the output through, or close to, a prescribed sequence of set points {y(t_i) = α_i : i = 1, …, N}. In this paper we specialize this problem to one in which the system is linear but the evolution equation is stochastic. Such systems occur in a variety of settings. We consider a rather complete example of a model that contains variable interest rates in Section 3. A second example, considered in Section 4, is trajectory planning for a feedback linearized mobile robot. Here the combined effect of surface irregularities, friction, and other disturbances is modeled as a multiplicative stochastic disturbance. Another set of problems occurs in biomathematics when considering the chemostat. There are extensive references on the physical, biological and mathematical background of the chemostat available in the literature; see for example [5]. In many situations the nutrient content of

the chemostat is not deterministic, and a better model would be obtained by replacing the linear growth rate of the standard models by a random variable. This variable reflects the rate at which the nutrient is consumed and/or naturally degrades. This rate is a function of temperature and many other variables that are not modelled, and hence could and should be taken to be stochastic. It is beyond the scope of this paper to give a complete analysis of such systems, but a model in this form fits the theory described in this paper. The specific topic of this paper is the design of a feedback control for a linear stochastic differential equation such that the mean value of a specific output interpolates given points at given times, while at the same time the variance and the expected value of an integral quadratic cost along the trajectory are minimized. By minimizing the variance at the interpolation points we minimize the effect of uncertainty due to the diffusion term in the system equation. By using a new notation we obtain compact formulas for the design equations. The problem is related to several methods based on optimal control for interpolating splines that have recently been proposed in the literature; see for example [4, 3, 12]. Apart from considering stochastic models, this paper distinguishes itself by using a dynamic programming approach, which leads to feedback solutions of the corresponding control problem. In Section 5 we discuss a generalization where we optimize over a finite set of possible interpolation points at each time instant. This problem is reminiscent of the standard shortest path problem, see e.g. [1], but is complicated by its dynamic nature, which makes the cost corresponding to each node dependent on future interpolation points as well as on the initial condition. The complexity of this problem can be reduced by considering suboptimal techniques such as discretization or tree pruning. We briefly review some related work in [2, 6, 8, 9].

2 Stochastic Trajectory Planning

We consider a path planning problem where a stochastic disturbance enters the differential equation. As our basic problem we consider

    J_0(x_0) = \min_{u \in \mathcal{M}(t_0,t_N)} \sum_{k=0}^{N-1} w_{k+1}\,\mathrm{Var}^{t_0,x_0}\{CX(t_{k+1})\} + E^{t_0,x_0} \int_{t_0}^{t_N} |u|^2\,dt

    subject to  dX = (AX + Bu)\,dt + G(X)\,dZ,  X(t_0) = x_0,                          (2.1)
                E^{t_0,x_0}\{CX(t_{k+1})\} = \alpha_{k+1},  k = 0,\dots,N-1,

where w_k ≥ 0 and (A, B) is a controllable pair. The last term in the stochastic differential equation corresponds to a multiplicative noise on the state vector, defined by an m-dimensional Brownian motion Z_t. In other words, Z_t is a process with zero mean and independent increments, i.e. E\{dZ_t\,dZ_s^T\} = \delta(t-s)I\,dt, where \delta denotes the Dirac function. We assume that G(x) is a linear function of the form

    G(x) = \sum_{i=1}^{n} x_i G_i

where G_i \in R^{n\times m}. In (2.1), E^{t_0,x_0} is the expectation operator given that X(t_0) = x_0, and

    \mathrm{Var}^{t_0,x_0}\{CX(t_k)\} = E^{t_0,x_0}\{(CX(t_k) - E^{t_0,x_0}\{CX(t_k)\})^2\}.
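The dynamics in (2.1) are straightforward to simulate. The sketch below is a non-authoritative illustration (function names are our own): it integrates dX = (AX + Bu)dt + G(X)dZ with the Euler–Maruyama scheme for an arbitrary Markov control u(t, x).

```python
import numpy as np

def euler_maruyama(A, B, G_list, u, x0, t0, tN, n_steps, rng):
    """Simulate dX = (AX + Bu)dt + G(X)dZ, where G(x) = sum_i x_i G_i,
    under a Markov control u(t, x), using the Euler-Maruyama scheme."""
    n, m = len(x0), G_list[0].shape[1]
    dt = (tN - t0) / n_steps
    x, t = np.array(x0, dtype=float), t0
    path = [x.copy()]
    for _ in range(n_steps):
        Gx = sum(x[i] * G_list[i] for i in range(n))   # diffusion matrix G(x)
        dZ = rng.normal(0.0, np.sqrt(dt), size=m)      # Brownian increment
        x = x + (A @ x + B @ u(t, x)) * dt + Gx @ dZ
        t += dt
        path.append(x.copy())
    return np.array(path)
```

Averaging many such sample paths gives Monte Carlo estimates of E{CX(t_k)} and Var{CX(t_k)}, which is how the expected-value plots in Sections 3 and 4 can be estimated in principle.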


We consider optimization over all Markov controls, i.e. feedback controls of the form u(t, ω) = µ(t, X(t, ω)). We let \mathcal{M}(t_0, t_N) denote the set of Markov controls on the interval (t_0, t_N). It can be shown that our optimal solution is also optimal over all \mathcal{F}_t-adapted processes, where \mathcal{F}_t denotes the σ-algebra generated by Z_s for s ≤ t; see e.g. [10]. It turns out that there will be linear and constant terms in the value function due to the variance term in the objective function. It is therefore no essential addition in complexity to consider a more general path planning problem, where we allow the dynamics to be time-varying and different from stage to stage. We also consider integral costs that penalize the state vector. This gives the following generalization of (2.1):

    J_0(x_0) = \min_{u\in\mathcal{M}(t_0,t_N)} E^{t_0,x_0}\Big\{ \sum_{k=0}^{N-1}\Big( w_{k+1}|C_1X(t_{k+1}) - \beta_{k+1}|^2 + \int_{t_k}^{t_{k+1}} \sigma_k(t,X,u)\,dt \Big)\Big\}

    subject to  dX = (A_k(t)X + B_k(t)u)\,dt + G_k(t,X)\,dZ,  t \in [t_k,t_{k+1}],  X(t_0) = x_0,     (2.2)
                E^{t_0,x_0}\{C_2X(t_{k+1})\} = \alpha_{k+1},  k = 0,\dots,N-1,

where

    \sigma_k(t,x,u) = x^T Q_k(t)x + 2x^T S_k(t)u + u^T R_k(t)u + 2q_k(t)^T x + 2r_k(t)^T u + \ell_k(t)

and

    G_k(t,x) = \sum_{i=1}^{n} x_i G_{k,i}(t)

and where everything else is defined in a similar way as in (2.1). Here A_k, B_k, Q_k, …, \ell_k and G_{k,i} are piecewise continuous functions of time, all pairs (A_k(t), B_k(t)) are assumed to be completely controllable, and the cost is strictly convex, which is the case if for all t ∈ [t_k, t_{k+1}], k = 0, …, N−1, we have

    R_k(t) > 0 \quad\text{and}\quad \begin{bmatrix} Q_k(t) & S_k(t) \\ S_k(t)^T & R_k(t) \end{bmatrix} \ge 0.        (2.3)

Note that if C_1 = C_2 = C and \alpha_k = \beta_k then E^{t_0,x_0}|CX(t_k) - \beta_k|^2 = \mathrm{Var}^{t_0,x_0}\{CX(t_k)\}. Let us define the cost-to-go functions

    J_k(x) = \min_{u\in\mathcal{M}(t_k,t_N)} E^{t_k,x}\Big\{ \sum_{i=k}^{N-1}\Big( w_{i+1}|C_1X(t_{i+1}) - \beta_{i+1}|^2 + \int_{t_i}^{t_{i+1}} \sigma_i(t,X,u)\,dt \Big)\Big\}

    subject to  dX = (A_i(t)X + B_i(t)u)\,dt + G_i(t,X)\,dZ,  t \in [t_i,t_{i+1}],  X(t_k) = x,
                E^{t_k,x}\{C_2X(t_{i+1})\} = \alpha_{i+1},  i = k,\dots,N-1,
    J_N(x) = 0.                                                                        (2.4)

Due to the Markov property of the stochastic differential equation we can use dynamic programming to solve (2.2). We next state two propositions and then the main result that solves (2.2). The first states the dynamic programming recursion.

Proposition 2.1. The optimal cost satisfies the following dynamic programming equation:

    J_k(x) = \min_{u\in\mathcal{M}(t_k,t_{k+1})} E^{t_k,x}\Big\{ \int_{t_k}^{t_{k+1}} \sigma_k(t,X,u)\,dt + w_{k+1}|C_1X(t_{k+1}) - \beta_{k+1}|^2 + J_{k+1}(X(t_{k+1})) \Big\}

    subject to  dX = (A_k(t)X + B_k(t)u)\,dt + G_k(t,X)\,dZ,  X(t_k) = x,
                E^{t_k,x}\{C_2X(t_{k+1})\} = \alpha_{k+1},
    J_N(x) = 0.

The next proposition states the solution to the following stochastic optimal control problem. It shows that J_k(x) is a quadratic function, which is instrumental in solving the dynamic programming iteration:

    V(x_0,\alpha,t_0,t_f) = \min_{u\in\mathcal{M}(t_0,t_f)} E^{t_0,x_0}\Big\{ \int_{t_0}^{t_f} \sigma(t,X(t),u(t))\,dt + X(t_f)^T Q_0 X(t_f) + 2q_0^T X(t_f) \Big\}

    subject to  dX = (A(t)X + B(t)u)\,dt + G(t,X)\,dZ,  X(t_0) = x_0,                  (2.5)
                E^{t_0,x_0}\{CX(t_f)\} = \alpha,

where \sigma(t,x,u) = x^T Q(t)x + 2x^T S(t)u + u^T R(t)u + 2q(t)^T x + 2r(t)^T u + \ell(t) and where Q, R, and S satisfy the conditions in (2.3).

Proposition 2.2. We have

    V(x_0,\alpha,t_0,t_f) = x_0^T P(t_0)x_0 + 2p(t_0)^T x_0 + \rho(t_0) + (N(t_0)^T x_0 + m(t_0))^T W(t_0)^{-1}(N(t_0)^T x_0 + m(t_0))

where

    \dot P + A^T P + PA + Q + \Pi(P) = (PB + S)R^{-1}(PB + S)^T,      P(t_f) = Q_0
    \dot N + (A - BR^{-1}(B^T P + S^T))^T N = 0,                      N(t_f) = C^T
    \dot p + (A - BR^{-1}(B^T P + S^T))^T p + q = (PB + S)R^{-1}r,    p(t_f) = q_0          (2.6)
    \dot W + N^T BR^{-1}B^T N = 0,                                    W(t_f) = 0
    \dot m = N^T BR^{-1}(B^T p + r),                                  m(t_f) = -\alpha
    \dot\rho + \ell = (r + B^T p)^T R^{-1}(r + B^T p),                \rho(t_f) = 0

and where \Pi(P) is a linear matrix function with elements \Pi(P)_{k,l} = \frac{1}{2}\mathrm{tr}(G_k^T P G_l). The optimal control is

    u^* = -R^{-1}(B^T P + S^T)X - R^{-1}B^T N v - R^{-1}(B^T p + r)

with v = -W(t_0)^{-1}(N(t_0)^T x_0 + m(t_0)).

Remark 2.1. If Q, R and S satisfy a condition analogous to (2.3), then the linearly perturbed Riccati equation in (2.6) has a unique absolutely continuous positive semidefinite solution [11]. All other differential equations in (2.6) are linear with bounded piecewise continuous coefficients, which ensures existence of unique absolutely continuous solutions. Note also that W(t) > 0 for t ∈ [t_0, t_f) by the complete controllability of (A(t), B(t)).
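Once the backward system (2.6) has been solved, evaluating the optimal control is pure linear algebra. The following sketch (helper names are our own; a hedged illustration, not the authors' code) computes the constant vector v and the control u* of Proposition 2.2 at a given state.

```python
import numpy as np

def multiplier_v(W0, N0, m0, x0):
    """v = -W(t0)^{-1} (N(t0)^T x0 + m(t0))."""
    return -np.linalg.solve(W0, N0.T @ x0 + m0)

def optimal_u(x, P, N, p, R, B, S, r, v):
    """u* = -R^{-1}(B^T P + S^T)x - R^{-1}B^T N v - R^{-1}(B^T p + r)."""
    Rinv = np.linalg.inv(R)
    return (-Rinv @ (B.T @ P + S.T) @ x
            - Rinv @ B.T @ N @ v
            - Rinv @ (B.T @ p + r))
```

In the scalar sanity case P = N = R = B = 1, S = p = r = 0, W(t_0) = 2, m(t_0) = −1, x_0 = 1 the multiplier v vanishes and the control reduces to pure state feedback u* = −x.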

Proof. The proof is done by Lagrangian relaxation of the linear constraint. See Appendix A for the complete details.

In order to obtain a compact notation we introduce

    \tilde Q = \begin{bmatrix} Q & q \\ q^T & \ell \end{bmatrix},  \tilde Q_0 = \begin{bmatrix} Q_0 & q_0 \\ q_0^T & 0 \end{bmatrix},  \tilde S = \begin{bmatrix} S \\ r^T \end{bmatrix},  \tilde R = R,

    \tilde P = \begin{bmatrix} P & p \\ p^T & \rho \end{bmatrix},  \tilde N = \begin{bmatrix} N \\ m^T \end{bmatrix},  \tilde W = W,                                                     (2.7)

    \tilde A = \begin{bmatrix} A & 0 \\ 0 & 0 \end{bmatrix},  \tilde B = \begin{bmatrix} B \\ 0 \end{bmatrix},  \tilde C(\alpha) = \begin{bmatrix} C & -\alpha \end{bmatrix},

    \tilde G(t,\tilde x) = \begin{bmatrix} G(t,x) \\ 0 \end{bmatrix},  \hat\Pi(\tilde P) = \begin{bmatrix} \Pi(P) & 0 \\ 0 & 0 \end{bmatrix}.

If we finally let \tilde X = [X^T\ 1]^T and \tilde x_0 = [x_0^T\ 1]^T, then the optimization problem (2.5) can be written

    V(x_0,\alpha,t_0,t_f) = \min_{u\in\mathcal{M}(t_0,t_f)} E^{t_0,\tilde x_0}\Big\{ \int_{t_0}^{t_f} [\tilde X^T\tilde Q\tilde X + 2\tilde X^T\tilde S u + u^T\tilde R u]\,dt + \tilde X(t_f)^T\tilde Q_0\tilde X(t_f) \Big\}

    subject to  d\tilde X = (\tilde A(t)\tilde X + \tilde B(t)u)\,dt + \tilde G(t,\tilde X)\,dZ,  \tilde X(t_0) = \tilde x_0,       (2.8)
                E^{t_0,\tilde x_0}\{\tilde C(\alpha)\tilde X(t_f)\} = 0,

and the optimal solution can be written

    V(x_0,\alpha,t_0,t_f) = \tilde x_0^T\big(\tilde P(t_0) + \tilde N(t_0)\tilde W(t_0)^{-1}\tilde N(t_0)^T\big)\tilde x_0

where

    \dot{\tilde P} + \tilde A^T\tilde P + \tilde P\tilde A + \tilde Q + \hat\Pi(\tilde P) = (\tilde P\tilde B + \tilde S)\tilde R^{-1}(\tilde P\tilde B + \tilde S)^T,   \tilde P(t_f) = \tilde Q_0
    \dot{\tilde N} + (\tilde A - \tilde B\tilde R^{-1}(\tilde B^T\tilde P + \tilde S^T))^T\tilde N = 0,                                                                 \tilde N(t_f) = \tilde C(\alpha)^T
    \dot{\tilde W} + \tilde N^T\tilde B\tilde R^{-1}\tilde B^T\tilde N = 0,                                                                                             \tilde W(t_f) = 0.

The optimal control is

    u^* = -\tilde R^{-1}(\tilde B^T\tilde P + \tilde S^T)\tilde x - \tilde R^{-1}\tilde B^T\tilde N\tilde W(t_0)^{-1}\tilde N(t_0)^T\tilde x_0.

There is an equivalent feedback form given as u^* = -\tilde R^{-1}(\tilde B^T(\tilde P + \tilde N\tilde W^{-1}\tilde N^T) + \tilde S^T)\tilde X.

We can now state the solution of the general stochastic trajectory planning problem in (2.2).

Proposition 2.3. Consider the optimal control problem in (2.2), where the condition (2.3) holds. The optimal Markov control in each time interval [t_k, t_{k+1}] is (all variables are defined analogously with (2.7))

    u^*(t) = -\tilde R_k(t)^{-1}(\tilde B_k(t)^T\tilde P_k(t) + \tilde S_k(t)^T)\tilde X(t) - \tilde R_k(t)^{-1}\tilde B_k(t)^T\tilde N_k(t)\tilde W_k(t_k)^{-1}\tilde N_k(t_k)^T\tilde X(t_k)

where for k = N−1, …, 0

    \dot{\tilde P}_k + \tilde A_k^T\tilde P_k + \tilde P_k\tilde A_k + \tilde Q_k + \hat\Pi_k(\tilde P_k) = (\tilde P_k\tilde B_k + \tilde S_k)\tilde R_k^{-1}(\tilde P_k\tilde B_k + \tilde S_k)^T,
    \dot{\tilde N}_k + (\tilde A_k - \tilde B_k\tilde R_k^{-1}(\tilde B_k^T\tilde P_k + \tilde S_k^T))^T\tilde N_k = 0,   \tilde N_k(t_{k+1}) = \tilde C_2(\alpha_{k+1})^T,
    \dot{\tilde W}_k + \tilde N_k^T\tilde B_k\tilde R_k^{-1}\tilde B_k^T\tilde N_k = 0,                                   \tilde W_k(t_{k+1}) = 0,

where \tilde P_{N-1}(t_N) = w_N\tilde C_1(\beta_N)^T\tilde C_1(\beta_N) and, for k = N−2, N−3, …, 0,

    \tilde P_k(t_{k+1}) = \tilde P_{k+1}(t_{k+1}) + \tilde N_{k+1}(t_{k+1})\tilde W_{k+1}(t_{k+1})^{-1}\tilde N_{k+1}(t_{k+1})^T + w_{k+1}\tilde C_1(\beta_{k+1})^T\tilde C_1(\beta_{k+1}).

The cost-to-go is

    J_k(x) = \tilde x^T\big(\tilde P_k(t_k) + \tilde N_k(t_k)\tilde W_k(t_k)^{-1}\tilde N_k(t_k)^T\big)\tilde x.

Proof. By dynamic programming.

The formulation of Proposition 2.3 in the compact notation (2.7) gives appealing formulas, but in a numerical implementation it is more efficient to perform computations using a system of the form (2.6). For the basic trajectory planning problem in (2.1) this reduces to the following result.

Corollary 2.1. Consider the optimal control problem in (2.1), where the pair (A, B) is controllable. The optimal Markov control in each time interval [t_k, t_{k+1}] is

    u^*(t) = -R^{-1}B^T P_k(t)X(t) - R^{-1}B^T N_k(t)v_k - R^{-1}B^T p_k(t)

with v_k = -W_k(t_k)^{-1}(N_k(t_k)^T X(t_k) + m_k(t_k)), where

    \dot P_k + A^T P_k + P_k A + \Pi(P_k) = P_k BR^{-1}B^T P_k,
    \dot N_k + (A - BR^{-1}B^T P_k)^T N_k = 0,
    \dot p_k + (A - BR^{-1}B^T P_k)^T p_k = 0,                                          (2.9)
    \dot W_k + N_k^T BR^{-1}B^T N_k = 0,
    \dot m_k = N_k^T BR^{-1}B^T p_k,
    \dot\rho_k = p_k^T BR^{-1}B^T p_k,

where \Pi(P_k) is a linear matrix function with elements \Pi(P)_{k,l} = \frac{1}{2}\mathrm{tr}(G_k^T P G_l), and the boundary conditions are

    P_k(t_{k+1}) = P_{k+1}(t_{k+1}) + N_{k+1}(t_{k+1})W_{k+1}(t_{k+1})^{-1}N_{k+1}(t_{k+1})^T + w_{k+1}C^T C
    p_k(t_{k+1}) = p_{k+1}(t_{k+1}) + N_{k+1}(t_{k+1})W_{k+1}(t_{k+1})^{-1}m_{k+1}(t_{k+1}) - w_{k+1}C^T\alpha_{k+1}
    \rho_k(t_{k+1}) = \rho_{k+1}(t_{k+1}) + m_{k+1}(t_{k+1})^T W_{k+1}(t_{k+1})^{-1}m_{k+1}(t_{k+1}) + w_{k+1}\alpha_{k+1}^T\alpha_{k+1}

and N_k(t_{k+1}) = C^T, m_k(t_{k+1}) = -\alpha_{k+1}, and W_k(t_{k+1}) = 0. The optimal cost-to-go is

    J_k(x) = x^T P_k(t_k)x + 2p_k(t_k)^T x + \rho_k(t_k) + (N_k(t_k)^T x + m_k(t_k))^T W_k(t_k)^{-1}(N_k(t_k)^T x + m_k(t_k)).
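As the corollary notes, the computations reduce to integrating (2.9) backwards over each interval. A minimal sketch under our own naming (explicit Euler stepping for clarity; a stiff ODE solver may well be preferable in practice):

```python
import numpy as np

def Pi(P, G_list):
    """Pi(P)_{kl} = 0.5 * tr(G_k^T P G_l): the extra linear term that the
    multiplicative noise adds to the Riccati equation."""
    n = len(G_list)
    return np.array([[0.5 * np.trace(G_list[k].T @ P @ G_list[l])
                      for l in range(n)] for k in range(n)])

def backward_stage(A, B, Rinv, G_list, C, P_end, T, n_steps):
    """Integrate P_k, N_k, W_k of (2.9) backwards over an interval of
    length T, starting from the boundary values at t_{k+1}."""
    dt = T / n_steps
    P = P_end.copy()
    N = C.T.copy()                            # N_k(t_{k+1}) = C^T
    W = np.zeros((C.shape[0], C.shape[0]))    # W_k(t_{k+1}) = 0
    BRB = B @ Rinv @ B.T
    for _ in range(n_steps):
        # Step from t to t - dt: X(t - dt) = X(t) - dt * Xdot, with the
        # time derivatives read off from (2.9).
        P_new = P + dt * (A.T @ P + P @ A + Pi(P, G_list) - P @ BRB @ P)
        N_new = N + dt * (A - BRB @ P).T @ N
        W_new = W + dt * (N.T @ BRB @ N)
        P, N, W = P_new, N_new, W_new
    return P, N, W
```

For the scalar sanity case A = 0, B = R = C = 1, no noise, and P_k(t_{k+1}) = 0, the scheme reproduces the closed-form values P_k ≡ 0, N_k ≡ 1 and W_k(t_k) = t_{k+1} − t_k.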

3 A Homeowner's Investment Problem

We consider a very simple problem arising in portfolio optimization. For most people their home is the dominant feature of the portfolio. In this example we consider the problem of controlling the investment in the home when interest rates fluctuate and inflation, either positive or negative, occurs. We are aware that, in addition to making monthly payments, the homeowner will usually make improvements to the home, thus increasing its value over and above the increase caused by inflation; that some features of the home depreciate, lowering its value; and that the homeowner has the option to borrow against the value of the home at several points during the course of ownership. We model the value of the home in terms of two linear stochastic differential equations and allow the noise to enter as a multiple of the state. The variables are summarized in Table 1.

    P(t)    the principal remaining on the loan at time t
    P_0     the initial purchase price
    V(t)    the actual market value of the home at time t
    u_1(t)  the amount of cash either paid toward the principal and interest or withdrawn from the home in terms of equity loans
    u_2(t)  money applied to the value of the house in terms of improvements, or taken from the value of the house through the sale of assets
    dZ(t)   a Brownian motion process representing inflation and fluctuation in interest
    i       the nominal annual interest rate

    Table 1: Variables

We begin by considering the discrete-time case. The fundamental law of payments states that the principal at time n + 1 equals the principal at time n, plus the interest drawn during the time period, plus or minus the payment, i.e.,

    P_{n+1} = (1 + i_n)P_n \pm Q_n

where P_n denotes the principal, i_n the interest in the nth period, and Q_n the nth payment. Now, using the notation from the table, we have

    P(t + h) = P(t) + h(i(t)P(t) - u_1(t) + u_2(t)),  P(0) = P_0        (3.10)

where u_1(t) is the rate of payment. The control u_2(t) represents money borrowed from the bank to make improvements; this variable enters into the equation for the value of the house. We divide the interest rate into two parts. The first is a nominal rate and the second is a "random" motion of the rate. For the sake of simplicity we assume that this rate is equal to the rate of inflation. We can think of i(t) = i + Z(t + h) - Z(t), where Z(t + h) - Z(t) is an increment of Brownian motion. The second part of the model is the market value of the home. Here we have

    V(t + h) = V(t) + hu_2(t) + (Z(t + h) - Z(t))V(t),  V(0) = P_0        (3.11)

We allow both u_1 and u_2 to be either positive or negative. If u_1 is negative we consider this to be money that is withdrawn from the value of the home as an equity loan. If u_2 is negative we consider this to be money obtained through the sale of assets associated with the home. Any such money is assumed to be applied to the principal. This is not a real restriction, since we can immediately borrow this money in terms of an equity loan. We can pass to the limit to obtain a continuous-time model. For the continuation of this model we take the following equations describing the evolution of the value of the home:

    dP(t) = iP(t)\,dt + (u_2(t) - u_1(t))\,dt + P(t)\,dZ(t),  P(0) = P_0        (3.12)

    dV(t) = V(t)\,dZ(t) + u_2(t)\,dt,  V(0) = P_0                                (3.13)
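To make the model concrete, the following sketch simulates one sample path of (3.12)–(3.13) by Euler–Maruyama for given deterministic payment and improvement rates u_1, u_2. The function and its defaults are our own illustration; the paper's optimal controls are feedback laws obtained from Corollary 2.1, not fixed rates.

```python
import numpy as np

def simulate_home(P0, i, u1, u2, T=30.0, n_steps=3000, rng=None):
    """One Euler-Maruyama sample path of
         dP = (iP + u2 - u1) dt + P dZ,   dV = u2 dt + V dZ,
    with the same Brownian increment driving both equations, as in the model.
    Returns (P(T), V(T), Y(T)) where Y = V - P is the cash value of the home."""
    rng = rng or np.random.default_rng(0)
    dt = T / n_steps
    P = V = float(P0)
    t = 0.0
    for _ in range(n_steps):
        dZ = rng.normal(0.0, np.sqrt(dt))  # shared inflation/interest shock
        P += (i * P + u2(t) - u1(t)) * dt + P * dZ
        V += u2(t) * dt + V * dZ
        t += dt
    return P, V, V - P
```

With zero noise, zero interest and zero controls the path is constant, which is a convenient sanity check before averaging many paths to estimate E{Y(t)}.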

In the first equation dZ(t) denotes the variation that occurs in interest rates. This is of course an approximation of reality: it is unlikely that interest rates would ever become negative, and in most home loans there is a provision bounding the interest rate from above. The incorporation of these bounds complicates the problem and obscures the application that we wish to exhibit. In the second equation dZ(t) denotes the effect of inflation on the value of the home. At one time it would not have seemed reasonable to allow negative inflation; however, during the 1990s and into the beginning of the 21st century we have seen negative inflation in Japan. Using the same rate for inflation and interest fluctuation is not entirely correct, but for the purpose of an example to support the theory developed in this paper it is adequate. The variable of primary interest is a linear combination of P(t) and V(t). We let

    Y(t) = V(t) - P(t)        (3.14)

This variable represents the cash value of the home. A reasonable objective is to ask that Y(t) double during the thirty years of the standard loan. That is, we state as one objective that E{Y(30)} = 2P_0, where E{·} denotes expected value. In general we might expect that Y(t) should follow a curve such as 2P_0(e^{\beta t} - 1), where \beta is chosen so that e^{30\beta} - 1 = 2. For the purpose of the example we ask that the following constraints be met:

    E\{Y(10k)\} = 2P_0(e^{10k\beta} - 1),  k = 1, 2, 3        (3.15)

We want to find an investment strategy that minimizes the financial activity as well as the variance at the constraint times in (3.15). The problem that we consider is hence of the following form.

Problem 3.1.

    \min_u\ E\Big\{ \int_0^{30} (u_1(t)^2 + u_2(t)^2)\,dt \Big\} + \sum_{k=1}^{3} w_k\,\mathrm{Var}\{Y(10k)\}

subject to the constraints of equations (3.12), (3.13) and (3.15).

We consider the case when w_k = 0.01, i = 0.075 and P_0 = 10^5 $. In Figure 1 we show a stochastic simulation of the result. The upper left plot shows a simulation of the cash value Y. The upper right plot shows an estimate of the expected value E{Y(t)} obtained by averaging over 1000 simulations. The lower left plot shows the controls u_1 and u_2 (u_1 in solid line, u_2 in dashed line), and the lower right plot shows P and V (P in solid line, V in dashed line).

    [Figure 1: Stochastic simulation of the solution to the investment problem.]

4 Path Planning for a Mobile Robot

We consider the problem of steering a robot from rest at the initial position (−5, 1) to rest at the final position (−1, 5) in such a way that it stays inside the corridor in the upper left part of Figure 2. We assume the linear dynamics \ddot y = u + w, where w is a noise signal that takes into account friction, irregularities in the floor, and other error sources. The linear dynamics can be obtained after feedback linearization of realistic robot models [7]. If we let the noise be modeled as a Brownian motion, then the state-space realization of the robot dynamics becomes the stochastic system

    dX = (AX + Bu)\,dt + G(X)\,dW
    Y = CX

where

    A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix},  B = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix},  C = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},  G(x) = \begin{bmatrix} 0 & 0 \\ x_2 & 0 \\ 0 & 0 \\ 0 & x_4 \end{bmatrix}.
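Corollary 2.1 requires (A, B) to be controllable; for these matrices this is easy to verify numerically. The small check below is our own addition, not part of the paper:

```python
import numpy as np

# State x = (y1, dy1, y2, dy2): two decoupled double integrators.
A = np.array([[0., 1., 0., 0.],
              [0., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])
B = np.array([[0., 0.],
              [1., 0.],
              [0., 0.],
              [0., 1.]])

def ctrb(A, B):
    """Kalman controllability matrix [B, AB, ..., A^{n-1}B]."""
    n = A.shape[0]
    blocks, M = [B], B
    for _ in range(n - 1):
        M = A @ M
        blocks.append(M)
    return np.hstack(blocks)

rank = np.linalg.matrix_rank(ctrb(A, B))   # full rank 4 <=> (A, B) controllable
```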

Let us use the design equations with the optimization problem

    \min_u\ E^{0,x_0}\Big\{ w_1|C_1X(3) - \alpha_1|^2 + w_2|C_3X(6) - \alpha_3|^2 + \int_0^6 |u|^2\,dt \Big\}

    subject to  dX = (AX + Bu)\,dt + G(X)\,dW,  X(0) = x_0,                          (4.16)
                E^{0,x_0}\{C_2X(3)\} = \alpha_2,  E^{0,x_0}\{C_3X(6)\} = \alpha_3,

where

    C_1 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix},  \alpha_1 = \begin{bmatrix} -1 \\ 1 \end{bmatrix},  C_2 = \begin{bmatrix} 1 & 0 & 1 & 0 \end{bmatrix},  \alpha_2 = 0,  C_3 = I_4,  \alpha_3 = \begin{bmatrix} -1 \\ 0 \\ 5 \\ 0 \end{bmatrix}.

    [Figure 2: The upper left figure shows the initial and final positions for the robot. The upper right figure shows one realization of the optimal path of the robot, and the lower left figure shows the corresponding control signals, where u_1 corresponds to the solid line and u_2 to the dashed line. The weights w_1 = 7 and w_2 = 1 were used in the optimization problem (4.16). Finally, the lower right figure shows an estimate of the expected path obtained by averaging over 100 stochastic simulations.]

    [Figure 3: We consider optimization of the path from a given initial state at stage 0 to a given node \alpha_N at stage N. The path must pass through one node in the graph at each stage.]

The idea behind the optimization problem is to divide the control problem into two stages. First, we steer the robot to the switching surface C_2x = \alpha_2 in such a way that the expected deviation from the point C_1x = \alpha_1 is small. Then, in the next stage, we steer to the final position x = \alpha_3 in such a way that the variance of the deviation from this point is small. The integral cost is introduced to keep the expected control energy small. With the weights w_1 = 7 and w_2 = 1 we get the result in Figure 2. We see from the lower plot that the expected path of the robot stays well inside the corridor, as desired. It is possible to push the trajectory further toward the middle of the corridor by adding an integral cost E^{0,x_0}\{\int_0^6 q|CX - y_0(t)|^2\,dt\}, where q \ge 0 and y_0(t) is some nominal trajectory near the middle of the corridor. The corresponding optimization problem still belongs to the problem class considered in this paper.

5 Route Optimization

We can also consider trajectory planning problems where at each time instant we can choose to interpolate one out of several points. This is illustrated in Figure 3, where we have fixed initial and terminal nodes but can choose between several different routes between these two nodes. For consistency with the previous discussion we assume that the initial node is the initial state vector, i.e., \alpha_0 = x_0. At all other time instants we have a number of possible nodes, each corresponding to a subspace of the form \{x \in R^n : Cx = \alpha_k^j\}. The nodes at each stage are thus defined by the set S_k = \{\alpha_k^1, \dots, \alpha_k^{N_k}\}. We define a set-valued function N_k : S_k \to \mathcal{P}(S_{k+1}) (\mathcal{P}(S_{k+1}) is the set of all subsets of S_{k+1}) that defines the possible successors of each node. This means that N_k(Cx_k) \subset S_{k+1} if Cx_k \in S_k and N_k(Cx_k) = \emptyset otherwise. We can now generalize (2.1) and (2.2) to the case when we not only optimize a continuous-time trajectory interpolating a given number of interpolation points (nodes) but also optimize over the set of nodes at each time instant. Let us consider

    J_0(x_0) = \min_{u\in\mathcal{M}(t_0,t_f)} \sum_{k=0}^{N-1}\Big( w_{k+1}\,\mathrm{Var}^{t_0,x_0}\{CX(t_{k+1})\} + E^{t_0,x_0}\int_{t_k}^{t_{k+1}} \sigma_k(t,X,u)\,dt \Big)

    subject to  dX = (A_kX + B_ku)\,dt + G_k(X)\,dZ,  t \in [t_k,t_{k+1}],  X(t_0) = x_0,        (5.17)
                E^{t_0,x_0}\{CX(t_{k+1})\} \in N_k(E^{t_0,x_0}\{CX(t_k)\}),  k = 0, \dots, N-1.

For the investment example in Section 3 this would mean that we can choose between a number of selected cash values at the intermediate time instants. If we let

    dX = (A_kX + B_ku)\,dt + G_k(X)\,dZ,  X(t_k) = x_k,
    U_k(x_k,\alpha_{k+1}) = \{ u \in \mathcal{M}(t_k,t_{k+1}) : E^{t_k,x_k}\{CX(t_{k+1})\} = \alpha_{k+1} \},
    C_k(u,\alpha_{k+1}) = \int_{t_k}^{t_{k+1}} \sigma_k(t,X,u)\,dt + w_{k+1}|CX(t_{k+1}) - \alpha_{k+1}|^2

(in C_k(u,\alpha_{k+1}) we implicitly assume that X(t) satisfies the stochastic differential equation in the definition of U_k(x_k,\alpha_{k+1})), then the dynamic programming iteration for (5.17) can be written as

    J_k(x_k) = \min_{\alpha_{k+1}\in N_k(Cx_k)}\ \min_{u\in U_k(x_k,\alpha_{k+1})} E^{t_k,x_k}\{ C_k(u,\alpha_{k+1}) + J_{k+1}(X(t_{k+1})) \}.

Note that if Cx_k \notin S_k then N_k(Cx_k) = \emptyset and the optimal objective is unbounded. The solution of the inner optimization can be constructed as in Proposition 2.3. We notice that the value function is completely determined by the node in the case C = I, i.e. when each node is a point in the state space of the dynamical system. In this case the route optimization can be performed by a discrete dynamic programming iteration over the discrete state space S_k at each stage. Once we know the optimal route, i.e., the optimal \alpha_k \in S_k, k = 1, \dots, N, we can construct the optimal control as in Proposition 2.3. In the general case, when C has full row rank m < n, the problem is much harder because J_k(x_k) is not completely determined by the choice of nodes at stages k, \dots, N, and the problem becomes computationally very expensive for large N. One way to reduce the computational complexity is to discretize the state vector in each node; a suboptimal solution can then be obtained by discrete dynamic programming. In essence this means that we optimize over a finite collection of predefined trajectories, an approach that has been discussed for trajectory planning in, for example, [2, 6]. Another way to reduce the complexity is to prune the search tree in some efficient way in order to find an optimal (or a near optimal) sequence of nodes. In order to discuss this problem we introduce some new notation. We let \bar\alpha_k = (\alpha_k, \dots, \alpha_{N-1}, \alpha_N), where \alpha_n \in S_n, denote an admissible node sequence from stage k to the terminal fixed node \alpha_N, and let

    A_k = \{ \bar\alpha_k \in \Pi_{n=k}^{N} S_n : \alpha_{n+1} \in N_n(\alpha_n) \}

be the set of all admissible node sequences from stage k to the terminal node \alpha_N. Similarly,

    A_k(\alpha_k) = \{ \bar\alpha_{k+1} \in \Pi_{n=k+1}^{N} S_n : \alpha_{n+1} \in N_n(\alpha_n),\ n = k, \dots, N-1 \}

is the set of all admissible node sequences from stage k+1 to stage N with the kth node fixed to be \alpha_k. In the same way as above, the value function in a particular node \alpha_k = Cx_k at stage k can be computed as

    J_k(x_k) = \min_{\bar\alpha_{k+1} \in A_k(Cx_k)} J_k(x_k, \bar\alpha_{k+1}).
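For the C = I case the outer optimization is an ordinary discrete dynamic program over the node sets S_k. A sketch under our own abstraction: here `stage_cost` stands in for the optimal continuous-time cost between consecutive nodes, which would be computed as in Proposition 2.3.

```python
def route_dp(stages, successors, stage_cost):
    """Backward dynamic programming over node sequences.

    stages[k]           -- the node set S_k (hashable nodes)
    successors(k, a)    -- the allowed successors N_k(a) in S_{k+1}
    stage_cost(k, a, b) -- cost of steering from node a at t_k to b at t_{k+1}

    Returns V, a list of dicts with V[k][a] = J_k at node a."""
    N = len(stages) - 1
    V = [dict() for _ in range(N + 1)]
    for a in stages[N]:
        V[N][a] = 0.0                       # J_N = 0
    for k in range(N - 1, -1, -1):          # backward recursion
        for a in stages[k]:
            succ = successors(k, a)
            if succ:
                V[k][a] = min(stage_cost(k, a, b) + V[k + 1][b] for b in succ)
            else:
                V[k][a] = float("inf")      # no admissible continuation
    return V
```

For instance, with stages [(0,), (1, 2), (4,)], all successors allowed, and cost |a − b| per stage, the recursion gives J_0(0) = 4.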

We would like to remove as many node sequences as possible from A_k during the dynamic programming recursion. Let A_{k,O}(\alpha_k) denote the sequences that are candidates for optimality (the index O stands for the open candidates and is a standard notation). Then at each node during the dynamic programming recursion we take the following two steps:

(i) A_{k,O}(\alpha_k) = \{ (\alpha_{k+1}, \bar\alpha_{k+2}) : \bar\alpha_{k+2} \in A_{k+1,O}(\alpha_{k+1});\ \alpha_{k+1} \in N_k(\alpha_k) \}.

(ii) Remove branches from A_{k,O}(\alpha_k) that cannot correspond to the optimal node sequence.

This gives us the value function J_k(x_k) = \min_{\bar\alpha_{k+1}\in A_{k,O}(\alpha_k)} J_k(x_k, \bar\alpha_{k+1}). The second step is not easy, since each function J_k(x_k, \bar\alpha_{k+1}) depends on the particular node sequence \bar\alpha_{k+1} \in A_{k+1} but also on the state vector at node k, which depends on prior stages and in particular on the initial condition. An alternative idea, discussed in for example [8, 9], is to replace step (ii) above with

(ii′) Remove branches from A_{k,O}(\alpha_k) that are unlikely to be the optimal node sequence.

This generally gives suboptimal solutions, but the computational complexity may be reduced significantly. Some appealing and promising ideas for implementing (ii′) were suggested in [8, 9], where optimization of switching sequences for LQR control of switched linear systems is considered. This is mathematically closely related to the route optimization considered in this section, and their results can be adapted to the problem discussed here.

Acknowledgement: The research was supported by grants from the Swedish Research Council and the NSF.

References

[1] D. P. Bertsekas. Dynamic Programming and Optimal Control, volume 1. Athena Scientific, 1995.

[2] M. Branicky, T. Johansen, I. Petersen, and E. Frazzoli. On-line techniques for behavioral programming. In Proceedings of the 39th IEEE Conference on Decision and Control, pages 1840–1845, Sydney, Australia, 2000.

[3] P. Crouch and J. W. Jackson. Dynamic interpolation for linear systems. In Proc. 29th IEEE Conf. on Decision and Control, pages 2312–2314, Hawaii, 1990.

[4] M. Egerstedt and C. Martin. Optimal trajectory planning and smoothing splines. Automatica, 37(7):1057–1064, July 2000.

[5] S. F. Ellermeyer, S. S. Pilyugin, and R. Redheffer. Persistence criteria for a chemostat with variable nutrient input. Journal of Differential Equations, 171:132–147, 2001.

[6] E. Frazzoli, M. A. Dahleh, and E. Feron. A maneuver-based hybrid control architecture for autonomous vehicle motion planning. In Software Enabled Control: Information Technology for Dynamical Systems, G. Balas and T. Samad, editors, 2002. To appear.

[7] J. Lawton, R. Beard, and B. Young. A decentralized approach to formation maneuvers. To appear in IEEE Transactions on Robotics and Automation.

[8] B. Lincoln and B. Bernhardsson. Efficient pruning of search trees in LQR control of switched linear systems. In Proceedings of the 39th IEEE Conference on Decision and Control, pages 1828–1833, Sydney, Australia, 2000.

[9] B. Lincoln and A. Rantzer. Optimizing linear system switching. In Proceedings of the 40th IEEE Conference on Decision and Control, pages 2063–2068, Orlando, FL, USA, 2001.

[10] B. Øksendal. Stochastic Differential Equations: An Introduction with Applications. Springer, Berlin Heidelberg, fifth edition, 1998.

[11] W. M. Wonham. On a matrix Riccati equation of stochastic control. SIAM J. Control Optim., 6(4):681–697, 1968.

[12] Z. Zhang, J. Tomlinson, and C. Martin. Splines and linear control theory. Acta Math. Appl., 49:1–34, 1997.

Appendix A: Proof of Proposition 2.2

We use the compact notation introduced in (2.7) and (2.8) after the proposition. If we apply the Lagrange multiplier rule to (2.8) then we obtain

    V(x_0,\alpha,t_0,t_f) = \max_{\lambda}\ \min_{u\in\mathcal{M}(t_0,t_f)} E^{t_0,\tilde x_0}\Big\{ \int_{t_0}^{t_f} [\tilde X^T\tilde Q\tilde X + 2\tilde X^T\tilde S u + u^T\tilde R u]\,dt + \tilde X(t_f)^T\tilde Q_0\tilde X(t_f) + 2\lambda^T\tilde C(\alpha)\tilde X(t_f) \Big\}

    subject to  d\tilde X = (\tilde A(t)\tilde X + \tilde B(t)u)\,dt + \tilde G(t,\tilde X)\,dZ,  \tilde X(t_0) = \tilde x_0.

The solution to the inner optimization can be obtained from the Hamilton–Jacobi–Bellman equation (HJBE)

    -\frac{\partial V}{\partial t} = \min_{u\in R^m}\Big\{ \tilde x^T\tilde Q\tilde x + 2\tilde x^T\tilde S u + u^T\tilde R u + \frac{\partial V}{\partial\tilde x}^T(\tilde A\tilde x + \tilde B u) + \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\frac{\partial^2 V}{\partial x_i\partial x_j} \Big\}

    V(\tilde x, t_f) = \tilde x^T\tilde Q_0\tilde x + 2\lambda^T\tilde C(\alpha)\tilde x

where

    a_{ij} = (\sigma\sigma^T)_{ij} = \Big(\big(\sum_{k=1}^{n} x_k G_k\big)\big(\sum_{l=1}^{n} x_l G_l\big)^T\Big)_{ij} = \sum_{k,l=1}^{n} x_k x_l\, G_{ki}G_{lj}^T

and G_{ki} is the ith row of G_k. With the value function V(\tilde x, t) = \tilde x^T\bar P(t)\tilde x we get

    \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}\frac{\partial^2 V}{\partial x_i\partial x_j} = \sum_{k=1}^{n}\sum_{l=1}^{n} \frac{1}{2}\, x_k x_l\,\mathrm{tr}(G_k^T\bar P G_l) = \tilde x^T\hat\Pi(\bar P)\tilde x.

If we plug the ansatz V(\tilde x, t) = \tilde x^T\bar P(t)\tilde x into the HJBE we get the optimal control u = -\tilde R^{-1}(\tilde B^T\bar P + \tilde S^T)\tilde x and find that the following Riccati equation must hold:

    \dot{\bar P} + \tilde A^T\bar P + \bar P\tilde A + \tilde Q + \hat\Pi(\bar P) = (\bar P\tilde B + \tilde S)\tilde R^{-1}(\bar P\tilde B + \tilde S)^T

with boundary condition \bar P(t_f) = \tilde Q_0 + \tilde C^T\lambda e_{n+1}^T + e_{n+1}\lambda^T\tilde C. The optimal cost becomes

    V(x_0,\alpha,t_0,t_f) = \max_\lambda\ \tilde x_0^T\bar P(\lambda,t_0)\tilde x_0.        (5.18)

To perform the optimization with respect to \lambda we need an explicit expression for \bar P as a function of \lambda. It turns out that we can use

    \bar P = \tilde P + \tilde N\lambda e_{n+1}^T + e_{n+1}\lambda^T\tilde N^T - \lambda^T\tilde W\lambda\,e_{n+1}e_{n+1}^T,

where \tilde P, \tilde N and \tilde W are given in (2.7). This follows since

    \bar P(t_f) = \tilde P(t_f) + \tilde N(t_f)\lambda e_{n+1}^T + e_{n+1}\lambda^T\tilde N(t_f)^T - \lambda^T\tilde W(t_f)\lambda\,e_{n+1}e_{n+1}^T
                = \tilde Q_0 + \tilde C^T\lambda e_{n+1}^T + e_{n+1}\lambda^T\tilde C

and

    \dot{\bar P} = -\tilde A^T\tilde P - \tilde P\tilde A - \tilde Q - \hat\Pi(\tilde P) + (\tilde P\tilde B + \tilde S)\tilde R^{-1}(\tilde P\tilde B + \tilde S)^T
                   - (\tilde A - \tilde B\tilde R^{-1}(\tilde B^T\tilde P + \tilde S^T))^T\tilde N\lambda e_{n+1}^T
                   - e_{n+1}\lambda^T\tilde N^T(\tilde A - \tilde B\tilde R^{-1}(\tilde B^T\tilde P + \tilde S^T))
                   + \lambda^T\tilde N^T\tilde B\tilde R^{-1}\tilde B^T\tilde N\lambda\,e_{n+1}e_{n+1}^T
                 = -\tilde A^T\bar P - \bar P\tilde A - \tilde Q - \hat\Pi(\bar P) + (\bar P\tilde B + \tilde S)\tilde R^{-1}(\bar P\tilde B + \tilde S)^T,

which follows since

    \tilde A^T(e_{n+1}\lambda^T\tilde N^T + \lambda^T\tilde W\lambda\,e_{n+1}e_{n+1}^T) = 0,
    \tilde B^T(e_{n+1}\lambda^T\tilde N^T + \lambda^T\tilde W\lambda\,e_{n+1}e_{n+1}^T) = 0.

The optimization in (5.18) thus becomes

    \max_\lambda\ \tilde x_0^T\big(\tilde P(t_0) + \tilde N(t_0)\lambda e_{n+1}^T + e_{n+1}\lambda^T\tilde N(t_0)^T - \lambda^T\tilde W(t_0)\lambda\,e_{n+1}e_{n+1}^T\big)\tilde x_0
        = \tilde x_0^T\big(\tilde P(t_0) + \tilde N(t_0)\tilde W(t_0)^{-1}\tilde N(t_0)^T\big)\tilde x_0

and the optimal Lagrange multiplier is \lambda = \tilde W(t_0)^{-1}\tilde N(t_0)^T\tilde x_0. Finally, the optimal control becomes

    u = -\tilde R^{-1}\big(\tilde B^T(\tilde P + \tilde N\tilde W(t_0)^{-1}\tilde N(t_0)^T\tilde x_0 e_{n+1}^T) + \tilde S^T\big)\tilde x
      = -\tilde R^{-1}(\tilde B^T\tilde P + \tilde S^T)\tilde x - \tilde R^{-1}\tilde B^T\tilde N\tilde W(t_0)^{-1}\tilde N(t_0)^T\tilde x_0.

Considering the optimal control problem from the "initial point" (t, x(t)) gives the following feedback formulation of the optimal control: u = -\tilde R^{-1}(\tilde B^T(\tilde P + \tilde N\tilde W^{-1}\tilde N^T) + \tilde S^T)\tilde X. We note that under the conditions of the proposition there exist solutions \bar P and \tilde P to the Riccati equations involved in the proof. Indeed, the special structure of the system matrices implies that only the upper left blocks of \bar P and \tilde P satisfy nonlinear equations, and these are identical to the first equation in (2.6). The other blocks correspond to p and \rho in (2.6). Hence, the existence of \bar P and \tilde P follows from Remark 2.1.

Clyde Martin Department of Mathematics and Statistics Texas Tech University Lubbock, Texas, USA Yishao Zhou Department of Mathematics Stockholm University SE-106 91 Stockholm, Sweden

Abstract A trajectory planning problem for linear stochastic diﬀerential equations is considered in this paper. The aim is to control the system such that the expected value of the output interpolates given points at given times while the variance and an integral cost of the control eﬀort are minimized. The solution is obtained by using the dynamic programming technique, which allows for several possible generalizations that are also considered. The results of this paper can be used for control of systems with a stochastic growth rate. This is frequently the case in investment problems and in biomathematics.

1

Introduction

We are concerned with controlling the state or some measured quantity of the state to given values at discrete times. That is given a system of the form x˙ = f (x) + u1 g1 (x) + · · · + uk gk (x) and y = Cx where x ∈ Rn and y ∈ Rp we look for a system of controls that drive the output through or close to a prescribed sequence of set points {y(ti ) = αi : i = 1, · · · , N }. In this paper we specialize this problem to one in which the system is linear but the evolution equation is stochastic. Such systems occur in a variety of settings. We consider a rather complete example of a model that contains variable interest rates in Section 3. A second example, considered in Section 4, is trajectory planning for a feedback linearized mobile robot. Here the combined eﬀect of surface irregularities, friction, and other disturbances is modeled as a multiplicative stochastic disturbance. Another set of problems occurs in biomathematics when considering the chemostat. There are extensive references on physical, biological and mathematical background of the chemostat available in the literature, see for example [5]. In many situations the nutrient content of 1

the chemostat is not deterministic, and a better model would be obtained by replacing the linear growth rate of the standard models by a random variable. This variable reflects the rate at which the nutrient is consumed and/or naturally degrades. This rate is a function of temperature and many other variables that are not modelled, and hence could and should be taken to be stochastic. It is beyond the scope of this paper to consider a complete analysis of such systems, but a model in this form fits the theory described in this paper.

The specific topic of this paper is the problem of designing a feedback control for a linear stochastic differential equation in such a way that the mean value of a specific output interpolates given points at given times, while at the same time the variance and the expected value of an integral quadratic cost along the trajectory are minimized. By minimizing the variance at the interpolation points we minimize the effect of the uncertainty due to the diffusion term in the system equation. By using a new notation we obtain compact formulas for the design equations.

The problem is related to several methods based on optimal control for interpolating splines that have recently been proposed in the literature, see for example [4, 3, 12]. Apart from considering stochastic models, this paper distinguishes itself by using a dynamic programming approach, which leads to feedback solutions of the corresponding control problem. In Section 5 we discuss a generalization, where we optimize over a finite set of possible interpolation points at each time instance. This problem is reminiscent of the standard shortest path problem, see e.g. [1], but is complicated by the dynamic nature of the problem, which makes the cost corresponding to each node dependent on future interpolation points as well as on the initial condition. The complexity of this problem can be reduced by considering suboptimal techniques such as discretization or tree pruning. We briefly review some related work in [2, 6, 8, 9].

2 Stochastic Trajectory Planning

We consider a path planning problem where a stochastic disturbance enters the differential equation. As our basic problem we consider

    J_0(x_0) = min_{u ∈ M(t_0,t_N)} Σ_{k=0}^{N−1} w_{k+1} Var^{t_0,x_0}{CX(t_{k+1})} + E^{t_0,x_0} ∫_{t_0}^{t_N} |u|² dt

    subject to  dX = (AX + Bu) dt + G(X) dZ,   X(t_0) = x_0                        (2.1)
                E^{t_0,x_0}{CX(t_{k+1})} = α_{k+1},   k = 0, …, N−1

where w_k ≥ 0 and (A, B) is a controllable pair. The last term in the stochastic differential equation corresponds to a multiplicative noise on the state vector defined by an m-dimensional Brownian motion Z_t. In other words, Z_t is a process with zero mean and independent increments, i.e., E{dZ_t dZ_s^T} = δ(t − s)I dt, where δ denotes the Dirac function. We assume that G(x) is a linear function of the form

    G(x) = Σ_{i=1}^{n} x_i G_i

where G_i ∈ R^{n×m}. In (2.1), E^{t_0,x_0} is the expectation operator given that X(t_0) = x_0 and

    Var^{t_0,x_0}{CX(t_k)} = E^{t_0,x_0}{(CX(t_k) − E^{t_0,x_0}{CX(t_k)})²}.
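Sample paths of the model in (2.1) can be generated with a standard Euler–Maruyama scheme. The sketch below is only an illustration: the matrices A, B, the G_i and the (zero) feedback law are made-up placeholders, not data from the paper.

```python
import numpy as np

# Euler-Maruyama sketch for dX = (A X + B u) dt + G(X) dZ with the
# state-linear diffusion G(x) = sum_i x_i G_i.  All matrices here are
# illustrative placeholders.
rng = np.random.default_rng(0)

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
# G_i in R^{n x m}: coefficient of x_i in G(x); here n = 2, m = 1.
Gs = [np.array([[0.0], [0.1]]), np.array([[0.0], [0.0]])]

def G(x):
    """State-linear diffusion G(x) = sum_i x_i G_i."""
    return sum(xi * Gi for xi, Gi in zip(x, Gs))

def simulate(x0, u, t0, tN, steps):
    """One sample path under the feedback law u = u(t, x)."""
    h = (tN - t0) / steps
    x, t = np.array(x0, dtype=float), t0
    for _ in range(steps):
        dZ = rng.normal(scale=np.sqrt(h), size=1)   # m-dim Brownian increment
        x = x + (A @ x + B @ u(t, x)) * h + G(x) @ dZ
        t += h
    return x

xN = simulate([1.0, 0.0], lambda t, x: np.zeros(1), 0.0, 1.0, 200)
```

Expectations and variances such as Var^{t_0,x_0}{CX(t_k)} can then be estimated by averaging such paths.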


We consider optimization over all Markov controls, i.e., feedback controls of the form u(t, ω) = µ(t, X(t, ω)). We let M(t_0, t_N) denote the set of Markov controls on an interval (t_0, t_N). It can be shown that our optimal solution is also optimal over all F_t-adapted processes, where F_t denotes the σ-algebra generated by Z_s for s ≤ t, see e.g. [10].

It turns out that there will be linear and constant terms in the value function due to the variance term in the objective function. It is therefore no essential addition in complexity to consider a more general path planning problem, where we allow the dynamics to be time-varying and different from stage to stage. We also consider integral costs that penalize the state vector. This gives the following generalization of (2.1).

    J_0(x_0) = min_{u ∈ M(t_0,t_N)} E^{t_0,x_0} Σ_{k=0}^{N−1} [ w_{k+1}|C_1X(t_{k+1}) − β_{k+1}|² + ∫_{t_k}^{t_{k+1}} σ_k(t, X, u) dt ]

    subject to  dX = (A_k(t)X + B_k(t)u) dt + G_k(t, X) dZ,   t ∈ [t_k, t_{k+1}],   X(t_0) = x_0        (2.2)
                E^{t_0,x_0}{C_2X(t_{k+1})} = α_{k+1},   k = 0, …, N−1

where

    σ_k(t, x, u) = x^T Q_k(t)x + 2x^T S_k(t)u + u^T R_k(t)u + 2q_k(t)^T x + 2r_k(t)^T u + ℓ_k(t)

and

    G_k(t, x) = Σ_{i=1}^{n} x_i G_{k,i}(t)

and where everything else is defined in a similar way as in (2.1). Here A_k, B_k, Q_k, …, ℓ_k and G_{k,i} are piecewise continuous functions of time, all pairs (A_k(t), B_k(t)) are assumed to be completely controllable, and the cost is strictly convex, which is the case if for all t ∈ [t_k, t_{k+1}], k = 0, …, N−1, we have

    R_k(t) > 0   and   [ Q_k(t)    S_k(t)
                         S_k(t)^T  R_k(t) ] ≥ 0        (2.3)

Note that if C_1 = C_2 = C and α_k = β_k then E^{t_0,x_0}|CX(t_k) − β_k|² = Var^{t_0,x_0}{CX(t_k)}. Let us define the cost-to-go functions

    J_k(x) = min_{u ∈ M(t_k,t_N)} E^{t_k,x} Σ_{i=k}^{N−1} [ w_{i+1}|C_1X(t_{i+1}) − β_{i+1}|² + ∫_{t_i}^{t_{i+1}} σ_i(t, X, u) dt ]

    subject to  dX = (A_i(t)X + B_i(t)u) dt + G_i(t, X) dZ,   t ∈ [t_i, t_{i+1}],   X(t_k) = x          (2.4)
                E^{t_k,x}{C_2X(t_{i+1})} = α_{i+1},   i = k, …, N−1
                J_N(x) = 0

Due to the Markov property of the stochastic differential equation we can use dynamic programming to solve (2.1). We next state two propositions and then the main result that solves (2.1). The first states the dynamic programming recursion.
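The pointwise strict-convexity condition (2.3) is easy to check numerically: R must be positive definite and the block matrix positive semidefinite. A small sketch, with illustrative matrices only:

```python
import numpy as np

# Pointwise check of condition (2.3): R > 0 and [[Q, S], [S^T, R]] >= 0.
def satisfies_condition_2_3(Q, S, R, tol=1e-10):
    R_pos_def = np.all(np.linalg.eigvalsh(R) > tol)
    block = np.block([[Q, S], [S.T, R]])
    block_psd = np.all(np.linalg.eigvalsh(block) >= -tol)
    return bool(R_pos_def and block_psd)

Q = np.diag([1.0, 0.0])     # positive semidefinite state weight
S = np.zeros((2, 1))
R = np.array([[2.0]])       # positive definite control weight
print(satisfies_condition_2_3(Q, S, R))          # → True
print(satisfies_condition_2_3(Q, S, 0.0 * R))    # → False
```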

Proposition 2.1. The optimal cost satisfies the following dynamic programming equation:

    J_k(x) = min_{u ∈ M(t_k,t_{k+1})} E^{t_k,x}{ ∫_{t_k}^{t_{k+1}} σ_k(t, X, u) dt + w_{k+1}|C_1X(t_{k+1}) − β_{k+1}|² + J_{k+1}(X(t_{k+1})) }

    subject to  dX = (A_k(t)X + B_k(t)u) dt + G_k(t, X) dZ,   X(t_k) = x
                E^{t_k,x}{C_2X(t_{k+1})} = α_{k+1}
                J_N(x) = 0

The next proposition states the solution to the following stochastic optimal control problem. It shows that J_k(x) is a quadratic function, which is instrumental in solving the dynamic programming iteration.

    V(x_0, α, t_0, t_f) = min_{u ∈ M(t_0,t_f)} E^{t_0,x_0}{ ∫_{t_0}^{t_f} σ(t, X(t), u(t)) dt + X(t_f)^T Q_0 X(t_f) + 2q_0^T X(t_f) + ℓ_0 }

    subject to  dX = (A(t)X + B(t)u) dt + G(t, X) dZ,   X(t_0) = x_0                        (2.5)
                E^{t_0,x_0}{CX(t_f)} = α

where σ(t, x, u) = x^T Q(t)x + 2x^T S(t)u + u^T R(t)u + 2q(t)^T x + 2r(t)^T u + ℓ(t) and where Q, R, and S satisfy the conditions in (2.3).

Proposition 2.2. We have

    V(x_0, α, t_0, t_f) = x_0^T P(t_0)x_0 + 2p(t_0)^T x_0 + ρ(t_0) + (N(t_0)^T x_0 + m(t_0))^T W(t_0)^{−1} (N(t_0)^T x_0 + m(t_0))

where

    Ṗ + A^T P + PA + Q + Π(P) = (PB + S)R^{−1}(PB + S)^T,        P(t_f) = Q_0
    Ṅ + (A − BR^{−1}(B^T P + S^T))^T N = 0,                       N(t_f) = C^T
    ṗ + (A − BR^{−1}(B^T P + S^T))^T p + q = (PB + S)R^{−1} r,    p(t_f) = q_0        (2.6)
    Ẇ + N^T BR^{−1}B^T N = 0,                                     W(t_f) = 0
    ṁ = N^T BR^{−1}(B^T p + r),                                   m(t_f) = −α
    ρ̇ + ℓ = (r + B^T p)^T R^{−1}(r + B^T p),                      ρ(t_f) = 0

and where Π(P) is a linear matrix function with elements Π(P)_{k,l} = (1/2) tr(G_k^T P G_l). The optimal control is

    u* = −R^{−1}(B^T P + S^T)X − R^{−1}B^T N v − R^{−1}(B^T p + r)

with v = W(t_0)^{−1}(N(t_0)^T x_0 + m(t_0)).

Remark 2.1. If Q, R and S satisfy a condition analogous to (2.3), then the linearly perturbed Riccati equation in (2.6) has a unique absolutely continuous positive semidefinite solution [11]. All other differential equations in (2.6) are linear with bounded piecewise continuous coefficients, which ensures existence of unique absolutely continuous solutions. Note also that we have W(t) > 0 for t ∈ [t_0, t_f) by the complete controllability of (A(t), B(t)).
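The positivity of W claimed in Remark 2.1 can be seen numerically by integrating the N and W equations of (2.6) backwards from t_f. The sketch below takes the double integrator with Q = S = 0, G = 0 and Q_0 = 0, so that P ≡ 0; for this example N(t) = (1, t_f − t)^T and W(0) = 1/3 in closed form.

```python
import numpy as np

# Backward Euler integration of the N and W equations in (2.6) for the
# double integrator, with Q = S = 0, G = 0, Q0 = 0 so that P ≡ 0.
# Then dN/dt = -A^T N, N(tf) = C^T, and dW/dt = -N^T B R^{-1} B^T N,
# W(tf) = 0; here W(0) = 1/3, and W(t) > 0 for t < tf as in Remark 2.1.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
R_inv = np.array([[1.0]])

tf, steps = 1.0, 2000
h = tf / steps
N = C.T.copy()              # N(tf) = C^T
W = np.zeros((1, 1))        # W(tf) = 0
for _ in range(steps):      # march from tf down to 0
    dN = -A.T @ N                         # dN/dt
    dW = -(N.T @ B @ R_inv @ B.T @ N)     # dW/dt
    N = N - h * dN
    W = W - h * dW
print(round(W[0, 0], 3))    # → 0.333
```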

Proof. The proof is done by Lagrangian relaxation of the linear constraint. See Appendix A for the complete details.

In order to obtain a compact notation we introduce

    Q̂ = [ Q    q      Ŝ = [ S        Q̂_0 = [ Q_0    q_0       R̂ = R,
          q^T  ℓ ],         r^T ],           q_0^T  ℓ_0 ],

    P̂ = [ P    p      N̂ = [ N        Ŵ = W,                                        (2.7)
          p^T  ρ ],         m^T ],

    Â = [ A  0        B̂ = [ B        C(α) = [ C  −α ],
          0  0 ],           0 ],

    Ĝ(t, x̂) = [ G(t, x)     Π̂(P̂) = [ Π(P)  0
                0        ],            0     0 ].
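These block definitions are straightforward to form numerically; the sketch below uses placeholder data of compatible sizes only to show the shapes (none of the numbers come from the paper).

```python
import numpy as np

# Forming the hatted block matrices of (2.7) from placeholder data.
n, m_u, p = 2, 1, 1                      # state, input, output dimensions
Q, S, R = np.eye(n), np.zeros((n, m_u)), np.eye(m_u)
q, r, ell = np.zeros((n, 1)), np.zeros((m_u, 1)), 0.0
A, B = np.zeros((n, n)), np.ones((n, m_u))
C, alpha = np.ones((p, n)), np.ones((p, 1))

Q_hat = np.block([[Q, q], [q.T, np.array([[ell]])]])
S_hat = np.vstack([S, r.T])
A_hat = np.block([[A, np.zeros((n, 1))],
                  [np.zeros((1, n + 1))]])
B_hat = np.vstack([B, np.zeros((1, m_u))])
C_alpha = np.hstack([C, -alpha])          # C(alpha) = [C  -alpha]
print(Q_hat.shape, A_hat.shape, B_hat.shape, C_alpha.shape)
```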

If we finally let X̂ = [X^T 1]^T and x̂_0 = [x_0^T 1]^T, then the optimization problem (2.5) can be written

    V(x_0, α, t_0, t_f) = min_{u ∈ M(t_0,t_f)} E^{t_0,x̂_0}{ ∫_{t_0}^{t_f} [X̂^T Q̂ X̂ + 2X̂^T Ŝ u + u^T R̂ u] dt + X̂(t_f)^T Q̂_0 X̂(t_f) }

    subject to  dX̂ = (Â(t)X̂ + B̂(t)u) dt + Ĝ(t, X̂) dZ,   X̂(t_0) = x̂_0                        (2.8)
                E^{t_0,x̂_0}{C(α)X̂(t_f)} = 0

and the optimal solution can be written

    V(x_0, α, t_0, t_f) = x̂_0^T (P̂(t_0) + N̂(t_0)Ŵ(t_0)^{−1}N̂(t_0)^T) x̂_0

where

    P̂˙ + Â^T P̂ + P̂ Â + Q̂ + Π̂(P̂) = (P̂ B̂ + Ŝ)R̂^{−1}(P̂ B̂ + Ŝ)^T,    P̂(t_f) = Q̂_0
    N̂˙ + (Â − B̂ R̂^{−1}(B̂^T P̂ + Ŝ^T))^T N̂ = 0,                       N̂(t_f) = C(α)^T
    Ŵ˙ + N̂^T B̂ R̂^{−1} B̂^T N̂ = 0,                                     Ŵ(t_f) = 0

The optimal control is

    u* = −R̂^{−1}(B̂^T P̂ + Ŝ^T)X̂ − R̂^{−1}B̂^T N̂ Ŵ(t_0)^{−1} N̂(t_0)^T x̂_0

There is an equivalent feedback form given as u* = −R̂^{−1}(B̂^T(P̂ + N̂ Ŵ^{−1} N̂^T) + Ŝ^T)X̂.

We can now state the solution of the general stochastic trajectory planning problem in (2.2).

Proposition 2.3. Consider the optimal control problem in (2.2), where condition (2.3) holds. The optimal Markov control in each time interval [t_k, t_{k+1}] is (all variables are defined analogously with (2.7))

    u*(t) = −R̂_k(t)^{−1}(B̂_k(t)^T P̂_k(t) + Ŝ_k(t)^T)X̂(t) − R̂_k(t)^{−1}B̂_k(t)^T N̂_k(t) Ŵ_k(t_k)^{−1} N̂_k(t_k)^T X̂(t_k)

where for k = N−1, …, 0

    P̂_k˙ + Â_k^T P̂_k + P̂_k Â_k + Q̂_k + Π̂_k(P̂_k) = (P̂_k B̂_k + Ŝ_k)R̂_k^{−1}(P̂_k B̂_k + Ŝ_k)^T,
    N̂_k˙ + (Â_k − B̂_k R̂_k^{−1}(B̂_k^T P̂_k + Ŝ_k^T))^T N̂_k = 0,    N̂_k(t_{k+1}) = C_2(α_{k+1})^T,
    Ŵ_k˙ + N̂_k^T B̂_k R̂_k^{−1} B̂_k^T N̂_k = 0,                      Ŵ_k(t_{k+1}) = 0,

where P̂_{N−1}(t_N) = w_N C_1(β_N)^T C_1(β_N) and, for k = N−2, N−3, …, 0,

    P̂_k(t_{k+1}) = P̂_{k+1}(t_{k+1}) + N̂_{k+1}(t_{k+1})Ŵ_{k+1}(t_{k+1})^{−1}N̂_{k+1}(t_{k+1})^T + w_{k+1} C_1(β_{k+1})^T C_1(β_{k+1}).

The cost-to-go is

    J_k(x) = x̂^T (P̂_k(t_k) + N̂_k(t_k)Ŵ_k(t_k)^{−1}N̂_k(t_k)^T) x̂.

Proof. By dynamic programming.

The formulation of Proposition 2.3 in the compact notation (2.7) gives appealing formulas, but in a numerical implementation it is more efficient to perform the computations using a system of the form (2.6). For the basic trajectory planning problem in (2.1) this reduces to the following result.

Corollary 2.1. Consider the optimal control problem in (2.1), where the pair (A, B) is controllable. The optimal Markov control in each time interval [t_k, t_{k+1}] is

    u*(t) = −R^{−1}B^T P_k(t)X(t) − R^{−1}B^T N_k(t)v_k − R^{−1}B^T p_k(t)

with v_k = W_k(t_k)^{−1}(N_k(t_k)^T X(t_k) + m_k(t_k)), where

    P_k˙ + A^T P_k + P_k A + Π(P_k) = P_k BR^{−1}B^T P_k,
    N_k˙ + (A − BR^{−1}B^T P_k)^T N_k = 0,
    p_k˙ + (A − BR^{−1}B^T P_k)^T p_k = 0,                         (2.9)
    W_k˙ + N_k^T BR^{−1}B^T N_k = 0,
    m_k˙ = N_k^T BR^{−1}B^T p_k,
    ρ_k˙ = p_k^T BR^{−1}B^T p_k,

where Π(P) is a linear matrix function with elements Π(P)_{i,j} = (1/2) tr(G_i^T P G_j), and the boundary conditions are

    P_k(t_{k+1}) = P_{k+1}(t_{k+1}) + N_{k+1}(t_{k+1})W_{k+1}(t_{k+1})^{−1}N_{k+1}(t_{k+1})^T + w_{k+1} C^T C
    p_k(t_{k+1}) = p_{k+1}(t_{k+1}) + N_{k+1}(t_{k+1})W_{k+1}(t_{k+1})^{−1}m_{k+1}(t_{k+1}) − w_{k+1} C^T α_{k+1}
    ρ_k(t_{k+1}) = ρ_{k+1}(t_{k+1}) + m_{k+1}(t_{k+1})^T W_{k+1}(t_{k+1})^{−1}m_{k+1}(t_{k+1}) + w_{k+1} α_{k+1}^T α_{k+1}

and N_k(t_{k+1}) = C^T, m_k(t_{k+1}) = −α_{k+1} and W_k(t_{k+1}) = 0. The optimal cost-to-go is

    J_k(x) = x^T P_k(t_k)x + 2p_k(t_k)^T x + ρ_k(t_k) + (N_k(t_k)^T x + m_k(t_k))^T W_k(t_k)^{−1}(N_k(t_k)^T x + m_k(t_k))
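A scalar sanity check of these formulas: take dX = u dt (A = 0, B = C = 1, G = 0, R = 1) with a single interpolation point α at t_f = 1 and w_1 = 0. Then P ≡ 0, N ≡ 1, m ≡ −α, W(t) = 1 − t, and the control u = −v_0 with v_0 = W(0)^{−1}(x_0 − α) is the constant α − x_0, which steers the mean exactly to α.

```python
# Scalar sanity check of Corollary 2.1: dX = u dt with A = 0, B = C = 1,
# G = 0, R = 1, one point alpha at tf = 1 and w1 = 0.  Then P ≡ 0,
# N ≡ 1, m ≡ -alpha, W(t) = 1 - t, and u = -v0 = alpha - x0.
x0, alpha = 2.0, 5.0
W0 = 1.0                        # W(t0) = tf - t0 = 1
v0 = (x0 - alpha) / W0          # v0 = W(0)^{-1}(N(0) x0 + m(0))
u = -v0                         # constant optimal control, = alpha - x0
x, h = x0, 1.0 / 1000
for _ in range(1000):           # integrate dX = u dt over [0, 1]
    x += u * h
print(round(x, 6))              # → 5.0
```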

3 A Homeowner's Investment Problem

We consider a very simple problem arising in portfolio optimization. For most people their home is the most dominant feature of the portfolio. In this example we consider the problem of controlling the investment in the home when interest rates fluctuate and inflation, either positive or negative, occurs. We are aware that, in addition to making monthly payments, the homeowner will usually make improvements to the home, thus increasing its value over and above the increase caused by inflation; that some features of the home depreciate, lowering its value; and that the homeowner has the option to borrow against the value of the home at several points during the course of ownership. We model the value of the home in terms of two linear stochastic differential equations and allow the noise to enter as a multiple of the state. The variables are categorized in Table 1.

P(t)     the principal remaining on the loan at time t
P_0      the initial purchase price
V(t)     the actual market value of the home at time t
u_1(t)   the amount of cash either paid toward the principal and interest, or withdrawn from the home in terms of equity loans
u_2(t)   money applied to the value of the house in terms of improvements, or taken from the value of the house through the sale of assets
dZ(t)    a Brownian motion process representing inflation and fluctuation in interest
i        the nominal annual interest rate

Table 1: Variables

We begin by considering the discrete time case. The fundamental law of payments states that the principal at time n+1 equals the principal at time n, plus the interest accrued during the time period, plus or minus the payment, i.e.,

    P_{n+1} = (1 + i_n)P_n ± Q_n

where P_n denotes the principal, i_n the interest in the nth period, and Q_n the nth payment. Now using the notation from the table we have

    P(t + h) = P(t) + h(i(t)P(t) − u_1(t) + u_2(t)),   P(0) = P_0        (3.10)

where u_1(t) is the rate of payment. The control u_2(t) represents money borrowed from the bank to make improvements. This variable enters into the equation for the value of the house. We divide

the interest rate into two parts. The first is a nominal rate and the second is "random" motion of the rate. For the sake of simplicity we assume that this rate is equal to the rate of inflation. We can think of i(t) = i + Z(t + h) − Z(t), where Z(t + h) − Z(t) is an increment of Brownian motion. The second part of the model is the market value of the home. Here we have

    V(t + h) = V(t) + h u_2(t) + (Z(t + h) − Z(t))V(t),   V(0) = P_0        (3.11)
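The recursions (3.10)–(3.11) are easy to explore by Monte Carlo. In the sketch below, i = 0.075 and P_0 = 10^5 follow the example later in this section, but the constant payment u_1 and the noise scale sigma are made-up values (the model as written has unit noise intensity; a smaller sigma is used here purely to keep the illustration numerically tame).

```python
import numpy as np

# Monte Carlo sketch of the discrete-time model (3.10)-(3.11) with a
# constant payment u1 and no improvements (u2 = 0).  u1 and sigma are
# hypothetical; i and P0 follow the example in the text.
rng = np.random.default_rng(1)
h, years = 1.0 / 12.0, 30
i_nom, P0 = 0.075, 1e5
u1, u2, sigma = 9000.0, 0.0, 0.1
n_steps, n_paths = int(years / h), 2000

Y_final = np.empty(n_paths)
for j in range(n_paths):
    P, V = P0, P0
    for _ in range(n_steps):
        dZ = sigma * rng.normal(scale=np.sqrt(h))   # shared increment
        P = P + h * (i_nom * P - u1 + u2) + P * dZ
        V = V + h * u2 + V * dZ
    Y_final[j] = V - P        # cash value Y = V - P at t = 30
print(Y_final.mean() > 0)     # steady payments give positive expected Y
```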

We allow both u_1 and u_2 to be either positive or negative. If u_1 is negative we consider this to be money that is withdrawn from the value of the home as an equity loan. If u_2 is negative we consider this to be money obtained through the sale of assets associated with the home. Any such money is assumed to be applied to the principal. This is not a real restriction since we can immediately borrow this money in terms of an equity loan. We can pass to the limit to obtain a continuous time model. For the continuation of this model we take the following equations describing the evolution of the value of the home.

    dP(t) = iP(t) dt + (u_2(t) − u_1(t)) dt + P(t) dZ(t),   P(0) = P_0        (3.12)

    dV(t) = V(t) dZ(t) + u_2(t) dt,   V(0) = P_0                              (3.13)

In the first equation dZ(t) denotes the variation that occurs in interest rates. This is of course an approximation of reality. It is unlikely that interest rates would ever become negative, and in most home loans there is a provision that bounds the interest rate from above. The incorporation of these bounds complicates the problem and obscures the application that we wish to exhibit. In the second equation dZ(t) denotes the effect of inflation on the value of the home. At one time it would not have seemed reasonable to have negative inflation; however, during the 1990s and into the beginning of the 21st century we have seen negative inflation in Japan. To use the same rate for inflation and interest fluctuation is not entirely correct, but for the purpose of an example to support the theory developed in this paper it is adequate.

The variable of primary interest is a linear combination of P(t) and V(t). We let

    Y(t) = V(t) − P(t)        (3.14)

This variable represents the cash value of the home. A reasonable objective is to ask that Y(t) should reach twice the purchase price during the thirty years of the standard loan. That is, we state as one objective that E{Y(30)} = 2P_0, where E{·} denotes expected value. In general we might expect that Y(t) should follow a curve such as 2P_0(e^{βt} − 1), where β is chosen so that e^{30β} = 2. We will for the purpose of the example ask that the following constraints be met:

    E{Y(10k)} = 2P_0(e^{10kβ} − 1),   k = 1, 2, 3        (3.15)
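Under the doubling requirement 2P_0(e^{30β} − 1) = 2P_0, i.e. e^{30β} = 2, the rate β and the three interpolation targets in (3.15) evaluate as follows:

```python
import math

# beta from 2*P0*(exp(30*beta) - 1) = 2*P0, i.e. exp(30*beta) = 2,
# and the interpolation targets E{Y(10k)} of (3.15).
P0 = 1e5
beta = math.log(2.0) / 30.0
targets = [2.0 * P0 * (math.exp(10 * k * beta) - 1.0) for k in (1, 2, 3)]
print(round(beta, 4))        # → 0.0231
print(round(targets[2]))     # → 200000
```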

We want to find an investment strategy that minimizes the financial activity as well as the variance at the constraint times in (3.15). The problem that we consider is hence of the following form.

Problem 3.1.

    min_u E ∫_0^{30} (u_1(t)² + u_2(t)²) dt + Σ_{k=1}^{3} w_k Var{Y(10k)}

subject to the constraints of equations (3.12), (3.13) and (3.15).

We consider the case when w_k = 0.01, i = 0.075 and P_0 = 10^5 $. In Figure 1 we show a stochastic simulation of the result. The upper left plot shows a simulation of the cash value Y. The upper right plot shows an estimate of the expected value E{Y(t)} obtained by averaging over 1000 simulations. The lower left plot shows the controls u_1, u_2 (u_1 in solid line and u_2 in dashed line) and the lower right plot shows P and V (P in solid line and V in dashed line).

EY

2000

2000

1500

1500

1000

1000

500

500

0

0

10

20

0

30

400

1500

200

1000

0

500

−200

0

−400

−500

−600

0

10

20

−1000

30

0

10

20

30

0

10

20

30

Figure 1: Stochastic simulation of the solution to the investment problem.

4 Path Planning for a Mobile Robot

We consider the problem of steering a robot from rest at the initial position (−5, 1) to rest at the final position (−1, 5) in such a way that it stays inside the corridor in the upper left part of Figure 2. We assume the linear dynamics ÿ = u + w, where w is a noise signal that takes into account friction, irregularities in the floor, and other error sources. The linear dynamics can be obtained after feedback linearization of realistic robot models [7]. If we let the noise be modeled as a Brownian motion, then the state space realization of the robot dynamics becomes the stochastic system

    dX = (AX + Bu) dt + G(X) dW
    Y = CX

where

    A = [ 0 1 0 0       B = [ 0 0       C = [ 1 0 0 0       G(x) = [ 0    0
          0 0 0 0             1 0             0 0 1 0 ],             x_2  0
          0 0 0 1             0 0                                    0    0
          0 0 0 0 ],          0 1 ],                                 0    x_4 ].
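These matrices are easy to enter directly, and the Kalman rank test confirms the controllability assumption on (A, B) required by the results of Section 2:

```python
import numpy as np

# The robot model matrices and a controllability check for (A, B).
A = np.array([[0, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
B = np.array([[0, 0],
              [1, 0],
              [0, 0],
              [0, 1]], dtype=float)
C = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)

def G(x):
    """Multiplicative noise: enters only the two velocity equations."""
    return np.array([[0.0, 0.0],
                     [x[1], 0.0],
                     [0.0, 0.0],
                     [0.0, x[3]]])

ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(4)])
print(np.linalg.matrix_rank(ctrb))   # → 4
```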

Let us use the design equation


Figure 2: The upper left figure shows the initial and final positions for the robot. The upper right figure shows one realization of the optimal path of the robot and the lower left figure shows the corresponding control signals, where u_1 corresponds to the solid line and u_2 is the dashed line. The weights w_1 = 7 and w_2 = 1 were used in the optimization problem (4.16). Finally, the lower right figure shows an estimate of the expected path obtained by averaging over 100 stochastic simulations.

    min_u E^{0,x_0}{ w_1|C_1X(3) − α_1|² + w_2|C_3X(6) − α_3|² + ∫_0^6 |u|² dt }

    subject to  dX = (AX + Bu) dt + G(X) dW,   X(0) = x_0                        (4.16)
                E^{0,x_0}{C_2X(3)} = α_2,   E^{0,x_0}{C_3X(6)} = α_3

where

    C_1 = [ 1 0 0 0       α_1 = [ −1        C_2 = [ 1 0 1 0 ],   α_2 = 0,
            0 0 1 0 ],             1 ],

    C_3 = [ 1 0 0 0       α_3 = [ −1
            0 1 0 0                0
            0 0 1 0                5
            0 0 0 1 ],             0 ].


Figure 3: We consider optimization of the path from a given initial state at stage 0 to a given node α_N at stage N. The path must pass through one node in the graph at each stage.

The idea behind the optimization problem is to divide the control problem into two stages. First, we steer the robot to the switching surface C_2x = α_2 in such a way that the expected deviation from the point C_1x = α_1 is small. Then in the next stage we steer to the final position x = α_3 in such a way that the variance of the deviation from this point is small. The integral cost is introduced to keep the expected control energy small. With the weights w_1 = 7 and w_2 = 1 we get the result in Figure 2. We see from the lower plot that the expected path of the robot stays well inside the corridor as desired. It is possible to push the trajectory further toward the middle of the corridor by adding an integral cost E^{0,x_0}{∫_0^6 q|CX − y_0(t)|² dt}, where q ≥ 0 and y_0(t) is some nominal trajectory near the middle of the corridor. The corresponding optimization problem still belongs to the problem class considered in this paper.

5 Route Optimization

We can also consider trajectory planning problems where we at each time instant can choose to interpolate one out of several points. This is illustrated in Figure 3, where we have fixed initial and terminal nodes but can choose between several different routes between these two nodes. For consistency with the previous discussion we assume that the initial node is the initial state vector, i.e., α_0 = x_0. At all other time instants we have a number of possible nodes, each corresponding to a subspace of the form {x ∈ R^n : Cx = α_k^j}. The nodes at each stage are thus defined by the set S_k = {α_k^1, …, α_k^{N_k}}. We define a set-valued function N_k : S_k → P(S_{k+1}) (P(S_{k+1}) is the set of all subsets of S_{k+1}) that defines the possible successors of each node. This means that N_k(Cx_k) ⊂ S_{k+1} if Cx_k ∈ S_k and N_k(Cx_k) = ∅ otherwise. We can now generalize (2.1) and (2.2) to the case when we not only optimize a continuous time trajectory interpolating a given number of interpolation points (nodes) but also optimize over the set of nodes at each time instant. Let us consider

    J_0(x_0) = min_{u ∈ M(t_0,t_f)} Σ_{k=0}^{N−1} [ w_{k+1} Var^{t_0,x_0}{CX(t_{k+1})} + E^{t_0,x_0} ∫_{t_k}^{t_{k+1}} σ_k(t, X, u) dt ]

    subject to  dX = (A_kX + B_ku) dt + G_k(X) dZ,   t ∈ [t_k, t_{k+1}],   X(t_0) = x_0        (5.17)
                E^{t_0,x_0}{CX(t_{k+1})} ∈ N_k(E^{t_0,x_0}{CX(t_k)}),   k = 0, …, N−1

For the investment example in Section 3 this would mean that we can choose between a number of selected cash values at the intermediate time instants. If we let¹

    dX = (A_kX + B_ku) dt + G_k(X) dZ,   X(t_k) = x_k
    U_k(x_k, α_{k+1}) = { u ∈ M(t_k, t_{k+1}) : E^{t_k,x_k}{CX(t_{k+1})} = α_{k+1} }
    C_k(u, α_{k+1}) = ∫_{t_k}^{t_{k+1}} σ_k(t, X, u) dt + w_{k+1}|CX(t_{k+1}) − α_{k+1}|²

then the dynamic programming iteration for (5.17) can be written as

    J_k(x_k) = min_{α_{k+1} ∈ N_k(Cx_k)}  min_{u ∈ U_k(x_k, α_{k+1})}  E^{t_k,x_k}{ C_k(u, α_{k+1}) + J_{k+1}(X(t_{k+1})) }.

Note that if Cx_k ∉ S_k then N_k(Cx_k) = ∅ and the optimal objective is unbounded. The solution of the inner optimization can be constructed as in Proposition 2.3.

We notice that this value function is completely determined by the node in the case when C = I, i.e., when each node is a point in the state space of the dynamical system. In this case the route optimization can be performed by a discrete dynamic programming iteration over the discrete state space S_k at each stage. Once we know the optimal route, i.e., the optimal α_k ∈ S_k, k = 1, …, N, we can construct the optimal control as in Proposition 2.3.

For the general case when C has full row rank m < n, the problem is much harder because J_k(x_k) is not completely determined by the choice of nodes at stages k, …, N. The problem is then computationally very expensive for large N. One way to reduce the computational complexity is to discretize the state vector in each node. In this way a suboptimal solution can be obtained by discrete dynamic programming. In essence this means that we optimize over a finite collection of predefined trajectories, an approach that has been discussed for trajectory planning in, for example, [2, 6]. Another way to reduce the complexity is to prune the search tree in some efficient way in order to find an optimal (or a near optimal) sequence of nodes. In order to discuss this problem we introduce some new notation. We let ᾱ_k = (α_k, …, α_{N−1}, α_N), where α_n ∈ S_n, denote an admissible node sequence from stage k to the terminal fixed node α_N, and let

    A_k = { ᾱ_k ∈ Π_{n=k}^{N} S_n : α_{n+1} ∈ N(α_n) }

be the set of all admissible node sequences from stage k to the terminal node α_N. Similarly,

    A_k(α_k) = { ᾱ_{k+1} ∈ Π_{n=k+1}^{N} S_n : α_{n+1} ∈ N(α_n) }

is the set of all sequences of nodes from stage k to stage N with the kth node fixed to be α_k. In the same way as above, the value function in a particular node α_k = Cx_k at stage k can be computed as

    J_k(x_k) = min_{ᾱ_{k+1} ∈ A_k(Cx_k)} J_k(x_k, ᾱ_{k+1})
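When C = I the cost of each leg depends only on its two endpoint nodes, and the route optimization reduces to a standard backward dynamic programming recursion over the node sets S_k. A toy sketch, where leg_cost is a made-up stand-in for the continuous-time value computed via Proposition 2.3:

```python
# Discrete DP sketch of the route optimization for the case C = I.
# leg_cost is a hypothetical stand-in for the continuous-time stage cost.
def leg_cost(a, b):
    return (a - b) ** 2

# Nodes per stage; here every node of stage k+1 is an allowed successor.
S = {0: [0.0], 1: [1.0, 3.0], 2: [2.0, 5.0], 3: [4.0]}

def route_dp(S, N):
    """Backward recursion: J_N = 0, J_k(a) = min_b leg_cost(a, b) + J_{k+1}(b)."""
    J = {a: 0.0 for a in S[N]}
    for k in range(N - 1, -1, -1):
        J = {a: min(leg_cost(a, b) + J[b] for b in S[k + 1]) for a in S[k]}
    return J

print(route_dp(S, 3))   # → {0.0: 6.0}
```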

We would like to remove as many node sequences as possible from A_k during the dynamic programming recursion. Let A_{k,O}(α_k) denote the sequences that are candidates for optimality (the index O stands for the open candidates and is a standard notation). Then at each node during the dynamic programming recursion we take the following two steps:

(i) A_{k,O}(α_k) = { (α_{k+1}, ᾱ_{k+2}) : ᾱ_{k+2} ∈ A_{k+1,O}(α_{k+1}); α_{k+1} ∈ N(α_k) }.

(ii) Remove branches from A_{k,O}(α_k) that cannot correspond to the optimal node sequence.

This gives us the value function J_k(x_k) = min_{ᾱ_{k+1} ∈ A_{k,O}(α_k)} J_k(x_k, ᾱ_{k+1}). The second step is not easy since each function J_k(x_k, ᾱ_{k+1}) depends on the particular node sequence ᾱ_{k+1} ∈ A_{k+1} but also on the state vector at node k, which depends on prior stages and in particular on the initial condition. An alternative idea, discussed in for example [8, 9], is to replace step (ii) above with

(ii') Remove branches from A_{k,O}(α_k) that are unlikely to be the optimal node sequence.

This generally gives suboptimal solutions, but the computational complexity may be reduced significantly. Some appealing and promising ideas for implementing (ii') were suggested in [8, 9]. There they consider optimization of switching sequences for LQR control of switched linear systems. This is mathematically closely related to the route optimization considered in this section, and their results can be adapted to the problem discussed here.

¹ In C_k(u, α_{k+1}) we implicitly assume that X(t) satisfies the stochastic differential equation in the definition of U_k(x_k, α_{k+1}).

Acknowledgement: The research was supported by grants from the Swedish Research Council and the NSF.

References

[1] D. P. Bertsekas. Dynamic Programming and Optimal Control, volume 1. Athena Scientific, 1995.
[2] M. S. Branicky, T. A. Johansen, I. Petersen, and E. Frazzoli. On-line techniques for behavioral programming. In Proceedings of the 39th IEEE Conference on Decision and Control, pages 1840–1845, Sydney, Australia, 2000.
[3] P. Crouch and J. W. Jackson. Dynamic interpolation for linear systems. In Proceedings of the 29th IEEE Conference on Decision and Control, pages 2312–2314, Hawaii, 1990.
[4] M. Egerstedt and C. Martin. Optimal trajectory planning and smoothing splines. Automatica, 37(7):1057–1064, July 2000.
[5] S. F. Ellermeyer, S. S. Pilyugin, and R. Redheffer. Persistence criteria for a chemostat with variable nutrient input. Journal of Differential Equations, 171:132–147, 2001.
[6] E. Frazzoli, M. A. Dahleh, and E. Feron. A maneuver-based hybrid control architecture for autonomous vehicle motion planning. In Software Enabled Control: Information Technology for Dynamical Systems, G. Balas and T. Samad, editors, 2002. To appear.
[7] J. Lawton, R. Beard, and B. Young. A decentralized approach to formation maneuvers. To appear in IEEE Transactions on Robotics and Automation.
[8] B. Lincoln and B. Bernhardsson. Efficient pruning of search trees in LQR control of switched linear systems. In Proceedings of the 39th IEEE Conference on Decision and Control, pages 1828–1833, Sydney, Australia, 2000.
[9] B. Lincoln and A. Rantzer. Optimizing linear system switching. In Proceedings of the 40th IEEE Conference on Decision and Control, pages 2063–2068, Orlando, FL, USA, 2001.
[10] B. Øksendal. Stochastic Differential Equations: An Introduction with Applications. Springer, Berlin Heidelberg, fifth edition, 1998.
[11] W. M. Wonham. On a matrix Riccati equation of stochastic control. SIAM J. Control Optim., 6(4):681–697, 1968.
[12] Z. Zhang, J. Tomlinson, and C. Martin. Splines and linear control theory. Acta Math. Appl., 49:1–34, 1997.

Appendix A: Proof of Proposition 2.2

We use the compact notation introduced in (2.7) and (2.8). If we apply the Lagrange multiplier rule to (2.8) then we obtain

    V(x_0, α, t_0, t_f) = max_λ min_{u ∈ M(t_0,t_f)} E^{t_0,x̂_0}{ ∫_{t_0}^{t_f} [X̂^T Q̂ X̂ + 2X̂^T Ŝ u + u^T R̂ u] dt + X̂(t_f)^T Q̂_0 X̂(t_f) + 2λ^T C(α)X̂(t_f) }

    subject to  dX̂ = (Â(t)X̂ + B̂(t)u) dt + Ĝ(t, X̂) dZ,   X̂(t_0) = x̂_0.

The solution to the inner optimization can be obtained from the Hamilton-Jacobi-Bellman equation (HJBE)

    −∂V/∂t = min_{u ∈ R^m} { x̂^T Q̂ x̂ + 2x̂^T Ŝ u + u^T R̂ u + (∂V/∂x̂)(Â x̂ + B̂ u) + (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} a_{ij} ∂²V/∂x_i∂x_j }

    V(x̂, t_f) = x̂^T Q̂_0 x̂ + 2λ^T C(α)x̂

where

    a_{ij} = (σσ^T)_{ij} = ( (Σ_{k=1}^{n} x_k G_k)(Σ_{l=1}^{n} x_l G_l)^T )_{ij} = Σ_{k,l=1}^{n} x_k x_l G_{ki} G_{lj}^T

and G_{ki} is the ith row of G_k. With the value function V(x̂, t) = x̂^T P̄(t) x̂ we get

    (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} a_{ij} ∂²V/∂x_i∂x_j = (1/2) Σ_{k=1}^{n} Σ_{l=1}^{n} x_k x_l tr(G_k^T P̄ G_l) = x̂^T Π̂(P̄) x̂
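The operator Π̂ appearing here is linear in its matrix argument, which is what lets the quadratic ansatz close under the HJBE. A quick numerical check of the (unhatted) Π on arbitrary random data:

```python
import numpy as np

# Pi(P)_{kl} = 0.5 * tr(G_k^T P G_l) is linear in P; verified on random
# data (dimensions and matrices are arbitrary).
rng = np.random.default_rng(2)
n, m = 3, 2
Gs = [rng.normal(size=(n, m)) for _ in range(n)]

def Pi(P):
    return np.array([[0.5 * np.trace(Gk.T @ P @ Gl) for Gl in Gs]
                     for Gk in Gs])

P1 = rng.normal(size=(n, n))
P2 = rng.normal(size=(n, n))
print(np.allclose(Pi(P1 + 2.0 * P2), Pi(P1) + 2.0 * Pi(P2)))   # → True
```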

If we plug the ansatz V(x̂, t) = x̂^T P̄(t)x̂ into the HJBE we get the optimal control u = −R̂^{−1}(B̂^T P̄ + Ŝ^T)x̂, and that the following Riccati equation must hold

    P̄˙ + Â^T P̄ + P̄ Â + Q̂ + Π̂(P̄) = (P̄ B̂ + Ŝ)R̂^{−1}(P̄ B̂ + Ŝ)^T

with boundary condition P̄(t_f) = Q̂_0 + C(α)^T λ e_{n+1}^T + e_{n+1} λ^T C(α). The optimal cost becomes

    V(x_0, α, t_0, t_f) = max_λ x̂_0^T P̄(λ, t_0) x̂_0        (5.18)

To perform the optimization with respect to λ we need an explicit expression of P̄ as a function of λ. It turns out that we can use

    P̄ = P̂ + N̂ λ e_{n+1}^T + e_{n+1} λ^T N̂^T − λ^T W λ e_{n+1} e_{n+1}^T

where P̂, N̂ and W are given in (2.7). This follows since

    P̄(t_f) = P̂(t_f) + N̂(t_f)λ e_{n+1}^T + e_{n+1} λ^T N̂(t_f)^T − λ^T W(t_f)λ e_{n+1}e_{n+1}^T
            = Q̂_0 + C(α)^T λ e_{n+1}^T + e_{n+1} λ^T C(α)

and

    P̄˙ = −Â^T P̂ − P̂ Â − Q̂ − Π̂(P̄) + (P̂ B̂ + Ŝ)R̂^{−1}(P̂ B̂ + Ŝ)^T
         − (Â − B̂ R̂^{−1}(B̂^T P̂ + Ŝ^T))^T N̂ λ e_{n+1}^T − e_{n+1} λ^T N̂^T (Â − B̂ R̂^{−1}(B̂^T P̂ + Ŝ^T))
         + λ^T N̂^T B̂ R̂^{−1} B̂^T N̂ λ e_{n+1} e_{n+1}^T
       = −Â^T P̄ − P̄ Â − Q̂ − Π̂(P̂) + (P̄ B̂ + Ŝ)R̂^{−1}(P̄ B̂ + Ŝ)^T

which follows since

    Â^T (e_{n+1} λ^T N̂^T + λ^T W λ e_{n+1} e_{n+1}^T) = 0,
    B̂^T (e_{n+1} λ^T N̂^T + λ^T W λ e_{n+1} e_{n+1}^T) = 0.

The optimization in (5.18) thus becomes

    max_λ x̂_0^T (P̂(t_0) + N̂(t_0)λ e_{n+1}^T + e_{n+1} λ^T N̂(t_0)^T − λ^T W(t_0)λ e_{n+1}e_{n+1}^T) x̂_0
        = x̂_0^T (P̂(t_0) + N̂(t_0)W(t_0)^{−1}N̂(t_0)^T) x̂_0

and the optimal Lagrange multiplier is λ = W(t_0)^{−1} N̂(t_0)^T x̂_0. Finally, the optimal control becomes

    u = −R̂^{−1}(B̂^T (P̂ + N̂ W(t_0)^{−1} N̂(t_0)^T x̂_0 e_{n+1}^T) + Ŝ^T)x̂
      = −R̂^{−1}(B̂^T P̂ + Ŝ^T)x̂ − R̂^{−1} B̂^T N̂ W(t_0)^{−1} N̂(t_0)^T x̂_0

Considering the optimal control problem from the "initial point" (t, x̂(t)) gives the following feedback formulation of the optimal control: u = −R̂^{−1}(B̂^T(P̂ + N̂ Ŵ^{−1} N̂^T) + Ŝ^T)X̂. We note that under the conditions of the proposition there exist solutions P̄ and P̂ to the Riccati equations that are involved in the proof. Indeed, the special structure of the system matrices implies that only the upper left blocks of P̄ and P̂ satisfy nonlinear equations, which are identical to the first equation in (2.6). The other blocks correspond to p and ρ in (2.6). Hence, the existence of P̄ and P̂ follows from Remark 2.1.