
Control Design with Guaranteed Transient Performance via Reachable Set Estimates

Willem Esterhuizen, Qing-Guo Wang

Institute for Intelligent Systems, University of Johannesburg (e-mail: [email protected], [email protected]).

Abstract: We present a novel way of addressing the problem of control design with guaranteed transient performance for discrete-time linear systems. We establish a theorem that gives necessary and sufficient conditions for the state to evolve from one polyhedral subset of the state-space to another. An algorithm is then presented that, if executed successfully, produces a time-varying state-feedback control law along with a sequence of polyhedral sets within which the control law guarantees the state to evolve. The approach is demonstrated by an example involving the position control of a mass-spring-damper. We then point out various issues that may form the basis of future research.

Keywords: transient performance; control design; time domain specifications; state constraints; discrete-time systems; set-theoretic methods

1. INTRODUCTION

A number of researchers have addressed the problem of designing control systems that are capable of guaranteeing required transient performance. Typical adaptive control schemes, see e.g. Astrom and Wittenmark (2008), often exhibit unacceptable transients, and it is in this field that one of the first papers on shaping transient performance was produced, see Miller and Davison (1991). The authors consider single-input single-output (SISO) systems that are minimum phase and approach the problem by dynamically adjusting the controller's feedback gains. Other works from adaptive control that consider the problem of shaping transient performance, most often in the sense of guaranteed bounds on the evolution of the state or output, include Cao and Hovakimyan (2006), Krstić et al. (1993), Zang and Bitmead (1994), and Yan et al. (2006).

Ilchmann et al. (2002) introduced the "funnel control" methodology, where time-varying constraints (called the funnel) that encapsulate the desired performance are imposed on the output. The control magnitude is then made dependent on the distance of the output to the funnel boundary. The approach is applicable to very general nonlinear systems that satisfy a high-frequency gain condition, are of low relative degree, and have the same number of inputs as outputs. The works Hopfe et al. (2010b), Hopfe et al. (2010a) and Ilchmann and Trenn (2004) incorporate input constraints into the formulation, and Liberzon and Trenn (2013) consider a bang-bang approach that requires high-order derivatives of the output.

Another approach to guaranteed transient performance is presented in Bechlioulis and Rovithakis (2008) and Bechlioulis and Rovithakis (2009), where the authors consider nonlinear systems that are feedback linearisable, and in strict feedback form, respectively. In this method the performance requirements are also specified by imposing

time-varying constraints on the state. A transformation is then introduced that recasts this problem into an unconstrained one, the stabilisation of which solves the original problem. The paper Bechlioulis and Rovithakis (2014) utilises a similar idea, but is much simpler in that it does not require the solution of a stabilisation problem.

In this paper we present a novel approach to ensuring performance. We consider a discrete-time linear system subjected to a bounded additive disturbance and impose desired time-domain performance specifications over a finite horizon. Starting from a set of initial states, we then find a time-varying state-feedback control law that drives the state to a target set, while satisfying the performance specifications.

The main contribution of this paper is an adaptation of a well-known result that appears in the context of set-theoretic methods, see Bitsoris (1988) and Blanchini and Miani (2015), that gives necessary and sufficient conditions for a polytopic subset of the state-space to be invariant. In our adaptation, we present a theorem that guarantees that the state evolves from an arbitrary polyhedral set to another. We then use this theorem to present an algorithm, executed off-line, that iteratively constructs the sought-after time-varying feedback, along with a sequence of polyhedral sets within which the state is guaranteed to evolve.

The paper is organised as follows: in Section 2 we specify the system under investigation and state the problem we wish to solve. In Section 3 we outline the ideas we take towards a solution. In Section 4 we present our main theoretical result: a theorem that forms the basis of our algorithm, introduced in Section 5. In Section 6 we demonstrate our results with an example: the position control of a mass-spring-damper system. Finally, in Section 7, we conclude with a discussion and point out possible future research.

Notation. If N is a matrix, then N_i refers to its i-th row. The notation N ≥ 0 means that every element of the matrix N is nonnegative (and not that the matrix is positive semidefinite). If r is an n-dimensional vector, then r_i refers to its i-th coordinate. If both r and s are n-dimensional vectors, then the notation r ≤ s is to be interpreted element-wise, i.e. r_i ≤ s_i for i ∈ {1, ..., n}. The notation r^T indicates the transpose of the vector r. A column vector of appropriate dimension with all its elements equal to one is denoted by 1. A matrix of zeros is denoted by 0, its dimensions inferred from the context. The notation R^n refers to n-dimensional Euclidean space, R^{n×m} to the set of matrices with n rows and m columns, Z to the set of integers, and Z_{≥0} to the set of nonnegative integers. The acronym s.t. stands for "subject to". A polyhedral set is specified by P(M, m) = {x ∈ R^n : Mx ≤ m}, where M is a matrix with not all the elements of the row vectors M_i equal to zero, and m is a vector of compatible dimension. If T is a linear transformation and S is a subset of R^n, then TS := {Ts : s ∈ S}, i.e. the set of all vectors obtained by applying the transformation T to each element of S. Given two subsets of R^n, S_1 and S_2, the Minkowski sum is given by S_1 ⊕ S_2 := {s_1 + s_2 : s_1 ∈ S_1, s_2 ∈ S_2}.

2. PROBLEM FORMULATION

We consider the following time-invariant discrete-time linear system:

x(k+1) = Ax(k) + Bu(k) + Cv(k),    (1)

where k ∈ {0, 1, ..., K} is the time index, and K ∈ Z_{≥0} specifies the time horizon; x(k) ∈ R^n is the state, u(k) ∈ R^m is the control, and v(k) ∈ R^p is the disturbance. The initial condition is given by x_0 = x(0). The matrices A, B, and C have appropriate dimensions and constant elements. The notation x_{(û,v̂,x_0)}(k) refers to the solution of (1) at index k, initiating from the initial condition x_0, with the particular control function û and realisation of the disturbance v̂, both defined over [0, k−1].

We now introduce four polyhedral sets. With Q ∈ R^{q×n}, with constant entries, X_0 ⊂ R^n is an initial set specified by X_0 = P(Q, ψ_0), wherein the initial condition is located, i.e. x_0 ∈ X_0, and X_T ⊂ R^n is a target set: X_T = P(Q, ψ_T). We specify the performance requirements of the system with a time-varying set, H(k) = P(Q, φ(k)), k ∈ {0, ..., K}, and assume X_0 ⊂ H(0) and X_T = H(K). We assume that the disturbance v(k) is located in a time-varying polyhedral set, given by V(k) = P(V, γ(k)), k ∈ {0, ..., K}, with V ∈ R^{q_v×p}. By using polyhedral sets in our formulation we are able to arrive at Theorem 1, its conditions expressible as a number of linear equations and inequalities which can be incorporated into convex optimisation problems.

Because the designer may explicitly specify the time-varying set H(k), it is simple to recast traditional performance requirements for regulation and tracking problems into our formulation. For example, suppose the system (1) is the discrete-time description of a continuous-time system over a horizon [0, T], T > 0, where a zero-order hold (ZOH) control is used. Suppose one wanted the i-th state variable, initiating at x_i(0), to reach a set-point, x_i^{sp}, with a settling time of t_s and a steady-state error of λ_s. Moreover, suppose it is desired that the peak overshoot, x_p, should occur before t_p, and that the output variable rises to within λ_r of x_i^{sp} with a rise time of t_r. With a sampling time of T_s, one could then use sampled versions of functions h̄_i(t) and h̲_i(t), like in Figure 1, where t ∈ [0, T]. From here one could easily specify Q and φ(k).

Fig. 1. Possible time-varying constraints that may be imposed on the i-th state variable of system (1), along with a hypothetical response. The set-point is specified by x_i^{sp}, the steady-state error by λ_s and the λ_r-rise-time by t_r. The peak overshoot, x_p, occurs before t_p.
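To make this recasting concrete, the sketch below shows one way the sampled envelopes could be generated. It is our own illustration rather than a construction from the paper: it assumes NumPy, uses a simple piecewise-constant shape (Figure 1 admits many other choices), and takes x_min as a lower bound on the state variable over the initial set.

```python
import numpy as np

def sampled_envelopes(x_min, x_sp, x_p, t_r, lam_r, t_s, lam_s, Ts, K):
    """One possible pair of sampled envelopes (h_bar, h_low) for the i-th state
    variable, of the kind sketched in Figure 1 (piecewise-constant illustration).
    x_min: lower bound on x_i over X0, x_sp: set-point, x_p: allowed peak value."""
    t = np.arange(K + 1) * Ts
    h_bar = np.where(t < t_s, x_p, x_sp + lam_s)        # overshoot allowed up to x_p, then a lam_s band
    h_low = np.where(t < t_r, x_min,                    # loose before the rise time ...
                     np.where(t < t_s, x_sp - lam_r,    # ... within lam_r of x_sp by t_r ...
                              x_sp - lam_s))            # ... and within lam_s by t_s
    return h_bar, h_low

# With Q containing the rows e_i and -e_i for this state variable, the matching
# entries of phi(k) are h_bar[k] and -h_low[k] (the lower bound enters with a
# minus sign because P(Q, phi) is written with "<=" constraints).
```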

Problem Statement. Given the system (1) along with a time horizon, i.e. k ∈ {0, ..., K}, K ≥ 0; an initial set X_0; a target set X_T; and a time-varying set H(k) satisfying X_0 ⊂ H(0), X_T = H(K), find a time-varying state-feedback ū(k, x(k)) such that for all x_0 ∈ X_0 and all v satisfying v(k) ∈ V(k), we have x_{(ū,v,x_0)}(k) ∈ H(k) for all k ∈ {0, ..., K}.

3. OUR APPROACH

The approach we take is one based on the iterative estimation of reachable sets. We need the following definition:

Definition 1. (one-step reachable set). Consider equation (1) at time k along with a set S ⊂ R^n. The one-step reachable set from S via equation (1) with u(k) = û(k) and v(k) ∈ V(k) is given by

R_{(û(k),V(k))}(S) := {x ∈ R^n : ∃ x̂ ∈ S, ∃ v̂ ∈ V(k) such that x = Ax̂ + Bû(k) + Cv̂}.    (2)

Equivalently,

R_{(û(k),V(k))}(S) = AS ⊕ Bû(k) ⊕ CV(k).    (3)

Our idea utilises reachable sets as follows: considering the sets X_0 and H(1), find ũ(0) such that R_{(ũ(0),V(0))}(X_0) ⊂ H(1). If successful, find a ũ(1) such that R_{(ũ(1),V(1))}(R_{(ũ(0),V(0))}(X_0)) ⊂ H(2). We then repeat this process for all remaining k ∈ {2, ..., K−1}, obtaining a control sequence ũ(k) that solves the problem.

In Equation (3), we note that the choice of control merely affects the translation of the set R_{(û(k),V(k))}(S), and that its shape is dependent on the transformation A. By specifying the control at every k as a feedback, û(k, x(k)) = F(k)x(k), we are then able to more freely shape the resulting reachable set, as the expression now becomes:

R_{(û(k),V(k))}(S) = (A + BF(k))S ⊕ CV(k).    (4)
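For intuition, (4) can be evaluated exactly when S and V(k) are polytopes given by their vertices: a linear image of a polytope is the convex hull of the mapped vertices, and a Minkowski sum of two polytopes is the convex hull of all pairwise sums of vertices. The sketch below is ours (it assumes SciPy) and is practical only for low-dimensional, full-dimensional sets.

```python
import numpy as np
from itertools import product
from scipy.spatial import ConvexHull

def one_step_reachable_vertices(A, B, C, F, S_vertices, V_vertices):
    """Vertices of (A + B F) S (+) C V(k) for polytopes S and V(k) given by
    vertex lists -- a direct evaluation of (4)."""
    M = A + B @ F
    candidates = np.array([M @ s + C @ v
                           for s, v in product(S_vertices, V_vertices)])
    hull = ConvexHull(candidates)           # keep only the extreme points
    return candidates[hull.vertices]
```

In the half-space representation P(M, m) used throughout the paper, vertex enumeration quickly becomes impractical, which is why the following sections work with algebraic containment conditions instead.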

We now need a means of computing/estimating a one-step reachable set. There is a large body of literature concerned with computing reachable sets, see Maler (2014) for an excellent overview, as well as the papers Asarin et al. (2000), Girard (2005), Girard et al. (2006) and Chutinan and Krogh (1998), which concern polyhedral approximations of reachable sets and "flow pipes". We also mention the papers Kurzhanski and Varaiya (2000, 2001), Gao et al. (2007), Mitchell et al. (2005) and Chen and Tomlin (2015), which find reachable sets via the solution of optimisation problems. None of these works are concerned with the problem of shaping a reachable set, via the control input, such that it is contained in another set. Recall, from above, that this forms the basis of our approach. In the next section we present a novel optimisation problem, the solution of which provides a polyhedral over-estimate of a one-step reachable set that is contained in another specified polyhedral set.

4. MAIN RESULT

There exists a theorem, see Bitsoris (1988) and Blanchini and Miani (2015, Ch. 7), that gives necessary and sufficient conditions for a polyhedral set to be invariant with respect to a discrete-time linear system. We present an adaptation of this theorem which guarantees that the solution of system (1) evolves from one polyhedral set to another. We point out that the theorem may be presented in a form that does not involve a control given by u(k) = F(k)x(k), or a disturbance, but we omit this case in the interest of conciseness.

Theorem 1. Consider two polyhedral sets, P(W, w) and P(Z, z), with W ∈ R^{q_1×n} and Z ∈ R^{q_2×n}, along with the system (1) at an arbitrary k ≥ 0 with x(k) ∈ P(W, w), v(k) ∈ P(V, γ(k)) and ū(k, x(k)) = F(k)x(k). The following holds: R_{(ū(k),V(k))}(P(W, w)) ⊂ P(Z, z) if and only if there exists a matrix G(k) ∈ R^{q_2×(q_1+q_v)} that satisfies:

G(k) \begin{bmatrix} W & 0 \\ 0 & V \end{bmatrix} = Z \begin{bmatrix} A + BF(k) & C \end{bmatrix},    (5)

G(k) ≥ 0,    (6)

G(k) \begin{bmatrix} w \\ γ(k) \end{bmatrix} ≤ z.    (7)

With the control u(k) = ū(k, x(k)) = F(k)x(k), the dynamic equation (1) can be re-written in a form we will use in the proof:

x(k+1) = \begin{bmatrix} A + BF(k) & C \end{bmatrix} \begin{bmatrix} x(k) \\ v(k) \end{bmatrix}.

Proof. (if) We have Wx(k) ≤ w and Vv(k) ≤ γ(k). If there exists a G(k) satisfying (5), (6) and (7), then

Z \begin{bmatrix} A + BF(k) & C \end{bmatrix} \begin{bmatrix} x(k) \\ v(k) \end{bmatrix} = G(k) \begin{bmatrix} W & 0 \\ 0 & V \end{bmatrix} \begin{bmatrix} x(k) \\ v(k) \end{bmatrix} ≤ G(k) \begin{bmatrix} w \\ γ(k) \end{bmatrix} ≤ z.

Hence we have x(k+1) ∈ P(Z, z), and so R_{(ū(k),V(k))}(P(W, w)) ⊂ P(Z, z).

(only if) Suppose R_{(ū(k),V(k))}(P(W, w)) ⊂ P(Z, z); that is, x(k+1) ∈ P(Z, z) for every x(k) ∈ P(W, w) and every v(k) ∈ P(V, γ(k)), at an arbitrary k ≥ 0. For each i ∈ {1, ..., q_2} consider the following linear program:

μ_i = max Z_i \begin{bmatrix} A + BF(k) & C \end{bmatrix} \begin{bmatrix} x(k) \\ v(k) \end{bmatrix}  s.t.  \begin{bmatrix} W & 0 \\ 0 & V \end{bmatrix} \begin{bmatrix} x(k) \\ v(k) \end{bmatrix} ≤ \begin{bmatrix} w \\ γ(k) \end{bmatrix}.

The dual problem, see e.g. Bertsimas and Tsitsiklis (1997), is

μ_i = min g_i^T \begin{bmatrix} w \\ γ(k) \end{bmatrix}  s.t.  g_i^T \begin{bmatrix} W & 0 \\ 0 & V \end{bmatrix} = Z_i \begin{bmatrix} A + BF(k) & C \end{bmatrix},  g_i^T ≥ 0,

where g_i is the vector of dual variables for the i-th linear program. Let G_i(k) (a row vector) be an optimal solution to the dual problem (one exists by strong duality, the primal being feasible and bounded above by z_i), and let G(k) be the matrix formed by stacking these q_2 solutions. Then G(k) \begin{bmatrix} W & 0 \\ 0 & V \end{bmatrix} = Z \begin{bmatrix} A + BF(k) & C \end{bmatrix} and every entry of G(k) is nonnegative, so that conditions (5) and (6) are true. Noting that μ_i = max Z_i \begin{bmatrix} A + BF(k) & C \end{bmatrix} \begin{bmatrix} x(k) \\ v(k) \end{bmatrix} ≤ z_i, and the fact that μ_i = G_i(k) \begin{bmatrix} w \\ γ(k) \end{bmatrix}, we have G_i(k) \begin{bmatrix} w \\ γ(k) \end{bmatrix} ≤ z_i for each i, which implies G(k) \begin{bmatrix} w \\ γ(k) \end{bmatrix} ≤ z, which is (7). ∎

Remark 1. We emphasise that, given the set P(W, w), the set P(Z, z) is in general not the one-step reachable set, though it is guaranteed to be an over-estimate.
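The proof also yields a direct numerical containment test for a given gain F(k): solve the q_2 primal linear programs and check μ_i ≤ z_i. A minimal sketch of our own, assuming NumPy and SciPy (note that linprog's default nonnegativity bounds must be lifted, since x and v are free variables):

```python
import numpy as np
from scipy.optimize import linprog
from scipy.linalg import block_diag

def one_step_image_contained(A, B, C, F, W, w, Z, z, V, gamma, tol=1e-9):
    """Check R_(u,V)(P(W, w)) subset of P(Z, z) for u = F x by solving, for each
    row Z_i, the primal LP from the proof of Theorem 1 and testing mu_i <= z_i."""
    M = np.hstack([A + B @ F, C])            # x(k+1) = M [x; v]
    A_ub = block_diag(W, V)                  # [x; v] constrained to P(W, w) x P(V, gamma)
    b_ub = np.concatenate([w, gamma])
    for Zi, zi in zip(Z, z):
        res = linprog(-(Zi @ M), A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
        if not res.success or -res.fun > zi + tol:
            return False                     # LP failed/unbounded, or mu_i exceeds z_i
    return True
```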

5. ALGORITHM

We now present an algorithm that addresses the problem statement by using Theorem 1. Let X(k) = P(Q, ψ(k)) be a time-varying polyhedral set with X(0) = X_0. In what follows, the set X(k) for every k will serve as an over-estimate of the reachable set. Consider the following optimisation problem:

Optimisation Problem 1. (OP1)

min J(ψ(k+1))

s.t.

G(k) \begin{bmatrix} Q & 0 \\ 0 & V \end{bmatrix} = Q \begin{bmatrix} A + BF(k) & C \end{bmatrix},    (8)

G(k) ≥ 0,    (9)

G(k) \begin{bmatrix} ψ(k) \\ γ(k) \end{bmatrix} ≤ ψ(k+1),    (10)

ψ(k+1) ≤ φ(k+1).    (11)

The constraints (8)-(10) are taken from Theorem 1; (11) guarantees X(k+1) ⊂ H(k+1); and J(ψ(k+1)) is an optional cost function. The unknowns in this problem are the elements of G(k), F(k) and ψ(k+1), which appear linearly in (8)-(11). Our algorithm uses OP1:

Algorithm 1.

Inputs: Q, ψ_0, V, γ(k), J(ψ(k+1)) for k ∈ {0, ..., K−1}; φ(k) for k ∈ {0, ..., K}.
Outputs: ψ(k+1), F(k) for k ∈ {0, ..., K−1}, if a solution exists.

for k = 0, ..., K−1 do
    attempt to find a solution to OP1
    if no solution exists to OP1 then
        end algorithm (without success)
    else if a solution exists to OP1 then
        save F(k)
        save ψ(k+1)
        if k = K−1 then
            end algorithm (with success)
        end if
    end if
end for
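To illustrate how one iteration of Algorithm 1 might be set up in practice, the sketch below poses OP1 as a linear program in cvxpy (all constraints are affine in G(k), F(k) and ψ(k+1)). The routine name and structure are ours, not part of the paper, and the cost is the particular choice J(ψ(k+1)) = 1^T ψ(k+1) used later in Section 6.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import block_diag

def op1_step(A, B, C, Q, V, psi_k, gamma_k, phi_next):
    """One solve of OP1: returns (F(k), psi(k+1)), or None if OP1 is infeasible."""
    n, m = A.shape[0], B.shape[1]
    q, qv = Q.shape[0], V.shape[0]
    G = cp.Variable((q, q + qv), nonneg=True)                      # condition (9)
    F = cp.Variable((m, n))
    psi_next = cp.Variable(q)
    constraints = [
        G @ block_diag(Q, V) == Q @ cp.hstack([A + B @ F, C]),     # condition (8)
        G @ np.concatenate([psi_k, gamma_k]) <= psi_next,          # condition (10)
        psi_next <= phi_next,                                      # condition (11)
    ]
    prob = cp.Problem(cp.Minimize(cp.sum(psi_next)), constraints)  # J = 1^T psi(k+1)
    prob.solve()
    if prob.status not in ("optimal", "optimal_inaccurate"):
        return None
    return F.value, psi_next.value
```

Algorithm 1 then amounts to calling this routine for k = 0, ..., K−1, starting from ψ(0) = ψ_0 and passing each returned ψ(k+1) into the next call.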

Remark 2. We emphasise that Algorithm 1 is executed off-line and that the resulting time-varying feedback, ū(k, x(k)) = F(k)x(k), is meant to be saved and used during on-line operation. Algorithm 1 iteratively produces the set X(k) along with the feedback ū(k, x(k)) = F(k)x(k) such that R_{(ū(k,x(k)),V(k))}(X(k)) ⊂ X(k+1) ⊂ H(k+1) for all k. Thus, arguing by induction, if x_0 ∈ X_0, it is guaranteed that ū drives the state to the target set, while satisfying the performance specifications for all k. See Figure 2 for clarification.

Remark 3. The optimisation problem can always be rendered feasible by the inclusion of q slack variables, ρ_i(k+1) ≥ 0, i = 1, ..., q, as follows: replace the constraint (11) with ψ_i(k+1) − φ_i(k+1) ≤ ρ_i(k+1) for i = 1, ..., q, and replace the cost function with J(ψ(k+1)) + α 1^T ρ(k+1), where ρ(k+1) := (ρ_1(k+1), ρ_2(k+1), ..., ρ_q(k+1))^T and α is some large positive number. This makes the performance constraints "soft".

Fig. 2. A hypothetical time-varying set X(k) produced by a successful execution of Algorithm 1 for a planar system. The performance set H(k) for each k is indicated with dashed boundaries. The target set X_T is shaded and contains the final set X(K). The arrows indicate the evolution of the state, and are meant to emphasise that the feedback F(k) ensures that for every x(k) ∈ X(k), we have x(k+1) ∈ X(k+1).

6. EXAMPLE

Consider the mass-spring-damper system:

ẋ = \begin{bmatrix} 0 & 1 \\ -1 & -1 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u,

where u is the external force applied to the mass. We will make use of a zero-order hold implementation with a sampling time of T_s = 0.1 seconds, and subject the discretised system to a disturbance v(k) ∈ [−0.03, 0.03] × [−0.03, 0.03] for k < 13, and v(k) ∈ [−0.01, 0.01] × [−0.01, 0.01] for k ≥ 13:

x(k+1) = \begin{bmatrix} 0.995 & 0.095 \\ -0.095 & 0.9 \end{bmatrix} x(k) + \begin{bmatrix} 0.005 \\ 0.095 \end{bmatrix} u(k) + \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} v(k).    (12)

With the initial condition in the set X_0 = [−1.7, −1.6] × [−0.01, 0.01], our goal is to drive the state to the target set X_T = [−0.1, 0.1] × [−5, 5], imposing the following performance constraint on x_1:

h̲_1(kT_s) ≤ x_1(k) ≤ h̄_1(kT_s),

where the functions h̲_1 and h̄_1 are as shown in Figure 3, and ensure t_r = 1 s, λ_r = 0.6, t_s = 2 s, λ_s = 0.2 with no overshoot. The velocity, x_2, is fast varying. To limit its rate of change we impose the performance constraint as in Figure 3, ensuring −5 ≤ x_2 ≤ 5. Thus, we identify:

Q = \begin{bmatrix} 1 & 0 \\ -1 & 0 \\ 0 & 1 \\ 0 & -1 \end{bmatrix},  ψ_T = \begin{bmatrix} 0.1 \\ 0.1 \\ 5 \\ 5 \end{bmatrix},  ψ_0 = \begin{bmatrix} -1.6 \\ 1.7 \\ 0.01 \\ 0.01 \end{bmatrix},  φ(k) = \begin{bmatrix} h̄_1(kT_s) \\ -h̲_1(kT_s) \\ 5 \\ 5 \end{bmatrix},  V = \begin{bmatrix} 1 & 0 \\ -1 & 0 \\ 0 & 1 \\ 0 & -1 \end{bmatrix},

γ(k) = (0.03)1 for k < 13, and γ(k) = (0.01)1 for k ≥ 13.
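For concreteness, the example data can be assembled as follows. This is a sketch only, assuming NumPy and SciPy: the horizon K is not stated explicitly in the paper (Figure 3 plots t ∈ [0, 2] s, suggesting K T_s = 2 s), and the envelopes h̄_1, h̲_1 are left as placeholders since only their sketch in Figure 3 is given. Rounding the ZOH discretisation to three decimals should recover the matrices in (12).

```python
import numpy as np
from scipy.signal import cont2discrete

# Continuous-time mass-spring-damper and its zero-order-hold discretisation (cf. (12)).
Ac = np.array([[0.0, 1.0], [-1.0, -1.0]])
Bc = np.array([[0.0], [1.0]])
Ts = 0.1
Ad, Bd, *_ = cont2discrete((Ac, Bc, np.eye(2), np.zeros((2, 1))), Ts, method="zoh")
C = np.eye(2)                                   # the disturbance enters both states directly

# Constraint data from Section 6.
Q = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
V = Q.copy()                                    # same half-space directions for v
psi0 = np.array([-1.6, 1.7, 0.01, 0.01])        # X0 = [-1.7, -1.6] x [-0.01, 0.01]
psiT = np.array([0.1, 0.1, 5.0, 5.0])           # XT = [-0.1, 0.1] x [-5, 5]
K = 20                                          # horizon length (assumed: K * Ts = 2 s)
gamma = [0.03 * np.ones(4) if k < 13 else 0.01 * np.ones(4) for k in range(K + 1)]

# phi(k) stacks the sampled envelopes for x1 with the +/-5 bound on x2:
# phi[k] = [h1_bar(k * Ts), -h1_low(k * Ts), 5, 5], with h1_bar, h1_low as in Figure 3.
```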

We run Algorithm 1 with the cost function J(ψ(k+1)) = 1^T ψ(k+1), aiming to penalise the size of the reachable set estimates. If a solution does not exist to OP1, we introduce slack variables, as explained in Remark 3, and use the cost function J(ψ(k+1)) + 100 (1^T ρ(k+1)). The results are summarised in Figure 3.
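A simulation of the kind shown in Figure 3 could then be reproduced along the following lines. This sketch is ours and assumes the gains F(k) and the vectors φ(k) have been stored (e.g. as lists F_list and phi from the routines sketched earlier); it also exploits the fact that in this example V(k) is a symmetric box, so sampling each coordinate uniformly suffices.

```python
import numpy as np

def simulate_closed_loop(Ad, Bd, C, F_list, Q, phi, gamma, x0, seed=0):
    """Replay the saved feedback u(k) = F(k) x(k) on (12) with v(k) drawn
    uniformly from the box P(V, gamma(k)), checking Q x(k) <= phi(k) throughout."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for k, F in enumerate(F_list):
        assert np.all(Q @ x <= phi[k] + 1e-9), f"performance constraint violated at k={k}"
        v = rng.uniform(-gamma[k][0], gamma[k][0], size=C.shape[1])  # symmetric box disturbance
        x = Ad @ x + Bd @ (F @ x) + C @ v
    assert np.all(Q @ x <= phi[len(F_list)] + 1e-9)                  # final state satisfies H(K) = XT
    return x
```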


7. CONCLUSION

We have presented a novel approach that produces a time-varying feedback control law that guarantees desired transient performance for discrete-time linear systems subjected to bounded disturbances. The method iteratively solves a novel optimisation problem to construct estimates of reachable sets.


Many challenges remain. Though we have formulated the problem with the incorporation of disturbances known to lie in polyhedral sets, we do not have guarantees that a solution generated by Algorithm 1 will work when presented with unmodelled disturbances, or model uncertainties.


As can be seen from Figure 3(b), the reachable set over-estimates become worse with increasing time step. This is known as the "wrapping effect", and focus should be placed on reducing the effects of this phenomenon. However, though these over-estimates may be viewed as introducing unnecessary conservativeness, it should be pointed out that they improve the robustness of the solution: the larger an over-estimate X(k), the more probable it is that an unmodelled disturbance will cause the state to evolve into X(k), initiating from a set X(k−1).

It may be desirable for the designer to keep the state in the final set for all future time, once it is reached. Research could focus on deriving stability results similar to those that exist in MPC, see for example the survey Mayne et al. (2000). A common approach is to design a separate feedback controller that renders the final set invariant.

An important focus of future research should be the derivation of conditions that guarantee a solution to Optimisation Problem 1, without the inclusion of slack variables. Some final improvements that can be made include an analysis that concerns input-output models, and a means to incorporate constraints on the control.

Fig. 3. Result of successful execution of Algorithm 1, along with a simulation of system (12) with the computed feedback ū(k, x(k)) = F(k)x(k) (marked with x's), with v(k) uniformly selected from P(V, γ(k)). In plot (c), we show the computed sets X(k) in red, and the performance sets H(k) with dashed boundaries. In plots (a) and (b) we also show the projection of the sets X(k) (marked in vertical black lines) onto x_1 and x_2, respectively.

REFERENCES

Asarin, E., Bournez, O., Dang, T., and Maler, O. (2000). Approximate reachability analysis of piecewise-linear dynamical systems. In International Workshop on Hybrid Systems: Computation and Control - HSCC 2000, volume 1790 of Lecture Notes in Computer Science, 20–31. Springer-Verlag.

Astrom, K. and Wittenmark, B. (2008). Adaptive Control. Dover, 2nd edition.

Bechlioulis, C.P. and Rovithakis, G.A. (2008). Robust adaptive control of feedback linearizable MIMO nonlinear systems with prescribed performance. IEEE Trans. Automat. Contr., 53(9), 2090–2099.

Bechlioulis, C.P. and Rovithakis, G.A. (2009). Brief paper: Adaptive control with guaranteed transient and steady state tracking error bounds for strict feedback systems. Automatica, 45(2), 532–538.

Bechlioulis, C.P. and Rovithakis, G.A. (2014). A low-complexity global approximation-free control scheme with prescribed performance for unknown pure feedback systems. Automatica, 50(4), 1217–1226.

Bertsimas, D. and Tsitsiklis, J. (1997). Introduction to Linear Optimization. Athena Scientific, 1st edition.

Bitsoris, G. (1988). Positively invariant polyhedral sets of discrete-time linear systems. International Journal of Control, 47(6), 1713–1726.

Blanchini, F. and Miani, S. (2015). Set-Theoretic Methods in Control. Systems & Control: Foundations & Applications. Birkhäuser Basel, 2nd edition.

Cao, C. and Hovakimyan, N. (2006). Design and analysis of a novel L1 adaptive controller, part II: Guaranteed transient performance. In 2006 American Control Conference, 3403–3408.

Chen, M. and Tomlin, C. (2015). Exact and efficient Hamilton-Jacobi reachability for decoupled systems. arXiv:1503.05933.

Chutinan, A. and Krogh, B.H. (1998). Computing polyhedral approximations to flow pipes for dynamic systems. In Proceedings of the 37th IEEE Conference on Decision and Control, volume 2, 2089–2094.

Gao, Y., Lygeros, J., and Quincampoix, M. (2007). On the reachability problem for uncertain hybrid systems. IEEE Trans. on Automatic Control, 52(9), 1572–1586.

Girard, A., Guernic, C.L., and Maler, O. (2006). Efficient Computation of Reachable Sets of Linear Time-Invariant Systems with Inputs, 257–271. In Hybrid Systems: Computation and Control: 9th International Workshop, HSCC, Santa Barbara, CA, USA, March 29–31, 2006, Proceedings. Springer Berlin Heidelberg.

Girard, A. (2005). Reachability of Uncertain Linear Systems Using Zonotopes, 291–305. Springer Berlin Heidelberg.

Hopfe, N., Ilchmann, A., and Ryan, E.P. (2010a). Funnel control with saturation: Linear MIMO systems. IEEE Transactions on Automatic Control, 55(2), 532–538.

Hopfe, N., Ilchmann, A., and Ryan, E.P. (2010b). Funnel control with saturation: Nonlinear SISO systems. IEEE Transactions on Automatic Control, 55(9), 2177–2182.

Ilchmann, A., Ryan, E., and Sangwin, C.J. (2002). Tracking with prescribed transient behaviour. ESAIM: Control, Optimisation and Calculus of Variations, 7, 471–493.

Ilchmann, A. and Trenn, S. (2004). Input constrained funnel control with applications to chemical reactor models. Systems & Control Letters, 58(5), 361–375.

Kurzhanski, A. and Varaiya, P. (2001). Dynamic optimization for reachability problems. Journal of Optimization Theory and Applications, 108(2), 227–251.

Kurzhanski, A.B. and Varaiya, P. (2000). Ellipsoidal techniques for reachability analysis. In Hybrid Systems: Computation and Control, volume 1790, 202–214. Springer Berlin Heidelberg.

Liberzon, D. and Trenn, S. (2013). The bang-bang funnel controller for uncertain nonlinear systems with arbitrary relative degree. IEEE Transactions on Automatic Control, 58(12), 3126–3141.

Maler, O. (2014). Algorithmic verification of continuous and hybrid systems. arXiv:1403.0952.

Mayne, D., Rawlings, J., Rao, C., and Scokaert, P. (2000). Constrained model predictive control: Stability and optimality. Automatica, 36(6), 789–814.

Miller, D.E. and Davison, E.J. (1991). An adaptive controller which provides an arbitrarily good transient and steady-state response. IEEE Transactions on Automatic Control, 36(1), 68–81.

Krstić, M., Kokotović, P.V., and Kanellakopoulos, I. (1993). Transient-performance improvement with a new class of adaptive controllers. Systems & Control Letters, 21(6), 451–461.

Mitchell, I., Bayen, A., and Tomlin, C. (2005). A time-dependent Hamilton-Jacobi formulation of reachable sets for continuous dynamic games. IEEE Trans. on Automatic Control, 50(7), 947–957.

Yan, L., Liu, H., and Sun, X. (2006). A variable structure MRAC with expected transient and steady-state performance. Automatica, 42(5), 805–813.

Zang, Z. and Bitmead, R. (1994). Transient bounds for adaptive control systems. IEEE Transactions on Automatic Control, 39(1), 451–461.
