Improved Bounds Computation for Probabilistic μ

Xiaoyun Zhu
[email protected]
Control & Dynamical Systems 107-81
California Institute of Technology
Pasadena, CA 91125

Abstract

Probabilistic μ is a direct extension of the structured singular value μ from the worst-case robustness analysis to the probabilistic framework. Instead of searching for the maximum of a function, as in the μ computation, computing probabilistic μ involves approximating the level surface of the function in the parameter space, which is even more complex. In particular, providing a sufficiently tight upper bound in the tail of the distribution is extremely difficult. Previous attempts to use the standard branch and bound (B&B) technique with axially aligned cuts failed to break the intractability of the problem. The use of a single linear cut gives exact solutions to rank-one problems, and outperforms the naive B&B algorithm for problems that are close to rank-one, but for general random problems multiple linear cuts are required. The approach presented in this paper is a mixture of the linear cut and the B&B algorithms, which combines the advantages of both methods. It greatly improves the probabilistic μ upper bound on average, as shown through numerical experiments. The tightness of the bounds can be tested by comparing the hard upper bounds with the soft bounds provided by Monte-Carlo simulations.

1 Introduction

Robustness analysis is essentially uncertainty management for complex systems. Uncertainty always exists between the real physical system and the model people use to predict its behavior. With a deterministic model, one's major concern is the worst thing that can happen to the system over some set of uncertainty. If a probabilistic description of the uncertainty is given, the question of common interest is the probability of bad outcomes. The structured singular value μ is a very useful tool for robustness analysis. It answers the worst-case question mathematically by assuming that the uncertainty is structured and bounded. The classes of uncertainty included in the μ framework are real parameter variations and unmodeled dynamics. Although the computation of μ with real uncertainty is NP-hard in general, researchers have developed many efficient algorithms to compute upper and lower bounds for μ (see [6] and the references therein). In particular, the branch and bound (B&B) method has been proven to provide sufficiently tight bounds with a reasonable amount of computation for average problems ([4]).

Probabilistic analysis is typically done by Monte-Carlo simulations. Suppose a system's output or performance index is some complex function of a set of parameters whose joint probability distribution is given; the Monte-Carlo method produces estimates of the resulting distribution of the function by generating random samples in the parameter space and evaluating the function on these samples. Confidence intervals are given with the results, with no guarantee that the true probability will fall into the interval. When applied to probabilistic risk assessment, where the probability of rare events with high consequences needs to be estimated with high confidence levels, this approach fails to give accurate answers without an excessive number of samples.

In the search for an alternative technique for assessing the probability of rare events, the probabilistic μ was first proposed in [8], and further defined in [5] as an extension of the worst-case μ framework to the probabilistic robustness analysis. So far the study has been focused on the simplest possible problem formulation, i.e., the purely probabilistic case. Even with this setting, the exact computation of probabilistic μ is intractable in the worst case. The direct application of the standard branch and bound algorithm for computing a hard upper bound on probabilistic μ does not work as well as it does in the worst-case computation. Even for rank-one problems, which are trivial for the worst-case μ, the B&B algorithm fails to give reasonably good bounds for the probabilistic μ. Simple analysis tells us that the key issue is how to effectively approximate the level surface of the function.
For example, with rank-one problems, the level surface is a hyperplane in the parameter space. With axially aligned cuts in the standard B&B algorithm, the number of branches required to reach a certain level of accuracy grows exponentially with the number of parameters. However, a linear cut along this hyperplane solves the problem immediately. This is the motivation for studying μ with linear cuts ([9]) and for applying the linear cut method to computing bounds for probabilistic μ ([7]). The probabilistic μ upper bound obtained by the linear cut method is exact for rank-one problems. For problems that are close to rank-one, the linear cut algorithm outperforms the B&B algorithm in getting tighter bounds. However, for more general random matrices, a single linear cut is usually not sufficient to separate the region of bad events from the rest of the parameter set. Considering that multiple linear cuts are much harder to track than a series of axially aligned cuts, it makes sense to combine the linear cut with the naive branching with axially aligned cuts to produce a more intelligent algorithm for computing bounds on probabilistic μ. This is exactly the idea behind the new computation scheme presented in this paper.

There are some other questions relevant to probabilistic robustness analysis: for example, how to motivate the probability distribution on the uncertainty, and how to obtain the right probabilistic description if all you have is a collection of data. There is a rich literature on these problems in statistics and in reliability engineering ([1]). Other researchers have studied the worst-case probability distribution for uncertain parameters ([2]). All these are separate issues from computing bounds for probabilistic μ with a given distribution of the parameters, and therefore will not be addressed here.

In the next section the concept of the probabilistic μ is reviewed. In Sections 3 and 4 the standard branch and bound algorithm and the linear cut method are explained, their performance on numerical examples is demonstrated, and their relative advantages and disadvantages are discussed. A new algorithm combining the above two approaches is presented, and the numerical experiment results are shown, in Section 5. Some concluding remarks are given in Section 6.

2 Probabilistic μ

2.1 Notation

The notation in this paper is fairly standard. If M is a matrix, M^T denotes the transpose of M. The n × n identity matrix is denoted by I_n, and I is used when the dimension is obvious. diag[A1, ..., An] is a block diagonal matrix with the Ai's on the diagonal. For x ∈ R^n, ‖x‖ denotes the vector 2-norm, while for M ∈ R^{n×n}, ‖M‖ stands for the induced 2-norm, i.e. ‖M‖ = σ̄(M), where σ̄(M) is the maximum singular value of the matrix M. ρ_r(M) denotes the maximum positive real eigenvalue when M is square, with ρ_r(M) = −∞ if M has no real eigenvalues. det(M) is the determinant of M.

For a particular uncertainty structure 𝚫, B_𝚫 := {Δ ∈ 𝚫 : ‖Δ‖ ≤ 1}. B(δ̄0, r) is used to denote the axially aligned hyperrectangle centered at δ̄0, where the vector r, with ri > 0, gives the half lengths of the sides. The volume of a region R in R^n is denoted by V(R).

2.2 The Worst-Case μ Framework

Figure 1: Standard Interconnected System

The structured singular value μ deals with the well-posedness of the standard interconnected system in Figure 1, where M represents the nominal system and Δ represents the uncertainty in the system. The above system is well-posed if there are no nontrivial solutions to the loop equations. The standard μ framework involves ∞-norm bounded Δ with block diagonal structure, including repeated real scalars, repeated complex scalars and complex full blocks. The worst-case nature of the μ framework lies in the fact that the system in Figure 1 is guaranteed to be well-posed for all Δ ∈ B_𝚫 if and only if μ(M) < 1.

The uncertainty structure we consider in this paper is much simplified: it only contains real scalars representing parametric uncertainties. The reason will become clear later when the probabilistic μ is concerned. So the structure for 𝚫 is

𝚫 := {diag[δ1, ..., δn] : δi ∈ R}.    (1)

Here we assume that the δi's are non-repeated to further simplify the notation; the results should be extendable to the repeated case.

Definition 1 For M ∈ R^{n×n}, the structured singular value μ is defined as

μ_𝚫(M) := (min{‖Δ‖ : Δ ∈ 𝚫, det(I − ΔM) = 0})^{-1},

unless det(I − ΔM) ≠ 0 for all Δ ∈ 𝚫, in which case μ_𝚫(M) := 0.

An equivalent definition for μ is given by the following equation:

μ_𝚫(M) = max_{Δ ∈ B_𝚫} ρ_r(ΔM).    (2)

This equation also provides a lower bound for μ via a non-convex optimization over the unit box B_𝚫, and in this particular case the maximum is always achieved at a vertex of B_𝚫. On the other hand, an upper bound for μ can be computed by a convex optimization of the function σ̄(DMD^{-1}) over the set D = {D : D = diag[d1, d2, ..., dn] > 0}. Equivalently,

μ̄_𝚫(M) = inf_{D ∈ D} {β > 0 : M^T D M − β² D < 0}.    (3)
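Since the maximum in (2) is attained at a vertex of B_𝚫, the lower bound can be checked directly by brute force for small n; a minimal sketch (the example matrix is our own illustrative assumption, and the vertex enumeration is exponential in n):

```python
import numpy as np
from itertools import product

def rho_r(A):
    """Largest positive real eigenvalue of A; -inf if A has none."""
    lam = np.linalg.eigvals(A)
    real = lam.real[np.abs(lam.imag) < 1e-9]
    pos = real[real > 0]
    return pos.max() if pos.size else -np.inf

def mu_lower_bound(M):
    """Lower bound of (2) by vertex enumeration:
    max over Delta = diag(d), d in {-1, +1}^n, of rho_r(Delta M)."""
    n = M.shape[0]
    best = -np.inf
    for d in product([-1.0, 1.0], repeat=n):
        best = max(best, rho_r(np.diag(d) @ M))
    return best

M = np.array([[0.5, 0.2], [0.1, 0.3]])
print(mu_lower_bound(M))
```

For this 2 × 2 example the maximizing vertex is δ = (1, 1), so the bound coincides with ρ_r(M) itself.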

Unlike the lower bound, the optimum in (3) is not necessarily equal to μ, so there are always problems for which big gaps exist between the upper bounds and the lower bounds; this motivated the use of B&B methods to refine the bounds.

2.3 The Probabilistic μ

An early investigation of the probabilistic μ was done in [8], in which ρ_r(ΔM) was chosen as the performance measure of the system with a given Δ. Assuming that the probability distribution of Δ is given, the objective is to compute the resulting complementary cumulative distribution of the performance function:

P(M, γ) = Pr{ρ_r(ΔM) ≥ γ}.    (4)

A more general framework for the probabilistic μ was described in [5], where the uncertainty Δ is composed of two parts:

Δ = diag[Δp, Δw],    (5)

where Δp is the part of the uncertainty that admits a probabilistic description, and Δw is the rest, where only the worst case is of concern. The most natural way to think of this separation is that Δp contains the real parametric uncertainty blocks, while Δw consists of the stability and performance blocks. For example, if Δw = z^{-1} I for a discrete-time system, the probability of the system achieving stability is given by

Pr(μ_{Δw}(M ⋆ Δp) < 1),    (6)

where ⋆ denotes the star product. And if Δw = diag[z^{-1} I, Δf], where Δf is a complex full block and ‖Δf‖ ≤ 1, then the probability of the system being stable and achieving the performance level γ is given by

Pr(μ_{Δw}(diag[I, γ^{-1} I](M ⋆ Δp)) < 1).    (7)

If Monte-Carlo simulation is to be used to estimate the probability in (6) or (7), a μ calculation problem is involved for each sample Δp. When a large number of samples is required, the overall computation is hardly tractable. In the meantime, it is not clear yet how to compute hard bounds on these probabilities. As a starting point, we will focus on the purely probabilistic case, meaning Δ = Δp. And since it is hard to interpret probability distributions on unmodeled dynamics, we assume that 𝚫p contains only real parametric uncertainty, as defined in (1). To define probabilistic μ in this purely real case, we first need a μ-type function ν(M, Δ) consistent with the general setting. Again we choose ν(M, Δ) = ρ_r(ΔM) as in [8].

Definition 2 The purely probabilistic μ can be defined as P(M, 𝚫, γ) = α if Pr(ν(M, Δ) ≥ γ) = α.

In the real computation, we usually pick γ first and compute the corresponding α = P(γ). Note that γ should be less than μ(M), since μ(M) = max_{Δ ∈ B_𝚫} ρ_r(ΔM). In general, P(γ) = ∫_{Rb} p(δ) dδ, where p(δ) is the probability density function of δ, so ∫_{B_𝚫} p(δ) dδ = 1. Rb is the "region of bad events", defined by Rb = {δ : Δ ∈ B_𝚫, ν(M, Δ) ≥ γ} ⊆ B_𝚫. For simplicity, we assume that δ is uniformly distributed in B_𝚫, in which case P(γ) = V(Rb)/V(B_𝚫). Similarly, we can define the "region of good events" Rg = {δ : Δ ∈ B_𝚫, ν(M, Δ) < γ}. Since V(Rg) = V(B_𝚫) − V(Rb), all we need to do is compute the volume of either Rb or Rg. So the critical issue here is to characterize the separation of these two regions, i.e., the level surface of the function, L = {δ : Δ ∈ B_𝚫, ν(M, Δ) = γ}.

The definition of the purely probabilistic μ in [5] is slightly different from the definition used here. It is based on the invertibility of the matrix I − ΔM: if the boundary of singularity is defined as S = {δ : Δ ∈ B_𝚫, det(I − ΔM) = 0}, then S divides B_𝚫 into two subsets, and the volume of one of them corresponds to the probabilistic μ. However, when γ is fairly close to μ(M), L and S are actually the same, which means the two separations are equivalent¹. Since our main concern is the probability of rare events, we assume that γ is close to the worst case throughout this paper, so we will use S only from now on and refer to it as the "separating surface" between Rb and Rg.

Exact computation of the probabilistic μ is intractable in the worst case, just like the real and mixed μ computation. And since we are focused on the tail of the distribution, Monte-Carlo simulation will be highly inefficient, since a large number of samples will be wasted on benign events. However, choosing ν(M, Δ) to be the same function as in the μ definition in (2) has made it possible to apply the μ upper bound in (3) and

¹This is not necessarily true when γ is small.

the branch and bound technique to compute an upper bound on P(γ) for a given γ, which will be reviewed in the next section.
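As a concrete illustration of the soft side of this picture, a plain Monte-Carlo estimate of P(γ) under the uniform distribution on B_𝚫 can be sketched as follows; the example matrix and sample sizes are our own illustrative assumptions, and the sketch also shows why the tail is hard: few samples ever land in Rb.

```python
import numpy as np

def rho_r(A):
    """Largest positive real eigenvalue of A; -inf if A has none."""
    lam = np.linalg.eigvals(A)
    real = lam.real[np.abs(lam.imag) < 1e-9]
    pos = real[real > 0]
    return pos.max() if pos.size else -np.inf

def p_gamma_soft(M, gamma, N=20000, rng=None):
    """Plain Monte-Carlo estimate of P(gamma) = Pr{rho_r(Delta M) >= gamma}
    for delta uniform on the box [-1, 1]^n (a soft bound, no guarantee)."""
    rng = np.random.default_rng(rng)
    n = M.shape[0]
    hits = 0
    for _ in range(N):
        delta = rng.uniform(-1.0, 1.0, n)   # one uniform sample in the box
        if rho_r(np.diag(delta) @ M) >= gamma:
            hits += 1
    return hits / N

M = np.array([[0.5, 0.2], [0.1, 0.3]])
print(p_gamma_soft(M, gamma=0.9 * 0.5732, rng=0))
```

The closer γ is pushed toward μ(M), the smaller the hit rate, and the more samples are needed for a given relative accuracy.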

3 The Branch and Bound (BNB) Algorithm

Branch and bound is a general optimization technique for problems whose bounds depend on the domain of the optimization. Its application to the bounds computation for the standard μ has been proven successful. Although, due to the NP-hardness of the problem, exponential growth of the computation is unavoidable in the worst situation, the typical performance of even a naive branching scheme is good enough for engineering purposes ([4]).

For the probabilistic μ defined in the previous section, it is straightforward to apply the naive branch and bound algorithm to compute an upper bound on P(γ). The following algorithm was used in [8]: divide B_𝚫 into a collection of smaller regions by axially aligned cuts, compute the μ upper bound on each region, and remove a region from the collection if its upper bound is below γ, since it has no contribution to the target probability; the total volume of the remaining regions is then a hard upper bound (hub) on P(γ). Here "hard" means guaranteed. In the meantime, a soft bound (sb) was also presented to test the tightness of the hub by doing Monte-Carlo sampling in those potentially "bad" regions. By getting rid of the parameter regions that are guaranteed to be "good", this method can substantially increase the effective sample size and therefore achieve a better estimate with a higher confidence level. This algorithm is called the BNB algorithm.

To compare this algorithm with the other methods that will be presented later, the BNB algorithm is tested on 80 random matrices generated in MATLAB, and the results are shown in Figure 2 and Figure 3. The threshold value for the function is set at γ = α μ(M); when μ(M) is not available, the lower bound for μ(M) is used instead. We set α = 0.9 so that γ is close to the worst case. For each problem the parameter set was branched 200 times. The ratio of the soft bound to the hard upper bound is denoted by rt. The subscript 'BNB' denotes the specific method used to compute these bounds. These notations are used for the results obtained by the other methods as well.

Figure 2: hub_BNB ('x') and sb_BNB ('o') for 80 random matrices of size [2 4 6 8] on a log scale, 20 matrices for each size. The hub_BNB's are slightly shifted to the left for ease of comparison.

Figure 3: rt_BNB = sb_BNB/hub_BNB for the same 80 matrices. Each 'o' corresponds to one problem. For n = 8 two problems have rt_BNB = 0 and therefore are not displayed.

Figure 2 shows that sb_BNB decreases exponentially with the dimension of the problem, while the average hub_BNB hardly varies with n. The gap between the two bounds grows very quickly as the problem gets bigger: for n = 6 and n = 8, hub_BNB and sb_BNB are orders of magnitude apart. Note that for n = 8 only 18 data points are displayed, since sb_BNB = 0 for the other two problems, which means none of the samples hit the bad region Rb. In Figure 3, the ratio of sb_BNB to hub_BNB decays exponentially with n. The average rt_BNB is 0.8001, 0.1986, 0.0090 and 0.0004 for size 2, 4, 6 and 8 respectively, indicating the fast performance degradation as the dimension grows.

Analysis was done in [8] to show that this decay of performance exists even in rank-one problems, which are trivial for the worst-case μ computation. The characteristic of rank-one problems is that V(Rb) tends to zero factorially in the number of parameters, which means the bad events are extremely rare for high dimensional problems. Moreover, the separating surface S becomes a hyperplane. Essentially what the BNB algorithm does is grid the surface using axially aligned cuts. It is conceivable that this is not an intelligent way to approximate a hyperplane. Instead, a single linear cut along the hyperplane isolates Rb immediately. This is the motivation for the linear cut method presented in the next section.
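A minimal sketch of this pruning scheme follows. To stay self-contained it replaces the μ upper bound of (3) with the cruder, but still hard, norm bound ρ_r(ΔM) ≤ ‖Δ‖·‖M‖; the example matrix, threshold, and splitting rule are our own illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def box_ub(center, radius, m_norm):
    # Hard bound on rho_r(diag(delta) M) over the box |delta_i - center_i| <= radius_i:
    # rho_r(Delta M) <= ||Delta|| * ||M||, and for diagonal real Delta,
    # ||Delta|| = max_i |delta_i| <= max_i (|center_i| + radius_i).
    return np.max(np.abs(center) + radius) * m_norm

def bnb_hub(M, gamma, max_splits=200):
    """Naive BNB hard upper bound on P(gamma) with axially aligned cuts:
    prune boxes whose upper bound is below gamma; the remaining volume
    fraction of B(0, 1) over-estimates P(gamma)."""
    n = M.shape[0]
    m_norm = np.linalg.norm(M, 2)
    boxes = [(np.zeros(n), np.ones(n))]            # start from B(0, 1) = [-1, 1]^n
    for _ in range(max_splits):
        if not boxes:
            break
        c, r = boxes.pop(0)                        # branch the oldest box
        i = int(np.argmax(r))                      # cut along the longest side
        for sgn in (-1.0, 1.0):
            c2, r2 = c.copy(), r.copy()
            r2[i] /= 2.0
            c2[i] += sgn * r2[i]
            if box_ub(c2, r2, m_norm) >= gamma:    # keep only potentially bad boxes
                boxes.append((c2, r2))
    vol = sum(np.prod(2.0 * r) for _, r in boxes)
    return vol / 2.0 ** n                          # volume fraction of the unit box

M = np.array([[0.5, 0.2], [0.1, 0.3]])
print(bnb_hub(M, gamma=0.52))
```

With axially aligned cuts every kept box is a hyperrectangle, so volumes and sampling are trivial; the inefficiency shows up in how many boxes are needed to hug a slanted separating surface.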

4 The Linear Cut (LC) Algorithm

The need to compute the probabilistic μ upper bound for rank-one problems motivated the research of μ with uncertainty in more exotic regions than the standard ∞-norm bounded set, for example spherical μ, elliptical μ ([3]) and μ with linear cuts ([9]). In particular, μ with linear cuts deals with real parametric uncertainty under the linear constraint |c^T δ| ≤ 1, where δ = [δ1, δ2, ..., δn]^T is the vector of the parameters, and c = [c1, c2, ..., cn]^T ∈ R^n is the constant coefficient vector with ‖c‖ = 1.

Definition 3 For M ∈ R^{n×n}, μ with linear cuts is defined as

μ_{𝚫,lc}(M) := (min{|c^T δ| : Δ ∈ B_𝚫, det(I − ΔM) = 0})^{-1},

unless det(I − ΔM) ≠ 0 for all Δ ∈ B_𝚫, in which case μ_{𝚫,lc}(M) := 0.

An implicit method was presented in [9] to convert the linear constraint on δ into an algebraic constraint on the signals, so that the implicit μ upper bound for an augmented system is also an upper bound for μ with linear cuts. This upper bound can be directly applied to compute a hard upper bound on P(γ), as shown in [7]. The idea is as follows. Find the worst vertex Δ_P of B_𝚫, where μ(M) = ν(M, Δ_P). Compute the gradient g of the function ν(M, Δ) at Δ_P and let c = −g/‖g‖. Then use c as the coefficient vector in the linear constraint and compute an upper bound for μ_{𝚫,lc}(M). This upper bound determines a hyperplane which separates a corner from the rest of B_𝚫. Since this corner contains Rb, its volume gives a hard upper bound on P(γ). Again, by Monte-Carlo sampling in this corner, a soft bound can be estimated to compare with the hard upper bound. This algorithm is called the LC algorithm.

For rank-one problems, the upper bounds achieved on P(γ) with the LC algorithm are exact. For random problems that are close to rank-one, the LC algorithm outperforms the BNB algorithm in getting tighter bounds for P(γ). However, since the LC algorithm implements only a single linear cut, it is not suitable for general random matrices, for which Rb tends to spread out in the parameter space. This is verified by testing the LC algorithm on the same 80 random matrices, again with α = 0.9. Except for n = 2, there are always problems for which the LC algorithm fails to get any hard bounds, because a single linear cut cannot separate Rb from Rg. The number of problems for which the algorithm is able to compute a hub_LC is 20, 14, 4 and 4 for size 2, 4, 6 and 8 respectively, and the average rt_LC for these problems is 0.9265, 0.3256, 0.0120 and 0.0021 for the four sizes. Even for these "easier" problems the gap between hub_LC and sb_LC increases very quickly with the problem size. Although random matrices may not be good representatives of real physical systems, an algorithm that suits more than just near-rank-one problems is desirable; this is the focus of the next section.

5 A Combination: The LC-BNB Algorithm

In the previous two sections we have reviewed the two methods that have been used to compute the hard upper bound and the soft bound for the probabilistic μ. The BNB algorithm is quite straightforward to understand and implement. The resulting branches are hyperrectangles in R^n: it is easy to keep track of them, to compute their volumes, and to generate random samples in them. But it is not an efficient way to approximate a complex surface in a high dimensional space unless the surface happens to be aligned with the axes. On the other hand, a linear cut approximates the separating surface S with a hyperplane, which is more efficient than the axially aligned cuts if the problem is close to rank-one. However, for problems that are far from rank-one, either multiple bad corners exist or the shape of S is complicated, and multiple linear cuts would be required; but one linear cut on top of another is hard to track analytically. Moreover, before implementing the linear cut, the worst vertex needs to be identified, which involves a worst-case μ computation, and to get a sufficiently accurate answer a branch and bound algorithm is usually carried out. Although the BNB algorithm alone is not enough to provide a good upper bound on P(γ), it does reveal some information about the distribution of the bad events in the parameter space during the branching process. Therefore, it is natural to combine the LC algorithm with the BNB algorithm to compute bounds for P(γ).

The idea is to take advantage of both algorithms: use a simple branch and bound scheme with axially aligned cuts to compute bounds on the worst-case μ and divide the parameter space into smaller regions that separate bad corners from each other, while using the LC algorithm to compute hub and sb on P(γ) for each region. There are two objectives: first, the global upper bound (U) and lower bound (L) on μ(M) should be close enough, i.e., (U − L)/U ≤ TOL; second, the ratio of sb to hub on P(γ) should be above RTOBJ. Keep branching until the goal is reached, or the step number reaches MAXSTEP. The resulting algorithm is called the LC-BNB algorithm. The parameter values used are:

TOL = 0.01, RTOBJ = 0.95, MAXSTEP = 200, α = 0.9.

1. Initialization: Let B0 = B_𝚫 = B(0, 1), v0 = 1.

2. Compute an upper bound (ub0) and a lower bound (lb0) for μ(M) on B0 without branching, which also gives a candidate for the worst vertex Δ_P0. Let U = ub0, L = lb0, and γ = αL.

3. Call the LC algorithm to compute hub0 and sb0 on P(γ). If rt0 = sb0/hub0 ≥ RTOBJ and (U − L)/U ≤ TOL, stop; otherwise, define the set of branches S_B = {B0}, step = 0.

4. Branching: step = step + 1. If (U − L)/U > TOL, pick B ∈ S_B with the maximum ub; otherwise, pick B ∈ S_B with the minimum rt/(hub · v). Choose a parameter to cut, and divide B in half into B1 = B(δ̄1, r1) and B2 = B(δ̄2, r2). Make sure that ν(M, δ̄1) < γ and ν(M, δ̄2) < γ; if not, change the location of the cut until this criterion is satisfied. Compute vi = V(Bi), i = 1, 2. S_B = S_B \ {B}.

5. For i = 1, 2, compute μ upper and lower bounds in Bi to get ubi, lbi, Δ_Pi. Update U, L, and γ.

6. For i = 1, 2, if ubi < γ, throw away Bi; otherwise, call the LC algorithm to compute hubi and sbi in Bi. If hubi = 0, throw away Bi; otherwise, set rti = sbi/hubi and S_B = S_B ∪ {Bi}.

7. Compute the current overall ratio of sb to hub: rt = (Σi sbi vi)/(Σi hubi vi). If rt ≥ RTOBJ and (U − L)/U ≤ TOL, go to Step 8; otherwise, go to Step 4.

8. Compute the overall hard upper bound hub_LC-BNB = Σi hubi vi and determine the total sample size N. For each Bi ∈ S_B, sample the corner isolated by the linear cut with sample size npi = hubi · vi · N and calculate sbi; the overall soft bound is then sb_LC-BNB = Σi sbi vi.

Some technical details are left out since they are not crucial elements of the algorithm. One thing that needs to be pointed out is that during the branching process, each call to the LC algorithm involves a Monte-Carlo sampling in the corresponding branch, which is just an intermediate step to estimate sb for that branch so that the next step can pick the particularly problematic branch to cut. A fixed small sample size np = 100 is used for this estimation in each step so that the computation time is reasonable. In the end, resampling is required: to make the samples uniformly distributed in B_𝚫, the sample size for each isolated region should be proportional to its volume, as implemented in Step 8. And since a big portion of the parameter set has been excluded by the hard bound computation, a relatively small sample size in the potentially bad regions is equivalent to an effective sample size several orders of magnitude larger in the whole set, which makes it possible to track the probability of extremely rare events.

The test results of the LC-BNB algorithm on the same 80 random matrices as in Section 3 are displayed in Figure 4 and Figure 5. On all the problems the LC-BNB algorithm is able to compute hub and sb. The average number of steps taken is 0.5, 64.3, 193.9 and 200 for size 2, 4, 6 and 8 respectively. For n = 2 the mean step number is smaller than 1, since for most problems a single linear cut is enough and no branching is required, while for n = 8 the ratio of sb to hub never achieves the objective, so for every problem the step number reaches the maximum, which is 200.

Figure 4: hub_LC-BNB ('x') and sb_LC-BNB ('o') for the same 80 random matrices as in Figure 2. The hub_LC-BNB's are slightly shifted to the left for ease of comparison.

Figure 5: rt_LC-BNB = sb_LC-BNB/hub_LC-BNB for the same 80 matrices. Each 'o' corresponds to one problem.

In Figure 4, hub_LC-BNB follows quite closely the decay of sb_LC-BNB with the dimension, which means the algorithm is able to identify most of the benign regions and exclude them from the sampling. Figure 5 shows the ratio of the two bounds for the different sizes. The average rt_LC-BNB is 0.9938, 0.9306, 0.6402 and 0.2471 for size 2, 4, 6 and 8 respectively. For n = 2 all the problems achieve a ratio higher than 0.99 except one, for which the ratio is 0.94. For n = 8 all the problems achieve a ratio higher than 0.1 except for 3 of them. So the performance of the algorithm does degrade as n gets higher, but the decay is much slower than for the previous two methods. To see this, the average ratios achieved with the three different methods are displayed in Figure 6; note that each average is calculated over those problems for which the corresponding rt is nonzero. It is easy to see that the performance of the LC-BNB algorithm is much better than that of the other two, both in the absolute value of the ratio of sb to hub and in the relative decay of the ratio with the dimension of the problem.

Figure 6: Average rt for size [2 4 6 8] with the three different methods ('*' LC-BNB, '+' LC, 'o' BNB).
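One detail worth making concrete is the hard bound produced by a single linear cut: for uniformly distributed δ it is the volume fraction of the box corner sliced off by the cutting hyperplane, which can be computed exactly by inclusion-exclusion rather than sampled. The sketch below assumes a cut of the hypothetical form c·δ ≥ t; the cut direction and level are illustrative values, not taken from the experiments.

```python
import numpy as np
from itertools import combinations
from math import factorial, prod

def box_halfspace_frac(a, b):
    """Volume fraction of {x in [0, 1]^n : a.x <= b} for a_i > 0,
    via the inclusion-exclusion formula for a box-halfspace intersection."""
    a = np.asarray(a, dtype=float)
    n = a.size
    total = 0.0
    for k in range(n + 1):
        for S in combinations(range(n), k):
            t = b - a[list(S)].sum()
            if t > 0:
                total += (-1) ** k * t ** n
    return total / (factorial(n) * prod(a))

def corner_frac(c, t):
    """Fraction of the box [-1, 1]^n cut off by the hyperplane c.delta >= t.
    Signs of c are absorbed by the symmetry of the box; the affine map
    delta = 2x - 1 reduces the problem to the unit box [0, 1]^n."""
    c = np.abs(np.asarray(c, dtype=float))
    b = (t + c.sum()) / 2.0
    return 1.0 - box_halfspace_frac(c, b)

# hypothetical cut in 3 parameters: delta_1 + delta_2 + delta_3 >= 1.5
print(corner_frac([1.0, 1.0, 1.0], 1.5))
```

The sum over subsets makes this exponential in n, but for the moderate n considered here it is cheap, and it gives the corner volume exactly, unlike a Monte-Carlo estimate.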

6 Concluding Remarks

This paper develops a new algorithm for computing the hard upper bound and the soft bound for the probabilistic μ, based on two existing algorithms: the branch and bound algorithm and the linear cut algorithm. It combines the advantages of both to produce a more efficient computation of the bounds. The axially aligned branching scheme divides the parameter set into smaller regions, improving the worst-case μ and the probabilistic μ computation at the same time. Meanwhile, if the region of bad events Rb spreads out in the whole parameter space, it will be contained in different branches so that one does not interfere with another, which makes it possible for a single linear cut to isolate the bad corner in each branch. If this is not feasible for some branch, the branching simply continues, trying to turn the complex branch into easy ones.

As demonstrated by the numerical examples, this algorithm provides a big improvement in the tightness of the bounds for the probabilistic μ. However, the ratio of the soft bound to the hard upper bound does decay with the dimension of the problem, and it is foreseeable that the performance will become even worse as the dimension keeps increasing. To get tolerable performance for larger problems, much greater computational effort will be involved. It remains an open question how to deal with these higher dimensional problems with a reasonable growth rate of the computation. Another interesting direction is how to extend this type of computation to the mixed probabilistic μ case, for instance probabilistic performance with guaranteed stability. These will be the focus of our future research.

References

[1] G. Apostolakis. The concept of probability in safety assessments of technological systems. Science, volume 250, pages 1359-1364, 1990.
[2] B. R. Barmish and C. M. Lagoa. The uniform distribution: a rigorous justification for its use in robustness analysis. Mathematics of Control, Signals and Systems, volume 10, pages 203-222, 1997.
[3] S. Khatri and P. Parrilo. Spherical μ. In Proceedings of the American Control Conference, pages 2314-2318, 1998.
[4] M. P. Newlin and P. M. Young. Mixed μ problems and branch and bound techniques. In Proceedings of the IEEE Conference on Decision and Control, pages 3175-3180, 1992.
[5] S. Khatri and P. Parrilo. Guaranteed bounds for probabilistic μ. In Proceedings of the IEEE Conference on Decision and Control, 1998.
[6] P. M. Young, M. P. Newlin, and J. C. Doyle. Let's get real. In Robust Control Theory, IMA Proceedings, volume 66, pages 143-173, 1995.
[7] X. Zhu. Probabilistic μ upper bound using linear cuts. In Proceedings of the 14th IFAC World Congress, 1999.
[8] X. Zhu, Y. Huang, and J. C. Doyle. Soft vs. hard bounds in probabilistic robustness analysis. In Proceedings of the IEEE Conference on Decision and Control, pages 3412-3417, 1996.
[9] X. Zhu, S. Khatri, and P. Parrilo. μ with linear cuts: upper bound computation. In Proceedings of the American Control Conference, pages 2370-2374, 1999.