Inference-Based Sensitivity Analysis for Mixed Integer/Linear Programming

M. W. Dawande and J. N. Hooker

Graduate School of Industrial Administration Carnegie Mellon University, Pittsburgh, PA 15213 USA

December 1996, Revised February 1998

Abstract

A new method of sensitivity analysis for mixed integer/linear programming (MILP) is derived from the idea of inference duality. The inference dual of an optimization problem asks how the optimal value can be deduced from the constraints. In MILP, a deduction based on the resolution method of theorem proving can be obtained from the branch-and-cut tree that solves the primal problem. One can then investigate which perturbations of the problem leave this proof intact. On this basis it is shown that, in a minimization problem, any perturbation that satisfies a certain system of linear inequalities will reduce the optimal value no more than a prespecified amount. One can also give an upper bound on the increase in the optimal value that results from a given perturbation. The method is illustrated on two realistic problems.

1 Introduction

A connection between sensitivity analysis and duality has long been recognized. Solution of the linear programming dual, for example, reveals the sensitivity of the optimal value to perturbations in the right-hand sides of the primal constraints. Linear programming duality can be viewed as a special case of inference duality, which provides a general approach to sensitivity analysis. In particular, it provides a practical method of sensitivity analysis for integer and mixed integer programming.

This research is partially supported by U.S. Office of Naval Research Grant N00014-95-1-0517, U.S. Air Force Office of Scientific Research Grant F49620-96-1-0413, and the Engineering Design Research Center at Carnegie Mellon University, an Engineering Research Center of the National Science Foundation, under grant EEC-8943164.


Inference duality is based on a fundamental duality of variables and constraints in optimization problems. From one point of view, a given problem concerns what values should be assigned the variables. From the dual point of view, it concerns what may be inferred from the constraints. These two views give rise to two complementary solution approaches: search and inference. The primal problem is solved by search methods that enumerate values of the variables, as in a branching algorithm or heuristic search. The dual problem is solved by using inference methods to generate new constraints, as in constraint propagation and cutting plane methods. The goal is to infer a best possible bound on the objective function value. The two approaches can often be profitably combined in such primal-dual methods as branch-and-cut.

Solving the dual problem in a sense explains why the optimal value is the best possible. Sensitivity analysis can be viewed as part of this explanation: it examines the role of each constraint in the proof of optimality. It may reveal, for example, that certain constraints play no role at all and can be dropped, or that other constraints can be altered in certain ways without affecting the proof.

The classical duality-based sensitivity analysis for linear programming has been generalized to nonlinear and integer programming, in the latter case by analyzing how the optimal value depends on the vector of right-hand sides [1, 2, 3, 4, 5, 6, 7, 20, 21, 22, 24, 25]. But the generalization suggested here takes a different direction. Rather than viewing the dual solution as a numerical function of right-hand sides, it views the dual solution as encoding a proof of optimality. The classical linear programming dual can be seen as a special case of this, because the dual multipliers specify a linear combination of constraints that prove the optimal value to be optimal. But inference duality can diverge from classical dualities in a more general context. It was established in [13] that, unlike other methods of sensitivity analysis, an inference-based method applies at least in principle to any optimization problem.

The aim of this paper is to adapt the approach of [13] to mixed integer programming (MILP). Existing approaches have never been widely adopted in practice, due in part to the complexity of computing and interpreting the dual solution. One goal of the present paper is to demonstrate that the inference-based approach is computationally feasible and can yield useful information at least in the case of moderate-sized problem instances.

Obviously a key element of the inference-based approach is solving the dual. It was shown in [12] how to obtain a dual solution of a pure 0-1 optimization problem by examining a simple branching tree. The solution takes the form of a proof that uses resolution, a well-known inference technique. This paper generalizes this approach by combining resolution with classical linear programming duality. It shows how to extract a dual solution for any mixed integer/linear problem, even when the branching tree uses linear relaxations, bounds, and certain common classes of cutting planes.

As presented here, sensitivity analysis of a minimization problem consists of two parts. One part, which we refer to as the dual analysis, determines how much the problem can be perturbed without decreasing the objective function value more than a specified amount. The other part, the primal analysis, gives an upper bound on how much the objective function will increase if the problem is perturbed by a given amount. The dual analysis is the focus of this paper, because it requires solution of the inference dual. The primal analysis is straightforward, as it can be obtained by solving linear programming problems at feasible leaf nodes of the search tree. It will be only briefly discussed.

The two parts of the analysis are asymmetric, but this seems to follow naturally from the asymmetry of primal and dual solutions. One part of the analysis uses a certificate of feasibility for the dual problem (i.e., a proof of optimality), and the other uses a certificate of feasibility for the primal problem (feasible leaf nodes of the search tree).

The analysis proposed here may be contrasted with that of Schrage and Wolsey in [20]. Their method computes a piecewise linear value function that provides a lower bound on the optimal value that results from perturbing the right-hand sides a given amount. The computation involves the repeated nesting of minima and maxima of linear functionals obtained by solving linear programming duals at nodes of the search tree. Their analysis also provides a bound on the objective function coefficient of a proposed new problem variable, above which the variable will not enter the optimal solution.

The analysis presented here is more general, in that it permits perturbations of any of the problem data, including the objective function and constraint coefficients as well as the right-hand sides. It also avoids computation of a value function. Rather, it provides a system of linear inequalities, derived from information collected at leaf nodes of the search tree, that is satisfied by any allowable set of perturbations. One can set any desired subset of perturbations to zero and obtain upper and lower bounds on any remaining perturbation by solving two linear programming problems.

Inference duality has other applications. For example, it permits a generalization of Benders decomposition to any optimization problem. Benders cuts are a special case of "nogoods," a well-known idea in the constraint satisfaction literature [23], but they exploit problem structure in a way that nogoods generally do not. Logic-based Benders decomposition is developed in [12]. Other connections between logical inference and optimization are discussed in [9, 11, 14].

The paper begins with an explanation of the inference dual and how it specializes to the classical dual in the case of linear programming. After a brief introduction to propositional logic and the resolution method of theorem proving, it illustrates the basic idea of inference-based sensitivity analysis in a simple 0-1 programming example. Afterwards, propositional logic is generalized to a logic of discrete variables, to provide the basis for general MILP, and the logical properties of

mixed integer inequalities are established. Then, the sensitivity analysis is formally developed, on the assumption that the primal problem is solved by branch-and-bound without cutting planes. It is then applied to a capital budgeting problem and a supply chain problem in order to investigate whether it yields useful sensitivity ranges. An extension of the method is then presented that accommodates rank-1 Chvatal cuts, of which Gomory fractional cuts are a special case, and covering inequalities. The paper concludes with a summary and suggestions for further research.

2 The Inference Dual

Consider a general optimization problem,

min z = f(x)
s.t. x ∈ S        (1)
x ∈ D.

The domain D is distinguished from the feasible set S. For instance, D might be the set of vectors in Rⁿ or of 0-1 vectors. The feasible set S is defined implicitly by a set of constraints that x must satisfy. Let P and Q be two propositions about x; that is, their truth or falsehood is determined by the value of x. P implies Q with respect to a domain D (notated P ⇒_D Q) if Q is true for any x ∈ D for which P is true. The inference dual of (1) is

max z
s.t. (x ∈ S) ⇒_D (f(x) ≥ z).        (2)

So the dual seeks the largest z for which f(x) ≥ z can be inferred from the constraint set. Let the optimal value of a minimization problem be ∞ when the problem is infeasible and −∞ when it is unbounded, and vice versa for a maximization problem. With this convention, it is easy to see that the optimal value of the primal (1) is equal to the optimal value of the dual (2). If z* is the optimal value of the primal problem (1), then solving the dual (2) is tantamount to constructing a proof of f(x) ≥ z* using the constraints as premises. This requires inference rules that are complete in a relevant sense; i.e., they provide a way to infer any valid implication of the form f(x) ≥ z from the type of constraints that appear in the problem.

The classical linear programming dual is a special case. Valid inferences are obtained from the constraints by taking nonnegative linear combinations of them. The completeness of this rule is essentially the content of the classical separation lemmas for polyhedra. As noted earlier, the dual solution encodes a proof of the optimal value because it exhibits the desired linear combination.
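To make the "dual solution as proof" reading concrete, here is a small numerical sketch with hypothetical data (not taken from the paper; plain Python with NumPy assumed). The multipliers u combine the constraints into a surrogate whose right-hand side certifies a lower bound on the objective.

import numpy as np

# Hypothetical LP:  min 3x1 + 2x2  s.t.  x1 + x2 >= 4,  x1 >= 1,  x >= 0.
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
a = np.array([4.0, 1.0])

u = np.array([2.0, 1.0])          # nonnegative dual multipliers
surrogate_coeffs = u @ A          # [3., 2.]  (componentwise <= c)
surrogate_rhs = u @ a             # 9.0

# Since u >= 0, every feasible x satisfies (u@A)x >= u@a, and because u@A <= c
# and x >= 0, we get cx >= (u@A)x >= 9.  The multipliers are thus a proof that
# the optimal value (attained at x = (1, 3)) is at least 9.
print(surrogate_coeffs, surrogate_rhs)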

Note that if a proof of f(x) ≥ z* is obtained, where z* is the optimal value, the same proof shows that f(x) ≥ z* − Δz for Δz ≥ 0. This is important in sensitivity analysis, as one often asks what perturbations do not reduce the optimal value below z* − Δz for some given Δz.

3 Logical Clauses and Resolution

Sensitivity analysis for mixed 0-1 programming makes use of classical propositional logic, whereas a slightly more elaborate logic is required for general mixed integer programming. A brief introduction to propositional logic is given here, to provide background for a simple 0-1 example in the next section. Afterwards the more elaborate logic is presented, so that sensitivity analysis can be developed for general MILP.

Propositional logic consists of formulas that are composed of atomic propositions x_j and various logical connectives, such as 'and' (∧), 'or' (∨), 'not' (¬), etc. The truth value of a formula is determined by the truth values of the atomic propositions it contains. For example, one way to make the formula F = (x₁ ∧ ¬x₂) ∨ ¬(x₂ ∧ x₃) true is to make the atomic proposition x₁ true and x₂, x₃ false, written (x₁, x₂, x₃) = (1, 0, 0). One formula F₁ implies another F₂ when all 0-1 assignments (x₁, ..., x_k) = (v₁, ..., v_k) that make F₁ true also make F₂ true.

A logical clause is a disjunction of literals, which are atomic propositions or their negations. For example, the conjunction (x₁ ∨ ¬x₂ ∨ ¬x₃) ∧ (¬x₂ ∨ ¬x₃) of two clauses is equivalent to the formula F above. Actually the first clause can be dropped without effect, because it is implied by the second. Clause C₁ implies clause C₂ if and only if C₁ absorbs C₂, which is to say that all the literals of C₁ occur in C₂. The empty clause contains no literals and is necessarily false.

Quine [18, 19] showed that a simple inference method, now called resolution, derives all implications of a given set of clauses. If two clauses have the property that exactly one atomic proposition x_j occurs positively in one and negatively in the other, their resolvent (on x_j) consists of the disjunction of all literals in either clause except x_j and ¬x_j. For instance, the clauses x₁ ∨ x₂ ∨ ¬x₃ and ¬x₁ ∨ ¬x₃ ∨ x₄ resolve to the clause x₂ ∨ ¬x₃ ∨ x₄. The resolution algorithm is applied to a clause set S as follows. Identify a pair of clauses in S having a resolvent R that is absorbed by no clause in S, remove from S all clauses absorbed by R, and add R to S. Continue until no such pairs remain. Quine showed that the algorithm is complete: if it begins with S and terminates with S', then any clause that is implied by S is absorbed by some clause in S'. In particular, S is unsatisfiable if and only if S' contains the empty clause.
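A small prototype of this algorithm (my own sketch, not code from the paper) makes the procedure concrete. A clause is represented as a frozenset of signed integers, +j for x_j and -j for ¬x_j, and absorption is set containment.

def absorbs(c1, c2):
    """C1 absorbs C2 when every literal of C1 occurs in C2 (so C1 implies C2)."""
    return c1 <= c2

def resolve_all(clauses):
    """Quine's resolution algorithm with absorption, applied to a set of clauses."""
    S = set(clauses)
    changed = True
    while changed:
        changed = False
        for c1 in list(S):
            for c2 in list(S):
                opposed = {lit for lit in c1 if -lit in c2}
                if len(opposed) != 1:          # resolve only on exactly one variable
                    continue
                lit = opposed.pop()
                r = frozenset((c1 | c2) - {lit, -lit})
                if any(absorbs(c, r) for c in S):
                    continue                   # resolvent adds nothing new
                S = {c for c in S if not absorbs(r, c)}
                S.add(r)
                changed = True
    return S

# The six clauses that will arise at the leaf nodes of the example in the next
# section resolve to the empty clause, confirming that they are inconsistent:
leaves = [frozenset(c) for c in
          ([-1, -2, -3], [-1, -2, 3], [-1, 2], [1, -2, -3], [1, -2, 3], [1, 2])]
print(frozenset() in resolve_all(leaves))      # True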


4 A 0-1 Example

The role of resolution in sensitivity analysis is best explained via a simple example in 0-1 programming. A dual analysis will be performed first to determine how much the problem can be changed without decreasing the optimal value more than a specified amount. Then a brief primal analysis will derive an upper bound on the increase in the optimal value that results from a given change in the problem.

The basic strategy of the dual analysis is a) to solve the inference dual by finding a proof of the optimal value z*, b) to hold the proof schema fixed, and then c) to investigate under what data perturbations the dual solution provides a valid proof that the optimal value is at least z* − Δz. In linear programming, the proof schema is the vector u of dual multipliers. In 0-1 programming, it has two parts: the inference of logical clauses from some of the constraints and/or objective function, followed by a resolution proof that these clauses are inconsistent.

In general, a 0-1 linear programming problem, min{z = cx : Ax ≥ a, x ∈ {0,1}ⁿ}, has the inference dual max{z : (Ax ≥ a) ⇒_{{0,1}ⁿ} (cx ≥ z)}. A separation lemma for this dual would consist of a complete inference method for linear 0-1 inequalities. Such a method was presented in [10]. For present purposes, it suffices to reconstruct a proof from a simple branching tree that solves the primal problem. Consider, for example, the problem,

min 3x₁ + 5x₂ + 7x₃
s.t. 2x₁ + 5x₂ − x₃ ≥ 3   (a)
     −x₁ + x₂ + 4x₃ ≥ 4   (b)        (3)
     x₁ + x₂ + x₃ ≥ 2     (c)
     x ∈ {0, 1}³.
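As a quick sanity check (an illustration added here, not part of the original analysis), brute-force enumeration of the eight 0-1 points confirms the optimal value of 12 claimed below.

# Brute-force check of problem (3): enumerate all 0-1 points and keep the best
# objective value among those satisfying (a), (b) and (c).
from itertools import product

best = min(
    (3*x1 + 5*x2 + 7*x3
     for x1, x2, x3 in product((0, 1), repeat=3)
     if 2*x1 + 5*x2 - x3 >= 3        # (a)
     and -x1 + x2 + 4*x3 >= 4        # (b)
     and x1 + x2 + x3 >= 2),         # (c)
    default=None)
print(best)   # 12, attained at x = (0, 1, 1)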

The enumeration tree of Fig. 1 solves the problem. The optimal solution value is 12. A proof that z ≥ 12 − Δz (for a given Δz ≥ 0) can be constructed by first associating with every leaf node an inequality that is violated at that node. With each infeasible leaf node associate one of the violated constraint inequalities. (In Fig. 1, there is a choice of two violated inequalities at three of the nodes.) With each feasible leaf node associate the inequality cx ≤ ẑ − Δz, where ẑ is the objective function value at that node. This inequality is obviously violated. The violated inequalities are shown at the leaf nodes in Fig. 2. Note that inequalities of the form cx ≤ ẑ − Δz are written −cx ≥ −ẑ + Δz.

Because the search tree is exhaustive, the inequalities at the leaf nodes must be inconsistent. A proof of their inconsistency proves the bound z ≥ 12 − Δz and therefore solves the inference dual. For each violated inequality it is easy to identify a clause that it implies and that is falsified at that leaf node. (This will be established by Corollary 1 below.)

Figure 1: Solution of a 0-1 problem by branching. The constraints remaining (i.e., not yet satisfied) at each nonleaf node are indicated. The constraints violated at each leaf node, if any, are indicated with an asterisk; if no constraints are violated, the objective function value z is shown.

The inequality at node 8, for example, implies ¬x₁ ∨ ¬x₂ ∨ ¬x₃, which is violated at node 8. Violated clauses are indicated at the leaf nodes in Fig. 2. The violated clauses at the leaf nodes must be inconsistent, because the search tree is exhaustive. Due to the completeness of resolution, there is a resolution proof that derives the empty clause. Such a proof can be read directly from Fig. 2. The violated clauses at leaf nodes 8 and 9, for instance, resolve on x₃ to obtain ¬x₁ ∨ ¬x₂ at node 4. The resolution takes place on the branching variable that produces nodes 8 and 9. The resolvent at node 4 resolves with ¬x₁ ∨ x₂, which is violated at node 5, to obtain ¬x₁ at node 2. Eventually the empty clause is produced at the root node.

The resolution proof itself is not needed for sensitivity analysis. It is necessary to know only what its premises are (i.e., which falsified clauses are used at the leaf nodes). This is because the resolution proof continues to go through when the problem data are changed, so long as its premises are still available; i.e., so long as the inequality associated with each leaf node continues to imply the falsified clause at that node. This provides the basis for a straightforward form of sensitivity analysis. For instance, constraint (c) is associated with no leaf node in Fig. 2. The dual proof retains its premises even if (c) is dropped from the problem. Constraint (c) is therefore redundant and can be deleted without changing the optimal solution.

Next consider constraint (b), which is associated with leaf nodes 9 and 11. The falsified clause is ¬x₁ ∨ ¬x₂ ∨ x₃ at node 9 and x₁ ∨ ¬x₂ ∨ x₃ at node 11. So (b) can be altered in any fashion such that it continues to imply these two clauses, without invalidating the bound z ≥ 12. Suppose that (b) is changed to (−1 + Δb₁)x₁ + (1 + Δb₂)x₂ + (4 + Δb₃)x₃ ≥ 4 + Δβ. It is easily seen (and is established in Lemma 2 below) that the perturbed inequality implies ¬x₁ ∨ ¬x₂ ∨ x₃ if and only if (−1 + Δb₁) + (1 + Δb₂) < 4 + Δβ. Similarly, it implies x₁ ∨ ¬x₂ ∨ x₃ if and only if (1 + Δb₂) < 4 + Δβ. So the perturbation Δb, Δβ is allowed if it satisfies the linear system,

Δb₁ + Δb₂ < 4 + Δβ
Δb₂ < 3 + Δβ.        (4)

Thus if b₁ (now −1) is perturbed and the other data are held fixed, the allowable range for Δb₁ can be obtained by maximizing and minimizing Δb₁ subject to (4) with Δb₂ = Δβ = 0. The ranges for the Δb_j and Δβ are

−∞ < Δb₁ < 4,   −∞ < Δb₂ < 3,   −∞ < Δb₃ < ∞,   −3 < Δβ < ∞.        (5)
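The ranges in (5) can be recovered mechanically by the two linear programs per coefficient just described. The sketch below is a minimal illustration, assuming SciPy's linprog with the HiGHS backend is available; the strict inequalities of (4) are treated as non-strict, so the printed bounds are the closures of the open ranges.

import numpy as np
from scipy.optimize import linprog

# Variables: (db1, db2, db3, dbeta).  System (4):
#   db1 + db2 - dbeta <= 4
#         db2 - dbeta <= 3
A_ub = np.array([[1.0, 1.0, 0.0, -1.0],
                 [0.0, 1.0, 0.0, -1.0]])
b_ub = np.array([4.0, 3.0])
names = ["db1", "db2", "db3", "dbeta"]

for i, name in enumerate(names):
    bounds = [(0, 0)] * 4          # fix all perturbations to zero ...
    bounds[i] = (None, None)       # ... except the one being ranged
    lo = linprog(np.eye(4)[i], A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    hi = linprog(-np.eye(4)[i], A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    lo_val = lo.fun if lo.status == 0 else float("-inf")   # status 3 = unbounded
    hi_val = -hi.fun if hi.status == 0 else float("inf")
    print(f"{name}: ({lo_val}, {hi_val})")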

One might question the range for Δb₃ by observing that the problem becomes infeasible if b₃ is reduced by more than one. But this change does not reduce the optimal value of the problem; on the contrary, it increases it to ∞. Possible increases in the optimal value are investigated in the primal analysis below.

The computed sensitivity range for a coefficient can be a proper subset of the maximum allowable range. (The same is of course true in classical linear programming sensitivity analysis.) The maximum ranges for the Δb_j and Δβ, for example, are

−∞ < Δb₁ < 4,   −∞ < Δb₂ < 5,   −∞ < Δb₃ < ∞,   −4 < Δβ < ∞.

The maximum ranges for Δb₂ and Δβ are larger than the computed ranges in (5).

The objective function can be treated as follows. Once the tolerance Δz is specified, the objective function can be changed to any function (3 + Δc₁)x₁ + (5 + Δc₂)x₂ + (7 + Δc₃)x₃ so that −(3 + Δc₁)x₁ − (5 + Δc₂)x₂ − (7 + Δc₃)x₃ ≥ −15 + Δz continues to imply ¬x₁ ∨ ¬x₂ ∨ ¬x₃, and −(3 + Δc₁)x₁ − (5 + Δc₂)x₂ − (7 + Δc₃)x₃ ≥ −12 + Δz continues to imply x₁ ∨ ¬x₂ ∨ ¬x₃. This is ensured so long as

Δc₁ + Δc₂ + Δc₃ ≥ −Δz
Δc₂ + Δc₃ ≥ −Δz.

A simple primal analysis derives an upper bound on the objective function value that results from a change in the problem data. Suppose for example that the term x₂ is removed from constraint (c), resulting in the constraint x₁ + x₃ ≥ 2. The objective function value for the feasible solution x = (1, 1, 1) at node 8 remains 15. The value at the other feasible node, node 10, rises from 12 to ∞, because constraint (c) is now violated by the solution x = (0, 1, 1) at that node.


Figure 2: Construction of a proof of z ≥ 12 − Δz (for Δz ≥ 0). The violated constraint at each leaf node is shown, along with a falsified clause it implies. Resolvents are shown at the nonleaf nodes.


The new optimal value of the problem is therefore at most 15. In general an upper bound is obtained by taking the minimum of the new objective function values at the nodes that were feasible in the original problem.
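This bound is easy to reproduce by hand; the short sketch below (an added illustration, not the paper's code) re-evaluates the two feasible leaf solutions of Fig. 1 against the perturbed constraint set.

# Re-check the primal bound for the example: evaluate the perturbed problem at
# the solutions found at the feasible leaf nodes and keep the best.
c = [3, 5, 7]
constraints = [([2, 5, -1], 3),    # (a)
               ([-1, 1, 4], 4),    # (b)
               ([1, 0, 1], 2)]     # perturbed (c): x1 + x3 >= 2
leaf_solutions = [(1, 1, 1), (0, 1, 1)]   # feasible leaves (nodes 8 and 10)

def value(x):
    if all(sum(a * xj for a, xj in zip(row, x)) >= rhs for row, rhs in constraints):
        return sum(cj * xj for cj, xj in zip(c, x))
    return float("inf")            # leaf solution no longer feasible

print(min(value(x) for x in leaf_solutions))   # 15: an upper bound on the new optimum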

5 Logic of Discrete Variables

A logic of discrete variables is needed to carry out inference-based sensitivity analysis for general mixed integer programming. It treats x₁, ..., x_k as multivalent discrete variables rather than logical propositions. The value of each x_j must belong to a finite domain D_j, which in the present case will be a set of integers {0, 1, ..., h_j}. Clauses, which might be called multivalent clauses, have the form

C_i = ⋁_{j=1}^{k} (x_j ∈ X_{ij}),        (6)

where each X_{ij} ⊆ D_j. (It will be convenient to index clauses by i as shown.) When every domain contains two elements, these reduce to the clauses of propositional logic. For example, if D₁ = D₂ = D₃ = {0, 1}, the clause (x₁ ∈ {1}) ∨ (x₂ ∈ {0}) ∨ (x₃ ∈ ∅) can be regarded as equivalent to the classical clause x₁ ∨ ¬x₂. (Although the literal x₃ ∈ ∅ can be dropped, it is convenient to suppose that all the discrete variables are formally present in any given multivalent clause.)

In general the truth value of a formula F in a logic of discrete variables is determined by the values of the variables it contains. Formula F₁ implies F₂ if any assignment (x₁, ..., x_k) = (v₁, ..., v_k) with each v_j ∈ D_j that makes F₁ true also makes F₂ true. A set S of formulas implies G if the conjunction of the formulas in S implies G. Two formulas are equivalent if they imply each other. Clause C_i absorbs C_{i'} if X_{ij} ⊆ X_{i'j} for all j. As before, C_i implies C_{i'} if and only if C_i absorbs C_{i'}. The empty clause, which has X_j = ∅ for all j, implies every clause.

A partial assignment to the variables has the form

x_j ∈ V_j,  j = 1, ..., k,        (7)

where each V_j ⊆ D_j. The partial assignment (7) can be said to falsify a formula F when every assignment (x₁, ..., x_k) = (v₁, ..., v_k) with each v_j ∈ V_j falsifies F. From the definition of implication,

Lemma 1  A partial assignment (7) falsifies a formula F if and only if (7) falsifies some clause implied by F.

Every partial assignment A given by (7) can be uniquely associated with the weakest clause that it falsifies, namely

C_A = ⋁_{j=1}^{k} (x_j ∉ V_j).        (8)

For instance, the partial assignment x₁ ∈ {1, 2}, x₂ ∈ {2, 3} is associated with the clause

(x₁ ∉ {1, 2}) ∨ (x₂ ∉ {2, 3}).        (9)

Every clause violated by the assignment absorbs (9).

The resolvent on x_j of a set {C_i : i ∈ I} of multivalent clauses is

(x_j ∈ ⋂_{i∈I} X_{ij}) ∨ ⋁_{t≠j} (x_t ∈ ⋃_{i∈I} X_{it}),

where C_i is given by (6). For example, the three clauses below resolve on x₁ to produce the fourth.

(x₁ ∈ {1, 4}) ∨ (x₂ ∈ {1})
(x₁ ∈ {2, 4}) ∨ (x₂ ∈ {2, 3})
(x₁ ∈ {3, 4}) ∨ (x₂ ∈ {1})
(x₁ ∈ {4}) ∨ (x₂ ∈ {1, 2, 3})

The resolution algorithm is applied to a clause set S as follows. Find a subset of clauses in S that have a resolvent R that is absorbed by no clause in S. Remove from S all clauses absorbed by R, and add R to S. Continue until no such subset exists. The following is proved in [12].

Theorem 1  If the resolution algorithm is applied to a clause set S, it terminates with a set S', and every clause implied by S is absorbed by some clause in S'. In particular, S is unsatisfiable if and only if S' contains the empty clause.
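As an added illustration of the resolvent definition (not code from the paper), the intersection/union rule can be computed directly; the snippet below reproduces the four-clause example above.

from functools import reduce

def resolvent(clauses, on):
    """Multivalent resolvent of the given clauses on variable `on`.
    A clause is a dict mapping a variable name to its allowed value set X_ij;
    variables not mentioned are treated as having the empty set."""
    result = {on: reduce(set.intersection,
                         [c.get(on, set()) for c in clauses])}   # intersect on `on`
    for v in {v for c in clauses for v in c} - {on}:             # union elsewhere
        result[v] = set().union(*(c.get(v, set()) for c in clauses))
    return result

# The three clauses from the text resolve on x1 to (x1 in {4}) or (x2 in {1,2,3}):
C = [{"x1": {1, 4}, "x2": {1}},
     {"x1": {2, 4}, "x2": {2, 3}},
     {"x1": {3, 4}, "x2": {1}}]
print(resolvent(C, "x1"))    # {'x1': {4}, 'x2': {1, 2, 3}}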

6 Logical Properties of Inequalities

A mixed integer/linear inequality bx ≥ β can be regarded as a proposition in the logic of discrete variables. Let x₁, ..., x_k be integer variables and x_{k+1}, ..., x_n continuous variables, and suppose x_j ∈ {0, 1, ..., h_j} for j = 1, ..., k and 0 ≤ x_j ≤ h_j for j = k+1, ..., n. (Set h_j to infinity if there is no upper bound on continuous variable x_j.) An integer assignment (x₁, ..., x_k) = (v₁, ..., v_k) makes bx ≥ β true if (x₁, ..., x_n) = (v₁, ..., v_n) satisfies bx ≥ β for some assignment to x_{k+1}, ..., x_n satisfying 0 ≤ v_j ≤ h_j for j = k+1, ..., n. An assignment that does not make bx ≥ β true makes it false (i.e., violates it).

Sensitivity analysis is concerned with partial assignments that are imposed by the branch cuts in a search tree. When one branches on a variable x_j that, in the solution of the continuous

relaxation, has a fractional value between two consecutive integers v and v+1, the left child imposes a branch cut x_j ≤ v, and the right child imposes x_j ≥ v+1. So the branch cuts that are in effect at any given node make a partial assignment of the form

x_j ∈ {v_j, v_j + 1, ..., v̄_j},  j = 1, ..., k,        (10)

where the v_j, v̄_j are integers belonging to {0, 1, ..., h_j} for j = 1, ..., k. The weakest falsified clause (8) associated with a partial assignment A of this form is

C_A = ⋁_{j=1}^{k} (x_j ∉ {v_j, ..., v̄_j}).        (11)

A useful necessary and sufficient condition for when an inequality implies a clause of the form (11) can be given as follows.

Lemma 2  bx ≥ β implies C_A if and only if there exist b̄₁, ..., b̄_n such that

Σ_{j=1}^{n} [ b_j v_j + b̄_j (v̄_j − v_j) ] < β,        (12)
b̄_j ≥ b_j,  b̄_j ≥ 0,  j = 1, ..., n,

where (v_j, v̄_j) = (0, h_j) for j = k+1, ..., n.

Proof. When C_A is false, the largest possible value of bx is clearly the left-hand side of

Σ_{j=1}^{n} b_j v_j + Σ_{j: b_j > 0} b_j (v̄_j − v_j) < β.        (13)

So bx ≥ β implies C_A if and only if (13) holds. But (13) implies that (12) holds for some b̄₁, ..., b̄_n, because one can set b̄_j = max{0, b_j}. Also (12) implies (13), because the left-hand side of (12) is equal to the left-hand side of (13) plus

Σ_{j: b_j ≤ 0} b̄_j (v̄_j − v_j) + Σ_{j: b_j > 0} (b̄_j − b_j)(v̄_j − v_j),

which is nonnegative.
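In computational terms the test behind Lemma 2 is the closed-form condition (13): maximize bx over the box that falsifies C_A and compare with β. A minimal sketch (an added illustration, not the paper's code):

def implies_clause(b, beta, lo, hi):
    """Return True if b.x >= beta implies the clause C_A, where C_A is falsified
    exactly when lo[j] <= x_j <= hi[j] for every j (condition (13))."""
    max_bx = sum(bj * (hij if bj > 0 else loj)
                 for bj, loj, hij in zip(b, lo, hi))
    return max_bx < beta

# Constraint (b) of the 0-1 example, -x1 + x2 + 4x3 >= 4, implies the clause
# falsified at node 9 of Fig. 1 (x1 = x2 = 1, x3 = 0):
print(implies_clause([-1, 1, 4], 4, lo=[1, 1, 0], hi=[1, 1, 0]))   # True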

Corollary 1  If the inequality bx ≥ β is violated by the partial assignment (10), then bx ≥ β implies the clause (11), which is falsified by that assignment.

7 Solving the Inference Dual

Let the problem to be solved be the MILP

min z = cx
s.t. Ax ≥ a        (15)
0 ≤ x ≤ h
x_j integer, j = 1, ..., k.

Its inference dual asks for the largest z for which the constraints of (15) imply cx ≥ z:

max z
s.t. (Ax ≥ a, 0 ≤ x ≤ h, x_j integer for j = 1, ..., k) ⇒ (cx ≥ z).        (16)

Suppose (15) is solved by branch and bound, let z* be its optimal value, let z̄ denote the value of the incumbent solution, and let ε > 0 be a small tolerance. At a given node of the search tree, the linear relaxation solved is

min z = cx
s.t. Ax ≥ a, Bx ≥ b        (17)
0 ≤ x ≤ h,

where Bx ≥ b collects the branch cuts imposed on the path from the root to that node. At a leaf node one of three cases occurs, and in each case a surrogate inequality violated at that node can be obtained from dual multipliers.

1. (17) is infeasible. Then there are multipliers (λ, μ, τ) ≥ 0 that prove its infeasibility. The following constraint set is therefore infeasible:

(λA)x ≥ λa,  Bx ≥ b,  0 ≤ x ≤ h.        (18)

Because Bx ≥ b, 0 ≤ x ≤ h enforces the branch cuts, the surrogate inequality (λA)x ≥ λa is violated at the current node.

2. The solution of (17) is feasible in (15) and its value ẑ is better than the current bound z̄. Then the following constraint set is infeasible for any Δz ≥ 0:

−cx ≥ −ẑ + Δz
Ax ≥ a,  Bx ≥ b        (19)
0 ≤ x ≤ h.

If (λ, μ, τ) is the optimal dual solution of (17), the multipliers (1, λ, μ, τ) prove infeasibility of (19). This means that the following constraint set is infeasible for any Δz ≥ 0:

(λA − c)x ≥ λa − ẑ + Δz + ε
Bx ≥ b        (20)
0 ≤ x ≤ h.

Thus the surrogate (λA − c)x ≥ λa − ẑ + Δz + ε is violated at the current node.

3. (17) is feasible, and its optimal value is greater than or equal to z̄. This implies that the following system is infeasible for any Δz ≥ 0:

−cx ≥ −z̄ + Δz
Ax ≥ a,  Bx ≥ b        (21)
0 ≤ x ≤ h.

As before, the dual multipliers (1, λ, μ, τ) prove infeasibility, and the surrogate (λA − c)x ≥ λa − z̄ + Δz + ε is violated at the current node.

The solution of the inference dual (16) consists of the inference of a falsified clause from the surrogate associated with each leaf node, followed by a resolution proof that the clauses are inconsistent.

8 Sensitivity Analysis for Mixed Integer Programming

Let z* be the optimal value of (15), and suppose that (15) is perturbed as follows.

min z = (c + Δc)x
s.t. (A + ΔA)x ≥ a + Δa        (22)
0 ≤ x ≤ h
x_j integer, j = 1, ..., k.

The aim is to determine under what conditions the bound z ≥ z* − Δz remains valid for (22). A sufficient condition for its validity is that the dual solution remains a valid proof schema for z ≥ z* − Δz. This schema consists of the vectors λ^p of dual variables used to obtain a surrogate at each leaf node, plus a resolution proof that the falsified clauses are inconsistent. The proof schema exists by virtue of Theorem 1. The schema remains valid so long as the λ^p at each leaf node continues to produce a surrogate that implies the falsified clause at that node.

Let λ^p be the vector of dual multipliers at leaf node p. The surrogate at node p is

(λ^p A − λ_0^p c)x ≥ λ^p a − z̄_p + Δz_p + ε,

which can be written

q^p x ≥ λ^p a − z̄_p + Δz_p + ε,        (23)

where

q^p = λ^p A − λ_0^p c.

Here λ_0^p is 0 if the relaxation (17) is infeasible at node p and 1 otherwise. The quantity z̄_p − Δz_p is ε if (17) is infeasible, ẑ − Δz if the solution of (17) is feasible in (19), and z̄ − Δz if the tree is pruned at node p by the value z̄ of the incumbent solution. Suppose that A_p, given by

x_j ∈ {v_j^p, ..., v̄_j^p},  j = 1, ..., k,        (24)

is the partial assignment made at node p, due to branch cuts added between p and the root node. By Corollary 1, the violated surrogate (23) at node p implies the clause

C_{A_p} = ⋁_{j=1}^{k} (x_j ∉ {v_j^p, ..., v̄_j^p}),        (25)

which is falsified by A_p. After the problem is perturbed, the surrogate at a leaf node p becomes

(q^p + Δq^p)x ≥ λ^p (a + Δa) − z̄_p + Δz_p + ε,        (26)

where

Δq^p = λ^p ΔA − λ_0^p Δc.

Then the bound z ≥ z* − Δz is valid so long as (26) implies C_{A_p} at every leaf node p. Using Lemma 2, this is the case if there are q̄_1^p, ..., q̄_n^p that satisfy

Σ_{j=1}^{n} [ (q_j^p + Δq_j^p) v_j^p + q̄_j^p (v̄_j^p − v_j^p) ] ≤ λ^p (a + Δa) − z̄_p + Δz_p,        (27)
q̄_j^p ≥ q_j^p + Δq_j^p,  q̄_j^p ≥ 0,  j = 1, ..., n,

where (v_j^p, v̄_j^p) = (0, h_j) for j = k+1, ..., n. So we have,

Theorem 2  Suppose that (15) is perturbed as in (22). Then the optimal value of (15) decreases by at most Δz if the perturbation satisfies the linear system consisting of the inequalities (27) for every leaf node p of the search tree.

In particular this provides sensitivity analysis for each constraint A_i x ≥ a_i in the system Ax ≥ a. A_i x ≥ a_i plays a role in the proof at each leaf node at which λ_i^p > 0. If A_i x ≥ a_i is replaced by a perturbed constraint (A_i + ΔA_i)x ≥ a_i + Δa_i, then Δq^p = λ_i^p ΔA_i. So from (27), the optimal value decreases by at most Δz if, for each leaf node p with λ_i^p > 0, there are q̄_j^p for j = 1, ..., n that satisfy the following:

Σ_{j=1}^{n} [ (q_j^p + λ_i^p ΔA_ij) v_j^p + q̄_j^p (v̄_j^p − v_j^p) ] ≤ λ^p a + λ_i^p Δa_i − z̄_p + Δz_p,
q̄_j^p ≥ q_j^p + λ_i^p ΔA_ij,  q̄_j^p ≥ 0,  j = 1, ..., n.

Using the change of variables q̄_j^p = s_j^p + q_j^p, this can be written

Σ_{j=1}^{n} λ_i^p ΔA_ij v_j^p + Σ_{j=1}^{n} s_j^p (v̄_j^p − v_j^p) − λ_i^p Δa_i ≤ r_p,
s_j^p ≥ λ_i^p ΔA_ij,  s_j^p ≥ −q_j^p,  j = 1, ..., n,        (28)

where

r_p = −Σ_{j=1}^{n} q_j^p v̄_j^p + λ^p a − z̄_p + Δz_p.

A perturbation Δc of the objective function can be similarly analyzed. From (27), the bound z ≥ z* − Δz remains valid if, for each leaf node p with λ_0^p = 1, there are s_1^p, ..., s_n^p that satisfy the following:

−Σ_{j=1}^{n} Δc_j v_j^p + Σ_{j=1}^{n} s_j^p (v̄_j^p − v_j^p) ≤ r_p,        (29)
s_j^p ≥ −Δc_j,  s_j^p ≥ −q_j^p,  j = 1, ..., n.

To summarize,

Rule 1. If z* is the optimal value of (15), then the bound z ≥ z* − Δz remains valid when constraint A_i x ≥ a_i is perturbed to (A_i + ΔA_i)x ≥ a_i + Δa_i, provided that there are s_1^p, ..., s_n^p that satisfy the linear system consisting of the inequalities (28) for each leaf node p with λ_i^p > 0.

Rule 2. If z* is the optimal value of (15), then the bound z ≥ z* − Δz remains valid when the objective function is perturbed to (c + Δc)x, provided that there are s_1^p, ..., s_n^p that satisfy the linear system consisting of the inequalities (29) for each leaf node p with λ_0^p = 1.

The linear systems can be used to obtain sensitivity ranges for individual coefficients c_j, A_ij and right-hand sides a_i.

The above analysis is based on one proof of optimality, namely, the branch-and-bound tree available to us. Other trees may provide different analyses. This is strictly analogous to degeneracy in linear programming. Degeneracy is not a technical nuisance but a natural result of the fact that several different rationales (proofs) can be given for an optimal solution. Every sensitivity analysis is conservative, however, in the sense that any constraint found to be redundant under one analysis is redundant simpliciter, and every allowable perturbation under one analysis is allowable under any analysis.

9 Primal Analysis

A simple primal analysis obtains an upper bound on the optimal value that results from a given perturbation of the problem data. Let F be the set of nodes of the search tree at which feasible solutions were found for the original problem. For each node p ∈ F consider the linear programming problem

min z = (c + Δc)x
s.t. (A + ΔA)x ≥ a + Δa
Bx ≥ b        (30)
0 ≤ x ≤ h
x_j = x̄_j,  j = 1, ..., k,        (31)

where B, b are as in (17), and x̄_1, ..., x̄_k are the integral solution values of x_1, ..., x_k at node p. Then if z'_p is the (possibly infinite) optimal value of (30) at node p, min_{p∈F} {z'_p} is an upper bound on the optimal value of the perturbed problem (22).
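The computation is one LP per feasible leaf node. The following sketch (an illustration only, assuming SciPy is available and that the perturbed data c, A, a have already been formed) fixes the integer variables at their leaf-node values and keeps the best resulting value; the branch cuts are omitted because they involve only the fixed integer variables.

import numpy as np
from scipy.optimize import linprog

def primal_upper_bound(c, A, a, h, leaf_integer_values, k):
    """Upper bound on the perturbed optimum: solve (30) at each feasible leaf node."""
    best = np.inf
    for xbar in leaf_integer_values:                       # node-p values of x_1..x_k
        bounds = [(xbar[j], xbar[j]) for j in range(k)] + \
                 [(0, h[j]) for j in range(k, len(c))]
        res = linprog(c, A_ub=-np.asarray(A), b_ub=-np.asarray(a),   # Ax >= a as -Ax <= -a
                      bounds=bounds, method="highs")
        if res.status == 0:                                # skip leaves made infeasible
            best = min(best, res.fun)
    return best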

10 Implementation Issues

The procedure described in the previous section is straightforward to implement within a branch-and-bound framework. Deriving a surrogate inequality which is violated by the partial assignment at a leaf node requires the following: (i) the bounds on the variables that are restricted by branch cuts at that leaf node, and (ii) the dual solution vector corresponding to the original problem constraints Ax ≥ a. This information is easily available.

In the worst case, the total number of inequalities in the systems (28) and (29) is n|P|, where |P| is the total number of leaf nodes. The worst case happens when the inequality under consideration plays a role at every leaf node (that is, the corresponding λ_i^p > 0 at every leaf node). If the inequalities which are to be investigated during sensitivity analysis are not known beforehand, then the part of the dual solution corresponding to the original constraints Ax ≥ a must be stored for every leaf node. However, if the constraints of interest are known in advance, only the leaf nodes at which the corresponding dual variables are nonzero are relevant. In the two problems discussed in the next section, the average fractions of leaf nodes that are relevant are 0.5083 and 0.4547, respectively.

Regardless of whether the constraints of interest are specified in advance, the tolerance Δz on the optimal value need not be fixed until the linear system used to compute sensitivity ranges is already set up. The user can interactively try different values of Δz and obtain the corresponding ranges.


11 Two Examples

We now illustrate inference-based sensitivity analysis on a pure integer program and a mixed integer program. The problems are chosen to show that (a) the procedure is practical at least on moderate-sized problem instances, and (b) it can yield useful results on realistic problems. The maximization objectives have been converted to minimization, to apply the above analysis.

11.1 Capital budgeting

Allocating funds to independent R & D projects is a problem of practical importance for many firms. A specific project selection problem of interest is that of optimal allocation of limited funds to preproposal research activity, marketing effort and proposal preparation [16, 17]. The 0-1 model is max{cx : Ax ≤ b, x_j ∈ {0, 1} for j = 1, ..., n}, where c_j is the contract volume of the jth project, x_j = 1 if the project is selected and 0 if not, A is the matrix of estimated costs and b is the vector of available budget amounts. We solve the largest of 7 capital budgeting problems proposed in [17] and available from ORLib maintained by J. E. Beasley. This problem has 50 projects and 5 budget constraints. Some constraints for this problem are shown in Table 1. All the entries are in thousands of dollars.

• The coefficients of constraints 2, 3 and 4 are shown in columns 2, 5 and 8 of Table 1, respectively. The right-hand sides of these constraints are indicated in the last row. Within what ranges can a cost coefficient a_ij be changed without decreasing the optimal solution value? Columns 3, 6 and 9 indicate, respectively, such ranges for the coefficients of constraints 2, 3 and 4. The last row in these columns gives similar ranges for the budgets (right-hand sides). For example, the coefficient a_22 can be changed from its present value of 92 to any value greater than or equal to 85.65 and the optimal value does not decrease. Similarly, budget b_4 can be changed from its present value of 550 to any value below 719.47 without decreasing the optimal value. The cost function is shown in column 11, where each c_j is the contract volume for the jth project. Column 12 indicates the ranges of the perturbations by which the contract volume of the jth project can be changed without decreasing the optimal value.

• Another interesting question is the amount of perturbation allowed under the condition that the optimum objective function value does not decrease more than, say, 5%. Because the optimal solution value is z* = 16537, we set Δz = 826.85. Columns 4, 7, 10 and 13 contain the resulting sensitivity ranges for the coefficients of constraints 2, 3, 4 as well as the cost function. The last row pertains to the right-hand sides.

As mentioned previously, the actual sensitivity ranges might be wider than those shown in Table 1. For example, the ranges of the perturbations allowed in coefficients a_{2,7} and a_{3,29} so that the optimum solution does not decrease are, according to our analysis, (−∞, 4.08] and (−∞, 0.79] respectively, while the actual ranges (obtained by experimenting with the integer program) are (−∞, 8.00] and (−∞, 2.00] respectively.

11.2 Multi-product Shipment Models

This example is taken from [8]. A common problem faced by distributors involves supplying multiple products, from inventory, to multiple customers who order sets of products. Orders that are not entirely filled from current inventory must be completed with a second shipment. Customers restrict the quantities that they are willing to accept if the initial shipment is not complete. Due to contractual arrangements, the revenue received from supplying a fraction of the total demand of a product varies from one customer to another even if the unit price of the product is the same for all customers. This measures, in some sense, the urgency of the demand to the customer. The objective is to maximize total revenue.

In the absence of shipping restrictions, this problem can be formulated as a linear program. However, this can lead to solutions in which the initial shipment fills a very small percentage of the customer's total demand. This has at least two drawbacks for the distributor: a) a small shipment does not warrant the additional transportation cost, and b) it results in a loss of goodwill with the customer as it results in additional paperwork and tracking costs. In addition, a large but incomplete initial shipment, for example one that meets 90% of demand, is undesirable because it necessitates a small second shipment.

immediate shipment is made to customer i and revenue rji DS for supplying product j to customer i. Customer i receives a partial shipment when w1i = 1, a full shipment when w2i = 1, and nothing from inventory when w1i = w2i = 0. N N X M Si X X max K i(w2i + w1i ) + rji ji i j i j

s.t.

i=1 M X

i=1 j =1

Dj

n X i Sj = ( Dji )w2i ; 1  i  N1 j =1 j =1 n M n X X X 1 ( Dji )w1i  Sji  ( Dji )( 2 w1i + w2i ); j =1 j =1 j =1

20

N1 + 1  i  N


j

a2j

1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50

16 92 41 16 150 23 4 18 6 0 12 8 2 1 0 200 20 6 2 1 70 9 22 4 1 5 10 6 4 0 4 12 8 4 3 0 10 0 6 28 93 9 30 22 0 36 45 13 2 2 650

RHS

 2j z = 0 z = 0 05  [-1,1.08] [-1, 120.71] [-1,6.35] [-1,120.71] [-1,18.15] [-1,116.34] [-1,1.08] [-1,94.60] [-1,62.38] [-1,153.68] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,6.71] [-1,98.00] [-1,3.30] [-1,94.60] [-1,1.08] [-1,94.60] [-1,8.23] [-1,99.52] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,4.08] [-1,95.37] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,5.92] [-1,97.72] [-1,1.08] [-1,94.60] [-1,1.59] [-1,94.75] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,6.04] [-1,97.33] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,6.26] [-1,97.55] [-1,3.71] [-1,95.00] [-1,3.17] [-1,94.60] [-1,3.30] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,2.87] [-1,102.09] [-1,1.08] [-1,94.60] [-1,32.33] [-1,123.63] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.10] [-1,94.60] [-1,1.08] [-1,94.60] [-1,1.08] [-1,94.60] a

:

z

a3j 38 39 32 71 80 26 5 40 8 12 30 15 0 1 23 100 0 20 3 0 40 6 8 0 6 4 22 4 6 1 5 14 18 2 8 0 20 0 0 6 12 6 80 13 6 22 14 0 1 2 550

 3j z = 0 z = 0 05  [-1,0.45] [-1,236.96] [-1,25.75] [-1,400.31] [-1,0.80] [-1,235.31] [-1,0.45] [-1,228.90] [-1,17.48] [-1,235.31] [-1,0.45] [-1,228.90] [-1,0.80] [-1,230.68] [-1,0.45] [-1,228.90] [-1,0.45] [-1,229.08] [-1,1.37] [-1,242.12] [-1,0.45] [-1, 228.90] [-1,0.80] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,1.40] [-1,288.31] [-1,4.96] [-1,234.37] [-1,0.4 [-1,228.90] [-1,2.04] [-1,230.21] [-1,0.96] [-1,231.01] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.79] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,3.80] [-1,229.71] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,7.55] [-1,245.21] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1, 228.90] [-1,0.45] [-1,228.90] [-1,0.45] [-1,228.90] a

:

z

a4j 8 71 30 60 200 18 6 30 4 8 31 6 3 0 18 60 21 4 0 2 32 15 31 2 2 7 8 2 8 0 2 8 6 7 1 0 0 20 8 14 20 2 40 6 1 14 20 12 0 1 550

 z = 0 [-1,169.47] [-1,357.06] [-1,185.34] [-1,169.47] [-1,169.47] [-1,169.47] [-1,170.98] [-1,169.47] [-1,180.18] [-1,182.98] [-1,169.02] [-1,169.02] [-1,169.47] [-1,169.47] [-1,169.47] [-1,169.47] [-1,169.47] [-1,169.47] [-1,169.47] [-1,169.47] [-1,293.45] [-1,185.31] [-1,173.00] [-1,171.11] [-1,169.47] [-1,169.47] [-1,169.47] [-1,182.98] [-1,169.47] [-1,169.47] [-1,169.47] [-1,169.47] [-1,240.09] [-1,181.35] [-1,169.47] [-1,182.80] [-1,169.47] [-1,187.20] [-1,169.47] [-1,169.47] [-1,169.47] [-1,169.47] [-1,169.47] [-1,170.69] [-1,169.47] [-1,182.98] [-1,169.47] [-1,169.47] [-1,169.47] [-1,169.47] [-1,169.47]

a4 j

z = 0 05  [-1,263.15] [-1,243.03] [-1,265.50] [-1,263.15] [-1, 263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,264.28] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,455.44] [-1,263.15] [-1,455.44] [-1,263.15] [-1,263.15] [-1,579.42] [-1,265.62] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,526.06] [-1,263.26] [-1,263.15] [-1,263.15] [-1,263.15] [-1,271.90] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] [-1,263.15] :

z

cj 560 1125 300 620 2100 431 68 328 47 122 322 196 41 25 425 4260 416 115 82 22 631 132 420 86 42 103 215 81 91 26 49 420 316 72 71 49 108 116 90 738 1811 430 3060 215 58 296 620 418 47 81

j z = 0 z = 0 05  [-0.92,1] [-827.76,1] [-45.95,1] [-872.80,1] [-2.33,1] [-829.17,1] [0.00,1] [-826.85,1] [-66.37,1] [-893.22,1] [0.00,1] [-826.85,1] [-2.329,1] [-829.17,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [-2.81,1] [-829.65,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [-0.92,1] [-827.76,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [-4.04,1] [-831.88,1] [-14.55,1] [-841.40,1] [0.00,1] [-826.85,1] [-5.88,1] [-832.83,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [-2.33,1] [-829.17,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [-13.90,1] [-840.75,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [-15.52,1] [-842.37,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] [0.00,1] [-826.85,1] c

Table 1: Sensitivity information for the cost function and the second, third and fourth constraints.

:

z


We illustrate our procedure on an instance of the above problem, randomly generated as follows: N = 10, M = 5, N₁ = 2, α₁ = 0.5, α₂ = 0.8, D_j^i = U[0, 100]¹, K^i = U[0, 150], r_j^i = U[0, 15] and I_j = U[200, 400] for all j. There are 20 0-1 variables and 50 continuous variables.²

• Issue 1: Are there any redundant constraints? The upper bound constraints for the partial shipment, Σ_{j=1}^{M} S_j^i ≤ (Σ_{j=1}^{M} D_j^i)(α₂ w_1^i + w_2^i), for customers 4, 5, 7 and 9 are slack (i.e., are satisfied with strict inequality) at the optimum solution. Although this alone does not mean that these constraints are redundant, sensitivity analysis shows that they are not associated with any leaf node of the branch-and-bound tree and are therefore redundant.

• Issue 2: Customer 4 requires receipt of at least 50% of demand (or nothing) in the first shipment and in fact receives 50%. If this customer raises the 50% minimum to 65%, the coefficient of w_1^i in the constraint α₁ (Σ_{j=1}^{M} D_j^i) w_1^i ≤ Σ_{j=1}^{M} S_j^i increases from 139.5 to 182. A primal analysis (based on the optimal node only) reveals that the optimal value, now $1061, will not increase by more than $1970. At most $1970 should be charged to this customer in addition to the cost of the shipment.

• Issue 3: The supplier wants to increase his profits by 2% and would like to estimate the maximum increase in the inventory of product 1 that would be necessary to achieve this. Choosing Δz = 0.02 z*, we obtain ΔI₁ = 69 units.

• Issue 4: The effect of changing the reward K^i for making an initial shipment to satisfy customer i can be analyzed. For instance, the reward for fulfilling the complete demand of customer i (the coefficient of w_2^i), which is now K^i = $1000, can be increased by $113 without changing the optimal solution, and can be increased by $1072 without increasing the optimal solution value by more than 1%.

1 U[a,b] denotes the discrete uniform distribution on [a,b]. 2 The complete problem formulation is available at http://www.gsia.cmu.edu/afs/andrew/gsia/jh38/.


12 Cutting planes

In this section, we extend our analysis to the case when valid cutting planes are added at nodes of the branch-and-bound tree. At a leaf node, the relaxed problem is

min z = cx
s.t. Ax ≥ a, Dx ≥ d, Bx ≥ b        (32)
x ≥ 0,

where Dx ≥ d are the cutting planes added at the previous nodes and Bx ≥ b are the branch cuts and upper bounds. Again, our approach will be to keep the proof schema (at an infeasible or feasible leaf node) fixed. For the purpose of sensitivity analysis, we would like to perform only those perturbations in the data which keep the cutting planes valid. Doing this requires the reconstruction of the proof of validity of the cuts Dx ≥ d. Below, we discuss a few classes of cutting planes.

12.1 Rank-1 Chvatal-Gomory (CG) cuts

Suppose, for the moment, that cuts are added only at the root node of the search tree. These cuts are derived directly from the problem formulation. For a rank-1 C-G cut Σ_{j=1}^{n} γ_j x_j ≥ γ_0, let (w, v) ≥ 0 be the multipliers, corresponding to the constraint sets Ax ≥ a and Bx ≥ b respectively, used in deriving the cut. That is, γ_j = ⌈wA_j + vB_j⌉ and γ_0 = ⌈wa + vb⌉. The perturbed constraints (A + ΔA)x ≥ a + Δa, with the multipliers (w, v + Δv) ≥ 0, imply the cut Σ_{j=1}^{n} γ'_j x_j ≥ γ'_0, where γ'_j = ⌈w(A + ΔA)_j + (v + Δv)B_j⌉ and γ'_0 = ⌈w(a + Δa) + (v + Δv)b⌉. This cut implies the original cut Σ_{j=1}^{n} γ_j x_j ≥ γ_0 if γ'_j ≤ γ_j for each j, and γ'_0 ≥ γ_0. For this, it suffices that

w(A + ΔA)_j + (v + Δv)B_j ≤ γ_j,  j = 1, ..., n
w(a + Δa) + (v + Δv)b ≥ γ_0 − 1 + ε        (33)
v + Δv ≥ 0.

Written in terms of the perturbations ΔA and Δa in A and a respectively,

wΔA_j + Δv B_j ≤ γ_j − wA_j − vB_j,  j = 1, ..., n
wΔa + Δv b ≥ γ_0 − wa − vb − 1 + ε        (34)
v + Δv ≥ 0.

Note that here, in addition to holding the optimal proof schema of the inference dual fixed, we also fix the proof of validity of the cutting planes (i.e., the multipliers w). Thus constraints (34) are required in addition to the constraints (28). The Gomory fractional cuts obtained from the original LP relaxation are a special case of the rank-1 C-G inequalities [15] and as such can be handled by the above approach.

The same approach is applicable even when cuts are added at other nodes of the branch-and-bound tree. However, note that in this case, the proof of validity of a cut could be much more complex.
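For concreteness, the rounding step that defines a rank-1 cut of this kind can be sketched as follows (an added illustration with hypothetical multipliers, reusing constraints (a) and (b) of the Section 4 example; not the paper's code).

import numpy as np

def rank1_cg_cut(w, A, a):
    """Rank-1 Chvatal-Gomory cut for the >=-system Ax >= a with x >= 0 integer:
    round the surrogate (w@A)x >= w@a up to  ceil(w@A) x >= ceil(w@a),  with w >= 0."""
    w = np.asarray(w, dtype=float)
    return np.ceil(w @ np.asarray(A, dtype=float)), np.ceil(w @ np.asarray(a, dtype=float))

# Weights 0.5 on each of 2x1 + 5x2 - x3 >= 3 and -x1 + x2 + 4x3 >= 4 give the
# cut x1 + 3x2 + 2x3 >= 4:
print(rank1_cg_cut([0.5, 0.5], [[2, 5, -1], [-1, 1, 4]], [3, 4]))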

12.2 Cover inequalities from the original constraints

Consider a cover inequality Σ_{j∈J} x_j ≥ 1, J ⊆ N, obtained from a constraint Σ_{j∈N} ã_j x_j ≥ a_0 of Ax ≥ a, where x_j ∈ {0, 1} for all j ∈ N. Without loss of generality, we can assume that ã_j ≥ 0 for all j ∈ N. Then a proof of validity of this inequality is the fact that Σ_{j∈N∖J} ã_j < a_0. Similarly, for a cover inequality Σ_{j∈J} x_j ≥ 2, a proof constitutes the system Σ_{j∈N∖J} ã_j + ã_k < a_0 for all k ∈ J. In general, for a cover inequality Σ_{j∈J} x_j ≥ k, a proof of validity is given by

Σ_{j∈N∖J} ã_j + Σ_{l∈L} ã_l < a_0,   for all L ⊆ J with |L| = k − 1.

During sensitivity analysis, to keep these inequalities valid, we allow only those perturbations in the data which preserve this proof of validity. Then the perturbed data must satisfy

Σ_{j∈N∖J} (ã + Δã)_j + Σ_{l∈L} (ã + Δã)_l < a_0 + Δa_0,   for all L ⊆ J with |L| = k − 1,

in addition to (28).
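The proof-of-validity condition is easy to check mechanically by enumerating the subsets L; a small sketch with hypothetical data (not from the paper):

from itertools import combinations

def cover_proof_valid(a_tilde, a0, J, k):
    """Check the validity proof of the cover inequality sum_{j in J} x_j >= k
    derived from sum_{j in N} a_tilde[j] * x_j >= a0 with a_tilde >= 0."""
    rest = sum(v for j, v in a_tilde.items() if j not in J)
    return all(rest + sum(a_tilde[l] for l in L) < a0
               for L in combinations(sorted(J), k - 1))

# Hypothetical data: from 3x1 + 2x2 + 2x3 + x4 >= 6, the cover {1, 2, 3} gives
# x1 + x2 + x3 >= 2, since x4 alone plus any single member of the cover stays below 6.
print(cover_proof_valid({1: 3, 2: 2, 3: 2, 4: 1}, 6, J={1, 2, 3}, k=2))   # True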

12.3 Perturbations in the objective function

The cutting planes discussed above are derived from the problem constraints, and hence their proof of validity depends solely on the constraints Ax ≥ a. Thus perturbations Δc in the objective function do not affect the validity of the cutting planes, and, as before, Δc is required to satisfy (29). Note that in the special case when we perturb only the objective function coefficients and keep all other data fixed, (29) is the only system Δc is required to satisfy, even if valid cutting planes are added at any node of the branch-and-bound tree.

13 Conclusion and Extensions

In this paper, we have proposed a new, inference-based method of sensitivity analysis for mixed integer/linear programming problems. It has been shown that perturbations in the problem data that keep the optimal value unchanged, or change it by no more than some prespecified amount, can be obtained by optimizing over a system of linear inequalities. The method yields useful sensitivity information on two realistic examples.

Future work is in three directions. (i) Investigate whether it is useful to exploit any redundancy of information at leaf nodes, so as to reduce the amount of information to be stored. It is possible that any perturbation for which the surrogate at leaf node n₁ implies the violated clause at n₁ is one for which the surrogate at node n₂ implies the violated clause at n₂. If so, the information at node n₂ is redundant and need not be stored. (ii) Investigate whether constructing a resolution proof from the strongest falsified clauses implied by violated inequalities at leaf nodes, rather than the weakest as done here, simplifies the analysis. The former may allow one to deduce the empty clause at a node below the root node, so that only the subtree rooted at this node need be analyzed for purposes of sensitivity analysis. (iii) Extend sensitivity analysis to a wider class of cutting planes used within branch-and-cut.

References

[1] Blair, C. E., Integer and mixed integer programming, in T. Gal and H. J. Greenberg, eds., Recent Advances in Sensitivity Analysis and Parametric Programming, Kluwer, to appear.
[2] Blair, C. E., and R. G. Jeroslow, The value function of a mixed integer program: I, Discrete Applied Mathematics 19 (1977) 121-138.
[3] Blair, C. E., and R. G. Jeroslow, The value function of a mixed integer program: II, Discrete Applied Mathematics 25 (1977) 7-19.
[4] Blair, C. E., and R. G. Jeroslow, The value function of an integer program, Mathematical Programming 23 (1982) 237-273.
[5] Blair, C. E., and R. G. Jeroslow, Constructive characterizations of the value function of a mixed-integer program I, Discrete Applied Mathematics 9 (1984) 217-233.
[6] Blair, C. E., and R. G. Jeroslow, Constructive characterizations of the value function of a mixed-integer program II, Discrete Applied Mathematics 10 (1985) 227-240.
[7] Cook, W., A. M. H. Gerards, A. Schrijver and E. Tardos, Sensitivity theorems in integer linear programming, Mathematical Programming 34 (1986) 251-264.
[8] Dawande, M., S. Gavirneni and S. Tayur, Effective Heuristics for Multi-product Shipment Models, Technical Report TR-96-05, GSIA, Carnegie Mellon University, 1996. Submitted for publication.

[9] Hooker, J. N., A quantitative approach to logical inference, Decision Support Systems 4 (1988) 45-69.
[10] Hooker, J. N., Generalized resolution for 0-1 linear inequalities, Annals of Mathematics and Artificial Intelligence 6 (1992) 271-286.
[11] Hooker, J. N., Logic-based methods for optimization, in A. Borning, ed., Principles and Practice of Constraint Programming, Lecture Notes in Computer Science 874 (1994) 336-349.
[12] Hooker, J. N., Logic-based Benders decomposition (1996). Available at http://www.gsia.cmu.edu/afs/andrew/afs/jh38/jnh.html.

[13] Hooker, J. N., Inference duality as a basis for sensitivity analysis, in E. C. Freuder, ed., Principles and Practice of Constraint Programming - CP96, Lecture Notes in Computer Science 1118, Springer (1996) 224-236. Also to appear in Constraints.
[14] Hooker, J. N., and M. A. Osorio, Mixed logical/linear programming (1996). To appear in Discrete Applied Mathematics.
[15] Nemhauser, G. L., and L. A. Wolsey, Integer and Combinatorial Optimization, Wiley, New York, 1988.
[16] Peterson, C. C., Selection of new research and development opportunities in light of budget constraints, Master's Thesis, Arizona State University (1965).
[17] Peterson, C. C., Computational experience with variants of the Balas algorithm applied to the selection of R & D projects, Management Science 13 (1967) 736-750.
[18] Quine, W. V., The problem of simplifying truth functions, American Mathematical Monthly 59 (1952) 521-531.
[19] Quine, W. V., A way to simplify truth functions, American Mathematical Monthly 62 (1955) 627-631.
[20] Schrage, L., and L. Wolsey, Sensitivity analysis for branch and bound integer programming, Operations Research 33 (1985) 1008-1023.
[21] Skorin-Kapov, J., and F. Granot, Nonlinear integer programming: Sensitivity analysis for branch and bound, Operations Research Letters 6 (1987) 269-274.
[22] Tind, J., and L. A. Wolsey, An elementary survey of general duality theory in mathematical programming, Mathematical Programming 21 (1981) 241-261.
[23] Tsang, E., Foundations of Constraint Satisfaction, Academic Press, London, 1993.

[24] Wolsey, L. A., Integer programming duality: price functions and sensitivity analysis, Mathematical Programming 20 (1981) 173-195.
[25] Wolsey, L. A., The b-hull of an integer program, Discrete Applied Mathematics 3 (1981) 193-201.
