Mathematical Programming manuscript No. (will be inserted by the editor)

Walid Ben-Ameur · José Neto

A Constraint Generation Algorithm for Large Scale Linear Programs using Multiple-Points Separation

Received: date / Revised version: date

Abstract. In order to solve linear programs with a large number of constraints, constraint generation techniques are often used. In these algorithms, a relaxation of the formulation containing only a subset of the constraints is first solved. Then a separation procedure is called, which adds to the relaxation any inequality of the formulation that is violated by the current solution. The process is iterated until no violated inequality can be found. In this paper, we present a separation procedure that uses several points to generate violated constraints. The complexity of this separation procedure and of some related problems is studied. Preliminary computational results on the advantages of multiple-points separation over traditional separation procedures are also given, for random linear programs and survivable network design. They illustrate that, for some specific families of linear programs, multiple-points separation can be computationally effective.

Key words. constraint generation – linear programming

1. Introduction

In many applications we have to solve large scale linear programs with the general structure:

(PB)    max  c^t x
        s.t. x ∈ S

where S ⊆ R^n is a polyhedron representing the feasible set, which we assume to be nonempty. These linear programs often arise from partial polyhedral descriptions of (mixed-)integer programs and from Benders' equivalent reformulations. The set S may be described explicitly through a list of inequalities or implicitly through a separation oracle. For our purposes, we will assume that S is given by a strong separation oracle [17], i.e., a procedure that solves the strong separation problem stated below.

Definition 1. Strong separation problem. Given a vector y ∈ R^n, decide whether y ∈ S, and if not, find a vector (a, b) ∈ R^{n+1} such that a^t x ≤ b is a valid inequality for S that is violated by y (i.e., a^t y > b).

GET/INT, 9 rue Charles Fourier, 91011 Evry Cedex, France
e-mail: [email protected], [email protected]

Mathematics Subject Classification (1991): 20E28, 20G40, 20C20

Fig. 1. Description of a constraint generation algorithm

Basic scheme of a constraint generation algorithm
0. Initialize the linear programming relaxation.
1. Solve the current relaxation.
2. If a nonempty set of inequalities separating the solution of the relaxation from the feasible set is found, then add a subset of these to the relaxation and return to Step 1.
3. Otherwise, the solution of the relaxation is feasible, hence optimal. Stop.
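The basic scheme above can be made concrete on a toy instance. The sketch below is an illustrative assumption, not the paper's implementation: the "relaxation solver" naively enumerates vertices of a 2-dimensional relaxation (only sensible for tiny instances), the separation oracle simply scans an explicit constraint list, and the instance data and bounding box are invented for the example.

```python
from itertools import combinations

def solve_2d_lp(c, rows):
    """Solve max c.x over {x in R^2 : a.x <= b for (a, b) in rows} by
    enumerating candidate vertices (intersections of constraint pairs)."""
    best, best_val = None, None
    for (a1, b1), (a2, b2) in combinations(rows, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue                     # parallel constraints: no vertex
        x = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)
        if all(a[0] * x[0] + a[1] * x[1] <= b + 1e-9 for a, b in rows):
            val = c[0] * x[0] + c[1] * x[1]
            if best_val is None or val > best_val:
                best, best_val = x, val
    return best

def separate(x, full_rows):
    """Strong separation oracle over an explicit constraint list:
    return the most violated constraint, or None if x is feasible."""
    worst, worst_viol = None, 1e-9
    for a, b in full_rows:
        viol = a[0] * x[0] + a[1] * x[1] - b
        if viol > worst_viol:
            worst, worst_viol = (a, b), viol
    return worst

def constraint_generation(c, full_rows, box=100.0):
    # Step 0: initialize a bounded relaxation (a box containing S).
    relax = [((1.0, 0.0), box), ((-1.0, 0.0), box),
             ((0.0, 1.0), box), ((0.0, -1.0), box)]
    while True:
        x = solve_2d_lp(c, relax)        # Step 1: solve the relaxation
        cut = separate(x, full_rows)     # Step 2: look for a violated cut
        if cut is None:
            return x                     # Step 3: feasible, hence optimal
        relax.append(cut)
```

For instance, maximizing x_1 + x_2 over {x_1 ≤ 1, x_2 ≤ 1, x_1 + x_2 ≤ 1.5} terminates after at most three cut additions with optimal value 1.5.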

Such descriptions through strong separation oracles can be used within a constraint generation procedure, which consists in iteratively solving a relaxation of (PB) with solution x. The separation procedure is then applied, looking for some constraint(s) violated by x. If such constraints are found, some of them are added to the current relaxation, which is re-solved, and the process is iterated. Otherwise the current solution x is feasible and is therefore an optimal solution for the complete formulation. The basic scheme is reformulated in Figure 1. For a more in-depth introduction to constraint generation algorithms, one may consult, e.g., [19,26,30,33] and the references therein.

To improve the efficiency of constraint generation algorithms, it is important to carefully select the point that is being separated. In the remainder of the paper it is called the "separation point" and is denoted by x^sep.

A first way of generating constraints consists in adding a constraint (possibly several) violated by the optimal solution of the current relaxation. Among such methodologies we may cite Chvátal's [9] and Gomory's [16] procedures for integer programs, or Kelley's [20] and Benders' algorithms [4,14] for linear and convex problems.

Another approach consists in using a point interior to the feasible set of the relaxation. For instance, the separation point may correspond to an approximate solution of the relaxation obtained by application of an interior point method [27,29,37]. In other procedures the separation point corresponds (approximately) to a center [15,21,31,35] of the feasible set of the relaxation with respect to some measure function.

Finally, there are some procedures combining both approaches. Rather recently, Mitchell [28] proposed to alternate between an interior point method and the simplex method when solving the relaxation.
In an earlier article, Veinott [36] suggested computing a point x^bd on the boundary of the feasible set S, on the line joining an interior point x^in ∈ S to the optimal solution x^out of the relaxation. The constraint generated then corresponds to a supporting hyperplane of S passing through x^bd. Along these lines, we investigated [2] an algorithm where x^sep is defined to lie in the interval ]x^in, x^out]: x^sep = α x^out + (1 − α) x^in, where x^in stands for a feasible point, x^out for the solution of the relaxation, and α ∈ ]0, 1] is a fixed parameter.

The separation procedure we propose here can be classified into the latter combination approach and can be seen as a generalization of the separation
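Veinott's boundary point x^bd can be computed by bisection on α over the segment [x^in, x^out] whenever a membership test for S is available. The sketch below is one way to do it; the function names, the tolerance, and the disk used in the usage example are assumptions for illustration.

```python
def boundary_point(x_in, x_out, member, tol=1e-9):
    """Bisection for Veinott's point x^bd on the segment [x_in, x_out]:
    x(alpha) = alpha * x_out + (1 - alpha) * x_in, with x_in in S and
    x_out outside S; `member` is a membership test for S."""
    lo, hi = 0.0, 1.0                 # invariant: x(lo) in S, x(hi) not in S
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        x = [mid * o + (1.0 - mid) * i for i, o in zip(x_in, x_out)]
        if member(x):
            lo = mid
        else:
            hi = mid
    return [lo * o + (1.0 - lo) * i for i, o in zip(x_in, x_out)]
```

With S the unit disk, x^in = (0, 0) and x^out = (2, 0), the routine returns a point numerically close to (1, 0).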

schemes used in the aforementioned methodologies, since it consists in using several points to perform separation instead of a single one. One difficulty is that it is not always possible to find an inequality which is valid for a polyhedron and violated by several exterior points.

There is some closely related work arising in computational geometry, and more precisely in pattern classification (or discrimination) [1,5,6,22–24,34]. A classical problem studied in this area consists in looking for a classifying (or discriminant) function f : R^n → R separating two given point sets A and B in R^n, i.e., such that f(x) > 0, ∀x ∈ A and f(x) < 0, ∀x ∈ B. This function may be affine, polynomial, ellipsoidal or more general (see for example §8.6 in [7]). Here we are concerned with the linear separation problem (i.e., f is affine), one difference with the classical pattern classification problem being that the separation is to be performed between a point set and the polyhedron S given by a strong separation oracle.

The purpose of this article is twofold: understand the complexity of different problems related to multiple-points separation, and illustrate its potential use and efficiency in practice through preliminary computational experiments. The rest of the paper is organized as follows. In Section 2 we examine the complexity of multiple-points separation and propose an algorithmic framework for a multiple-points separation procedure. In Section 3 we study the complexity of decision problems involving multiple-points separation. Then, in Section 4, we report preliminary computational results obtained with a particular multiple-points separation scheme on two kinds of problems (randomly generated linear programs and survivable network design problems), before concluding in Section 5 with final remarks.

2. Complexity and algorithmic structure of multiple-points separation

The purpose of this section is to understand the difficulty of cutting multiple points and the structure of the resulting algorithms. In the remainder of this paper, X^sep will denote the point set we aim at separating from the feasible set S. In Section 2.1 we study the complexity of multiple-points separation in comparison with single-point separation. Then, in Section 2.2, we introduce a general algorithmic scheme for a resolution procedure based on multiple-points separation.

2.1. Complexity of multiple-points separation

When performing separation on a single point, it is sufficient to consider the following standard representation of the feasible set S:

S = {x ∈ R^n | Cx = d, Gx ≤ h}

where the solution set of the linear equations Cx = d is the affine hull of S and each of the inequalities in the system Gx ≤ h defines a facet of S. If none of these equations and inequalities is violated by a given point x ∈ R^n, we can conclude that x ∈ S.
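In code, single-point separation over this standard representation is a plain scan of the equations and inequalities. The sketch below (the instance data in the usage example are invented) returns either a violated valid inequality (a, b) with a.x > b, or None when x ∈ S.

```python
def separate_point(x, C, d, G, h, tol=1e-9):
    """Single-point strong separation over S = {x : Cx = d, Gx <= h}.
    Returns None if x in S, else a valid inequality (a, b) with a.x > b."""
    for row, rhs in zip(C, d):
        lhs = sum(r * v for r, v in zip(row, x))
        if lhs > rhs + tol:              # equality violated from above
            return row, rhs
        if lhs < rhs - tol:              # violated from below: flip signs
            return [-r for r in row], -rhs
    for row, rhs in zip(G, h):
        if sum(r * v for r, v in zip(row, x)) > rhs + tol:
            return row, rhs              # facet-defining inequality violated
    return None
```

For example, with S = {x ∈ R^2 | x_1 + x_2 = 1, 0 ≤ x_1 ≤ 1}, the point (0.5, 0.5) is accepted while (2, −1) is cut off by x_1 ≤ 1.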

Fig. 2. Using the polar to generate a cut separating several points

Consider now the case of a separation to be performed on the set of points {x^1, x^2}, as illustrated in Figure 2. Note that the points x^1 and x^2 do not belong

to the feasible set S, since they violate constraints C1 and C2 respectively. Observe also that neither of the constraints C1 and C2 separates both x^1 and x^2 from S. However, the constraint C' (dotted line), obtained as a positive combination of constraints C1 and C2, is valid for S and separates both points x^1 and x^2. This illustrates the need to look at the polar if we want to separate multiple points from a given set S. Obviously, even when we look at the polar, it might not be possible to generate valid inequalities for S that separate all the points in X^sep. This is the case (to be discussed later) when conv(X^sep) ∩ S is not empty.

From the discussion above it may seem that it is much more difficult to separate multiple points instead of a single one. In fact, under some assumptions, we shall see that both separation schemes (i.e., single-point and multiple-points) are polynomially equivalent, a result which follows from the work of Grötschel et al. [17] (see Theorem 4.7.1). We give a proof hereafter for the sake of completeness. We start by giving a formulation of the strong multiple-points separation problem to be considered.

Definition 2. Strong multiple-points separation problem [MSEP]. Given a point set X^sep = {x^1, ..., x^p}, either (i) find a vector in conv(X^sep) ∩ S, or (ii) find an inequality that is valid for S and violated by all the points in the set X^sep.

We further make the following assumptions.

Assumption 1. S is a polyhedron in R^n with facet-complexity at most φ, i.e., there exists a system of inequalities with rational coefficients that has solution set S and such that the encoding length of each inequality is at most φ. Let S = {x ∈ R^n | Ax ≤ b}, with A ∈ R^{m×n}, b ∈ R^m, stand for such a representation.
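The situation of Figure 2 can be checked numerically. In the small sketch below, the feasible set, the two constraints C1, C2 and the two exterior points are assumptions chosen for illustration: each constraint cuts only one of the points, while their positive combination cuts both and stays valid for S.

```python
def dot(a, x):
    return sum(ai * xi for ai, xi in zip(a, x))

# Assumed feasible set S = {x in R^2 : x_1 <= 1 (C1), x_2 <= 1 (C2)}.
C1, C2 = ((1.0, 0.0), 1.0), ((0.0, 1.0), 1.0)
x1, x2 = (1.4, 0.8), (0.8, 1.4)          # two assumed exterior points

# Each original constraint cuts exactly one of the two points...
assert dot(C1[0], x1) > C1[1] and dot(C1[0], x2) <= C1[1]
assert dot(C2[0], x2) > C2[1] and dot(C2[0], x1) <= C2[1]

# ...but the positive combination C' = 0.5*C1 + 0.5*C2 cuts both points
# and remains valid for S (a nonnegative combination of valid cuts).
a = tuple(0.5 * u + 0.5 * v for u, v in zip(C1[0], C2[0]))
b = 0.5 * C1[1] + 0.5 * C2[1]
assert dot(a, x1) > b and dot(a, x2) > b
assert all(dot(a, v) <= b for v in [(0, 0), (1, 0), (0, 1), (1, 1)])
```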

Assumption 2. Any point in the set X^sep has an encoding length bounded above by a polynomial in n and φ. Also p = |X^sep| is bounded by a polynomial in n and φ.

In order to find a constraint that separates all the points in X^sep in the case conv(X^sep) ∩ S = ∅, we introduce the following linear program, whose feasible set consists of all inequalities valid for S with a violation greater than or equal to ε > 0 for each point in X^sep:

(PSEP)    min  0
          s.t. Σ_{i=1}^m λ_i (a_i x^l − b_i) ≥ ε,  ∀l ∈ {1, ..., p}   (u_l)
               λ_i ≥ 0,                            ∀i ∈ {1, ..., m}

where a_i denotes the ith row of the matrix A introduced in Assumption 1. Consider now the following slight modification of the dual of (PSEP), denoted (MDSEP), obtained by scaling the objective function by 1/ε and adding a constraint bounding the objective value:

(MDSEP)   z*_MDSEP = max  Σ_{l=1}^p u_l
          s.t. Σ_{l=1}^p u_l ≤ 1,
               Σ_{l=1}^p u_l (a_i x^l − b_i) ≤ 0,  ∀i ∈ {1, ..., m}   (λ_i)
               u_l ≥ 0,                            ∀l ∈ {1, ..., p}

This modified dual (MDSEP) almost (though not exactly) corresponds to finding a convex combination of the points to separate that belongs to S. We first show the following simple preliminary result.

Proposition 1. There exists a cut separating all the points in X^sep iff z*_MDSEP = 0.

Proof. Duality theory yields the equivalence between, on the one hand, the fact that the feasible set of problem (PSEP) is nonempty and, on the other hand, the fact that z*_MDSEP = 0. In the latter case (z*_MDSEP = 0), the optimal values λ*_i of the dual variables of (MDSEP) give a feasible vector of (PSEP) (for any ε > 0, up to multiplication by a positive scalar), i.e., a constraint separating all the points in the set X^sep.

What follows from the interpretation of the dual problem (MDSEP) given above is the following counterpart to Proposition 1.

Proposition 2. If there is no cut separating simultaneously all the points of X^sep, then conv(X^sep) ∩ S ≠ ∅.

Proof. If there does not exist a cut separating all the points of X^sep, then it follows from Proposition 1 that z*_MDSEP = 1. Let u* stand for an optimal solution of (MDSEP). Then the point Σ_{l=1}^p u*_l x^l lies in conv(X^sep) ∩ S.
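For p = 2 points, (MDSEP) is a linear program in the plane (u_1, u_2) and can be solved by brute-force vertex enumeration, which makes Propositions 1 and 2 easy to check on small instances. The sketch below is purely illustrative (the unit-square feasible set and the point pairs are assumptions): z* = 0 means a separating cut exists, z* = 1 yields a convex combination lying in S.

```python
from itertools import combinations

def solve_mdsep_p2(A, b, X):
    """Solve (MDSEP) for p = 2: max u_1 + u_2 subject to u_1 + u_2 <= 1,
    sum_l u_l (a_i . x^l - b_i) <= 0 for every row i, and u >= 0, by
    enumerating vertices of the feasible polygon in the (u_1, u_2)-plane."""
    rows = [((1.0, 1.0), 1.0)]                        # u_1 + u_2 <= 1
    for a, bi in zip(A, b):
        g = tuple(sum(aj * xj for aj, xj in zip(a, x)) - bi for x in X)
        rows.append((g, 0.0))                         # g . u <= 0
    rows += [((-1.0, 0.0), 0.0), ((0.0, -1.0), 0.0)]  # u >= 0
    best_u, best = (0.0, 0.0), 0.0                    # u = 0 is always feasible
    for (a1, b1), (a2, b2) in combinations(rows, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue
        u = ((b1 * a2[1] - b2 * a1[1]) / det,
             (a1[0] * b2 - a2[0] * b1) / det)
        if all(r[0] * u[0] + r[1] * u[1] <= rhs + 1e-9 for r, rhs in rows):
            if u[0] + u[1] > best:
                best_u, best = u, u[0] + u[1]
    return best, best_u

# Assumed S = unit square: Ax <= b with the four box constraints.
A = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
b = [1.0, 1.0, 0.0, 0.0]
z, _ = solve_mdsep_p2(A, b, [(1.4, 0.8), (0.8, 1.4)])    # conv(X) misses S
z2, u = solve_mdsep_p2(A, b, [(1.5, 0.5), (-0.5, 0.5)])  # conv(X) meets S
```

In the first call z = 0 (a separating cut exists, Proposition 1); in the second z2 = 1 and Σ_l u_l x^l lies in S (Proposition 2).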

From this we get that problem [MSEP] can be reduced to the resolution of (MDSEP). This result is used to establish the following proposition concerning the complexity of multiple-points separation. Note that, in what follows, polynomiality is understood in n, φ and the oracle time (thus independently of the m used in the expanded formulation introduced above).

Proposition 3. Under Assumptions 1 and 2, the strong multiple-points separation problem [MSEP] can be solved in oracle-polynomial time when S is given by a strong (single-point) separation oracle.

Proof. Assume S is given by a strong single-point separation oracle. Let F_MDSEP denote the feasible set of problem (MDSEP) introduced above. From now on, we work with an implicit definition of F_MDSEP through a strong separation oracle, which can be derived from the oracle for S. To be more precise, if the separation oracle is called with input u = (0, ..., 0), it returns that u ∈ F_MDSEP. Else, for a given u ∈ R^p_+ \ {0}, the known strong separation oracle over S is called with the vector (Σ_{l=1}^p u_l x^l) / (Σ_{l=1}^p u_l). If an inequality is returned by the latter oracle, it can be used to separate u; else u ∈ F_MDSEP. This separation oracle can be used to solve (MDSEP) by traditional constraint generation. Then, it follows from the polynomial equivalence between optimization and separation established by Grötschel et al. [17] that (MDSEP) can be solved in oracle-polynomial time. From Proposition 1 we have two cases: either z*_MDSEP = 0 or z*_MDSEP = 1. Let u* stand for an optimal solution.

If z*_MDSEP = 1, then from Proposition 1 we know that there exists no valid inequality that cuts off all the points in X^sep.

Consider now the case z*_MDSEP = 0. Note that F_MDSEP has facet-complexity bounded above by a polynomial in n and φ. This follows from Assumptions 1 and 2, the fact that a representation of F_MDSEP can be derived from a formulation of S whose inequalities have encoding length bounded by φ, and the following property: ⟨x^t y⟩ ≤ ⟨x⟩ + ⟨y⟩, ∀x, y ∈ R^n, where ⟨x⟩ denotes the encoding length of x. Define now a basic optimum standard dual solution to be an optimum dual solution corresponding to a standard representation of the primal feasible set.
Since F_MDSEP is given by a strong separation algorithm, it follows that a basic optimum standard dual solution (λ*_1, ..., λ*_N) can be found in oracle-polynomial time (see Theorem 6.5.14 in [17]), together with the corresponding constraints (g^1, h^1), ..., (g^N, h^N) arising in a standard formulation of F_MDSEP. For our purposes we consider that an equality constraint g^t x = h is represented by two inequalities, (g, h) and (−g, −h), so that the dual values λ* are nonnegative. Note that the dual solution λ* found satisfies the following relation, denoted by (R*): (1, ..., 1, 0) = Σ_{i=1}^N λ*_i (g^i_1, ..., g^i_p, h^i). In fact we have h^i = 0, ∀i ∈ {1, ..., N}, since in this case F_MDSEP = {0}.

Furthermore, given a valid inequality (g, h) ∈ R^{p+1} for F_MDSEP, a valid inequality for S from which (g, h) is derived can be computed in oracle-polynomial time. Consider for this the polyhedron Q = {(π, π_0) ∈ R^{n+1} | (π, π_0) is valid for S; g_j ≤ π x^j − π_0, ∀j ∈ {1, ..., p}}. Since (g, h) is valid for F_MDSEP, it is equivalent to or dominated by a constraint derived from a linear combination of constraints valid for F_MDSEP, i.e., there exist nonnegative coefficients (γ_l)_{l=1}^p (for the nonnegativity constraints) and (β_i)_{i=1}^m (for the constraints (s^i, 0) derived from inequalities valid for S) such that (g, h) is dominated by (−Σ_{l=1}^p γ_l e_l + Σ_{i=1}^m β_i s^i, 0), where e_l stands for the lth unit vector. The latter implies g_j ≤ Σ_{i=1}^m β_i (s^i)_j, ∀j ∈ {1, ..., p} (in case all the coefficients β are equal to zero, we may consider the valid inequality 0x ≤ 0 for S, so that we may assume that there always exists at least one positive coefficient

β). It follows that Q is nonempty, and from Assumptions 1 and 2, Q has facet-complexity bounded above by a polynomial in n and φ. Also, a strong separation algorithm for Q can easily be derived from a strong violation algorithm for S. Then, since the latter can be solved in oracle-polynomial time (by Theorem 6.4.9 in [17]), it follows that a point in Q can be computed in oracle-polynomial time (by Theorem 6.4.1 in [17]). Hence for any (g^i, h^i), i = 1, ..., N, a constraint (π^i, π^i_0) that is valid for S and from which (g^i, h^i) is derived can be computed in oracle-polynomial time. Finally, it follows from the relation (R*) that the inequality Σ_{i=1}^N λ*_i π^i x ≤ Σ_{i=1}^N λ*_i π^i_0, which is valid for S, separates all the points in X^sep (since it corresponds to a feasible solution of (PSEP) with ε = 1).
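The derived oracle used in the proof above is easy to state in code. In the sketch below (the oracle interface and the instance in the usage example are assumptions), `s_oracle(y)` returns None when y ∈ S and otherwise a valid cut (a, b) with a.y > b; the derived routine then either certifies u ∈ F_MDSEP or returns a cut g.u ≤ 0 valid for F_MDSEP and violated by u.

```python
def mdsep_oracle(u, points, s_oracle, tol=1e-9):
    """Strong separation oracle for F_MDSEP derived from a single-point
    oracle for S, following the proof of Proposition 3."""
    total = sum(u)
    if total <= tol:
        return None                       # u = 0 always lies in F_MDSEP
    n = len(points[0])
    # Call the oracle for S at the convex combination of the points.
    y = [sum(ul * x[j] for ul, x in zip(u, points)) / total for j in range(n)]
    cut = s_oracle(y)
    if cut is None:
        return None                       # the combination lies in S
    a, b = cut
    # g_l = a . x^l - b: the inequality g . u <= 0 is valid for F_MDSEP,
    # and violated by the input u since g . u = total * (a . y - b) > 0.
    return [sum(aj * xj for aj, xj in zip(a, x)) - b for x in points]
```

For instance, with S the unit square and two exterior points, a call with u = (1, 0) evaluates the S-oracle at the first point and translates the returned cut into a cut on u.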



2.2. Generic multiple-points separation procedure

The objective in this section is to introduce a generic algorithmic scheme for multiple-points separation. For this, we have to consider the two possible outputs of (MDSEP), i.e., whether conv(X^sep) ∩ S is empty or not.

2.2.1. Case conv(X^sep) ∩ S = ∅. In that case, by the linear discrimination alternative, there exists some hyperplane separating all the points in X^sep from S, and hence (PSEP) is feasible (for any ε > 0). This case (considered in Proposition 1) is illustrated in Figure 3(a).

Fig. 3. Illustration of the cases: (a) conv(X^sep) ∩ S = ∅ and (b) conv(X^sep) ∩ S ≠ ∅

2.2.2. Case conv(X^sep) ∩ S ≠ ∅. When separation is not possible for the set X^sep (as illustrated in Figure 3(b)), we should decrease its size. Assuming X^sep contains an optimal solution x* of the current relaxation, we know that if X^sep reduces to the set {x*}, we will be able to perform separation (unless x* is optimal). Moreover, when conv(X^sep) ∩ S ≠ ∅, following the proof of Proposition 2, we can actually obtain a feasible solution to the problem, which can be used to get a primal bound for the optimization problem.

Another way to proceed, instead of reducing X^sep, is to look for valid constraints separating subsets of X^sep. For instance, assuming all the points in X^sep lie outside the feasible set, we will see in Section 3 that it is possible to decompose X^sep into subsets Y^j (not necessarily disjoint) satisfying:
– S ∩ conv(Y^j) = ∅,
– S ∩ conv(Y^j ∪ {z}) ≠ ∅, ∀z ∈ (X^sep \ Y^j).
This decomposition procedure is illustrated in Figure 4.
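Given any separability test, such maximal sets Y^j can be grown greedily; since non-separability is preserved when points are added, a single pass around each exterior point already yields maximality. The sketch below is illustrative only: the 1-dimensional feasible set S = [0, 1] and the claim that a point set is separable from it iff all its points lie strictly on one side are assumptions made for the example.

```python
def maximal_separable_sets(points, is_separable):
    """Grow, around each point outside S, a maximal separable subset Y^j
    of `points` (the sets may overlap, as in the decomposition above).
    `is_separable(Y)` decides whether one valid cut can cut off all of Y."""
    sets = []
    for k in range(len(points)):
        if not is_separable([points[k]]):
            continue             # point lies in S: no valid cut separates it
        idx = [k]
        for j in range(len(points)):
            if j not in idx and is_separable([points[i] for i in idx] + [points[j]]):
                idx.append(j)
        if set(idx) not in [set(s) for s in sets]:
            sets.append(sorted(idx))
    return sets

# Assumed toy feasible set S = [0, 1] on the real line: a point set is
# separable iff all its points lie strictly on the same side of S.
def is_separable_1d(Y):
    return all(x > 1 for x in Y) or all(x < 0 for x in Y)
```

On the points [1.5, -0.5, 2.0, 0.5] this yields the index sets {0, 2} and {1}; the point 0.5 lies in S and belongs to no separable set.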

Fig. 4. Using multiple sets of exterior points to generate several cuts

2.2.3. Algorithmic scheme. Let x* stand for an optimal solution of the current relaxation of problem (PB) introduced in Section 1. At this point we do not give a precise definition of the points of X^sep, so as to keep the proposed algorithm general; however, we shall assume throughout this paper that x* always belongs to X^sep. The overall resolution scheme differs from classical constraint generation procedures only at the separation stage. Here, assuming the set of points X^sep is given, the separation procedure looks for a constraint violated by all of them. On the one hand, if it fails to find one (i.e., conv(X^sep) ∩ S ≠ ∅), there are two situations. First, if |X^sep| > 1, some points different from x* are removed from X^sep and the separation procedure is called again with the new subset of points. Second, if the separation fails with X^sep = {x*}, then x* is an optimal solution of the problem and the process can be terminated. On the other hand, if the separation procedure succeeds in finding (at least) one inequality separating all the points in X^sep, it is added to the current relaxation. The relaxation is then re-solved, and the whole process is iterated. A pseudo-code description is given in Figure 5.

Fig. 5. Description of a procedure for multiple-points separation

Generic multiple-points separation algorithm (GMPS)
1. Solve the relaxation → x*.
2. Update X^sep (x* ∈ X^sep).
3. While a violated constraint is not found and optimality is not achieved, do:
   Find a constraint separating X^sep from S;
   If a constraint has been found then
     - Add to the relaxation a violated constraint and go to Step 1;
   Else if |X^sep| > 1 then
     - Remove from X^sep a subset of points Q with x* ∉ Q: X^sep := X^sep \ Q;
   Else
     - x* is an optimal feasible solution;
   End if;
   End while;

Remark 1. Observe that if we do not include x* in the set X^sep, then it is not possible to guarantee that the solution of the LP relaxation will change after constraints are added. However, we can design other schemes in which x* does not belong to X^sep initially, but might be added to it if separation fails.

Remark 2. As mentioned in Section 2.2.2, the case when the multiple-points separation procedure fails to find an inequality separating all the points in X^sep can be handled differently, by looking for constraints separating different point subsets.

Two main features of the procedure described above need to be specified to obtain a practical implementation. First, a definition of the points lying in the set X^sep must be given. Then, in case the separation fails to generate a constraint, a procedure to remove points from X^sep has to be specified. Before dealing with a particular implementation, we introduce several results concerning the complexity of some problems involving multiple-points separation.

3. Complexity of some problems relating to multiple-points separation

The problems considered in this section deal with the way the point set X^sep may be processed in Step 3 of the procedure GMPS introduced in Section 2.2.3. In particular, we establish the NP-completeness of several problems that ask for particular subsets of X^sep (to be defined next) with maximum or minimum cardinality. In the remainder of this section we assume that multiple-points separation can be performed in oracle-polynomial time. We first introduce the following definitions.

Definition 3. A point set X = {x^1, ..., x^p} is said to be separable if there exists a constraint a^t x ≤ b separating all the points in X from the feasible set S, i.e., a^t x^i > b, ∀i ∈ {1, ..., p}. Otherwise it is said to be non-separable.

Definition 4. Given a non-separable point set X = {x^1, ..., x^p}, a subset Y ⊆ X is said to be a maximal separable set if Y is separable and, for all x ∈ (X \ Y), Y ∪ {x} is non-separable.

Definition 5. Given some point set X = {x^1, ..., x^p}, a subset Y ⊆ X is said to be minimal non-separable if |Y| ≥ 2, Y is non-separable and, for all x ∈ Y, Y \ {x} is separable.

In Section 3.1, we study the complexity of the problems which consist in finding a maximal separable set with maximum or minimum cardinality and in partitioning a point set into separable subsets. In Section 3.2, we perform a similar study for non-separable sets.

3.1. Complexity results on separable sets

3.1.1. Maximal separable set with maximum cardinality. Within the framework of a constraint generation procedure, given a non-separable point set X^sep, a possible criterion to evaluate the cut generated could be the number of points in X^sep that this cut separates. Then we may be interested in looking for a constraint separating a point subset Y ⊆ X^sep with largest cardinality. We show this problem is NP-hard by proving that the following decision problem is NP-complete:

[SEP MAX]
Instance: a set of p points in R^n: X^sep = {x^1, ..., x^p}, an integer K ≤ p.
Question: does there exist a separable set Y ⊆ X^sep satisfying |Y| ≥ K?

The proof of NP-completeness given here is by reduction from the independent set problem. An independent set in a graph G with vertex set V and edge set E is a set V' ⊆ V such that no two vertices in V' are joined by an edge in E. The independent set problem is:

Instance: an undirected graph G = (V, E), a positive integer K ≤ |V|.
Question: does G contain an independent set S ⊆ V satisfying |S| ≥ K?

To convert an instance of the independent set problem into one of [SEP MAX] we proceed as follows. We set n = |V| and define:
– for each vertex v_i ∈ V, the vector e_i ∈ R^n (where e_i stands for the ith unit vector in R^n),
– for each edge (v_i, v_j) ∈ E, the vector y_ij = (1/2)(e_i + e_j).
By defining the set of separation points X^sep = {e_1, ..., e_n} and the polytope P = conv{y_ij | (v_i, v_j) ∈ E}, we have a polynomial time transformation from an instance of the independent set problem into one of [SEP MAX]. Furthermore we have the following property:

Lemma 1. A set of vertices W ⊆ V is an independent set in G iff the point set Y = {(e_i)_{v_i ∈ W}} is separable.

Proof. [⇒] First we prove that if W is an independent set in G, then the inequality Σ_{i:v_i∈W} x_i ≤ 1/2 is valid for P and separates the point set Y from P.

Consider the optimization problem max{Σ_{i:v_i∈W} x_i | x ∈ P}, whose optimal objective value is attained at some extreme point of P, i.e., there exists an edge (v_i, v_j) ∈ E such that an optimal solution x* can be written x* = y_ij. Also, by hypothesis, W is an independent set, thus |{v_i, v_j} ∩ W| ≤ 1. It follows that the inequality mentioned above is valid for P. Finally, note that all the separation points in Y violate this inequality (each has left-hand side equal to 1 > 1/2), and so the first implication follows.

[⇐] Let Z ⊆ X^sep be a point set separable from P, i.e., Z = {(e_i)_{i∈I}}, where I corresponds to the set of indices of the points in X^sep belonging to Z. The set of vertices W ⊆ V in correspondence with Z is defined by W = {(v_i)_{i∈I}}. Assume by contradiction that W is not an independent set. This implies the existence of an edge (v_i, v_j) ∈ E with (v_i, v_j) ∈ W × W. Yet, by the construction of P, the midpoint y_ij of the segment [e_i, e_j] is in P. So no cut exists that separates the points e_i and e_j at the same time, which contradicts the assumption that Z is separable. It follows that W is an independent set in G.

Note that from Lemma 1, the point set Y (as defined above) is maximal separable iff W is an independent dominating set (i.e., W is an independent set such that any vertex v ∈ V \ W is adjacent to at least one vertex in W). Now, using Lemma 1, the following proposition is easily obtained by reduction from the independent set problem, which is NP-complete [13].

Proposition 4. Problem [SEP MAX] is NP-complete.
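The transformation and Lemma 1 can be exercised on a small assumed graph (here a 4-cycle; the helper names are ours, not the paper's): an independent W yields a cut Σ_{i∈W} x_i ≤ 1/2 that is valid for P and rejects all the corresponding unit vectors, while a non-independent W makes that cut invalid for P.

```python
def sep_max_instance(n, edges):
    """Build the [SEP MAX] instance of the reduction: separation points
    e_0, ..., e_{n-1} (unit vectors) and the vertices y_ij = (e_i + e_j)/2
    of the polytope P, one per edge of the graph."""
    def unit(i):
        return tuple(1.0 if k == i else 0.0 for k in range(n))
    x_sep = [unit(i) for i in range(n)]
    p_vertices = [tuple(0.5 * (s + t) for s, t in zip(unit(i), unit(j)))
                  for i, j in edges]
    return x_sep, p_vertices

def is_independent(W, edges):
    return not any(i in W and j in W for i, j in edges)

def cut_is_valid(W, p_vertices):
    """Lemma 1's candidate cut sum_{i in W} x_i <= 1/2: valid for P iff it
    holds at every vertex y_ij (it always cuts off each e_i with i in W,
    whose left-hand side equals 1 > 1/2)."""
    return all(sum(v[i] for i in W) <= 0.5 + 1e-9 for v in p_vertices)

# Assumed example: the 4-cycle; {0, 2} is independent, {0, 1} is not.
n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0)]
x_sep, p_vertices = sep_max_instance(n, edges)
```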



3.1.2. Maximal separable set with minimum cardinality. Depending on the separation procedure, handling many points may induce substantial additional computational work. Therefore, it might be interesting to generate a "good cut" in the sense that it would separate a maximal separable set, with the additional restriction that this set be of minimum cardinality. Concerning the associated decision problem, denoted [SEP MIN] (whose formulation differs from [SEP MAX] only by changing "|Y| ≥ K" to "|Y| ≤ K" in the question), the following proposition can be shown, analogously to Proposition 4, by reduction from the minimum independent dominating set problem [13].

Proposition 5. Problem [SEP MIN] is NP-complete.



3.1.3. Minimum-cardinality partition into separable sets. Assume that we are given a set X^sep of points exterior to the feasible set, used to generate cuts, and that we want to add sufficiently many cuts to separate all of the points of X^sep. In order to keep a concise relaxation of the whole problem, it may be desirable to generate as few cuts as possible. The following proposition (whose proof is omitted here for conciseness) shows that this problem is equivalent to partitioning X^sep into a minimum number of separable subsets.

Proposition 6. The minimum cardinality of a partition of X sep into separable subsets is equal to the minimum number of cuts separating all the points of X sep .

By using a construction analogous to the one introduced in Section 3.1.1, the decision problem of partitioning the set of vertices of a graph into k independent sets (the so-called "graph k-colourability" or "chromatic number" problem) can be polynomially reduced to the problem, denoted [SEP PART], of partitioning a set of infeasible points into k separable sets. Because the graph k-colourability decision problem is NP-complete [13], we get the following proposition.

Proposition 7. Problem [SEP PART] is NP-complete.



This result is related to the k-linear separability problem studied in the field of computational geometry, which consists in recognizing whether two point sets in R^n can be separated by k hyperplanes. In the particular case when k is fixed (differing from our problem above, where k varies), Megiddo [25] showed that for n = 2 this problem is NP-complete.

3.2. Complexity results on non-separable sets

In this section, we deal with problems relating to minimal non-separable sets. In addition to the fact that these sets provide a primal bound (as mentioned in Section 2.2.2), each point lying in such a set can be seen as supplying non-redundant information, in the sense that the points in a minimal non-separable set are necessarily affinely independent [3] (a property that can be easily derived from Carathéodory's theorem).

3.2.1. Minimal non-separable set with minimum cardinality. Here we are concerned with the complexity of the following decision problem:

[NON SEP MIN]
Instance: a set of p points X^sep = {x^1, ..., x^p} in R^n, a positive integer K ≤ p.
Question: does there exist a minimal non-separable set Y ⊆ X^sep with |Y| ≤ K?

The proof of NP-completeness considered here is by reduction from the minimum cardinality dominating set problem in a graph:

Instance: an undirected graph G = (V, E), a positive integer K ≤ |V|.
Question: does there exist a dominating set D ⊆ V for G satisfying |D| ≤ K?

In what follows we assume the graph G is not complete, and we use "maximal non-dominating set" to denote a non-dominating set W ⊆ V (i.e., there exists a vertex v ∈ V not adjacent to any vertex in W) such that, for each vertex v ∈ V \ W, W ∪ {v} is a dominating set. We first mention the following preliminary result (whose simple proof is omitted here for conciseness), to be used later.

Lemma 2. Every maximal non-dominating set is of the form V \ ({v} ∪ A(v)), v ∈ V, where A(v) denotes the set of all vertices adjacent to v.
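Lemma 2 suggests a direct enumeration, sketched below on an assumed path graph (the helper names are ours, not the paper's): generate the candidate V \ ({v} ∪ A(v)) for each vertex v, and keep those candidates that are indeed non-dominating and maximal.

```python
def neighbors(v, edges):
    return {b if a == v else a for a, b in edges if v in (a, b)}

def is_dominating(W, V, edges):
    # W is dominating iff every vertex lies in W or is adjacent to W.
    return all(v in W or neighbors(v, edges) & W for v in V)

def maximal_non_dominating_sets(V, edges):
    """Enumerate all maximal non-dominating sets using Lemma 2: every
    candidate has the form V minus ({v} union A(v))."""
    found = []
    for v in V:
        W = V - ({v} | neighbors(v, edges))
        if is_dominating(W, V, edges):
            continue
        if all(is_dominating(W | {u}, V, edges) for u in V - W) and W not in found:
            found.append(W)
    return found
```

On the path 0-1-2-3 this returns {2, 3} and {0, 1}; the candidate {3} obtained from v = 1 is discarded because adding vertex 2 keeps it non-dominating.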

What follows from Lemma 2 is a polynomial time algorithm to find all the maximal non-dominating sets in a graph: it comes down to checking, for each vertex v ∈ V, whether V \ ({v} ∪ A(v)) is a maximal non-dominating set (an operation that can be performed in polynomial time).

Let (W_1, ..., W_l) be the different maximal non-dominating sets issued from the procedure mentioned above applied to G. (Note that since G is not complete, these sets exist.) With each vertex v_i ∈ V we associate the vector e_i ∈ R^n, and define the set P = {y ∈ R^n | Σ_{i:v_i∈W_j} y_i ≤ 1 − ε, ∀j ∈ {1, ..., l}}, with ε = 1/n. Note that P is nonempty since it contains the null vector. Then we have the following equivalence.

Lemma 3. X ⊆ V is a dominating set iff the set of points Y = {e_i : v_i ∈ X} is non-separable.

Proof. [⇒] Let X = {v_1, ..., v_k} be a dominating set for G. We define the vector z ∈ conv(Y) by z = (1/|Y|) Σ_{e_i∈Y} e_i. For each maximal non-dominating set W_q, X contains at least one vertex in V \ W_q. It follows that Σ_{j:v_j∈W_q} z_j ≤ (k−1)/k ≤ 1 − ε, ∀q ∈ {1, ..., l}. So z ∈ P, and Y is not separable from P.

[⇐] Let Y = {e_i : v_i ∈ X} be some non-separable set, and assume by contradiction that X is a non-dominating set. Then there exists W ⊇ X such that W is a maximal non-dominating set. Considering the inequality relating to W in the definition of P, we have for every vector e_k ∈ Y: Σ_{i:v_i∈W} (e_k)_i = 1. Thus the inequality Σ_{i:v_i∈W} x_i ≤ 1 − ε, which is valid for P, separates all the points in Y: a contradiction.

From Lemma 3 it follows that the minimum cardinality dominating set problem (which is NP-complete [13]) is polynomially reducible to [NON SEP MIN]. This leads to the following proposition. Proposition 8. Problem [NON SEP MIN] is NP-complete.



3.2.2. Minimal non-separable set with maximum cardinality.

Analogously to Proposition 8, we can show the following result on the complexity of problem [NON SEP MAX], the "maximization" counterpart of problem [NON SEP MIN], by reduction from the problem consisting in finding a maximum-cardinality minimal dominating set in a graph [8].

Proposition 9. Problem [NON SEP MAX] is NP-complete.



3.2.3. Partitioning into non-separable sets.

Partitioning a set of points into non-separable subsets when solving an optimization problem could be used to get several primal bounds. The corresponding decision problem can be stated as follows.

[NON SEP PART]
Instance: a set of p points X^sep = {x_1, ..., x_p}, a positive integer K ≤ p.
Question: is there a partition of X^sep into k ≥ K non-separable subsets?

By using the transformation introduced in Section 3.2.1 and by reduction from the domatic number problem (which is NP-complete [13]), we get the following proposition (see [3] for further details).

Proposition 10. Problem [NON SEP PART] is NP-complete.



4. Computational experiments

Consider a linear program of the following form:

(P)   max c^t x
      s.t. Ax ≤ b
           x ∈ R^n

where the feasible set S = {x ∈ R^n | Ax ≤ b} is assumed to be bounded, and A ∈ R^{m×n}, b ∈ R^m. Because m ≫ n, we solve (P) using a constraint generation algorithm. In this section we first describe how a constraint generation algorithm using a multiple-points separation procedure can be implemented. We do so as mentioned in Section 2.2.3 by describing how the multiple-points separation is performed and by specifying the way the set of points used for separation is updated in the GMPS scheme. We then present numerical results obtained with this implementation on random linear programs and survivable network design problems.

4.1. Implementation of a multiple-points separation procedure

To find an inequality of the form πx ≤ π_0, valid for S and cutting all the points of a given point set X^sep = {x_1, ..., x_p}, we can solve the linear program:

(MSEP)   ε* = max ε
         s.t. π x_l ≥ π_0 + ε,               ∀l ∈ {1, ..., p},   (1)  (−u_l)
              π_j − Σ_{i=1}^m α_i a_ij = 0,  ∀j ∈ {1, ..., n},   (2)  (γ_j)
              π_0 − Σ_{i=1}^m α_i b_i = 0,                       (3)  (γ_0)
              Σ_{i=1}^m α_i = 1,                                 (4)  (δ)
              ε ∈ R, (π, π_0) ∈ R^{n+1}, α ∈ R^m_+

where the terms a_ij represent the coefficients of the matrix A. Constraints (1) ensure that the points are strictly separated from the set S iff ε* > 0. Constraints (2)−(4) require that the inequality generated is obtained as a convex combination of the inequalities of (P). Because there are many constraints in the description of (P), the number of columns of (MSEP) is large.
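For intuition, constraints (2)-(4) can be used to eliminate (π, π_0), leaving the equivalent problem of maximizing ε subject to αᵀ(Ax_l − b) ≥ ε for all l, with α in the simplex. The following Python sketch (ours, assuming SciPy is available; function name and reduced form are our illustration, not the authors' implementation) solves this reduced form for a small explicit instance:

```python
import numpy as np
from scipy.optimize import linprog

def multi_point_separation(A, b, X):
    """Sketch of (MSEP) after eliminating (pi, pi0) through (2)-(4): look for
    a convex combination alpha of the rows of A whose induced valid inequality
    (alpha^T A) x <= alpha^T b is violated by every point in X.
    Returns (eps, pi, pi0); eps > 0 means all points of X are cut off."""
    m, n = A.shape
    viol = A @ np.asarray(X).T - b[:, None]      # viol[i, l] = a_i x_l - b_i
    p = viol.shape[1]
    c = np.zeros(1 + m)
    c[0] = -1.0                                  # variables (eps, alpha); maximize eps
    A_ub = np.hstack([np.ones((p, 1)), -viol.T]) # eps - alpha^T (A x_l - b) <= 0
    b_ub = np.zeros(p)
    A_eq = np.zeros((1, 1 + m))
    A_eq[0, 1:] = 1.0                            # sum_i alpha_i = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * m)
    eps, alpha = res.x[0], res.x[1:]
    return eps, alpha @ A, alpha @ b
```

On the unit square {0 ≤ x ≤ 1} in R^2, two exterior points such as (2, 2) and (3, 1.5) are cut off with ε* = 1, whereas a point pair whose convex hull meets the square yields ε* ≤ 0, in accordance with Case 2.2 below.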

Multiple-Points Separation

15

After the resolution of one of its restrictions (w.r.t. the columns), supplying a vector of dual optimal values (−u*, γ*, δ*), the reduced costs of the columns not represented in the restriction (i.e., rows of A) can be computed. The reduced cost of the q-th row of A writes: c_q = −δ* + Σ_{j=1}^n a_qj γ_j* + γ_0* b_q. We have two cases:

– Case 1. If there exists at least one row q with reduced cost c_q > 0, one with maximum reduced cost is introduced into the current restriction of (MSEP) (breaking ties arbitrarily), which is solved again.
– Case 2. If all the rows of A have a nonpositive reduced cost, the solution of the current restriction is optimal for the complete formulation. We then have two subcases:
  – Case 2.1. ε* > 0: we get a cut separating all the points considered from S.
  – Case 2.2. ε* ≤ 0: no cut separates all the points of X^sep from S. It implies: conv(X^sep) ∩ S ≠ ∅.

This procedure may raise issues relating to convergence: if only the optimal solution of (MSEP) is used to separate the point set X^sep, it may induce the insertion of numerous redundant constraints into the relaxation (many of them corresponding to convex combinations of some subset of rows of the matrix A). A simple way to overcome this difficulty consists in inserting into the relaxation all the (not previously generated) constraints corresponding to some positive coefficient α_i > 0 in the optimal solution. This is how constraints are generated in both applications introduced later. Also note that in our applications (MSEP) will be solved to optimality (while it would be possible to stop its resolution as soon as a positive objective value is obtained).

The case ε* ≤ 0 can be used to update a primal bound. The following proposition shows how to compute a feasible solution in that case.

Proposition 11. If ε* ≤ 0 then the vector Σ_{l=1}^p u_l x_l is a feasible solution of problem (P).

Proof. Assume the vector (−u_l)_{l=1}^p corresponds to the dual optimal values relating to (1). The reduced cost of some vector (a_i, b_i) (corresponding to one constraint of the complete formulation) is then given by:

c_(a_i, b_i) = −ε* + Σ_{l=1}^p u_l (Σ_{j=1}^n a_ij (x_l)_j − b_i) ≤ 0.

It follows that a_i (Σ_{l=1}^p u_l x_l / Σ_{l=1}^p u_l) ≤ b_i for every index i. Since (u_l)_{l=1}^p satisfies the constraint Σ_{l=1}^p u_l = 1, which is valid for the dual, the proposition follows.
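The row-pricing step of Case 1 above is a vectorized one-line computation; a Python sketch (ours, using numpy; the tolerance is an assumption):

```python
import numpy as np

def price_rows(A, b, gamma, gamma0, delta, tol=1e-9):
    """Reduced costs of the rows of A w.r.t. the dual values
    (gamma, gamma0, delta) of the current restriction of (MSEP):
    c_q = -delta + sum_j a_qj * gamma_j + gamma0 * b_q.
    Returns (q, c_q) for a row of maximum reduced cost, with q = None
    when every reduced cost is nonpositive (the restriction is optimal)."""
    c = -delta + A @ gamma + gamma0 * b
    q = int(np.argmax(c))
    return (q, float(c[q])) if c[q] > tol else (None, float(c[q]))
```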

We now describe how the point set X^sep to be separated is updated at each iteration. The procedure denoted l-RELAX consists in defining X^sep as the set of the last l optimal solutions of the relaxations. With this particular definition of X^sep, the following result (which follows from Proposition 11) shows that the case ε* ≤ 0 directly leads to an optimal solution of problem (P) (so that we do not need here to reduce X^sep to perform separation, as mentioned in Section 2.2.3).

Proposition 12. Within the procedure l-RELAX, when no cut separating X^sep is found, i.e., ε* ≤ 0, an optimal solution of (P) is given by Σ_{l=1}^p u_l x_l, where (−u_l)_{l=1}^p stands for the dual optimal values of (MSEP) as defined in Section 4.1.
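The l-RELAX update can be sketched as the following generic loop (our Python illustration, not the authors' code; the two oracles are abstract placeholders, and `separate` is assumed to return only cuts not already in the relaxation):

```python
from collections import deque

def l_relax(solve_relaxation, separate, l):
    """Generic l-RELAX scheme: the separation point set X^sep is the set of
    the last l optimal solutions of the successive relaxations.  `separate`
    returns a (possibly empty) list of new cuts violated by those points;
    an empty list corresponds to eps* <= 0, i.e., optimality (Proposition 12)."""
    cuts = []
    last = deque(maxlen=l)          # sliding window of relaxation optima
    while True:
        x = solve_relaxation(cuts)  # optimize the current relaxation
        last.append(x)
        new_cuts = separate(list(last))
        if not new_cuts:            # no separating cut: x is optimal
            return x, cuts
        cuts.extend(new_cuts)
```

As a toy usage, maximizing x over a hidden constraint x ≤ 3 (cuts modeled as upper bounds) terminates after one cut with x = 3.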



4.2. Random linear programs

We first perform a computational experiment on randomly generated linear programs. After a brief description of how the data have been generated, we report computational results obtained on some instances.

4.2.1. Data description.

The linear programs generated have the following form:

      min c^t x
      s.t. Ax ≥ 1
           x ∈ R^n_+

where 1 stands for the all-ones vector in dimension m. The coefficients of the matrix A are generated as follows. For each constraint i ∈ {1, ..., m}, n scalars a_ij, j ∈ {1, ..., n}, are uniformly generated in the interval [−1, 1]. These coefficients are then multiplied by a factor so that the inequality Σ_{j=1}^n a_ij ≥ 1 holds. (The latter ensures that the generated problem has a nonempty feasible set, since it contains the all-ones vector.) The resulting LPs are completely dense and have no particular structure.

4.2.2. Computational results.

In Table 1, we present the results of our computational experiments. The columns of Table 1 have the following meaning.

– Data: name of the instance in the form "(number of variables) × (number of constraints)".
– Procedure: name of the separation procedure used.
– Iterations: number of iterations of the master problem (the relaxation of the whole program); in the last iteration no cut is generated.
– Constraints: total number of constraints in the last relaxation, which gives an optimal solution of the problem.
– Separation: CPU time (in seconds) spent in the separation procedure (i.e., in the resolution of (MSEP)).
– LP: CPU time (in seconds) spent in the resolution of the master program.
– Total: total CPU time (in seconds).

Computations have been performed on a PC with a Pentium III processor running at 660MHz, and using CPLEX 9.0 [10]. The initial relaxation only comprises the nonnegativity constraints on the variables (these are not included in the field "Constraints" of the tables). So as to compare the efficiency of multiple-points separation with a classical single-point separation scheme, we implemented a procedure called CL_MOST which generates at each iteration

an inequality with largest violation for the current optimal solution of the relaxation.

Table 1. Results on random LPs

Data          Procedure   Iterations  Constraints  Separation        LP     Total
100 × 10000   CL_MOST            221          220       15.76      5.54     21.30
              2-RELAX            153          214       27.34      3.82     31.16
              10-RELAX            74          198       22.00      1.85     23.85
              20-RELAX            71          230       26.08      2.25     28.33
              50-RELAX            83          221       34.44      2.22     36.66
100 × 50000   CL_MOST            198          197       70.79      4.32     75.11
              2-RELAX            147          196      124.25      3.48    127.73
              10-RELAX            75          178       93.87      1.97     95.84
              20-RELAX            69          189       99.31      1.90    101.21
              50-RELAX            82          199      114.76      1.88    116.64
300 × 10000   CL_MOST            616          615      124.88    394.64    519.52
              2-RELAX            389          611      224.37    255.74    480.11
              10-RELAX           169          578      192.28    119.02    311.30
              20-RELAX           124          581      218.94    123.15    342.09
              50-RELAX           128          647      391.08    127.54    518.62
300 × 25000   CL_MOST            622          621      320.92    387.06    707.98
              2-RELAX            408          616      552.93    362.49    915.42
              10-RELAX           190          576      441.58    158.48    600.06
              20-RELAX           137          554      433.25    128.30    561.55
              50-RELAX           134          648      665.67    138.44    804.11
600 × 5000    CL_MOST           1045         1044      194.71   4380.42   4575.13
              2-RELAX            629         1043      426.78   2749.10   3175.88
              10-RELAX           242         1022      460.74   1194.34   1655.08
              20-RELAX           184         1059      633.54    985.79   1619.33
              50-RELAX           153         1098     1271.60   1079.23   2350.83
600 × 10000   CL_MOST           1137         1136      445.85   5202.92   5648.77
              2-RELAX            706         1129      854.01   3438.67   4292.68
              10-RELAX           282         1124      834.61   1541.33   2375.94
              20-RELAX           200         1134     1006.68   1199.30   2205.98
              50-RELAX           170         1213     1911.92   1050.39   2962.31

First, we note that for each instance the sizes (in terms of constraints) of the relaxations giving the optimal solution are relatively close to each other across the different separation procedures. By contrast, the total number of iterations of the master program varies notably with the number of points used in the separation. As an example, on the instance "600 × 10000" it is more than divided by six between the procedures CL_MOST and 50-RELAX. We conclude that handling several points reduces the total time spent on the master program. Nonetheless this "gain" of time is partially offset by the additional effort required by the multiple-points separation; furthermore, if the point sets become too large, the total computation times increase again. These results suggest that the advantage of using a multiple-points separation procedure grows with the size of the instances. As an illustration, computation times are more than halved on the two largest problems (comparing the procedure 20-RELAX with CL_MOST).


4.3. Survivable network design problems

We now illustrate the use of multiple-points separation on survivable network design problems. While in the case of random linear programs an "explicit" formulation of the feasible set S was known, we shall work here with an implicit description through a particular family of valid inequalities. We first describe the solution procedure used, before reporting preliminary computational experiments.

4.3.1. Formulation and algorithmic scheme for separation.

We consider a communications network represented by a graph G = (V, E), where V corresponds to the set of nodes of the network and E to the set of links between nodes. Let K be a set of demands to be satisfied, each characterized by a triplet (s_k, t_k, d_k), where s_k stands for the emitter (or source node), t_k for the receptor (or sink) and d_k for the amount of traffic to be carried for demand k. We also consider a set R of states in which the network can be after edge failures. The subset of edges of the network that are operational in state r ∈ R is denoted by E_r. The normal functioning state r of the network is such that E_r = E. Given unit edge-capacity costs a_e, e ∈ E, we aim to compute the capacities (c_e)_{e∈E} on the links such that all demands can be satisfied in every state r ∈ R and the total cost of the network is minimized. The survivable network design problem can be formulated, in a form amenable to Benders' decomposition, as follows:

(SNDP)   Minimize Σ_{e∈E} a_e c_e
         s.t. (SAT_r(c_e)) feasible, ∀r ∈ R.

Note that the variables of (SNDP) are c_e, for e ∈ E, and the problem (SAT_r(c_e)) is defined as

(SAT_r(c_e))   Σ_{j^k ∈ P_k^r} (f_{j^k})^r = d_k,                       ∀k ∈ K,
               Σ_{k∈K} Σ_{j^k ∈ P_k^r : e ∈ j^k} (f_{j^k})^r ≤ c_e,     ∀e ∈ E_r,
               (f_{j^k})^r ≥ 0,                                          ∀k ∈ K, j^k ∈ P_k^r,

where P_k^r stands for the set of paths between the extremities of demand k in the graph G_r = (V, E_r), and the variables (f_{j^k})^r represent the flow on path j^k ∈ P_k^r used to satisfy demand k.

(SNDP) is solved by a constraint generation algorithm where the inequalities generated are Benders' feasibility cuts derived from (SAT_r(c_e)). We remind that (SAT_r(c_e)) is completely described by semi-metric inequalities [12,11], first introduced by Iri [18] and Onaga [32]. These inequalities are of the form Σ_{e∈E_r} ν_e^r c_e ≥ Σ_{k∈K} π_k^r d_k, where ν_e^r stands for the dual variable in (SAT_r(c_e)) corresponding to the capacity constraint on link e ∈ E_r, and π_k^r stands for the dual variable associated with the demand constraint of k.

They correspond (after multiplication by 1/Σ_{e∈E_r} ν_e^r) to elements of the semi-metric polytope of state r:

(SMP_r)   π_k^r ≤ Σ_{e ∈ j^k} ν_e^r,   ∀k ∈ K, j^k ∈ P_k^r,
          Σ_{e∈E_r} ν_e^r = 1,
          ν_e^r ≥ 0.

Although the semi-metric inequalities obtained over all the different states completely describe the feasible domain, this is only one particular representation of the feasible region. So, to determine whether the set conv(X^sep) ∩ S is empty or not, one has to consider all valid inequalities for S. This can be done by solving the following linear program, which looks for a valid inequality for S of the form πx ≥ π_0 (the form of the semi-metric inequalities) violated by the set of separation points X^sep = {x_1, ..., x_p}, which correspond to capacity vectors in this application:

(MSEP)   max ε
         s.t. π x_l ≤ π_0 − ε,                                     ∀l ∈ {1, ..., p},   (u_l)
              π^e − Σ_{i=1}^{|R|} Σ_{j=1}^{n(i)} α_ij π_ij^e = 0,  ∀e ∈ E,             (γ^e)
              π_0 − Σ_{i=1}^{|R|} Σ_{j=1}^{n(i)} α_ij b_ij = 0,                        (γ_0)
              Σ_{i=1}^{|R|} Σ_{j=1}^{n(i)} α_ij = 1,                                    (δ)
              α_ij ≥ 0,   ∀i, j,

with the notation:
– n(i): number of semi-metric vectors (extreme points of the semi-metric polytope) in state i,
– (π_ij, b_ij): j-th extreme point of the polytope of semi-metric inequalities in state i of the network,
– π_ij^e: component e of the vector π_ij.

Given the potentially huge number of extreme points relating to the different semi-metric polytopes, this problem is solved using a column generation algorithm. Starting from a restriction of the complete formulation with variables corresponding to some subset of extreme points, the following steps are applied iteratively:

1. we optimize the current restriction, supplying a vector of optimal dual values (γ*, γ_0*, δ*),
2. we compute columns having positive reduced cost to be inserted into the current restriction. The reduced cost of a semi-metric vector (a, b) is given by c = −δ* + Σ_{e∈E} a_e γ^e* + γ_0* b.
We can determine, for each state r ∈ R, the most positive reduced cost among the columns associated with this state by solving the following linear program:

(SPB_r)   max Σ_{e∈E} a_e γ^e* + γ_0* b
          s.t. (a, b) ∈ SMP_r.
Table 2. Data characteristics

Data     |V|   |E|   Demands   States
data 1    20    80      47       15
data 2    25   100      84       15
data 3    30   150      90       60
data 4    40   200      20       25
data 5    40   200     110       30
These problems are solved using the following constraint generation procedure. After a relaxation has been optimized (with optimal solution (ν′, π′)), a shortest path P_k^ν′ with edge weights ν′ is computed for each demand k. If π_k′ > Σ_{e ∈ P_k^ν′} ν_e′, then the constraint π_k ≤ Σ_{e ∈ P_k^ν′} ν_e is added to the relaxation; otherwise the current solution satisfies all such constraints. This process is iterated until no violated constraint is found. (Note the dependence on the parameter r for this auxiliary program.)

To generate columns, (SPB_r) is solved for each state r ∈ R, and a list L of the extreme points issued from these subproblems is updated. Solving (SPB_r) supplies an extreme point of (SMP_r). If this vector has positive reduced cost (computed using the dual optimal values of (MSEP)), it is compared with the points obtained in other states and stored in L; if new, it is inserted into the list. After all states have been processed, all the points in L are inserted as new columns into (MSEP).

3. If no semi-metric vector with positive reduced cost was found over all states, then (MSEP) has been solved to optimality: stop. Else go to step 1.

4.3.2. Computational results.

The instances used in our experiments are described in Table 2. Computations were performed on a PC with a Pentium III processor running at 660MHz, and using CPLEX 9.0 [10]. The results obtained are given in Table 3, where ALL-RELAX represents a separation procedure in which all former optimal solutions of the relaxations are used in the separation. At each iteration the procedure CL_MOST generates a semi-metric inequality with largest violation for the current optimal solution of the relaxation. First note that, for all instances evaluated, the time spent in the separation phase is much greater than the time spent in solving the master program, which contrasts with the random linear programs considered above.
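The shortest-path separation step for the semi-metric constraints described above can be sketched as follows (our Python illustration under an assumed adjacency representation mapping each node to (neighbor, edge index) pairs; names are ours):

```python
import heapq

def shortest_path_length(adj, weights, s, t):
    """Dijkstra with nonnegative edge weights nu; `adj[u]` is a list of
    (neighbor, edge_index) pairs, `weights[e]` the weight of edge e."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, e in adj[u]:
            nd = d + weights[e]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def violated_metric_constraints(adj, nu, pi, demands, tol=1e-9):
    """For each demand k = (s, t, d), compare pi_k with the nu-shortest path
    between its extremities; pi_k exceeding that length means the constraint
    pi_k <= sum over the path edges of nu_e is violated."""
    out = []
    for k, (s, t, _) in enumerate(demands):
        d = shortest_path_length(adj, nu, s, t)
        if pi[k] > d + tol:
            out.append((k, d))
    return out
```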
Therefore the only relevant improvements bear on the time spent in the separation process, through a reduction of the total number of iterations. These experiments indicate that the usefulness of multiple-points separation procedures grows with the number of states, for a fixed graph and demand set. If the number of states is small, the performance of CL_MOST dominates that of the procedures l-RELAX. When the number of states becomes sufficiently large, a decrease of the total solving time can be achieved with multiple-points separation procedures, by reducing the total number of iterations through the generation of several constraints at a single separation step. For instance, the number of

iterations is more than divided by six between the procedures CL_MOST and ALL-RELAX on the largest instance "data 5".

Table 3. Results on survivable network design problems

Data     Procedure   Iterations  Constraints  Separation      LP     Total
data 1   CL_MOST            136          135       95.15    0.26     95.41
         2-RELAX             96          125      107.70    0.15    107.85
         10-RELAX            51          133       54.92    0.08     55.00
         20-RELAX            41          124       44.33    0.06     44.39
         ALL-RELAX           41          125       45.37    0.06     45.43
data 2   CL_MOST            138          137      269.75    0.15    269.90
         2-RELAX            105          138      427.65    0.18    427.83
         10-RELAX            46          142      217.79    0.12    217.91
         20-RELAX            41          154      222.77    0.07    222.84
         ALL-RELAX           40          152      248.70    0.02    248.72
data 3   CL_MOST            300          299     7228.83    1.80   7230.63
         2-RELAX            179          279    14419.10    1.03  14420.13
         10-RELAX            74          260     5328.83    0.46   5329.29
         20-RELAX            66          236     4126.47    0.41   4126.88
         50-RELAX            67          250     4096.61    0.43   4097.04
         ALL-RELAX           67          250     4090.34    0.42   4090.76
data 4   CL_MOST            160          159     2617.40    0.42   2617.82
         2-RELAX            139          189     4739.24    0.45   4739.69
         10-RELAX            59          180     1884.50    0.18   1884.68
         20-RELAX            54          175     1212.13    0.15   1212.28
         ALL-RELAX           51          184     1194.69    0.19   1194.88
data 5   CL_MOST            669          668    19386.40   20.18  19406.58
         2-RELAX            406          665    45020.00   14.31  45034.31
         10-RELAX           132          597    13118.30    5.20  13123.50
         20-RELAX           105          574     9532.89    4.46   9537.35
         50-RELAX           103          569     9214.56    4.06   9218.62
         ALL-RELAX          101          589     9353.09    4.16   9357.25

5. Conclusion

We have introduced a generalization of the classical scheme of constraint generation algorithms, which basically consists in separating several points at once from the feasible set in order to generate constraints. On the theoretical side, the oracle-polynomial-time equivalence between single-point and multiple-points separation has been established. Also, we proved that many problems involving multiple-points separation are NP-complete. From a more practical viewpoint, we introduced an implementation performing multiple-points separation. Its efficiency has been tested computationally on random linear programs and survivable network design problems. Our experiments show that such procedures may substantially decrease the total number of iterations needed to solve the master program and thus induce relevant memory and time savings. We are currently studying ways of performing multiple-points separation different from those presented here. For example, a promising idea is to take better advantage of particular problem structures to avoid the resolution
of a linear program as presented in this paper, and thereby improve the overall performance of procedures relying on a multiple-points separation process. We are also studying in more depth the efficiency of multiple-points separation procedures involving definitions of the separation points other than the ones considered here. Another direction of research under development is the use of multiple-points separation within interior point algorithms and in the more general context of convex optimization.

Acknowledgements. We wish to thank two anonymous referees and an associate editor for many valuable comments and suggestions.

References

1. A. Astorino and M. Gaudioso: Polyhedral separability through successive LP. Journal of Optimization Theory and Applications 112(2), 265–293 (2002)
2. W. Ben-Ameur and J. Neto: Acceleration of cutting plane and column generation algorithms. Submitted (2003)
3. W. Ben-Ameur and J. Neto: Multipoint separation. Research Report 04012 RS2M, Institut National des Télécommunications, Evry, France (2004)
4. J.F. Benders: Partitioning procedures for solving mixed variables programming problems. Numerische Mathematik 4, 238–252 (1962)
5. K.P. Bennett and O.L. Mangasarian: Robust linear programming discrimination of two linearly inseparable sets. Optimization Methods and Software 1, 23–34 (1992)
6. R.A. Bosch and J.A. Smith: Separating hyperplanes and the authorship of the disputed Federalist papers. American Mathematical Monthly 105(7), 601–608 (1998)
7. S. Boyd and L. Vandenberghe: Convex Optimization. Cambridge University Press (2004)
8. G.A. Cheston, G. Fricke, S.T. Hedetniemi and D.P. Jacobs: On the computational complexity of upper fractional domination. Discrete Applied Mathematics 27, 195–207 (1990)
9. V. Chvátal: Edmonds polytopes and a hierarchy of combinatorial problems. Discrete Mathematics 4, 305–337 (1973)
10. CPLEX callable library, CPLEX Optimization, Inc.
11. G. Dahl and M. Stoer: A cutting plane algorithm for multicommodity survivable network design problems. INFORMS Journal on Computing 10, 1–11 (1998)
12. G. Dahl and M. Stoer: A polyhedral approach to multicommodity survivable network design. Numerische Mathematik 68, 149–167 (1994)
13. M.R. Garey and D.S. Johnson: Computers and Intractability. W.H. Freeman, New York (1979)
14. A.M. Geoffrion: Generalized Benders decomposition. Journal of Optimization Theory and Applications 10, 237–260 (1972)
15. J.L. Goffin and J.P. Vial: Convex nondifferentiable optimization: a survey focussed on the analytic center cutting plane method. Optimization Methods and Software 17, 805–867 (2002)
16. R.E. Gomory: Outline of an algorithm for integer solutions to linear programs. Bulletin of the American Mathematical Society 64, 275–278 (1958)
17. M. Grötschel, L. Lovász and A. Schrijver: Geometric Algorithms and Combinatorial Optimization. Springer (1988)
18. M. Iri: On an extension of the maximum-flow minimum-cut theorem to multicommodity flows. Journal of the Operations Research Society of Japan 13, 129–135 (1971)
19. M. Jünger, G. Reinelt and S. Thienel: Practical problem solving with cutting plane algorithms in combinatorial optimization. In: W. Cook, L. Lovász and P. Seymour (eds.), Combinatorial Optimization, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 111–152 (1995)
20. J.E. Kelley: The cutting-plane method for solving convex programs. Journal of the SIAM 8, 703–712 (1960)
21. L.G. Khachiyan: A polynomial algorithm in linear programming. Soviet Mathematics Doklady 20, 191–194 (1979)
22. O.L. Mangasarian: Linear and nonlinear separation of patterns by linear programming. Operations Research 13(3), 444–452 (1965)
23. O.L. Mangasarian, R. Setiono and W.W. Wolberg: Pattern recognition via linear programming: theory and applications to medical diagnosis. In: T.F. Coleman and Y. Li (eds.), Large-Scale Numerical Optimization, SIAM, Philadelphia, 22–30 (1990)
24. O.L. Mangasarian, W.N. Street and W.W. Wolberg: Breast cancer diagnosis and prognosis via linear programming. Operations Research 43, 570–577 (1995)
25. N. Megiddo: On the complexity of polyhedral separability. Discrete and Computational Geometry 3, 325–337 (1988)
26. M. Minoux: Network synthesis and optimum network design problems: models, solution methods and applications. Networks 19, 313–360 (1989)
27. J.E. Mitchell: Interior point methods for combinatorial optimization. In: T. Terlaky (ed.), Interior Point Methods in Mathematical Programming. Kluwer Academic Publishers (1996)
28. J.E. Mitchell and B. Borchers: Solving linear ordering problems with a combined interior point/simplex cutting plane algorithm. Technical Report, Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, NY 12180-3590 (1997)
29. J.E. Mitchell, P. Pardalos and M.G.C. Resende: Interior point methods for combinatorial optimization. In: D.-Z. Du and P. Pardalos (eds.), Handbook of Combinatorial Optimization. Kluwer Academic Publishers (1998)
30. G.L. Nemhauser and L.A. Wolsey: Integer and Combinatorial Optimization. John Wiley, New York (1988)
31. Y. Nesterov: Cutting plane algorithms from analytic centers: efficiency estimates. Mathematical Programming 69, 149–176 (1995)
32. K. Onaga and O. Kakusho: On feasibility conditions of multicommodity flows in networks. IEEE Transactions on Circuit Theory 18, 425–429 (1971)
33. M. Padberg and G. Rinaldi: A branch-and-cut algorithm for the resolution of large-scale symmetric traveling salesman problems. SIAM Review 33, 60–100 (1991)
34. J.B. Rosen: Pattern separation by convex programming. Journal of Mathematical Analysis and Applications 10, 123–134 (1965)
35. P.M. Vaidya: A new algorithm for minimizing convex functions over convex sets. Mathematical Programming 73, 291–341 (1996)
36. A.F. Veinott: The supporting hyperplane method for unimodal programming. Operations Research 15, 147–152 (1967)
37. Y. Ye: Interior-Point Algorithms: Theory and Analysis. John Wiley & Sons (1997)