Unifying search algorithms for CSP

Narendra Jussien
École des Mines de Nantes
4 rue Alfred Kastler
F-44307 Nantes Cedex 3
France

Olivier Lhomme
Ilog SA
1681 route des Dolines
F-06560 Valbonne
France

Abstract

Ginsberg and McAllester [5] have shown that systematic and non-systematic search algorithms can be quite close. In this paper, we go one step further in the direction of understanding the relationships between systematic and non-systematic search algorithms by introducing the PLM model. We introduce a generic search algorithm for solving csp, and we show that search algorithms can be decomposed into primitives. A reduced set of primitives is sufficient to express almost all existing search algorithms. Knowledge of these primitives is of great help both for comparing search algorithms and for configuring new ones.

1 Introduction

In recent years, many new search algorithms for solving Constraint Satisfaction Problems (csp) have been proposed, ranging from improvements of chronological backtracking, local search algorithms and genetic algorithms to hybrid approaches combining several search methods. Some questions come to mind: Why were so many algorithms needed? What are their similarities? Their differences? Does a unifying framework exist? In a new search method, is everything new? To compare the different algorithms, or to design a new search method, we need to know some principles, some properties of the different algorithms. Such properties seem to be easy to study: in AI, a general search framework has been used for years. Nevertheless, if every search algorithm is an instance of this general framework, why do so many search algorithms exist? Backjumping, learning [21, 23, 18], filtering techniques, heuristics for variable or value orderings, repair methods, greedy methods, Tabu Search (TS) [7], GSAT [22], conflict-based search, revision, symmetry breaking, Dynamic Backtracking (DBT) [6], MAC-DBT [13], Dynamic Domain Splitting (DDS) [14], PBS [9], LDS [8], IDFS [17], ... are all interesting ideas. However, they are not easy to compare or to combine; some seem to be incompatible, but how can we prove that? And the usual general search framework does not help much: based on

state spaces and operators, it is too general to capture the precise properties we are looking for.

In this paper, we show that most search algorithms can be decomposed into three components P, L and M where:

• P is a propagation component,
• L is a learning component,
• M is a moving component.

The first two components, P and L, prune the remaining search space in order to avoid subparts that would necessarily lead to a dead end; they exploit, respectively, information about the remaining search space and information about the search already performed. The third component, M, is responsible for moving in the search space. It chooses a direction in the remaining search space and performs a move in that direction. The P and L components constrain the allowed movements in the search space: their purpose is to reduce the freedom of the M component in choosing moves. These three components are themselves decomposed into more basic operators, which underline similarities and differences between search algorithms.

A generic search algorithm, PLM, which propagates constraints, learns, and moves in the search space, is introduced in this paper. It provides a simple and common mold for many existing search algorithms. It also leads to new search algorithms that are different combinations of existing P, L and M components. Indeed, this schema has already produced a new and quite efficient algorithm, path-repair (PR), which combines local search, filtering and the use of conflicts. PR was born when looking for an example of the interest of the generic PLM search for designing new search algorithms.

The paper is organized as follows. Section 2 describes the three components of search algorithms. Section 3 presents our generic search algorithm. Finally, Section 4 shows how existing algorithms fit in that generic framework and introduces a taxonomy of search algorithms.

2 Three components: P, L and M

Our claim is that most search algorithms can be decomposed into a propagation component, a learning component and a moving component. This section describes those three components.

We consider here Constraint Satisfaction Problems (csp) represented by a pair (V, C), where C is a set of constraints and V a set of variables. The goal of a search algorithm is to find a solution¹ satisfying all the constraints in C.

¹ The notion of solution clearly depends on the kind of problem to be solved: for a csp, a solution is found when the domains of the variables are reduced to singletons; for a numeric csp, a solution is found when all intervals (representing the domains) are of a small enough size; for scheduling problems, a solution is found when all disjunctive constraints have been made conjunctive; etc.


We assume throughout the paper that variables may be discrete or continuous, and that domains are represented by unary constraints. Moreover, the enumeration mechanism classically used when solving csp can be considered as a series of constraint additions and retractions. Those constraints are called decision constraints. Indeed, enumeration is not limited to variable assignment: it can be done through any kind of decision constraint, such as ordering constraints between tasks in scheduling, splitting constraints for numeric csp, etc.

The current state of a search algorithm is represented by a 5-tuple P = {V, C, CP, CL, CD} where:

• V is the set of variables,
• C is the initial set of constraints,
• CP is a set of redundant constraints added by the propagation component of the algorithm. For example, the domain reductions that occur through arc-consistency maintenance [16] can be considered as additions of unary constraints.
• CD is the set of decision constraints that are active at the considered step of the search. Notice that the current position in the search space is completely identified by C ∪ CD (CP being redundant).
• CL is a set of constraints that have been learnt during search (e.g. inconsistent states). It can be seen as a memory of the past of the search. Notice that, like a human memory, information in this set can be forgotten during search.

A state for which the set of solutions of the csp (V, C ∪ CD) is not empty is called a consistent state. In the following, we handle sets of constraints equally as sets or as logical formulae; the empty set is then denoted by true.

Let us informally outline the generic search that will be given in Section 3. It consists in a loop over the three following steps (a minimal sketch of the corresponding state follows this list):

1. Use the P component: it propagates the current set of constraints (C ∪ CD), thus modifying the set of redundant constraints CP.
2. If the propagation leads to an inconsistent state, the L component is used to gather information from the identified dead end, thus modifying the set of constraints CL.
3. Use the M component: it moves in the search space, thus modifying the set of constraints CD. A movement can be a forward move (e.g. classical partial solution extension), a backward move (e.g. classical backtracking, backjumping, etc.), or another kind of move.

The next three subsections precisely describe the semantics of the P, L and M components in terms of modifications of P = {V, C, CP, CL, CD}, the state of the search algorithm.
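As a minimal sketch, the 5-tuple state can be written down directly; the names below (PLMState, the field names) are our illustrative choices, not the paper's.

```python
# A minimal sketch of the PLM search state, assuming each constraint is
# represented as a Python object. All names here are illustrative.
from dataclasses import dataclass, field

@dataclass
class PLMState:
    variables: set            # V: the set of variables
    constraints: set          # C: the initial set of constraints
    propagated: set = field(default_factory=set)  # CP: redundant constraints added by P
    learnt: set = field(default_factory=set)      # CL: memory of past dead ends
    decisions: set = field(default_factory=set)   # CD: active decision constraints

    def position(self):
        """The current position in the search space is C ∪ CD (CP is redundant)."""
        return self.constraints | self.decisions
```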

2.1 The P component

The aim of the P component is to propagate information throughout the constraint network when a decision is made. A propagation component consists of a filtering operator (filter) and a state interpreter (check).

filter prunes the search space in order to avoid subparts that can be proved to lead to a failure. Filtering is done by adding (redundant) constraints to the current state while taking into account the current set CD of decision constraints. The algorithm used does not need to be complete. We have:

    Pfilter({V, C, CP, CL, CD}) = {V, C, CP', CL, CD}   where C ∧ CD ⇒ CP'.

In practice, the constraints added by the P component are domain reduction information.

check examines the resulting state after the application of filter. It provides three possible results:

• solution found: a solution has been found.
• no solution: no solution exists given the current set of decision constraints CD (most of the time, this occurs when a variable has an empty domain). In that particular case, we require the P component to provide a conflict, i.e. a set N ⊂ CD of decision constraints such that the csp (V, C ∪ N) has no solution. Notice that the whole set CD is always a conflict; however, in some searches, smaller and therefore more precise conflicts are preferred².
• not enough information: the current state does not allow deciding whether or not there exists a solution to the problem (including the decisions made so far). More decisions need to be made.

It is important to notice that the computational cost of check should remain very low. For example, when dealing with classical csp, check can be written as: if there exists an empty domain in the current state, return no solution; if all domains are singletons, return solution found; otherwise return not enough information.
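The classical-csp check just described translates almost literally into code; the sketch below assumes domains are stored as a dict mapping each variable to a set of values (our own data layout, not the paper's).

```python
# A sketch of the classical-csp check operator described above,
# assuming domains are kept as a dict mapping each variable to a set of values.
def check(domains: dict) -> str:
    if any(len(dom) == 0 for dom in domains.values()):
        return "no solution"            # an empty domain: current decisions are contradictory
    if all(len(dom) == 1 for dom in domains.values()):
        return "solution found"         # every domain is a singleton
    return "not enough information"     # more decisions are needed
```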

2.2 The L component

The aim of the L component is to make sure that the search mechanism avoids, as much as possible, returning to states that have already been explored and proved to be solution-less.

² Computing precise conflicts has been studied since works on TMS [4] and ATMS [3]. More recent works include [10] and [11], which show that efficient techniques do exist to compute precise conflicts when using filtering techniques.


Learning, in a rough analogy with the brain, is an interaction between two processes: a recording process and a forgetting process. In our model, a learning component consists of three operators: a recording operator record, a forgetting operator forget and a checking operator check_memory.

check_memory is a predicate that answers whether a given state is known to be contradictory, by checking the memory CL.

record records a new piece of knowledge N. Notice that this recording may itself lead to forgetting other pieces of knowledge (typically for space-bound reasons, or because N is more general than a piece of knowledge in CL, which can then be erased). We have:

    Lrecord({V, C, CP, CL, CD}, N) = {V, C, CP, CL', CD}

where CL' does not necessarily imply CL. However, some pre- and post-conditions hold over the current and resulting states: CD constitutes a new dead end (C ⇒ ¬N, N ⊂ CD, Lcheck_memory(CL, CD) is true), and CL' is a set of redundant constraints (C ⇒ CL'). The aim of record is to record that the current position is a dead end (and to ensure that the M component cannot retry that same position immediately at least, and preferably never again). Therefore, we must have Lcheck_memory(CL', CD) = false. A typical example consists in choosing CL' such that CL' ⇒ ¬N.

forget forgets some pieces of knowledge. We have:

    Lforget({V, C, CP, CL, CD}, N) = {V, C, CP, CL', CD}

Preconditions for the call to Lforget are: Lcheck_memory(CL, CD) = true and C ⇒ CL. Lforget, being a forgetting process, only ensures that CL ⇒ CL'.
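As an illustration, a size-bounded nogood memory fits these three operators; the sketch below is our own naming, assuming conflicts are sets of decision constraints remembered as frozensets (the nogood ¬N), with the oldest nogood forgotten when the memory is full.

```python
# A sketch of a size-bounded learning component, assuming each conflict N
# is a set of decision constraints and is remembered as the nogood "not N".
from collections import deque

class SizeBoundedMemory:
    def __init__(self, capacity: int):
        self.nogoods = deque()          # CL, oldest first
        self.capacity = capacity

    def check_memory(self, decisions: set) -> bool:
        """True iff the current position CD is NOT known to be a dead end."""
        return not any(nogood <= decisions for nogood in self.nogoods)

    def record(self, conflict: set) -> None:
        """Record the conflict N; may forget to respect the space bound."""
        self.nogoods.append(frozenset(conflict))
        if len(self.nogoods) > self.capacity:
            self.forget()

    def forget(self) -> None:
        """Forget the oldest recorded nogood (CL ⇒ CL' trivially holds)."""
        self.nogoods.popleft()
```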

2.3 The M component

Unlike the P and L components, the M component does not aim at pruning the search space but at exploring it. The current position in the search space is described by CD, the set of decision constraints.

In several search algorithms, the move differs depending on the consistency or inconsistency of the csp associated with the current node of the search space. For example, in chronological backtracking, when the current csp is consistent, another variable is assigned a value, whereas when a dead end occurs, a previous state is first restored. Thus the M component consists of two operators: extend performs a move when the csp associated with the current node in the search space is consistent, and repair performs a move when it is inconsistent. extend and repair can only modify the current position in the search space, i.e. they can only change the decision constraints. We have:

    Mextend({V, C, CP, CL, CD}) = {V, C, CP, CL, CD'}     (1)
    Mrepair({V, C, CP, CL, CD}) = {V, C, CP', CL, CD'}    (2)

Let S = {V, C, CP, CL, CD} be the current state. Preconditions for extend are:

• Pcheck(S) = not enough information (the current position is not contradictory),
• Pfilter(S) = S (propagation has been performed),
• Lcheck_memory(CL, CD) = true (the current position is not contradictory with what has been learnt).

Preconditions for repair are:

• Pcheck(S) = no solution (the current position is contradictory),
• Lcheck_memory(CL, CD) = false (the current position is contradictory with what has been learnt; actually, this has been learnt just before the call to repair, as we will see in the general algorithm in Section 3).

For extend, the set of allowed moves is constrained by the set C of original constraints, by CP, the propagation done so far, and by CL, the dead ends encountered (and remembered) so far. More precisely, in equation 1, CD' is such that Pcheck({V, C, CP, CL, CD'}) ≠ no solution, i.e. the new position is compatible with the old one, and Lcheck_memory(CL, CD') = true, i.e. the new position is not known to be a dead end.

For repair, the set of allowed moves is constrained by C and CL as for extend. However, the set CP may need some modifications in order to ensure that C ∪ CD' ⇒ CP'. Indeed, inferences (redundant constraints) in CP that depend on decisions present in CD but not in CD' should disappear from CP', since they may no longer be redundant constraints. This can be quite easy: in Standard Backtracking (BT), for example, the repair operator consists in replacing the most recent (positive) decision constraint in CD by its negation, removing from CP all inferences made since the last call to extend (which added the undone decision!), and removing from CD all negative decision constraints added since that same time. Solving this problem in a more general way, however, is related to handling dynamic retraction of constraints. For example, [2] introduced dynamic arc-consistency enforcement; more recently, [13] introduced the use of explanations (sets of constraints justifying value removals). Notice that such explanations can be used to compute accurate conflicts [11], as required by the P component.

Both operators of the M component may share information about the path followed to reach the current node. For example, in chronological backtracking, a stack is needed to return to previously encountered nodes. For the sake of simplicity of our model, we do not explicitly refer to such shared information, but keep in mind that some hidden data may be shared between the operators of the M component. A hedged sketch of the BT repair just described follows.
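The sketch assumes decision constraints are kept in chronological order as (constraint, is_positive) pairs and that each inference in CP is tagged with the index of the decision that produced it; all of this data layout is our assumption, not the paper's.

```python
# A sketch of the BT repair move described in the text. Assumes each decision
# is a (constraint, is_positive) pair kept in chronological order, and each
# inference in CP is tagged with the index of the decision that produced it.
def bt_repair(decisions: list, inferences: list):
    # Drop the negative decisions added since the last positive one.
    while decisions and not decisions[-1][1]:
        decisions.pop()
    if not decisions:
        return None                      # no positive decision left: search exhausted
    constraint, _ = decisions.pop()      # undo the most recent positive decision
    epoch = len(decisions)
    decisions.append((("not", constraint), False))   # assert its negation
    # Clean CP: drop inferences made since the undone decision was added.
    inferences[:] = [(c, e) for (c, e) in inferences if e < epoch]
    return decisions
```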


(1)  procedure PLM(V, C, CD) begin
(2)      Pb ← {V, C, ∅, ∅, CD}
(3)      repeat
(4)          Pb ← Pfilter(Pb)
(5)          switch Pcheck(Pb)
(6)              case no solution:
(7)                  Pb ← Lforget(Mrepair(Lrecord(Pb, N)))
(8)              case solution found:
(9)                  return Pb
(10)             case not enough information:
(11)                 Pb ← Mextend(Pb)
(12)         endswitch
(13)     until conditions of termination
(14) end

Figure 1: The PLM generic algorithm

3 PLM: a generic search algorithm

The three components (P, L and M) are combined to provide a generic search algorithm for solving csp: the PLM (Propagate, Learn and Move) algorithm.

3.1 The algorithm

The generic PLM algorithm is given in Figure 1. It ensures that all pre- and post-conditions of the P, L and M operators hold. The principles of the algorithm are the following (a transcription in code is sketched after this list):

• the search starts from an initial set of decision constraints that may range from the empty set (typically for backtrack-based search) to a total assignment (typically for local search algorithms);
• decisions are made (extend) and propagated (filter) until a contradiction occurs;
• when a contradiction occurs (line 6), the information related to the dead end (a conflict N) is learnt (record), the current state is repaired (repair) and some information is forgotten (forget);
• the search terminates as soon as a solution is found (line 8) or the conditions of termination (line 13) are fulfilled. Conditions of termination can be, for example, a maximum number of iterations, the exhibition of a proof that no solution exists, etc.
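For concreteness, here is a minimal, hedged Python transcription of Figure 1. The operator objects (P, L, M), the result strings, and the convention that P.check also returns the conflict N are our own assumptions layered on the paper's model (the authors' actual implementation is the PaLM system [12]); PLMState is the sketch from Section 2.

```python
# A sketch of the generic PLM loop of Figure 1, assuming P, L and M objects
# expose the operators of Section 2 and that P.check also returns the
# conflict N when it answers "no solution".
def plm(variables, constraints, decisions, P, L, M, terminated):
    state = PLMState(variables, constraints, set(), set(), decisions)
    while True:
        state = P.filter(state)
        verdict, conflict = P.check(state)
        if verdict == "no solution":
            state = L.record(state, conflict)   # learn the dead end N
            state = M.repair(state)             # move away from it
            state = L.forget(state)             # possibly drop old knowledge
        elif verdict == "solution found":
            return state
        else:                                   # "not enough information"
            state = M.extend(state)             # make one more decision
        if terminated(state):
            return None                         # termination conditions reached
```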

3.2 Systematic vs. non-systematic search

Each given combination of P, L and M components gives a different search algorithm. It is interesting to know whether a given combination is systematic or not. In general, the property of being systematic may

come from a subtle combination of the respective work done by the three components. It is thus quite difficult to characterize this property in a generic way. Nevertheless, we provide the following proposition, which states that if stronger propagation and learning operators are used, a systematic search algorithm remains systematic. Let us first define what stronger operators are.

A propagation component P is stronger than a propagation component Q (P ≫ Q) iff:

    Pcheck(S) = n.e.i.  ⇒  Qcheck(S) = n.e.i.
    Qcheck(S) ≠ n.e.i.  ⇒  Pcheck(S) = Qcheck(S)
    ∀S, Pfilter(S) = SP, Qfilter(S) = SQ  ⇒  (CP ⇒ CQ)

where SP = {V, C, CP, CL, CD}, SQ = {V, C, CQ, CL, CD} and n.e.i. = not enough information.

A learning component L is stronger than a learning component K (L ≫ K) iff:

    Lcheck_memory(Cl, Cd) = true  ⇒  Kcheck_memory(Cl, Cd) = true
    ∀S, Lop(S) = SL, Kop(S) = SK  ⇒  (CL ⇒ CK)

where SL = {V, C, CP, CL, CD}, SK = {V, C, CP, CK, CD} and op is either record or forget.

Proposition 1 If P/L/M defines a systematic search, then any P'/L'/M such that P' ≫ P and L' ≫ L defines a systematic search.

Proof (sketch): If there is no solution to C, all the positions corresponding to the allowed moves for P/L/M will be tried or inferred as dead ends. Since P' ≫ P and L' ≫ L, P' and L' infer at least the same dead ends without trying them. Hence M will also explore all the allowed moves in P'/L'/M.

4 Instantiating the PLM model

Now it is time to specialize our generic algorithm. In the following, we express several well-known techniques (and some new ones) as instances of PLM. We present both systematic and non-systematic algorithms. We then synthesize the different values for the P, L and M components, hence defining a new taxonomy of search algorithms.


4.1 Systematic algorithms

BT and MAC. Standard Backtracking does not perform any propagation and only tests the satisfiability of fully instantiated constraints. When a failure occurs, a conflict is merely defined as the whole current set CD of decisions made so far. Moving upon a success of the propagation amounts to the addition of an instantiation constraint. Moving upon a dead end consists in reconsidering the latest possible choice, as explained in Section 2.3. The L component is anecdotal since learnt information is used only once. We have the following simple equations:

    Pfilter(S) = S
    Lrecord({..., CL, ...}, N) = {..., {¬N}, ...}
    Lforget({..., CL, ...}) = {..., ∅, ...}
    Mextend({..., CD}) = {..., CD ∪ {c}}
    Mrepair({..., CP, ..., CD}) = {..., CP', ..., CD' ∪ {¬c'}}

where c is a decision constraint that allows going to a child node of CD, CD' corresponds to the position of the latest depth-first-search ancestor of CD that still has an unexplored child, and c' is the decision constraint representing the previous child that has just been rejected. A variable ordering heuristic is easily incorporated in BT by specifying the decision constraint c used in Mextend.

Interleaving arc-consistency maintenance within BT consists in modifying the filtering operator of the P component:

    Pfilter({V, C, CP, CL, CD}) = {V, C, CP', CL, CD}

where CP' is the arc-consistent closure of C ∪ CD. Notice that this describes the MAC algorithm [20].

Conflict-directed Backjumping. Conflict-directed BackJumping (CBJ) [18] is an enhancement of BT from which a single but significant difference can be noticed: the P component provides much more precise conflicts, leading to improved behavior regarding backtracking points. Moreover, the L component is enhanced in order to keep track of past conflicts. MAC-CBJ [19] integrates arc-consistency filtering in CBJ; the only difference is the definition of the filtering operator of the P component, which computes the arc-consistent closure of C ∪ CD as in MAC (a sketch of such an arc-consistency filter follows).
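The arc-consistent closure used by the MAC-style filter can be computed, for instance, with an AC-3 style algorithm [16]; the sketch below is hedged to binary constraints, with a data layout of our own choosing (both orientations of each constraint are assumed to be present as separate arcs).

```python
# A sketch of an AC-3 style filter for the MAC instance of P, assuming binary
# constraints given as a dict mapping each ordered variable pair (x, y) to a
# relation test rel(vx, vy) -> bool, with both orientations present.
# Domain reductions play the role of CP'.
from collections import deque

def ac3_filter(domains: dict, constraints: dict) -> bool:
    """Reduce domains to their arc-consistent closure; False iff a domain empties."""
    queue = deque(constraints.keys())
    while queue:
        x, y = queue.popleft()
        rel = constraints[(x, y)]
        # Revise x against y: keep only values of x with a support in y.
        revised = {vx for vx in domains[x]
                   if any(rel(vx, vy) for vy in domains[y])}
        if revised != domains[x]:
            domains[x] = revised
            if not revised:
                return False             # empty domain: Pcheck would say "no solution"
            # Re-examine every arc pointing at x.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True
```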


Dynamic Backtracking. DBT [6] replaces backtracking by a local modification of the current solution. The completeness and the efficiency of the approach are ensured by a specific recording mechanism. Starting from the CBJ operators, the first difference is in the definition of the L component. Formally, one can see that:

    Lrecord({..., CL, ...}, N) = {..., CL ∪ {¬N}, ...}
    Lforget({..., CL, ...}) = {..., CL', ...}

Given that each constraint in CL has the form ¬(xj1 = vj1 ∧ ... ∧ xjk = vjk), CL' is built from CL by removing all constraints in CL that have at least two variables xi (i ∈ {j1, ..., jk}) such that the current value of xi is different from vi. Notice that if we generalize from constraints having at least two such variables to constraints having at least k+1 such variables, we obtain a well-known generalization of DBT: k-relevance-bounded learning [1].

The second difference with CBJ is the repair operator of the M component, which here consists in erasing only the most recent culprit decision constraint:

    Mrepair({..., CP, ..., CD}) = {..., CP', ..., CD'}

where CD' is CD \ {c}, c being the latest decision constraint such that CD' is compatible with CL (by construction, we know that at least one such decision constraint exists: it belongs to the last recorded conflict). Remember that CP therefore needs to be cleaned of the inferences that depend upon c, leading to CP'.
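The DBT forgetting rule above is mechanical enough to sketch; we assume nogoods are stored as dicts mapping each variable to the value it was recorded with (our own layout, not the paper's).

```python
# A sketch of DBT's relevance-bounded forget: drop every stored nogood in
# which at least `bound` variables currently disagree with the recorded values
# (bound = 2 gives DBT, bound = k+1 gives k-relevance-bounded learning [1]).
def relevance_forget(nogoods: list, assignment: dict, bound: int = 2) -> list:
    kept = []
    for nogood in nogoods:               # nogood: {variable: recorded value}
        disagreements = sum(1 for var, val in nogood.items()
                            if assignment.get(var) != val)
        if disagreements < bound:
            kept.append(nogood)          # still relevant: keep it in CL'
    return kept
```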

MAC-DBT. MAC-DBT [13] and DDS [14] add arc-consistency and bound-consistency maintenance, respectively, to DBT. Their description in the PLM model derives from that of DBT by modifying the P component as was done for MAC or MAC-CBJ above.

4.2 Non-systematic algorithms

The PLM model is also capable of handling non-systematic searches.

Tabu search. In TS [7], a position CD in the search space is a complete assignment of the variables. A Tabu Search is characterized by the size K of the tabu list, which defines the past positions that are declared tabu and to which the algorithm is not allowed to return. We have:

    Pfilter(S) = S
    Lrecord({..., CL, ...}, N) = {..., CL' ∪ {¬N}, ...}
    Lforget(S) = S
    Mextend: not used
    Mrepair({..., CD}) = {..., CD'}

where CL' is equal to CL if |CL| < K and to CL \ {c} otherwise (c being the oldest constraint in CL), and where CD' is a neighboring³ position of CD that is fully compatible with CL. Moreover, the conflict provided by the algorithm is, as for BT, the whole set CD.

GSAT. GSAT [22] is a boolean csp algorithm. As in TS, the position CD in the search space is a complete assignment of the variables of the problem. The algorithm is quite close to TS except that no information is recorded and the movements in the neighborhood are more strictly controlled. GSAT is characterized by two numbers, max-tries and max-flips: if the current number of iterations is a multiple of max-flips, the repair operator of the M component chooses a new random position; otherwise, it flips the variable that results in the greatest decrease in the number of unsatisfied constraints in C.

Path-repair. PR [15] combines the propagation-based nature of MAC-DBT and the true freedom (in the search space exploration) given by a local search algorithm such as TS. We have:

    Pfilter({V, C, CP, CL, CD}) = {V, C, CP', CL, CD}
    Lrecord({..., CL, ...}, N) = {..., CL' ∪ {¬N}, ...}
    Lforget(S) = S
    Mextend({..., CD}) = {..., CD ∪ {c}}
    Mrepair({..., CP, ..., CD}) = {..., CP', ..., CD'}

where CP' is the arc-consistent closure of C ∪ CD; CL' is equal to CL if |CL| < K and to CL \ {oldest constraint} otherwise; c is a decision constraint that allows going to a child node of CD; and CD' is CD minus a heuristically chosen decision constraint c' such that CD' is compatible with CL (when no such c' exists, the conditions of termination are reached). Remember that CP therefore needs to be cleaned of the inferences that depend upon c', leading to CP'. PR is a good example of the new algorithms that can be obtained using the PLM model. As an illustration, the TS repair move above can be sketched as follows.

³ The definition of the neighborhood depends on the problem being solved.
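The sketch below folds record and forget into the move for brevity; `neighbors` and `cost` (the number of violated constraints) are problem-dependent and are our illustrative assumptions, not part of the paper's model.

```python
# A sketch of the TS repair move: pick the best neighboring complete
# assignment that is compatible with the tabu memory CL (here, a bounded
# deque of past positions). neighbors() and cost() are problem-dependent.
from collections import deque

def tabu_repair(position: dict, tabu: deque, K: int, neighbors, cost) -> dict:
    allowed = [n for n in neighbors(position)
               if n not in tabu]            # Lcheck_memory: skip tabu positions
    best = min(allowed, key=cost)           # greedy choice among allowed moves
    tabu.append(position)                   # Lrecord: current position becomes tabu
    if len(tabu) > K:
        tabu.popleft()                      # Lforget: drop the oldest tabu position
    return best
```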

4.3 A taxonomy of search algorithms

From the previous examples of instantiations, one can define a (non-limitative) list of possible values for our operators. Each of these values is given a short code, so that we can introduce a new notation characterizing any given search algorithm by the values of its three components: P/L/M.

Propagation component. The propagation component is mainly characterized by:

• its filtering operator: a simple consistency check (o) as in BT, a classical local consistency enforcement algorithm (e.g. arc-consistency, ac; bound-consistency, bc), etc.;
• its conflict computation mechanism: none (X-o) or a pragmatic one as in [11] (X-e), for any filtering X.

    Algorithm   PLM model
    BT          o-o/su/bt
    MAC         ac-o/su/bt
    CBJ         o-e/su/bt
    MAC-CBJ     ac-e/su/bt
    DBT         o-e/rb/jp
    MAC-DBT     ac-e/rb/jp
    DDS         bc-e/rb/jp
    TS          o-o/tb/jp
    GSAT        o-o/su/jp
    PR          ac-e/tb/jp

Table 1: PLM description of classical algorithms

Learning component. The learning component is characterized by the lifetime of the recorded information: not used (o), single use (su), time-bounded use (tb), size-bounded use (sb) as in [21], relevance-bounded use (rb), etc.

Moving component. Our examples illustrate two types of movements: a backtrack-like set of movements (bt) or a jump-like set of movements, exploring the neighborhood of a position (jp).

A taxonomy. Using our short codes allows a quick description of the presented examples, highlighting relations and differences between them. Table 1 gives the PLM description of the different algorithms. This three-dimensional space of search algorithms presents many untried spots. Our generic algorithm provides both a way of identifying most of them and a framework to express their behavior⁴.

⁴ Moreover, there exists an implementation of this general algorithm which allows the specification of any P, L and M components for a search algorithm: the PaLM system [12]. That implementation also provides conflict computation routines as described in [11].

5 Conclusion

The contribution of this paper is to consider search algorithms as combinations of three components: P, L and M. This is a step towards a characterization of the space of search algorithms, and we hope it may help in understanding the relationships between search algorithms. Moreover, new combinations of existing P, L and M components lead to many new algorithms, some of which are worth trying; the PaLM system [11] can be used for experimentation.

Our scheme can be extended in several directions. For example, we have already seen that some implementation tricks cannot be expressed in the current model (such as hidden connections between the two M operators). Also, we have assumed that knowledge to be learnt can only come from a failure of the filtering operator; this is not mandatory: the generic algorithm only has to be modified


to call the L operators also when propagation succeeds. A third direction to extend the scheme is to allow knowledge that cannot be expressed as constraints to be learnt and used in the M operators.

References

[1] Roberto J. Bayardo Jr. and Daniel P. Miranker. A complexity analysis of space-bounded learning algorithms for the constraint satisfaction problem. In AAAI'96, 1996.

[2] Christian Bessière. Arc consistency in dynamic constraint satisfaction problems. In Proceedings AAAI'91, 1991.

[3] Johan de Kleer. An assumption-based TMS. Artificial Intelligence, 28:127–162, 1986.

[4] J. Doyle. A truth maintenance system. Artificial Intelligence, 12:231–272, 1979.

[5] Matthew Ginsberg and David McAllester. GSAT and dynamic backtracking. In Alan Borning, editor, Principles and Practice of Constraint Programming, volume 874 of Lecture Notes in Computer Science. Springer, May 1994. (PPCP'94: Second International Workshop, Orcas Island, Seattle, USA).

[6] Matthew L. Ginsberg. Dynamic backtracking. Journal of Artificial Intelligence Research, 1:25–46, 1993.

[7] F. Glover and M. Laguna. Tabu Search. In C. Reeves, editor, Modern Heuristic Techniques for Combinatorial Problems. Blackwell Scientific Publishing, 1993.

[8] William D. Harvey and Matthew L. Ginsberg. Limited discrepancy search. In Proceedings IJCAI-95, Vol. 1, pages 607–615, Montréal, Québec, Canada, 1995.

[9] Ulrich Junker. Preference-based search for scheduling. In Proceedings AAAI/IAAI 2000, pages 904–909, 2000.

[10] Ulrich Junker. QUICKXPLAIN: Conflict detection for arbitrary constraint propagation algorithms. In IJCAI'01 Workshop on Modelling and Solving Problems with Constraints, Seattle, WA, USA, August 2001.

[11] Narendra Jussien. e-constraints: explanation-based constraint programming. In CP01 Workshop on User-Interaction in Constraint Satisfaction, Paphos, Cyprus, 1 December 2001.


[12] Narendra Jussien and Vincent Barichard. The PaLM system: explanation-based constraint programming. In Proceedings of TRICS: Techniques foR Implementing Constraint programming Systems, a post-conference workshop of CP 2000, pages 118–133, Singapore, September 2000.

[13] Narendra Jussien, Romuald Debruyne, and Patrice Boizumault. Maintaining arc-consistency within dynamic backtracking. In Principles and Practice of Constraint Programming (CP 2000), number 1894 in Lecture Notes in Computer Science, pages 249–261, Singapore, September 2000. Springer-Verlag.

[14] Narendra Jussien and Olivier Lhomme. Dynamic domain splitting for numeric csp. In European Conference on Artificial Intelligence, pages 224–228, Brighton, United Kingdom, August 1998.

[15] Narendra Jussien and Olivier Lhomme. Local search with constraint propagation and conflict-based heuristics. In Seventeenth National Conference on Artificial Intelligence (AAAI 2000), pages 169–174, Austin, TX, USA, August 2000.

[16] Alan Mackworth. Consistency in networks of relations. Artificial Intelligence, 8(1):99–118, 1977.

[17] Pedro Meseguer. Interleaved depth-first search. In Proceedings IJCAI-97, pages 1382–1387, 1997.

[18] Patrick Prosser. Hybrid algorithms for the constraint satisfaction problem. Computational Intelligence, 9(3):268–299, August 1993. (Also available as Technical Report AISL-46-91, Strathclyde, 1991).

[19] Patrick Prosser. MAC-CBJ: maintaining arc-consistency with conflict-directed backjumping. Research Report 95/177, Department of Computer Science, University of Strathclyde, 1995.

[20] Daniel Sabin and Eugene Freuder. Contradicting conventional wisdom in constraint satisfaction. In Alan Borning, editor, Principles and Practice of Constraint Programming, volume 874 of Lecture Notes in Computer Science. Springer, May 1994. (PPCP'94: Second International Workshop, Orcas Island, Seattle, USA).

[21] Thomas Schiex and Gérard Verfaillie. Nogood recording for static and dynamic constraint satisfaction problems. International Journal of Artificial Intelligence Tools, 3(2):187–207, 1994.

[22] Bart Selman, Hector Levesque, and David Mitchell. A new method for solving hard satisfiability problems. In AAAI'92, Tenth National Conference on Artificial Intelligence, pages 440–446, 1992.

[23] Gérard Verfaillie and Thomas Schiex. Solution reuse in dynamic constraint satisfaction problems. In AAAI'94, pages 307–312, Seattle, WA, 1994.
