
Hybrid Constraint Algorithm for the Maintenance Scheduling of Electric Power Units
Nuno Gomes, Zita Vale (ngomes,[email protected])
Polytechnic Institute of Porto / Institute of Engineering, Portugal

Abstract-- Maintenance scheduling of generating units is an important problem of power systems operation. Among the different approaches to this problem, Constraint Programming (CP) based ones seem to be the most flexible, despite some lack of efficiency. In this paper a hybrid CP method, named Guided Constraint Search, is presented. The method combines ideas from Guided Local Search with CP in order to overcome the efficiency problems of pure CP approaches. Tests with real data are also presented and comparisons with other approaches are made.

Index Terms-- Power systems; Maintenance scheduling; Hybrid Constraint Programming; Optimization

I. INTRODUCTION

This work was supported in part by the Foundation for the Development and Technology (FCT), under the project ISEPMain.

In recent history, the electricity market has undergone profound changes. Liberalization processes, globalization, the appearance of small private producers, and the increasing institutional investment in renewable energy, among others, have increased the interest in finding new ways to cut costs. Beyond economic factors, modern companies need to provide a high quality service in order to remain competitive. Maintenance scheduling of generating units (MSGU) is one of the important problems of power system operation. Maintenance outages have a great effect on system reliability, unit availability, and production costs. Past experience has shown that effective scheduling can save a considerable amount of operational costs, helping electricity companies to be more competitive in terms of energy price while increasing system reliability. Another important goal of MSGU is to minimize the risk of capacity shortage throughout the year. The main objective of the MSGU problem consists of determining, for each predicted maintenance task, a specific start time in the scheduling horizon (e.g. a year), while satisfying the system constraints and maintaining system reliability. This objective should be accomplished while some criteria are optimized (e.g. the sum of operation and maintenance costs is minimized) [1].

Traditional optimization techniques, such as Integer Programming (IP), Dynamic Programming (DP) and Branch-and-Bound (BB), have been proposed to solve this problem since the early days of MSGU research. For example, in 1975, Dopazo and Merrill [2] developed a 0-1 IP formulation, which is guaranteed to find the optimal solution when one exists. In 1983, Yamayee and Sidenblad [3] continued with the use of DP with Successive Approximations, together with a cumulative method of evaluating the stage cost function. The latter reduced the running time by an order of magnitude. In spite of these successive improvements, as the size of the problem increases, the size of the search space, and hence the running time of these algorithms, grows exponentially. To overcome this difficulty, stochastic methods have been introduced. For example, in 1991, Satoh and Nara [4] applied the Simulated Annealing (SA) method to the MSGU problem, showing that it was able to deal with problem sizes impracticable for IP. The same authors later refined their technique to include a Genetic Algorithm (GA) and a Tabu Search element [5]. In the same year, Dahal et al. [6] presented a GA using a binary representation; further, they showed that an integer representation reduced the search space of the GA and also reduced the execution time of the algorithm. Finally, hybrid evolutionary techniques were investigated in [7] and [8].

Despite the performance improvements of some of the above methods, they have drawbacks. Specifically, they have difficulties dealing with the complexity of some constraints and providing the flexibility needed to model actual MSGU problems. In [9], the authors have shown that Constraint Programming (CP) is a good way to overcome the flexibility problems of the above methods, while maintaining almost the same efficiency. In this paper, we present a hybrid constraint method with the same flexibility as that of [9], but capable of competing in terms of efficiency with the best methods presented so far.

II. CONSTRAINT PROGRAMMING AND PROGRAMMING WITH CONSTRAINTS

Combinatorial search problems, like the MSGU problem, must be formulated as Constraint Satisfaction Problems (CSP) within the CP framework.
A CSP is a tuple (X, D, C), where X is the set of the n problem variables, D is the set of their respective finite domains [D(X1), D(X2), ..., D(Xn)], and C is the set of problem constraints. A solution to the problem is a set of variable/value pairs [(X1,v1), (X2,v2), ..., (Xn,vn)], where each vi is a value from the domain of Xi, which satisfies all the constraints. Considering this formulation, we can say that a framework to solve CSPs should provide tools to define the variables and the constraints, and to keep all values consistent with the constraints during the search. One framework that provides all of these tools is Constraint Programming. The structure of a CP program is the following:

1. Define the variables
2. Define the constraints
3. Until all the variables are instantiated or all values have been tried do
   3.1. Select a variable
   3.2. Select a value
   3.3. Instantiate the variable. If the instantiation fails, backtrack to point 3.2 and select another value
4. Return the solution if it exists
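The skeleton above can be sketched in plain Python (a minimal illustration with names of our choosing; real CP systems such as ECLiPSe add sophisticated constraint propagation on top of this bare loop):

```python
def backtrack_search(variables, domains, constraints, assignment=None):
    """Minimal CSP backtracking search: pick a variable, try each value,
    and backtrack when a constraint is violated."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment  # every variable instantiated: a solution
    # Step 3.1: select a non-instantiated variable (first unassigned here)
    var = next(v for v in variables if v not in assignment)
    # Step 3.2: try each value of its domain
    for value in domains[var]:
        assignment[var] = value
        # Step 3.3: keep only assignments consistent with every constraint
        if all(c(assignment) for c in constraints):
            result = backtrack_search(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]  # instantiation failed: backtrack, try next value
    return None  # no value works: signal failure to the caller

# Toy problem: two maintenance tasks of duration 2 whose starts must not overlap
variables = ["S1", "S2"]
domains = {"S1": [1, 2, 3], "S2": [1, 2, 3]}
constraints = [lambda a: ("S1" not in a or "S2" not in a)
                         or abs(a["S1"] - a["S2"]) >= 2]
print(backtrack_search(variables, domains, constraints))
```

The constraint is written to succeed on partial assignments, so consistency can be checked after every instantiation, as the loop above prescribes.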

First, it is necessary to define the problem variables and their domains, and next the constraints over the variables. This constitutes the model of the problem. Then, it is necessary to instantiate the variables. To do that, a variable must be chosen among the set of non-instantiated variables, and then instantiated with a value of its domain. If the instantiation fails (e.g. a constraint is violated) the search backtracks and other values are tried. This constitutes the search procedure. During the search procedure, the consistency of all constraints must always be kept, which requires removing from the variable domains all values that violate a constraint. This action is usually called Constraint Solving. At present there are several systems that provide the tools to develop problem models and maintain constraint consistency efficiently. These systems are classified as Constraint Programming Systems if they are based on procedural languages, or Constraint Logic Programming Systems if they are based on logic languages. Some examples are ILOG Solver [10] for the first case, and ECLiPSe [11] or CHIP [12] for the second one. ECLiPSe, like other CP systems, provides a large set of predefined constraints, as well as the tools to develop new ones. If the constraints are simple, CP techniques like Arc and Path Consistency [13] are used to solve them; if they are complex, several techniques from Graph Theory, Operations Research or applied Discrete Mathematics, among others, are used. These complex constraints are often called Global Constraints [12]. The large number of different areas where CP has been applied with success (see [14] for a survey) demonstrates the potential of this technology.

III. GUIDED CONSTRAINT SEARCH

The initial search space of a CSP is the Cartesian product of all the variable domains [D(X1) x D(X2) x ... x D(Xn)].
Constraint satisfaction algorithms can prune this search space by removing some of the inconsistent values from it. Even then, due to the computation associated with the constraint satisfaction algorithms, when the search space is large it can be impractical to search for the optimal solution. In order to overcome this drawback, we propose a new procedure. The idea is to use a fitness function to choose, in each iteration, only the z most promising values of each variable domain, and then to use CP to solve the corresponding sub-problem and find a local optimum. If the fitness function is adequate, it is possible that the local optimum is also the global one, and a lot of search time is saved. Naturally, if the fitness function is not adequate, the results can be worse than if no division had been made. The idea is illustrated in Figure 1.

Figure 1 - Search space division for two variables

Figure 1 represents the search space of a problem with two variables (X1, X2), each with k domain values. Black points are feasible solutions. The problem search space size is k^2. An adequate GCS fitness function would choose, for the domain of each variable, the values corresponding to block A in Figure 1. Then, the first local optimum returned by the CP module would also be the global one. This would save a lot of work exploring the rest of the search space. Naturally, an inadequate fitness function could choose the values corresponding to block B, then C, then D and finally A. This would be even worse than making no division at all. As we can see, the fitness function is essential for the efficiency of the GCS algorithm.

A. Subspace Definition

The selection of the values that define the sub-space in each iteration is made using ideas from Guided Local Search [15]. Similarly to it, we define a utility (inutility) function, penalties and costs. However, we differ from it in the way we use these values. For each variable/value pair (Xi, vj) the inutility function returns a quantity that indicates whether the corresponding value should be included, or not, in the variable domain in the next iteration. The inutility function is defined as follows:

Iij = C0ij + Bcij * pij    (1)

where I is the inutility value, C0 is an initial heuristic cost, Bc is the cost of the best solution that includes the pair, and p is the penalty parameter. Initially, p and Bc are set, for all pairs, to 1 and to the problem upper bound cost, respectively. This means that, initially, the value of the inutility function depends only on C0. Consequently, C0 can be used to express heuristics that suggest the best initial values. At the end of each iteration, the inutility function parameters of the used pairs are updated, depending on whether or not they belong to a new best solution. Specifically, the penalty parameter is incremented by one unit for the pairs that do not belong to the best solution. For the other pairs, the penalty remains the same and the Bc parameter is updated to the new best cost. Naturally, if no new best solution is found, all the pairs see their penalty increased. With this update procedure we accomplish two objectives. First, the probability of certain (possibly best) pairs being chosen is progressively increased as they belong to good solutions (convergence of the search). Second, the search is diversified, because the penalty of the pairs that do not belong to new best solutions is increased. Note that, as the search evolves, the weight of C0 becomes negligible: as the penalty parameter increases, the second term of the fitness function becomes much bigger than the first term. The key to the effectiveness of GCS is the equilibrium between penalizing "bad" variable/value pairs and not penalizing "good" ones.

B. GCS Algorithm

Considering the above, we can define the GCS algorithm as:

1. Develop the problem's model in terms of variables and constraints (as a CSP)
2. Initialize, for all variable/value pairs, the inutility function parameters (C0, Bc and p)
3. Repeat until a termination condition (e.g. a maximum number of iterations or a time limit) is reached
   3.1. Select, for the domain of each problem variable, the z values with the smallest inutility value (I)
   3.2. Call the CP module to perform the search and return a new best solution (or not)
   3.3. Update all the parameters, including the inutility value, for all the used pairs according to the returned solution
        3.3.1. If the solution is a new best solution, update Bc for all the pairs that belong to the solution and increment p for the other pairs
        3.3.2. Else increment p for all the used pairs
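A minimal sketch of this loop in Python, with the equation (1) bookkeeping done inline; all names are ours, the initial heuristic costs are taken as given, and `cp_solve` stands in for the CP module (here a toy exhaustive search of the sub-space under the current cost bound):

```python
import heapq

def gcs(variables, domains, c0, upper_bound, cp_solve, z, max_iters):
    """Guided Constraint Search skeleton (illustrative names).
    cp_solve(sub_domains, bound) must return (solution_dict, cost),
    or (None, None) when nothing better than `bound` exists."""
    pairs = [(v, val) for v in variables for val in domains[v]]
    bc = {pr: float(upper_bound) for pr in pairs}  # best cost seen with the pair
    p = {pr: 1 for pr in pairs}                    # penalty parameter

    def inutility(pr):                             # equation (1)
        return c0[pr] + bc[pr] * p[pr]

    best, best_cost = None, float(upper_bound)
    for _ in range(max_iters):
        # 3.1: the z values with smallest inutility form each sub-domain
        sub = {v: heapq.nsmallest(z, domains[v], key=lambda val: inutility((v, val)))
               for v in variables}
        # 3.2: let the CP module search the sub-space under the current bound
        solution, cost = cp_solve(sub, best_cost)
        used = [(v, val) for v in variables for val in sub[v]]
        if solution is not None and cost < best_cost:     # 3.3.1: new best found
            best, best_cost = solution, cost
            in_best = {(v, solution[v]) for v in variables}
            for pr in used:
                if pr in in_best:
                    bc[pr] = cost    # record the new best cost for these pairs
                else:
                    p[pr] += 1       # penalize the other used pairs
        else:                                             # 3.3.2: no new best
            for pr in used:
                p[pr] += 1
    return best, best_cost

# Toy usage: minimize (x - 3)^2 + (y - 1)^2 over two 5-value domains
variables = ["X", "Y"]
domains = {v: [0, 1, 2, 3, 4] for v in variables}
c0 = {(v, val): 0.0 for v in variables for val in domains[v]}

def cp_solve(sub, bound):
    # toy CP module: exhaustive search of the sub-space under the cost bound
    best_s, best_c = None, None
    for x in sub["X"]:
        for y in sub["Y"]:
            c = (x - 3) ** 2 + (y - 1) ** 2
            if c < bound and (best_c is None or c < best_c):
                best_s, best_c = {"X": x, "Y": y}, c
    return best_s, best_c

best, cost = gcs(variables, domains, c0, upper_bound=100, cp_solve=cp_solve,
                 z=2, max_iters=4)  # best == {'X': 2, 'Y': 1}, cost == 1
```

Note how the toy run can stall at a local optimum when z and the iteration budget are small, which is exactly the trade-off the z and st parameters control in the full algorithm.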

Recalling the example of Figure 1, the algorithm starts by defining the problem variables (X1, X2) and constraints. Then, after initializing all the parameters for all the pairs, it calls the CP module to search for the best solution, say, within sub-space B (Figure 1). A new best solution is found and the parameters are updated accordingly. Based on the new inutilities, a new sub-space is defined, say A, and the CP module is called again. This time, the new best solution is also the global best solution. As we can see, the algorithm is simple and can be used for solving any problem suitable for the CP framework. The algorithm is independent of the CP module; however, for large problem instances, the time the module needs to find the local optimum can be very large. This means that it is necessary to limit the time the CP module uses for searching, in order to avoid wasting time exploring bad search space areas. Regarding the above, the GCS algorithm has two parameters to be set, namely the number of domain values and the search time of the CP module.

IV. MSGU PROBLEM DESCRIPTION

The MSGU problem is a combinatorial search problem and can be formulated as a CSP. The main problem variables are the ones that represent the starting time of each maintenance task, their domains being the possible starting periods. The set of problem constraints can be very large and complex, and it should express all the relevant restrictions of the problem. These restrictions can usually be summarized as follows:

Resource Constraints - These constraints ensure that all the resources needed to perform the maintenance task are available during maintenance. Some of the resources are: maintenance teams, vehicles, replacement parts, etc.

Time Constraints - These constraints impose, for each unit to be maintained, the valid maintenance periods. For

example, in winter, some units should not be maintained due to possible bad weather conditions.

Precedence Constraints - Some units, or groups of units, should be maintained before/after other units or groups. These constraints express that.

Capacity Constraints - These constraints impose that the production capacity of each unit is never exceeded.

Demand Constraints - These constraints ensure that the capacity of the operating units is sufficient to meet the global system demand.

A real MSGU problem has more than one solution. If we consider an objective function (cost function) f, then there are one or more solutions for which the value of the function is maximal (or minimal, for a minimization problem). In most works, the MSGU objective function is a sum of cost terms that should be minimized. These terms are usually the total Production Cost (PC) and total Maintenance Cost (MC) over the schedule horizon. PC represents, mainly, the fuel and operational costs, which are usually different for each unit and period (e.g. the fuel cost may vary during the year). MC aggregates all the maintenance costs, which are usually different for each unit and can also be different for each period (e.g. hydroelectric units are cheaper to maintain during periods of low water flow). Other objective functions can be used; see [16] for a literature review.

V. PROBLEM CONSTRAINT MODEL

Almost all the works referred to in Section I use the same mathematical formulation for the MSGU problem. In this paper we use a constraint formulation, for which we need to define the parameters, variables and constraints.
A. Parameters and variables

Parameters represent the problem data, and can be summarized as follows:

T    Number of periods in the schedule horizon
U    Number of units to be maintained
ct   Maximum number of units which can be maintained simultaneously in period t
mit  Maintenance cost of unit i in period t
cit  Production cost of unit i in period t
ki   Maximum power production capacity of unit i
dt   Power demand in period t
ri   Duration of the maintenance task of unit i
ait  Availability for maintenance of unit i in period t
I    Set of pairs of units which cannot be maintained simultaneously
Pr   Set of pairs of units that have precedence requirements

The variables can be summarized as follows:

Si   Starting maintenance period of unit i
Pit  Power production of unit i in period t (0 if the unit is in maintenance)
Mut  Number of units in maintenance in period t
Mci  Total maintenance cost of unit i

B. Constraints

A valid maintenance schedule must meet the following constraints or domain requirements, which arise naturally from the problem definition:

1 ≤ Si ≤ T,  ∀i ∈ U    (2)

0 ≤ Pit ≤ ki,  ∀i ∈ U, ∀t ∈ T    (3)

∑i Pit ≥ dt,  ∀t ∈ T    (4)

ait = 0 ⇒ Si ∉ [t − ri + 1, t],  ∀i ∈ U, ∀t ∈ T    (5)

0 ≤ Mut ≤ ct,  ∀t ∈ T    (6)

Si + ri − 1 < Sj ∨ Sj + rj − 1 < Si,  ∀(i, j) ∈ I    (7)

Si + ri − 1 < Sj,  ∀(i, j) ∈ Pr    (8)
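As a concrete reading of (2)-(8), the sketch below checks a complete candidate schedule against them in plain Python. The data layout and names are ours; in the actual model these are CP constraints that prune domains during the search, not an after-the-fact check:

```python
def feasible(S, P, data):
    """Check a candidate schedule against constraints (2)-(8).
    S[i]: start period of unit i's maintenance; P[i][t]: production of unit i
    in period t; `data` bundles the parameters of section V-A (our layout)."""
    T, U = data["T"], data["U"]
    in_maint = lambda i, t: S[i] <= t <= S[i] + data["r"][i] - 1
    for i in range(U):
        if not (1 <= S[i] <= T):                         # (2) start in horizon
            return False
        for t in range(1, T + 1):
            if not (0 <= P[i][t] <= data["k"][i]):       # (3) capacity limits
                return False
            if data["a"][i][t] == 0 and in_maint(i, t):  # (5) forbidden periods
                return False
    for t in range(1, T + 1):
        if sum(P[i][t] for i in range(U)) < data["d"][t]:  # (4) demand met
            return False
        mu = sum(in_maint(i, t) for i in range(U))
        if not (0 <= mu <= data["c"][t]):                # (6) maintenance capacity
            return False
    for (i, j) in data["I"]:                             # (7) no overlap
        if not (S[i] + data["r"][i] - 1 < S[j] or S[j] + data["r"][j] - 1 < S[i]):
            return False
    for (i, j) in data["Pr"]:                            # (8) precedence
        return S[i] + data["r"][i] - 1 < S[j]
    return True
```

Units are 0-indexed and periods 1-indexed in this sketch, mirroring the t ∈ [1, T] convention of the formulation.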

Considering the constraint classification from the previous section, we can say that (2) and (5) are time constraints. (2) ensures that the start period of each maintenance task is one of the periods in the scheduling horizon. (5) ensures that a maintenance task is not executed in forbidden periods. Constraint (4) is a demand constraint and guarantees that the demand of each period is always fulfilled; (3) is a capacity constraint, and ensures that the maximum capacity of each unit is never exceeded. Finally, (6), (7) and (8) are resource constraints. (6) guarantees that the number of available resources for the period is not exceeded. (7) and (8) ensure, respectively, that two tasks are not scheduled simultaneously, and that one task is scheduled before the other. The ECLiPSe system provides global constraints that implement the resource constraints, named cumulative, disjunctive and precedence respectively; see [17] for more details. The above constraints result directly from the problem definition; nevertheless, CP allows us to define other types of constraints in order to increase search efficiency and keep the variable domains synchronized. Due to lack of space, we will not present all of our constraints of these types, but we will describe the important ones. Each time a maintenance task start period variable is instantiated, a set of actions must be performed. These actions can be expressed by the synchronization constraints indicated in Table 1.

TABLE 1 - SYNCHRONIZATION CONSTRAINTS

Constraint                                        Action
Si = t ⇒ Piu = 0, ∀u ∈ [t, t + ri − 1]            During the maintenance of a unit its output is 0
Si = t ⇒ Muu = Muu − 1, ∀u ∈ [t, t + ri − 1]      For each maintenance period the resource availability is decreased
Si = t ⇒ Mci = ∑u miu, u ∈ [t, t + ri − 1]        The total maintenance cost of a unit is the sum of the costs of its maintenance periods
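The effect of binding Si can be sketched as follows. Names and data layout are ours; note that we update the in-maintenance count upwards, following the definition of Mut in section V-A, whereas Table 1 phrases the same bookkeeping as a decrease of resource availability:

```python
def on_start_assigned(i, t, r, m, P, Mu, Mc):
    """Apply the Table 1 synchronization actions once S_i is bound to t.
    r[i]: maintenance duration; m[i][u]: maintenance cost of unit i in u;
    P[i][u]: production; Mu[u]: units in maintenance in u; Mc[i]: total cost."""
    periods = range(t, t + r[i])           # the periods [t, t + r_i - 1]
    for u in periods:
        P[i][u] = 0                        # output is 0 while under maintenance
        Mu[u] += 1                         # one more unit down in period u
    Mc[i] = sum(m[i][u] for u in periods)  # total maintenance cost of unit i
```

In the real model these updates are triggered by constraint propagation, so they also fire when the domain of Si collapses to a single value during search.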

Besides the synchronization constraints, there are also the so-called redundant constraints. These constraints are not needed to guarantee solution validity, but they are a good way of increasing search efficiency. One important redundant constraint is the following:

Pit > 0 ⇒ Si ∉ [t − ri + 1, t],  ∀i ∈ U, ∀t ∈ T    (9)

This constraint simply says that, if the production of a unit in a specific period is different from 0, then in that period the unit cannot be in maintenance. This is clearly a redundant constraint, but let us suppose that, by the action of (4), the domain of a certain Pit variable excludes 0. If (9) does not exist, the search procedure only detects the inconsistency when it tries to instantiate Si with one of the values in the interval [t − ri + 1, t]. With (9) the search space is pruned sooner, which can save a lot of unfruitful work. As we referred above, there is usually a large set of valid solutions. Because of this, we need to define an objective function to be minimized or maximized. Our objective function is the following:

∑i ∑t cit * Pit + ∑i Mci    (10)
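Equation (10) can be read as the following straightforward computation (illustrative layout, matching the indexing used in the earlier sketches):

```python
def objective(c, P, Mc, U, T):
    """Equation (10): total production cost plus total maintenance cost.
    c[i][t]: production cost; P[i][t]: production; Mc[i]: maintenance cost."""
    production = sum(c[i][t] * P[i][t] for i in range(U) for t in range(1, T + 1))
    return production + sum(Mc[i] for i in range(U))
```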

The objective of our search procedure is to minimize the above function, that is, to minimize the total maintenance and production costs.

VI. THE CP SEARCH PROCEDURE

The search procedure is determinant for the efficiency of the CP module. In this work we have used two different search procedures. One results from the work in [9]; we name it Branch and Bound procedure with Smallest variable selection heuristic and Smallest value selection heuristic (BBSS). We do not use the best procedure of [9], namely Credit Branch and Bound, because the idea of the credit mechanism is intrinsic to GCS. The other results from the work in [18]; we call it Branch and Bound procedure with Smallest Inutility variable and value selection heuristic (BBSI). The BBSS procedure uses an adapted version of the Branch and Bound method. This method starts by imposing a new constraint in the form of an upper cost bound on the objective function. Then the search starts, and the variables are instantiated according to the variable and value selection heuristics. If a solution is found, the upper cost bound is tightened and the search for a better solution continues. If no solution is found, the procedure backtracks until a solution is found or no more alternatives exist. This search procedure is complete, as it finds the optimal solution or proves that no solution exists. The Smallest variable selection heuristic (SVA) and Smallest value selection heuristic (SVU) are used to select, during the search, a non-instantiated variable and a value of its domain, respectively. Basically, SVA selects the maintenance cost variable (Mci) with the smallest value in its domain, and SVU selects the smallest value in the domain of that variable (smallest maintenance cost). The BBSI procedure uses the same Branch and Bound method as the BBSS, but with Smallest Inutility selection heuristics.
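The Branch and Bound scheme shared by both procedures can be sketched as follows. This is a simplified illustration: a generic consistency check and a plain smallest-value ordering stand in for the SVA/SVU and Smallest Inutility heuristics, and the bound is tightened whenever a full solution is found:

```python
def branch_and_bound(variables, domains, consistent, cost, bound=float("inf")):
    """Depth-first branch and bound: instantiate variables smallest-value-first
    and impose the best cost found so far as an upper bound on the objective.
    `consistent(partial)` checks the constraints; `cost(solution)` is the
    objective value of a complete assignment."""
    best = None

    def search(assignment):
        nonlocal best, bound
        if len(assignment) == len(variables):
            c = cost(assignment)
            if c < bound:                    # tighten the upper cost bound
                best, bound = dict(assignment), c
            return
        var = next(v for v in variables if v not in assignment)
        for value in sorted(domains[var]):   # smallest value first
            assignment[var] = value
            if consistent(assignment):
                search(assignment)
            del assignment[var]              # backtrack and try the next value

    search({})
    return best, bound                       # optimal, or (None, inf) if infeasible
```

Because the search only abandons a branch when it is inconsistent or exhausted, the procedure is complete, mirroring the BBSS property stated above.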
The Smallest Inutility heuristic selects first the variable that contains the pair with the smallest inutility, and then the value of that pair. The idea is that pairs with small inutility have a higher probability of belonging to good solutions.

VII. EXPERIMENTAL RESULTS

In order to evaluate the different search procedures and respective variants, we have used data from the Portuguese Electric Power Generation Company. This data includes the general problem parameters, such as unit characteristics, demand profiles, operational costs, maintenance costs, specific resource and time constraints, etc. However, as the data was limited in terms of problem size diversity (number of units and periods), we have developed an instance generator program. This program uses the real data to generate new problem instances of the different sizes needed to compare our work with that of other authors (see [9] for more details). The method was implemented using the ECLiPSe system, running on an AMD Celeron at 650 MHz, with 128 MB of memory, under Windows 98. Note that this method has no stochastic component; consequently, we only need to run the program once for each parameter set.

A. Parameters Influence

Before testing the algorithm's performance, we need to define the number of values of the domain variables (the z parameter) and the search time of the CP module (the st parameter). Theoretically, z can vary from 1 to n. In practice, z values near 1 or near n usually imply bad performance. The reason is obvious: if z is near n, the situation is equivalent to that of a pure CLP approach; in other words, the size of the sub-search space is almost equal to the size of the global one, so the effort the constraint module must expend to find the local optimum is almost the same it spends to find the global one. If z is near 1, the significance of the constraint module is almost null, so the overhead associated with it does not bring any advantage. The right z value is expected to be somewhere in the middle range. As referred above, the st parameter limits the CP module search time. At the beginning of each iteration (sub-search space exploration), a timer is initialized with the st parameter. If no new best solution is found before the timer reaches 0, the search stops and a new sub-search space (new iteration) is explored; whenever a new best solution is found, the timer is reinitialized. The idea is to spend more time exploring promising search areas.
Note that, if st is too small, the CP module may not have time to find the local optimum; if st is too high, the CP module spends a lot of time in each iteration exploring bad search space areas.

TABLE 2 - MEAN CPU SEARCH TIME NEEDED TO OBTAIN THE OPTIMAL SOLUTION FOR SET S1 PROBLEMS (U=6, T=12)

[Table 2 reports, for each procedure (BBSI and BBSS), the mean CPU time in seconds for st = 10, 60, 120 and 220 s and z = 3 to 9; "--" marks configurations for which the optimal solution was not found.]

In order to test the influence of the z and st parameters, we have solved two sets (S1, S2) of 5 problem instances each, with sizes (U=6, T=12) and (U=10, T=24) respectively. For each of the problems we have tried, for the z parameter, all the integer values in the interval [1, T]. We have also tried values for the st parameter varying from 1 s to 300 s in steps of 10 seconds. Table 2 and Table 3 show the average CPU time (in seconds) that the BBSS and BBSI search procedures needed to find the optimal solution for each problem set. Where the symbol (--) appears, the procedure could not find the optimal solution before the stop criterion was met (the stop criterion considered 50 iterations). Due to lack of space, we do not show the results for all values of the st parameter. As we expected, for small and high z values (z > 9 for S1 and z > 17 for S2) none of the search procedures could find the optimal solution. This problem was more noticeable for the BBSS procedure. It is also true that the smaller the st value, the more difficult it is for both procedures to find the optimal solution, and the higher the st value, the more time is needed to find it. The right st value grows with the size of the problem. The BBSS procedure performs globally better (less computation time) than BBSI. However, BBSI is less sensitive to parameter variation. This can be explained by the higher capacity of BBSS to find the local optimum, but its smaller capacity to direct the search towards the global one. Based on these tests, we can say that a good z value lies in the interval [T/4, T/2], and st > 10 + 2T seconds.

TABLE 3 - MEAN CPU SEARCH TIME NEEDED TO OBTAIN THE OPTIMAL SOLUTION FOR SET S2 PROBLEMS (U=10, T=24)

[Table 3 reports, for each procedure (BBSS and BBSI), the mean CPU time in seconds for st = 10, 60, 120 and 220 s and z = 4 to 17; "--" marks configurations for which the optimal solution was not found.]

B. Comparison with other approaches

In order to verify the efficiency of our search procedures, we have used the results obtained in [7]. As far as we know, that work is a reference in the area of maintenance scheduling, in their case for thermal generators. The results presented in the paper report the application of several evolutionary techniques (e.g. Simulated Annealing, Tabu Search, Memetic Algorithms, etc.) to 3 problems of different sizes. They also indicate the number of what they call combination and order constraints. These are, respectively, our constraints (7) and (8) defined in Section V. Table 4 summarizes the problem parameters. The authors indicate neither the processor nor the programming language they used.

TABLE 4 - TEST PROBLEM PARAMETERS

Problem   U    T    Combination constraints (7)   Order constraints (8)
1         15   25   2                             3
2         30   40   3                             3
3         60   52   5                             5

We have applied our 2 search procedures to 3 similar problems with the same size, number and type of constraints. The problems were created with our instance generator.


TABLE 5 - RESULTS

Method         Problem 1             Problem 2             Problem 3
SA             0:03 (218111)         0:18 (555280)         0:46 (1202264)
TS             0:10 (214066)         0:59 (555191)         3:08 (1201970)
MA(TS)         1:12 (209209)         6:34 (554344)         25:29 (1201279)
CBB+S+SV+Mc    1:40                  8:22                  48:26
BBSS           1:26 (z=8; st=70)     4:46 (z=13; st=90)    20:23 (z=15; st=110)
BBSI           1:34 (z=8; st=70)     5:12 (z=13; st=90)    18:12 (z=15; st=110)

Table 5 shows the results of our method, of the pure CP approach of [9] (named CBB+S+SV+Mc), and of 3 of the methods used in [7], namely Simulated Annealing (SA), Tabu Search (TS) and the Memetic Algorithm with a Tabu Search operator (MA(TS)). The data used is the same as in [9] but different from that of [7]. We do not indicate the costs of our method or of [9] because, given that the data is different, they are of no interest for the comparison. Instead, we indicate in parentheses the values of the z and st parameters. Times are shown in minutes:seconds format, and for the 3 stochastic methods the average cost over 40 runs is indicated in parentheses. Considering that problems with the same size and structure are equally demanding in terms of computation time, we can say that the SA and TS algorithms are the fastest ones, but with poor results. MA(TS) is slower than the others, but with results that the authors believe are near the optimum. Our method performs better than CBB+S+SV+Mc and, for problems 2 and 3, better than MA(TS). We have verified that, for problems 1 and 2, the solutions were the optimal ones. Even for problem 3, we have not found any better solution when running our algorithms for more than 10 hours. Although we do not report the results in the table, we have tried other values for the z and st parameters and the results were similar. For example, with z values within ±2 units of the reported ones, and st varying ±20%, we have obtained the same costs with a maximum variation of 30% in search time. This confirms the robustness of our method with respect to the parameters. Considering other applications, we can say that CP methods are usually slower than other methods, for example stochastic ones. Nevertheless, speed is not the strongest point of CP; its modeling capacities and flexibility are. With CP, planning engineers can spend more time studying the problem characteristics and consider side constraints that are often left aside, knowing that it will be easy to build the model, and that the solving details are handled by the constraint system. GCS maintains all these CP advantages while improving search efficiency.

VIII. CONCLUSIONS AND FUTURE WORK

This paper presented a new hybrid constraint algorithm to solve the Maintenance Scheduling of Electric Power Units problem. The algorithm combines ideas from Guided Local Search with constraint-based search, resulting in what we have called Guided Constraint Search. Within GCS, two search procedures are described, namely the Branch and Bound procedure with Smallest variable selection heuristic and Smallest value selection heuristic, and the Branch and Bound procedure with Smallest Inutility variable and value selection heuristic. Based

on our test results, we concluded that the BBSS procedure performs slightly better than BBSI but is more sensitive to the two algorithm parameters. In spite of this sensitivity, the algorithm is robust enough to allow users to easily select both parameters based on a simple relation. Finally, a comparison with other approaches suggests that our algorithm performs as well as or better than they do. Summarizing, the paper demonstrates that it is possible to take advantage of the modeling capacities of CP without losing efficiency when compared to less flexible approaches, such as stochastic ones.

IX. REFERENCES

[1] M. Weedy, Electric Power Systems, Chichester: John Wiley & Sons, 1992.
[2] Dopazo J. and Merrill H., "Optimal generator maintenance scheduling using integer programming," IEEE Transactions on Power Apparatus and Systems, vol. 94, pp. 1537-1545, 1975.
[3] Yamayee Z., Sidenblad K., and Yoshimura M., "A computationally efficient optimal maintenance scheduling method," IEEE Transactions on Power Apparatus and Systems, vol. 102, pp. 330-338, 1983.
[4] Satoh Y. and Nara K., "Maintenance scheduling by using the simulated annealing method," IEEE Transactions on Power Systems, vol. 6, p. 850, 1991.
[5] Kim H. and Nara K., "An algorithm for thermal unit maintenance scheduling through combined use of GA, SA and TS," IEEE Transactions on Power Systems, vol. 12, pp. 329-335, 1997.
[6] Dahal K. and McDonald J., "Generator maintenance scheduling of electric power systems using genetic algorithms with integer representation," Proc. of the 2nd International Conf. on Genetic Algorithms in Engineering Systems: Innovations and Applications (GALESIA'97), 1997.
[7] Burke E., "Hybrid Evolutionary Techniques for the Maintenance Scheduling Problem," IEEE Transactions on Power Systems, vol. 15, pp. 122-128, 2000.
[8] Dahal P., Galloway S., Burt G., and McDonald R., "Generation scheduling using genetic algorithm based hybrid techniques," Proc. of the Large Engineering Systems Conference (LESCOPE 2001), Canada, 2001.
[9] Gomes N. and Vale Z., "Constraint Based Maintenance Scheduling of Electric Power Units," submitted to PSPO'03, USA, 2003.
[10] Leconte M., Constraint Solving in ILOG and its Application, Escola Portuguesa em Inteligência Artificial, Lisboa, 1996.
[11] Wallace M., Novello S., and Schimpf J., "ECLiPSe: A Platform for Constraint Logic Programming," Technical Report, IC-Parc, London, 1997.
[12] Beldiceanu N., "Introducing global constraints in CHIP," Mathematical and Computer Modelling, vol. 20, pp. 97-123, 1994.
[13] Mohr R. and Henderson T., "Arc and Path Consistency Revisited," Artificial Intelligence, vol. 28, pp. 225-233, 1986.
[14] Rossi F. et al., "Constraint Logic Programming," ERCIM/Compulog Net Workshop on Constraints, 2000.
[15] Voudouris C. and Tsang E. P. K., "Guided Local Search," European Journal of Operational Research, vol. 113, no. 2, pp. 469-499, 1999.
[16] El-Sharkh M., Yasser R., and El-Keib A., "Optimal Maintenance Scheduling for Power Generation Systems - A Literature Review," Proc. of the Maintenance and Reliability Conference 1998, pp. 20.01-20.10, 1998.
[17] Harvey W. et al., "The ECLiPSe Library Manual Release 5.4," IC-Parc, 2002.
[18] Gomes N. and Vale Z., "Guided Constraint Search," Technical Report ISEP, IS2343, 2001.

Hybrid Constraint Algorithm for the Maintenance Scheduling of Electric Power Units Nuno Gomes, Zita Vale, ngomes,[email protected] Polytechnic Institute of Porto / Institute of Engineering, Portugal

Abstract-- Maintenance scheduling of generating units is an important problem of power systems operation. Among different approaches to this problem it seams that Constraint Programming (CP) based ones are the most flexible, despite some lack of efficiency. In this paper a hybrid CP method, named Guided Constraint Search is presented. The method combines ideas from Guided Local Search with CP in order to overcome the efficiency problems of pure CP approaches. Some tests with real data are also presented and comparisons with other approaches are made. Index Terms— Power systems; Maintenance scheduling; Hybrid Constraint Programming; Optimization

I. INTRODUCTION In the recent history, the electricity market has suffered profound changes. Liberalization processes, globalization, appearing of small private producers, the increasing institutional investment in renewal energy, among others, have increased the interest in finding new ways to cut costs. More than economic factors, modern companies need to provide a high quality service in order to maintain the competitiveness. Maintenance scheduling of generating units (MSGU) is one of the important problems of power system operation. The MSGU outage has a great effect on system reliability, unit availability, and production costs. Past experiences have shown that effective scheduling can save a considerable amount of operational costs helping the electricity companies to be more competitive in terms of energy price while increasing system reliability. Another important goal of MSGU is to minimize the risk of the capacity shortage throughout the year. The main objective of the MSGU problem consists of determining, for each predicted maintenance task, a specific start time in the scheduling horizon (e.g. an year), while satisfying the system constraints and maintaining system reliability. This objective should be accomplished while some criteria are optimized (e.g. the sum of operation and maintenance costs is minimized) [1]. Traditional optimization techniques, such as Integer Programming (IP), Dynamic Programming (DP) and Branch-andBound (BB), have been proposed to solve this problem since the early time of MSGU research. For example, in 1975, Dopazo and Merrill [2] developed 0-1 IP formulation, which guarantees to find the optimal solution where one exists. In T his work was supported in part by the Foundation for the Development and Technology (FCT), under Project name ISEPMain

1983, Yamayee and Sidenblad [3] continued with the use of DP with Successive Approximations, together with a cumulative method of evaluating the stage cost function. The latter reduced the running time by an order of magnitude. In spite of these successive improvements, as the size of the problem grows, the size of the search space, and hence the running time of these algorithms, grows exponentially. To overcome this difficulty, stochastic methods were introduced. For example, in 1991, Satoh and Nara [4] applied the Simulated Annealing (SA) method to the MSGU problem, showing that it was able to deal with problem sizes impracticable for IP. Later, the same authors refined their technique to include a Genetic Algorithm (GA) and a Tabu Search element [5]. In the same year, Dahal et al. [6] presented a GA using a binary representation. The authors in [6] further showed that an integer representation reduced the search space of the GA and also the execution time of the algorithm. Finally, hybrid evolutionary techniques were investigated in [7] and [8]. Despite the performance improvement of some of the above methods, they have some drawbacks. Specifically, they have difficulties dealing with the complexity of some constraints and providing the flexibility needed to model actual real-world MSGU problems. In [9], the authors have shown that Constraint Programming (CP) is a good way to overcome the flexibility problems of the above methods, while keeping almost the same efficiency. In this paper, we present a hybrid constraint method with the same flexibility as that of [9], but capable of competing in terms of efficiency with the best methods presented so far. II. CONSTRAINT PROGRAMMING AND PROGRAMMING WITH CONSTRAINTS Combinatorial search problems, like the MSGU problem, must be formulated as Constraint Satisfaction Problems (CSP) within the CP framework.
A CSP is a tuple (X, D, C), where X is the set of the n problem variables, D is the set of their respective finite domains [D(X1), D(X2), ..., D(Xn)], and C is the set of the problem constraints. A solution to the problem is a set of variable/value pairs [(X1,v1), (X2,v2), ..., (Xn,vn)], where each vi is a value from the domain of Xi, which satisfies all the constraints. Considering this formulation, we can say that a framework for solving CSPs should provide tools to define the variables and the constraints, and to keep all the values consistent with the constraints during the search. One


framework that provides all of these tools is Constraint Programming. The structure of a CP program is the following:
1. Define the variables
2. Define the constraints
3. Until all the variables are instantiated or all values have been tried do
3.1. Select a variable
3.2. Select a value
3.3. Instantiate the variable. If the instantiation fails, backtrack to point 3.2 and select another value
4. Return the solution if it exists
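The loop above can be sketched in a few lines of Python; the three-variable problem and its single "all values distinct" constraint are illustrative, not from the paper:

```python
# Minimal sketch of the CP program structure: select a variable (3.1),
# try its values (3.2), instantiate and backtrack on failure (3.3).

def consistent(assignment, constraints):
    """Check the current partial assignment against every constraint."""
    return all(c(assignment) for c in constraints)

def search(variables, domains, constraints, assignment=None):
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):        # all variables instantiated
        return assignment
    var = next(v for v in variables if v not in assignment)   # step 3.1
    for value in domains[var]:                                # step 3.2
        assignment[var] = value                               # step 3.3
        if consistent(assignment, constraints) and \
           (result := search(variables, domains, constraints, assignment)):
            return result
        del assignment[var]          # instantiation failed: backtrack
    return None                      # no value worked for this variable

# Example: three tasks that must start at pairwise different periods.
variables = ["S1", "S2", "S3"]
domains = {v: [1, 2, 3] for v in variables}
constraints = [lambda a: len(set(a.values())) == len(a)]
solution = search(variables, domains, constraints)
```

A real CP system additionally propagates constraints to prune the domains before each choice, which this sketch omits.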

First, it is necessary to define the problem variables and their domains, and next the constraints over the variables. This constitutes the model of the problem. Then, it is necessary to instantiate the variables. To do that, a variable must be chosen among the set of non-instantiated variables and then instantiated with a value of its domain. If the instantiation fails (e.g. a constraint is violated), the search backtracks and other values are tried. This constitutes the search procedure. During the search procedure the consistency of all constraints must always be kept, which requires removing from the variables' domains all the values that violate a constraint. This action is usually called Constraint Solving. At present there are several systems that provide the tools to develop problem models and maintain constraint consistency efficiently. These systems are classified as Constraint Programming systems if they are based on procedural languages, or Constraint Logic Programming systems if they are based on logic languages. Some examples are ILOG Solver [10] for the first case, and ECLiPSe [11] or CHIP [12] for the second. ECLiPSe, like other CP systems, provides a large set of predefined constraints, as well as the tools to develop new ones. If the constraints are simple, CP techniques like Arc and Path Consistency [13] are used to solve them; if they are complex, several techniques from Graph Theory, Operations Research or applied Discrete Mathematics, among others, are used. These complex constraints are often called Global Constraints [12]. The large number of different areas where CP has been applied with success (see [14] for a survey) demonstrates the potential of this technology. III. GUIDED CONSTRAINT SEARCH The initial search space of a CSP is the Cartesian product of all the variables' domains [D(X1) x D(X2) x ... x D(Xn)].
The right constraint satisfaction algorithms can prune this search space by removing some of the inconsistent values from it. Even then, due to the computation associated with the constraint satisfaction algorithms, when the search space is large it can be impractical to search for the optimal solution. To overcome this drawback, we propose a new procedure. The idea is to use a fitness function to choose, in each iteration, only the z most promising values of each variable's domain, and then to use CP to solve the corresponding sub-problem and find a local optimum. If the fitness function is adequate, it is possible that the local optimum is also the global one, and a lot of search time is saved. Naturally, if the fitness function is not adequate, the results can be worse than if no division had been made. The idea is illustrated in Figure 1.

Figure 1 - Search space division for two variables. Figure 1 represents the search space of a problem with two variables (X1, X2), each one with k domain values. Black points are feasible solutions. The problem search space size is k^2. An adequate GCS fitness function would choose, for the domain of each variable, the values corresponding to block A in Figure 1. Then, the first local optimum returned by the CP module would also be the global one. This would save a lot of work exploring the rest of the search space. Naturally, an inadequate fitness function could choose the values corresponding to block B, then C, then D, and finally A. This would be even worse than if no division had been made. As we can see, the fitness function is essential for the efficiency of the GCS algorithm. A. Subspace Definition The selection of the values that define the sub-space in each iteration is made using ideas from Guided Local Search [15]. Similarly to it, we define a utility (inutility) function, penalties and costs. However, we differ from it in the way we use these values. For each variable/value pair (Xi, vj), the inutility function returns a quantity that indicates whether the corresponding value should be included, or not, in the variable's domain in the next iteration. The inutility function is defined as follows:

Iij = C0ij + Bcij * pij    (1)
Where I is the inutility value, C0 is an initial heuristic cost, Bc is the cost of the best solution that includes the pair, and p is the penalty parameter. Initially, p and Bc are set, for all pairs, to 1 and to the problem's upper-bound cost, respectively. This means that, initially, the value of the inutility function depends only on C0. Consequently, C0 can be used to express heuristics that suggest the best initial values. At the end of each iteration, the inutility function parameters of the used pairs are updated, depending on whether they belong to a new best solution or not. Specifically, the penalty parameter is incremented by one unit for the pairs that do not belong to the best solution. For the other pairs, the penalty remains the same and the Bc parameter is updated to the new best cost. Naturally, if no new best solution is found, all the pairs see their penalty increased. With this update procedure we accomplish two objectives. First, the probability of certain (possibly best) pairs being chosen progressively increases while they belong to good solutions (convergence of the search). Second, the search is diversified, because the penalty of the pairs that do not belong to new best solutions is increased. Note that, as the search evolves, the weight of C0 becomes negligible. In fact, as the penalty parameter increases, the second term of the inutility function becomes much bigger than the first term. The key to the effectiveness of GCS is the equilibrium between penalizing "bad" variable/value pairs and not penalizing "good" ones. B. GCS Algorithm Considering the above, we can define the GCS algorithm as:
1. Develop the problem's model in terms of variables and constraints (as a CSP)
2. Initialize, for all variable/value pairs, the inutility function parameters (C0, Bc and p)
3. Repeat until a termination condition (e.g. a maximum number of iterations or a time limit) is reached:
3.1. Select, for the domain of each problem variable, the z values with the smallest inutility value (I)
3.2. Call the CP module to perform the search and return a new best solution (or not)
3.3. Update all the parameters, including the inutility value, for all the used pairs according to the returned solution
3.3.1. If the solution is a new best solution, update Bc for all the pairs that belong to the solution and increment p for the other pairs
3.3.2. Else, increment p for all the used pairs
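The bookkeeping around the CP module (steps 3.1 and 3.3) can be sketched as follows; the variable names, domains and the upper-bound cost of 1000 are illustrative assumptions, and the CP search itself (step 3.2) is abstracted away:

```python
# Sketch of the GCS inutility bookkeeping of equation (1), with
# invented data; not the authors' implementation.

UPPER_BOUND = 1000                     # assumed problem upper-bound cost

domains = {"X1": [1, 2, 3], "X2": [1, 2, 3]}

# Parameters per variable/value pair: C0 (initial heuristic cost, here
# simply the value itself), Bc (best solution cost including the pair),
# and p (penalty).
params = {(v, d): {"C0": d, "Bc": UPPER_BOUND, "p": 1}
          for v, dom in domains.items() for d in dom}

def inutility(pair):
    r = params[pair]
    return r["C0"] + r["Bc"] * r["p"]                  # equation (1)

def restrict(z):
    """Step 3.1: keep the z values with smallest inutility per variable."""
    return {v: sorted(dom, key=lambda d: inutility((v, d)))[:z]
            for v, dom in domains.items()}

def update(used, best_solution, best_cost):
    """Step 3.3: reward pairs of a new best solution, penalize the rest."""
    for pair in used:
        if best_solution is not None and pair in best_solution:
            params[pair]["Bc"] = best_cost
        else:
            params[pair]["p"] += 1

sub = restrict(z=2)                    # {"X1": [1, 2], "X2": [1, 2]}
# Suppose the CP module returned a new best solution {(X1,1), (X2,2)}
# of cost 42 over this subspace:
used = {(v, d) for v, dom in sub.items() for d in dom}
update(used, {("X1", 1), ("X2", 2)}, 42)
```

After the update, the rewarded pairs dominate the next domain restriction, while the penalized ones are pushed back, which is exactly the convergence/diversification balance described above.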

Recalling the example of Figure 1, the algorithm starts by defining the problem variables (X1, X2) and constraints. Then, after initializing all the parameters for all the pairs, it calls the CP module to search for the best solution, say, within subspace B (Figure 1). A new best solution is found and the parameters are updated accordingly. Based on the new inutilities, a new sub-space is defined, say A, and the CP module is called again. This time, the new best solution is also the global best solution. As we can see, the algorithm is simple and can be used to solve any problem suitable for the CP framework. The algorithm is independent of the CP module; however, for large problem instances, the time the module needs to find the local optimum can be very large. This means it is necessary to limit the time the CP module spends searching, in order to avoid wasting time exploring bad search space areas. Given the above, the GCS algorithm has two parameters to be set, namely the number of domain values and the search time of the CP module. IV. MSGU PROBLEM DESCRIPTION The MSGU problem is a combinatorial search problem and can be formulated as a CSP. The main problem variables are the ones that represent the starting time of each maintenance task, their domains being constituted by the possible starting periods. The set of problem constraints can be very large and complex, and it should express all the relevant restrictions of the problem. These restrictions can usually be summarized as follows: Resource Constraints – These constraints ensure that all the resources needed to perform a maintenance task are available during maintenance. Some of the resources are: maintenance teams, vehicles, replacement parts, etc. Time Constraints – These constraints impose, for each unit to be maintained, the valid maintenance periods. For example, in winter, some units should not be maintained due to possible bad weather conditions. Precedence Constraints – Some units, or groups of units, should be maintained before/after other units or groups; these constraints express that. Capacity Constraints – These constraints impose that the production capacity of each unit is never exceeded. Demand Constraints – These constraints ensure that the capacity of the operating units is sufficient to meet the global system demand. A real MSGU problem has more than one solution. If we consider an objective function (cost function) f, then there are one or more solutions for which the value of the function is maximal (or minimal, for a minimization problem). In most works, the MSGU objective function is a sum of cost terms that should be minimized. These terms are usually the total Production Cost (PC) and total Maintenance Cost (MC) for the schedule horizon. PC represents mainly the fuel and operational costs, which are usually different for each unit and period (e.g. the fuel cost may vary during the year). MC aggregates all the maintenance costs, which are usually different for each unit and can also be different for each period (e.g. hydroelectric units are cheaper to maintain during periods of low water flow). Other objective functions can be used; see [16] for a literature review. V. PROBLEM CONSTRAINT MODEL Almost all the works referred to in Section I use the same mathematical formulation for the MSGU problem. In this paper we will use a constraint formulation, for which we need to define the parameters, variables and constraints. A.
Parameters and variables
Parameters represent the problem data, and can be summarized as follows:
T - Number of periods in the schedule horizon
U - Number of units to be maintained
ct - Maximum number of units which can be maintained simultaneously in period t
mit - Maintenance cost of unit i in period t
cit - Production cost of unit i in period t
ki - Maximum power production capacity of unit i
dt - Power demand in period t
ri - Duration of the maintenance task of unit i
ait - Availability for maintenance of unit i in period t
I - Set of pairs of units which cannot be maintained simultaneously
Pr - Set of pairs of units that have precedence requirements
The variables can be summarized as follows:
Si - Starting maintenance period of unit i
Pit - Power production of unit i in period t (0 if the unit is in maintenance)
Mut - Number of units in maintenance in period t
Mci - Total maintenance cost of unit i
B. Constraints
A valid maintenance schedule must meet the following constraints, or domain requirements, which arise naturally from the problem definition:

1 ≤ Si ≤ T,  ∀i ∈ U    (2)
0 ≤ Pit ≤ ki,  ∀i ∈ U, ∀t ∈ T    (3)
∑i Pit ≥ dt,  ∀t ∈ T    (4)
ait = 0 ⇒ Si ∉ [t − ri + 1, t],  ∀i ∈ U, ∀t ∈ T    (5)
0 ≤ Mut ≤ ct,  ∀t ∈ T    (6)
Si + ri − 1 < Sj ∨ Sj + rj − 1 < Si,  ∀(i, j) ∈ I    (7)
Si + ri − 1 < Sj,  ∀(i, j) ∈ Pr    (8)

Considering the constraint classification from the previous section, we can say that (2) and (5) are time constraints: (2) ensures that the start period of each maintenance task is one of the periods in the scheduling horizon, and (5) ensures that a maintenance task is not executed in forbidden periods. Constraint (4) is a demand constraint and guarantees that the demand of each period is always fulfilled; (3) is a capacity constraint and ensures that the maximum capacity of each unit is never exceeded. Finally, (6), (7) and (8) are resource constraints: (6) guarantees that the number of available resources for the period is not exceeded, while (7) and (8) ensure, respectively, that two tasks are not scheduled simultaneously and that one task is scheduled before another. The ECLiPSe system provides global constraints that implement the resource constraints, named cumulative, disjunctive and precedence respectively; see [17] for more details. The above constraints result directly from the problem definition. Nevertheless, CP allows us to define other types of constraints in order to increase the search efficiency and keep the variables' domains synchronized. Due to lack of space, we will not present all of our constraints of these types, but we will describe the important ones. Each time a maintenance-task start-period variable is instantiated, a set of actions must be performed. These actions can be expressed by the synchronization constraints indicated in Table 1. TABLE 1 – SYNCHRONIZATION CONSTRAINTS

Constraint: Si = t ⇒ Piu = 0, ∀u ∈ [t, t + ri − 1]
Action: During the maintenance of a unit its output is 0

Constraint: Si = t ⇒ Muu = Muu − 1, ∀u ∈ [t, t + ri − 1]
Action: For each maintenance period the resource availability is decreased

Constraint: Si = t ⇒ Mci = ∑u∈[t, t+ri−1] miu
Action: The total maintenance cost of a unit is equal to the sum of the costs of its maintenance periods
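As an illustration, constraints (2), (5), (7) and (8) can be checked on a candidate schedule roughly as below; all data values are invented for the example, and the demand and capacity constraints (3), (4) and (6) are omitted for brevity:

```python
# Illustrative check of a candidate schedule against constraints
# (2), (5), (7) and (8) of Section V. Data values are invented.

T = 12                       # periods in the horizon
r = {1: 2, 2: 3, 3: 2}       # maintenance duration r_i per unit
a = {(u, t): 1 for u in r for t in range(1, T + 1)}  # availability a_it
a[(1, 1)] = 0                # unit 1 may not be in maintenance in period 1
I = [(2, 3)]                 # pairs that cannot be maintained simultaneously
Pr = [(1, 2)]                # unit 1 must finish before unit 2 starts

def feasible(S):
    """S maps each unit to its maintenance start period S_i.
    Tasks are assumed to fit inside the horizon."""
    for u, s in S.items():
        if not 1 <= s <= T:                                    # (2)
            return False
        for t in range(s, min(s + r[u], T + 1)):               # (5)
            if a[(u, t)] == 0:
                return False
    for i, j in I:                                             # (7)
        if not (S[i] + r[i] - 1 < S[j] or S[j] + r[j] - 1 < S[i]):
            return False
    for i, j in Pr:                                            # (8)
        if not S[i] + r[i] - 1 < S[j]:
            return False
    return True

ok = feasible({1: 2, 2: 4, 3: 8})   # satisfies (2), (5), (7) and (8)
```

In the actual CP model these checks are not run after the fact: the constraint system propagates them during the search, pruning inconsistent values from the domains.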

Besides the synchronization constraints, there are also so-called redundant constraints. These constraints are not needed to guarantee solution validity, but they are a good way of increasing search efficiency. One important example of a redundant constraint is the following:

Pit > 0 ⇒ Si ∉ [t − ri + 1, t],  ∀i ∈ U, ∀t ∈ T    (9)

This constraint simply says that, if the production of a unit in a specific period is different from 0, then in this period the unit cannot be maintained. This is clearly a redundant constraint, but suppose that, by the action of (4), the domain of a certain Pit variable becomes different from 0. If (9) does not exist, the search procedure only detects the inconsistency when it tries to instantiate Si with one of the values in the interval [t − ri + 1, t]. With (9), the search space is pruned sooner, which can save a lot of unfruitful work. As we referred above, there is usually a large set of valid solutions. Because of this, we need to define an objective function to be minimized or maximized. Our objective function is the following:

∑i ∑t cit * Pit + ∑i Mci    (10)
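Evaluating objective (10) for a fixed schedule is a straightforward double sum; the numbers below are illustrative only:

```python
# Objective (10): total production cost plus total maintenance cost.
# Two units, three periods, invented costs.

U = [1, 2]
T = range(1, 4)                              # periods 1..3
c = {(i, t): 10 for i in U for t in T}       # production cost c_it
P = {(i, t): 5 for i in U for t in T}        # production P_it
P[(2, 2)] = 0                                # unit 2 maintained in period 2
Mc = {1: 0, 2: 30}                           # total maintenance cost Mc_i

cost = sum(c[i, t] * P[i, t] for i in U for t in T) + sum(Mc.values())
```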

The objective of our search procedure is to minimize the above function, i.e. to minimize the total maintenance and production costs. VI. THE CP SEARCH PROCEDURE The search procedure is determinant for the efficiency of the CP module. In this work we have used two different search procedures. One results from the work in [9]; we name it Branch and Bound procedure with Smallest variable selection heuristic and Smallest value selection heuristic (BBSS). We do not use the best procedure of [9], namely Credit Branch and Bound, because the idea of the credit mechanism is intrinsic to GCS. The other results from the work in [18]; we call it Branch and Bound procedure with Smallest Inutility variable and value selection heuristic (BBSI). The BBSS procedure uses an adapted version of the Branch and Bound method. This method starts by imposing a new constraint in the form of an upper cost bound on the objective function. Then the search starts, and the variables are instantiated according to the variable and value selection heuristics. If a solution is found, the upper cost bound is tightened and the search for a better solution continues. If no solution is found, the procedure backtracks until a solution is found or no more alternatives exist. This search procedure is complete, as it finds the optimal solution or proves that no solution exists. The Smallest variable selection heuristic (SVA) and Smallest value selection heuristic (SVU) are used to select, during the search, a non-instantiated variable and a value of its domain, respectively. Basically, SVA selects the maintenance cost variable (Mci) with the smallest value in its domain, and SVU selects the smallest value in the domain of the variable (smallest maintenance cost). The BBSI procedure uses the same Branch and Bound method as BBSS, but with the Smallest Inutility selection heuristics.
This heuristic first selects the variable that contains the pair with the smallest inutility, and then the value of that pair. The idea is that pairs with small inutility have a higher probability of belonging to good solutions. VII. EXPERIMENTAL RESULTS In order to evaluate the different search procedures and respective variants, we have used data from the Portuguese Electric Power Generation Company. This data includes the general problem parameters such as unit characteristics, demand profiles,


operational costs, maintenance costs, specific resource and time constraints, etc. However, as the data was limited in terms of problem size diversity (number of units and periods), we have developed an instance generator program. This program uses the real data to generate new problem instances with the different sizes needed to compare our work with that of other authors (see [9] for more details). The method was implemented using the ECLiPSe system, running on an AMD Celeron at 650 MHz, with 128 MB of memory, under Windows 98. Note that this method does not have any stochastic component; consequently, we only need to run the program once for each parameter set. A. Parameters Influence Prior to testing the algorithm's performance, we need to define the number of values of the domain variables (the z parameter) and the search time of the CP module (the st parameter). Theoretically, z can vary from 1 to n. In practice, z values near 1 or near n usually imply bad algorithm performance, and the reason is obvious. If z is near n, the situation is equivalent to that of a pure CLP approach: the size of the sub-search space is almost equal to the size of the global one, so the effort the constraint module must expend to find the local optimum is almost the same it spends to find the global one. If z is near 1, the significance of the constraint module is almost null, so the overhead associated with it brings no advantage. The right z value is expected to be somewhere in the middle range. As referred above, the st parameter limits the CP module search time. At the beginning of each iteration (sub-search-space exploration), a timer is initialized with the st parameter. If no new best solution is found before the timer reaches 0, the search stops and a new sub-search space (new iteration) is explored; whenever a new best solution is found, the timer is restarted. The idea is to spend more time exploring promising search areas.
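The timer rule can be sketched as follows; the list of (elapsed time, cost) improvement events stands in for a real CP module:

```python
# Sketch of the st time-limit rule: the timer is reset at every new
# best solution, so promising subspaces get more search time.

def search_with_timer(events, st):
    """events: (elapsed_seconds, cost) improvements reported by the CP
    module, in time order. Returns the best cost reached before the
    timer, reset at every improvement, would have expired."""
    best = None
    deadline = st
    for elapsed, cost in events:
        if elapsed > deadline:        # timer ran out: stop this iteration
            break
        if best is None or cost < best:
            best = cost
            deadline = elapsed + st   # new best solution: restart the timer
    return best

# Improvements at 5 s, 12 s and 40 s with st = 10 s: the 40 s improvement
# is never reached, because none arrives between 12 s and 22 s.
best = search_with_timer([(5, 100), (12, 90), (40, 80)], st=10)
```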
Note that, if st is too small, the CP module may not have time to find the local optimum, but if st is too high, the CP module spends a lot of time in each iteration exploring bad search space areas. TABLE 2 – MEAN CPU SEARCH TIME NEEDED TO OBTAIN THE OPTIMAL SOLUTION FOR SET S1 PROBLEMS (U=6, T=12)

[Table 2 is garbled in this extraction. Its rows give, for BBSI and BBSS with st = 10, 60, 120 and 220 seconds, the mean CPU time in seconds to reach the optimal solution; its columns correspond to z = 3 to 9. The symbol (--) marks parameter settings for which the optimum was not found.]

B. Comparison with other approaches In order to verify the efficiency of our search procedures, we have used the results reported in [7]. As far as we know, that work is a reference in the area of maintenance scheduling, in their case for thermal generators. The results presented in the paper report the application of several evolutionary techniques (e.g. Simulated Annealing, Tabu Search, Memetic Algorithms, etc.) to 3 problems of different sizes. They also indicate the number of what they call combination and order constraints; these are, respectively, our constraints (7) and (8) defined in Section V. Table 4 summarizes the problem parameters. The authors indicate neither the processor nor the programming language used. TABLE 4 - TEST PROBLEM PARAMETERS

TABLE 3 - MEAN CPU SEARCH TIME NEEDED TO OBTAIN THE OPTIMAL SOLUTION FOR SET S2 PROBLEMS (U=10, T=24)

In order to test the influence of the z and st parameters, we have solved two sets (S1, S2) of 5 problem instances each, with sizes (U=6, T=12) and (U=10, T=24) respectively. For each of the problems we have tried, for the z parameter, all the integer values in the interval [1, T]. We have also tried values for the st parameter varying from 1 s to 300 s in steps of 10 seconds. Table 2 and Table 3 show the average CPU time (in seconds) that the BBSS and BBSI search procedures needed to find the optimal solution for each problem set. Where the symbol (--) appears, the procedure could not find the optimal solution before the stop criterion of 50 iterations was met. Due to lack of space, we do not show in the tables the results for all values of the st parameter. As we expected, for small and high z values (z > 9 for S1 and z > 17 for S2), none of the search procedures could find the optimal solution. This problem was more noticeable for the BBSS procedure. It is also true that the smaller the st value, the more difficult it is to find the optimal solution for both procedures, and the higher the st value, the more time is needed to find the optimal solution. The right st value grows with the size of the problem. The BBSS procedure performs globally better (less computation time) than BBSI; however, BBSI is less sensitive to parameter variation. This can be explained by the higher capacity of BBSS to find the local optimum, but its smaller capacity to direct the search towards the global one. Based on these tests, we can say that a good z value lies in the interval [T/4, T/2], and st > 10 + 2*T seconds.
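This empirical rule can be packed into a small helper; the relation itself is the paper's, while the choice of the interval midpoint and the function name are ours:

```python
# The paper's empirical parameter rule of thumb: z in [T/4, T/2] and
# st > 10 + 2*T seconds. Taking the midpoint of the z interval is our
# illustrative choice, not the authors'.

def suggest_parameters(T):
    """Suggest (z, st) for a scheduling horizon of T periods."""
    z = max(1, round(T * 3 / 8))     # midpoint of [T/4, T/2]
    st = 10 + 2 * T                  # seconds (lower bound of the rule)
    return z, st

z, st = suggest_parameters(24)       # an S2-sized problem (T=24)
```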

[Table 3 is garbled in this extraction. Its rows give, for BBSS and BBSI with st = 10, 60, 120 and 220 seconds, the mean CPU time in seconds to reach the optimal solution; its columns correspond to z = 4 to 17. The symbol (--) marks parameter settings for which the optimum was not found.]

Problem   U    T    Constraints of type (7)   Constraints of type (8)
1         15   25   2                         3
2         30   40   3                         3
3         60   52   5                         5

We have applied our 2 search procedures to 3 similar problems with the same size, number and type of constraints. The problems were created with our instance generator.


TABLE 5 - RESULTS

Method          Problem 1            Problem 2             Problem 3
SA              0:03 (218111)        0:18 (555280)         0:46 (1202264)
TS              0:10 (214066)        0:59 (555191)         3:08 (1201970)
MA(TS)          1:12 (209209)        6:34 (554344)         25:29 (1201279)
CBB+S+SV+Mc     1:40                 8:22                  48:26
BBSS            1:26 (z=8; st=70)    4:46 (z=13; st=90)    20:23 (z=15; st=110)
BBSI            1:34 (z=8; st=70)    5:12 (z=13; st=90)    18:12 (z=15; st=110)

Table 5 shows the results of our method, of the pure CP approach of [9] named CBB+S+SV+Mc, and of 3 of the methods used in [7], namely Simulated Annealing (SA), Tabu Search (TS) and the Memetic Algorithm with a Tabu Search operator (MA(TS)). The data used is the same as in [9] but different from that of [7]. We do not indicate the costs of our method or of [9] because, since the data differs, they are of no interest for the comparison. Instead, we indicate between parentheses the values of the z and st parameters. Times are shown in minutes:seconds format and, for the 3 stochastic methods, the average cost over 40 runs is indicated between parentheses. Considering that problems with the same size and structure are equally demanding in terms of computation time, we can say that the SA and TS algorithms are the fastest, but with poor results. MA(TS) is slower than the others, but with results that the authors believe are near optimal. Our method performs better than CBB+S+SV+Mc and, for problems 2 and 3, better than MA(TS). We have verified that, for problems 1 and 2, the solutions were the optimal ones. Even for problem 3, we have not found any better solution when running our algorithms for more than 10 hours. Although we do not report the results in the table, we have tried other values for the z and st parameters and the results were similar. For example, with z values within ±2 units of the reported ones, and st varying ±20%, we have obtained the same costs with a maximum variation of 30% in search time. This confirms the robustness of our method with respect to the parameters. Considering other applications, we can say that CP methods are usually slower than other methods, for example stochastic ones. Nevertheless, quickness is not the strongest point of CP; its modeling capacity and flexibility are.
With CP, planning engineers can spend more time studying the problem characteristics and considering side constraints that are often left aside, knowing that the model will be easy to build and that the solving difficulties are handled by the constraint system. GCS maintains all these CP advantages while improving search efficiency.

VIII. CONCLUSIONS AND FUTURE WORK

This paper presented a new hybrid constraint algorithm to solve the Maintenance Scheduling of Electric Power Units problem. The algorithm combines ideas from Guided Local Search with constraint-based search, resulting in what we have called Guided Constraint Search. Within GCS, two search procedures were described, namely the Branch and Bound procedure with Smallest variable selection heuristic and Smallest value selection heuristic (BBSS) and the Branch and Bound procedure with Smallest Inutility variable and value selection heuristic (BBSI). Based

on our test results, we concluded that the BBSS procedure performs slightly better than BBSI but is more sensitive to the two algorithm parameters. In spite of this sensitivity, the algorithm is robust enough to allow users to select both parameters easily, based on a simple relation. Finally, a comparison with other approaches suggests that our algorithm performs as well as, or even better than, them. Summarizing, the paper demonstrates that it is possible to take advantage of the modeling capacities of CP without losing efficiency when compared to less flexible approaches such as the stochastic ones.

IX. REFERENCES

[1] B. M. Weedy, Electric Power Systems, Chichester: John Wiley & Sons, 1992.
[2] J. Dopazo and H. Merrill, "Optimal generator maintenance scheduling using integer programming," IEEE Transactions on Power Apparatus and Systems, vol. 94, pp. 1537-1545, 1975.
[3] Z. Yamayee, K. Sidenblad, and M. Yoshimura, "A computationally efficient optimal maintenance scheduling method," IEEE Transactions on Power Apparatus and Systems, vol. 102, pp. 330-338, 1983.
[4] Y. Satoh and K. Nara, "Maintenance scheduling by using the simulated annealing method," IEEE Transactions on Power Systems, vol. 6, pp. 850, 1991.
[5] H. Kim and K. Nara, "An algorithm for thermal unit maintenance scheduling through combined use of GA, SA and TS," IEEE Transactions on Power Systems, vol. 12, pp. 329-335, 1997.
[6] K. Dahal and J. McDonald, "Generator maintenance scheduling of electric power systems using genetic algorithms with integer representation," Proc. of 2nd International Conf. on Genetic Algorithms in Engineering Systems: Innovations and Applications (GALESIA'97), 1997.
[7] E. Burke, "Hybrid Evolutionary Techniques for the Maintenance Scheduling Problem," IEEE Transactions on Power Systems, vol. 15, pp. 122-128, 2000.
[8] P. Dahal, S. Galloway, G. Burt and R. McDonald, "Generation scheduling using genetic algorithm based hybrid techniques," Proc. of Large Engineering Systems Conference (LESCOPE 2001), Canada, 2001.
[9] N. Gomes and Z. Vale, "Constraint Based Maintenance Scheduling of Electric Power Units," submitted to PSPO'03, USA, 2003.
[10] M. Leconte, "Constraint Solving in ILOG and its Application," Escola Portuguesa em Inteligência Artificial, Lisboa, 1996.
[11] M. Wallace, S. Novello, and J. Schimpf, "ECLiPSe: A Platform for Constraint Logic Programming," Technical Report, IC-Park, London, 1997.
[12] N. Beldiceanu, "Introducing global constraints in CHIP," Mathematical and Computer Modelling, vol. 20, pp. 97-123, 1994.
[13] R. Mohr and T. Henderson, "Arc and Path Consistency Revisited," Artificial Intelligence, vol. 28, pp. 225-233, 1986.
[14] F. Rossi et al., "Constraint Logic Programming," ERCIM/Compulog Net Workshop on Constraints, 2000.
[15] C. Voudouris and E. P. K. Tsang, "Guided Local Search," European Journal of Operational Research, vol. 113, no. 2, pp. 469-499, 1999.
[16] M. El-Sharkh, R. Yasser, and A. El-Keib, "Optimal Maintenance Scheduling for Power Generation Systems - A Literature Review," Proc. of the Maintenance and Reliability Conference 1998, pp. 20.01-20.10, 1998.
[17] W. Harvey et al., "The ECLiPSe Library Manual Release 5.4," IC-Park, 2002.
[18] N. Gomes and Z. Vale, "Guided Constraint Search," Technical Report ISEP, IS2343, 2001.