Performance Measures for Dynamic Multi-Objective Optimization

Mario Cámara¹, Julio Ortega¹, and Francisco de Toro²

¹ Dept. of Computer Technology and Architecture
² Dept. of Signal Theory, Telematics and Communications
University of Granada, Spain
{mcamara,jortega}@atc.ugr.es, [email protected]

Abstract. As most of the performance measures proposed in the literature for dynamic optimization algorithms address only single-objective problems, we propose new measures for dynamic multi-objective problems. Specifically, we give new measures for those problems in which the Pareto fronts are unknown. As these problems are the most common in industry, our proposed measures constitute an important contribution to promoting further research on them.

1 Introduction

Dynamic optimization problems appear in many different real-world applications with high socio-economic relevance. They are dynamic because the conditions on which the cost functions depend, the restrictions that the solutions must meet, etc., change over time. For example, in some scheduling problems, such as those appearing in parallel computing servers, the available resources and the volume of tasks to be allocated can vary over time. The shifting of the optima with time is an important issue in this kind of problem.

Dynamic multi-objective optimization (DMO) comprises both multi-objective problems, where there is more than one objective function to optimise, and non-stationary or dynamic problems, where the restrictions or the objective functions can change over time [1]. In recent years, a growing interest from researchers has focused on solving dynamic problems with evolutionary algorithms [2,3,4].

In order to assess algorithms by their performance, several measures have been proposed over the last years [2,3,4,5,6,7]. However, most of them are aimed either at dynamic problems with only one objective or at problems where the location of the real Pareto fronts is known beforehand. In this paper, we present measures designed for dynamic multi-objective problems. Our measures tackle the difficulty that arises when the location of the real Pareto fronts is unknown, as happens with most real-world DMO problems.

In section 2 a review of the previous work done in dynamic optimization is given. Then, section 3 presents the measures that we propose. Section 4 offers some results using these measures. Finally, conclusions are given in section 5.

2 Previous Work

A very important issue when developing algorithms is to have a suite of performance measures with which to compare the different algorithms. Despite the fact that a definitive set of such performance measures is not feasible for non-stationary multi-objective optimization [8], some quality indicators have been created with this same goal [9].

However, in addition to performance indicators like those for stationary multi-objective algorithms, dynamic problem optimizers need another type of performance measure. Such measures should point out which of the available algorithms best suits the current needs, and whether it would be able to deal with the problem at hand on time. This preliminary study should take place before the commercial exploitation of the system begins; thus, it allows the designer to use offline measures instead of on-line or on-the-fly measures. Some of the few existing performance measures on this topic are considered in what follows.

Morrison [6] introduces a measure known as Collective Mean Fitness, F_C, a single value designed to provide an aggregate picture of an EA's performance, where the performance information has been collected over a representative sample of the fitness landscape dynamics. Collective fitness is defined as the mean best-of-generation value, averaged over a sufficient number of generations G, required to expose the evolutionary algorithm to a representative sample of all possible landscape dynamics, and further averaged over multiple runs.

Weicker [5] proposes measures for what he describes as the three different aspects that have to be taken into account when analyzing and comparing algorithms for dynamic problems. The first is accuracy, which measures the closeness of the current best found solution to the actual best solution. It usually takes values between 0 and 1, with 1 being the best accuracy value. Accuracy is defined as:

accuracy_{F,EA}^{(t)} = \frac{F(best_{EA}^{(t)}) - min_F^{(t)}}{max_F^{(t)} - min_F^{(t)}}    (1)

where best_{EA}^{(t)} is the best solution found in the population at time t, the maximum and minimum fitness values in the search space are represented by max_F^{(t)} and min_F^{(t)} respectively, and F is the fitness function of the problem.

Weicker also stated that stability is an important issue in the context of dynamic optimization. A dynamic algorithm is called stable if changes in the environment do not affect the optimization accuracy severely. Hence, a definition of stability was given as:

stab_{F,EA}^{(t)} = \max\left\{0,\ accuracy_{F,EA}^{(t)} - accuracy_{F,EA}^{(t-1)}\right\}    (2)

which takes values from 0 to 1; in this case, a value close to 0 means high stability.

A third aspect of interest in dynamic problems is the ability of an algorithm to react to changes. Weicker proposes to check whether an algorithm has ε-reactivity at time t using the following equation:


react_{F,EA,\varepsilon}^{(t)} = \min\left(\left\{ t' - t \;\middle|\; t < t' \le maxgen,\ t' \in \mathbb{N},\ \frac{accuracy_{F,EA}^{(t')}}{accuracy_{F,EA}^{(t)}} \ge (1-\varepsilon) \right\} \cup \{\, maxgen - t \,\}\right)    (3)

where maxgen is the number of generations. The measure react_{F,EA,ε}^{(t)} evaluates how much time Δt it takes the algorithm to reach a desired accuracy threshold again.

The problem with all the measures proposed so far is that they were designed only for dynamic single-objective problems and not for multi-objective optimization problems. In [3], we proposed an adaptation of Weicker's measures for multi-objective problems. Here, we improve and extend those measures. On the other hand, Li et al. [2] also proposed some measures for DMO. The first one is called rGD(t), which is based on the GD measure [10]. The drawback of this measure is that the real Pareto front must be known at every time. Nevertheless, it could be a very good substitute for Weicker's accuracy in multi-objective problems. They also propose the use of the hypervolume ratio, where the hypervolume is defined by Zitzler et al. [9] as the size of the space covered by a given set of solutions from a given reference point, similar to the concept of a discrete integral; this ratio is identical to the accuracy measure we proposed at the same time [3]. Finally, they offer an adaptation of the collective mean error based on rGD(t).
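To make the reviewed measures concrete, the following Python sketch computes Morrison's collective mean fitness and Weicker's accuracy, stability and ε-reactivity from best-of-generation data. It is only an illustrative sketch under assumed inputs (the per-step best fitness together with the known maximum and minimum fitness of each landscape); the function names and the toy data are ours, not taken from [5] or [6].

from typing import List

def collective_mean_fitness(best_of_gen_runs: List[List[float]]) -> float:
    """Morrison's F_C: mean best-of-generation value, averaged over the
    generations of each run and then over multiple runs."""
    return sum(sum(run) / len(run) for run in best_of_gen_runs) / len(best_of_gen_runs)

def accuracy(best: float, f_min: float, f_max: float) -> float:
    """Weicker's accuracy (1): closeness of the best found solution to the optimum."""
    return (best - f_min) / (f_max - f_min)

def stability(acc_t: float, acc_prev: float) -> float:
    """Stability (2) as printed above: max(0, acc(t) - acc(t-1))."""
    return max(0.0, acc_t - acc_prev)

def reactivity(acc: List[float], t: int, eps: float) -> int:
    """epsilon-reactivity (3): steps until accuracy recovers to (1 - eps) * acc(t)."""
    maxgen = len(acc) - 1
    candidates = [t2 - t for t2 in range(t + 1, maxgen + 1)
                  if acc[t2] / acc[t] >= (1.0 - eps)]
    return min(candidates + [maxgen - t])

# Toy usage with assumed per-generation data (not taken from the paper):
best_fitness = [0.80, 0.90, 0.95, 0.60, 0.85, 0.94]
acc = [accuracy(b, 0.0, 1.0) for b in best_fitness]
stb = [stability(acc[t], acc[t - 1]) for t in range(1, len(acc))]
print(collective_mean_fitness([best_fitness]), reactivity(acc, t=3, eps=0.1))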

3 Improved Measures for Multi-Objective Algorithms

When considering measures for dynamic multi-objective problems it is important to distinguish between those problems in which the current real Pareto front is known at every time step of the run of the algorithm and those in which the real Pareto front is unknown. The latter is the usual case in real-world problems, and even in many of the test cases suggested for research purposes. All of the measures indicated above are either only for single-objective problems or for those multi-objective problems in which the real Pareto fronts F_real(t) are known at any time.

In the rest of the paper, we will use the following terms and notation, strongly influenced by those found in [1], to designate the different aspects of each problem. The terms decision space and objective space refer, respectively, to the spaces of the search variables and of the objective functions of those search variables. Also, we refer to the set of non-dominated solutions found at time t as the approximate Pareto-optimal solutions at time t, which is divided into the approximate decision space at time t, S_P(t), and the approximate objective space or approximate Pareto front at time t, F_P(t). The real Pareto front, which may or may not be known at time t, is denoted as F_real(t) and always lies in the objective space.

In order to deal with real problems where the location of the real Pareto fronts is unknown, we propose to redefine those previously proposed measures that rely on knowledge of the real Pareto fronts. Nevertheless, the only measure that has to be adapted is acc(t) (1), since the other two, stb(t) (4) and reac_ε(t) (5), rely on the knowledge of the real Pareto fronts only through acc(t) (1). For completeness we reproduce our definitions of stb(t) (4) and reac_ε(t) (5):

stb(t) = \begin{cases} stb_0(t) & \text{if } stb_0(t) \ge 0 \\ 1 & \text{otherwise} \end{cases}, \quad \text{with } stb_0(t) = acc(t) - acc(t-1)    (4)

reac_\varepsilon(t) = \min\left(\left\{ t' - t \;\middle|\; t < t' \le maxgen,\ t' \in \mathbb{N},\ \frac{acc(t')}{acc(t)} \ge (1-\varepsilon) \right\} \cup \{\, maxgen - t \,\}\right)    (5)

Nevertheless, we note again that, as we are interested in offline measures, we can exploit the knowledge of all the approximate Pareto fronts, not just the past ones. Hence, in order to improve our accuracy definition for those cases in which the real Pareto fronts are unknown, we need to replace the hypervolume of the real fronts with other suitable quantities. These quantities could be the maximum and minimum hypervolumes over the whole run. However, if the Pareto fronts of the objective space of the problem change, those absolute maximum and minimum values could be far from the current real Pareto front. Because of this, we propose to adapt the concept of accuracy and consider it as a measure of the current approximate Pareto front in comparison only with the nearby approximate fronts, both in the past and in the future. This is the concept of accuracy within a window or offset, which was already mentioned by Weicker [5].

A window is a period of time in which the problem becomes stationary: the problem does not show clear changes in the approximate Pareto fronts found during those time steps. Thus, windows mark phases of the problem. Each phase is characterized by the moment at which the change happened (when the phase starts) and its length in time units. We suggest that the window length should be variable instead of fixed. If the problem under study changes with an exact frequency, this window length should turn out to be equal to the inverse of that frequency. But in order to widen the set of problems that the new measure can analyze, it must be able to cope with variable frequencies.

Therefore, our improved measure has two parts. First, the windows or phases are detected and the lengths corresponding to each phase are calculated. Then, the accuracy measure is calculated for every time step using the relative minimal or maximal hypervolume values within that phase. To calculate the lengths of all the phases we propose Procedure 1.
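Since all of the adapted measures above ultimately compare hypervolume values of approximate fronts, the following Python sketch shows one simple way to compute the hypervolume of a two-objective minimization front with respect to a reference point. It is an illustrative sketch, not the implementation used in the paper; the reference point and the toy front are assumed values.

from typing import List, Tuple

def hypervolume_2d(front: List[Tuple[float, float]],
                   ref: Tuple[float, float]) -> float:
    """Hypervolume of a two-objective (minimization) front w.r.t. a reference point.
    The front is assumed to contain only mutually non-dominated points that
    dominate the reference point."""
    pts = sorted(front)              # sort by the first objective
    hv, prev_f1 = 0.0, ref[0]
    for f1, f2 in reversed(pts):     # sweep from the largest f1 downwards
        hv += (prev_f1 - f1) * (ref[1] - f2)
        prev_f1 = f1
    return hv

# Toy front (illustrative values, not taken from the paper):
print(hypervolume_2d([(0.1, 0.9), (0.4, 0.5), (0.8, 0.2)], ref=(1.0, 1.0)))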


Procedure 1. Calculation of lengths
Input: A set of N hypervolume values for the approximate Pareto fronts.
Output: A set S of the lengths of each of the phases.
 1  begin
 2    for i = 2 to N do
 3      ΔHV_i = HV_i − HV_{i−1}
 4    length = 1
 5    for i = 2 to N do
 6      if ΔHV_i ≥ |ΔHV_{i−1} + ΔHV_{i+1}| then
 7        S ← S ∪ {length}
 8        length = 1
 9      else
10        length = length + 1
11  end
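The following Python sketch is one possible reading of Procedure 1, not the authors' code. The paper prints ΔHV_i on the left-hand side of the line-6 condition; because Table 1 reports |ΔHV(t)| and the hypervolume drops when the problem changes, the sketch compares |ΔHV_i| instead, and the handling of the undefined boundary terms as well as the closing of the last phase are our assumptions.

from typing import List

def phase_lengths(hv: List[float]) -> List[int]:
    """Sketch of Procedure 1: split a hypervolume series into phase lengths.

    The paper's line-6 condition is dHV_i >= |dHV_{i-1} + dHV_{i+1}|; here we
    compare |dHV_i| instead (our assumption, motivated by the |dHV(t)| column
    of Table 1), skip the check at i = 2 where dHV_1 does not exist, treat the
    missing dHV_{N+1} as 0, and close the last phase at the end.
    """
    n = len(hv)
    dhv = {i: hv[i - 1] - hv[i - 2] for i in range(2, n + 1)}  # 1-based, as in the paper
    lengths: List[int] = []
    length = 1
    for i in range(2, n + 1):
        prev_d = dhv.get(i - 1)
        next_d = dhv.get(i + 1, 0.0)
        if prev_d is not None and abs(dhv[i]) >= abs(prev_d + next_d):
            lengths.append(length)   # a change detected at step i closes the phase
            length = 1
        else:
            length += 1
    lengths.append(length)           # close the last (possibly still open) phase
    return lengths

# Hypervolume column of Table 1 (FDA3-mod); this prints [5, 5, 5, 5, 1].
hv_table1 = [3.7213, 3.7562, 3.7734, 3.7780, 3.7818, 3.6845, 3.7046, 3.7141,
             3.7177, 3.7221, 3.6175, 3.6482, 3.6571, 3.6627, 3.6681, 3.5648,
             3.5965, 3.6094, 3.6143, 3.6162, 3.5339]
print(phase_lengths(hv_table1))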

The condition inside the if at line 6 of Procedure 1 can be replaced by other conditions that may be suitable to detect a change in the fronts. Once the lengths have been obtained, the accuracy of every approximate Pareto front is calculated using (6), which makes use of (7) and (8). In these equations, HV is the hypervolume of the given set of solutions.

acc_{unk}(t) = \begin{cases} acc_{unk}^{maximizing}(t) & \text{if the problem is maximizing} \\ acc_{unk}^{minimizing}(t) & \text{if the problem is minimizing} \end{cases}    (6)

acc_{unk}^{maximizing}(t) = \frac{V_{min}^{Q(t)}}{HV(F_P(t))} = \frac{HV_{min}(Q(t))}{HV(F_P(t))} = \frac{HV(F_P(\min\{Q(t)\}))}{HV(F_P(t))}    (7)

acc_{unk}^{minimizing}(t) = \frac{HV(F_P(t))}{V_{max}^{Q(t)}} = \frac{HV(F_P(t))}{HV_{max}(Q(t))} = \frac{HV(F_P(t))}{HV(F_P(\max\{Q(t)\}))}    (8)

where Q(t) is a set containing the time values of the window in which t takes place, i.e., the surrounding past and future values of t in which the approximate Pareto fronts have not suffered noticeable changes in their hypervolume values, according to Procedure 1. The cardinality of each Q(t) is the length of the phase that Q(t) represents. Thus, V_{max}^{Q(t)} and V_{min}^{Q(t)} are, respectively, the maximum and minimum hypervolume of the fronts within the phase defined by Q(t).
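A minimal Python sketch of the windowed accuracy (6)-(8) follows, representing each phase Q(t) by the consecutive block of time steps produced by Procedure 1 (the phase_lengths sketch above); the function name and the minimizing flag are our own choices.

from typing import List

def windowed_accuracy(hv: List[float], lengths: List[int],
                      minimizing: bool = True) -> List[float]:
    """acc_unk(t) per (6)-(8): compare HV(F_P(t)) only against the extreme
    hypervolume observed inside the phase Q(t) that contains t."""
    acc: List[float] = []
    start = 0
    for length in lengths:
        phase = hv[start:start + length]      # hypervolumes of the fronts in Q(t)
        v_min, v_max = min(phase), max(phase)
        for value in phase:
            # (8) for minimizing problems, (7) for maximizing ones
            acc.append(value / v_max if minimizing else v_min / value)
        start += length
    return acc

# Example with the Table 1 hypervolumes and the phases found by phase_lengths():
# acc = windowed_accuracy(hv_table1, phase_lengths(hv_table1))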

4 Results

In this section we show how these measures work on a set of approximate Pareto fronts corresponding to the problem FDA3-mod [11]. This problem is a modification we proposed for FDA3 [1] in order to avoid some underlying problems in it. In this function, every 5 time steps the solutions are forced to change in the decision and objective spaces through a controlling parameter τ_i which is constant over the run of the algorithm. Due to space limitations, only the first 21 time steps are shown. The problem was solved with our SFGA algorithm [12], a generational MOEA which keeps in the population only the first dominating rank of the non-dominated solutions. Although the lengths of the problem phases were calculated using Procedure 1, they are not shown in the table because they all equal 5 after the changes were forced in the problem, as expected given the parameter τ_i = 5. Table 1 shows the measures for the Pareto front found at each time step, together with the absolute difference in hypervolume between each found Pareto front and the previous one.

Table 1. Performance measures for FDA3-mod

Time t   Hypervolume   Absolute change   Accuracy   Stability   Reaction time
         HV(t)         |ΔHV(t)|          acc(t)     stb(t)      reac(t)
  1      3,7213        -                 0,984      -           1
  2      3,7562        0,035             0,993      0,009       1
  3      3,7734        0,017             0,998      0,005       1
  4      3,7780        0,005             0,999      0,001       1
  5      3,7818        0,004             1,000      0,001       1
  6      3,6845        0,097             0,989      1,000       5
  7      3,7046        0,020             0,995      0,005       1
  8      3,7141        0,010             0,998      0,003       1
  9      3,7177        0,004             0,998      0,001       1
 10      3,7221        0,004             1,000      0,001       1
 11      3,6175        0,105             0,986      1,000       5
 12      3,6482        0,031             0,995      0,008       1
 13      3,6571        0,009             0,997      0,002       1
 14      3,6627        0,006             0,999      0,002       1
 15      3,6681        0,005             1,000      0,001       1
 16      3,5648        0,103             0,986      1,000       5
 17      3,5965        0,032             0,995      0,009       1
 18      3,6094        0,013             0,998      0,004       1
 19      3,6143        0,005             0,999      0,001       1
 20      3,6162        0,002             1,000      0,001       1
 21      3,5339        0,082             0,999      1,000       5

The importance of the measures presented in this paper is twofold. First, all the data shown in Table 1 have been gathered and calculated without relying on the location of the real Pareto fronts. This feature greatly widens the range of problems which can be tackled with these measures. Second, they are the first measures for dynamic multi-objective problems which specifically address the three different but important aspects of dynamic problems suggested by Weicker [5]: accuracy, stability and reaction time.

From Table 1 it is clear that the measures proposed by Weicker and adapted in this paper to multi-objective problems are useful when tackling a dynamic problem like FDA3-mod. The reason is that Procedure 1 was able to detect all the phases that occurred in the problem, and exactly at the time steps in which they appeared. Also, very suitable accuracy values were found for every approximate Pareto front. That was possible because the accuracy is a measure related exclusively to the relative minimum or maximum approximate Pareto front appearing in that phase. For instance, the approximate Pareto fronts found at times 1 and 16 in Table 1 have almost the same accuracy (around 0,985), but their hypervolume values differ by more than 4%. However, if absolute minimum and maximum hypervolumes had been used in acc_unk(t), those differences in the hypervolume values would have been translated into the accuracy measure, implying that the first approximate Pareto front is better than the sixteenth in terms of accuracy. This turns out to be incorrect. The truth is that the first approximate Pareto front can be worse or better only in comparison to the other approximate fronts that belong to the first phase of the problem, and the same happens with the sixteenth approximate Pareto front, but in this case within the fourth phase.

Finally, these proposed measures can be used to choose which algorithm fits a specific application best. In that case, tests are run with different algorithms and the accuracy measure is used to determine which of the tested algorithms shows the best results in optimization terms. Then, the other two measures are very useful to make sure that the chosen algorithm is able to cope with the given problem on time.
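As a quick numerical check of the comparison above, using only values taken from Table 1 (comma decimals written with dots):

# Hypervolumes at t = 1 and t = 16, and the largest hypervolume in their phases
hv1, hv16 = 3.7213, 3.5648
vmax_phase1, vmax_phase4 = 3.7818, 3.6162
print(hv1 / vmax_phase1)       # ~0.984 -> acc(1)
print(hv16 / vmax_phase4)      # ~0.986 -> acc(16)
print((hv1 - hv16) / hv1)      # ~0.042 -> hypervolumes differ by more than 4%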

5 Conclusions

In this paper we have proposed performance measures for dynamic multi-objective optimization in order to address problems where the location of the real Pareto fronts is unknown. DMO is a hot topic, so this contribution may turn out to be quite useful, and we expect that researchers will take advantage of the proposed performance measures in their research. In particular, our current and future work on parallel evolutionary algorithms for dynamic multi-objective optimization, whose first results are shown in [11], will take advantage of these measures.

Acknowledgement. Work funded by the projects TIN2007-60587 (Spanish Ministry of Science and Technology) and P07-TIC-02988 (Junta de Andalucía), and by the programme for early stage researchers of the Andalusian government, co-financed with FEDER funds.


References

1. Farina, M., Deb, K., Amato, P.: Dynamic multiobjective optimization problems: test cases, approximations, and applications. IEEE Transactions on Evolutionary Computation 8, 425-442 (2004)
2. Li, X., Branke, J., Kirley, M.: On performance metrics and particle swarm methods for dynamic multiobjective optimization problems. In: IEEE Congress on Evolutionary Computation, pp. 576-583 (2007)
3. Cámara, M., Ortega, J., de Toro, F.J.: Parallel processing for multi-objective optimization in dynamic environments. In: Proceedings of the 21st International Parallel and Distributed Processing Symposium, IPDPS 2007 (2007)
4. Alba, E., Saucedo, J.F., Luque, G.: A study of canonical GAs for NSOPs: panmictic versus decentralized genetic algorithms for non-stationary problems, ch. 13, pp. 246-260. Springer, Heidelberg (2007)
5. Weicker, K.: Performance measures for dynamic environments. In: Guervós, J.J.M., Adamidis, P.A., Beyer, H.-G., Fernández-Villacañas, J.-L., Schwefel, H.-P. (eds.) PPSN 2002. LNCS, vol. 2439, pp. 64-76. Springer, Heidelberg (2002)
6. Morrison, R.: Performance measurement in dynamic environments. In: Branke, J. (ed.) GECCO Workshop on Evolutionary Algorithms for Dynamic Optimization Problems, pp. 5-8 (2003)
7. Hatzakis, I., Wallace, D.: Dynamic multi-objective optimization with evolutionary algorithms: a forward-looking approach. In: GECCO 2006: Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation, pp. 1201-1208. ACM, New York (2006)
8. Zitzler, E., Laumanns, M., Thiele, L., Fonseca, C.M., da Fonseca, V.G.: Why quality assessment of multiobjective optimizers is difficult. In: GECCO 2002: Proceedings of the Genetic and Evolutionary Computation Conference, pp. 666-674. Morgan Kaufmann Publishers Inc., San Francisco (2002)
9. Zitzler, E., Thiele, L., Laumanns, M., Fonseca, C.M., Grunert da Fonseca, V.: Performance assessment of multiobjective optimizers: an analysis and review. IEEE Transactions on Evolutionary Computation 7(2), 117-132 (2003)
10. Van Veldhuizen, D.A.: Multiobjective Evolutionary Algorithms: Classifications, Analyses, and New Innovations. Ph.D. thesis, Wright-Patterson AFB, OH (1999)
11. Cámara, M., Ortega, J., de Toro, F.: Parallel multi-objective optimization evolutionary algorithms in dynamic environments. In: Lanchares, J., Fernández, F., Risco-Martín, J.L. (eds.) Proceedings of the First International Workshop on Parallel Architectures and Bioinspired Algorithms, vol. 1, pp. 13-20 (2008)
12. de Toro, F., Ortega, J., Ros, E., Mota, S., Paechter, B., Martín, J.M.: PSFGA: parallel processing and evolutionary computation for multiobjective optimisation. Parallel Computing 30, 721-739 (2004)