Expert Systems with Applications 39 (2012) 3603–3610


Evolutionary algorithm for stochastic job shop scheduling with random processing time

Shih-Cheng Horng a, Shieh-Shing Lin b, Feng-Yi Yang c,*

a Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taiwan, ROC
b Department of Electrical Engineering, St. John's University, Taiwan, ROC
c Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taiwan, ROC


Keywords: Stochastic job shop scheduling; Evolutionary strategy; Ordinal optimization; Simulation optimization; Dispatching rule

Abstract: In this paper, an evolutionary algorithm that embeds an evolutionary strategy (ES) in ordinal optimization (OO), abbreviated as ESOO, is proposed to obtain a good enough schedule for the stochastic job shop scheduling problem (SJSSP), with the objective of minimizing the expected sum of storage expenses and tardiness penalties within a limited computation time. First, a rough model using stochastic simulation with a short simulation length is used as a fitness approximation in ES to select N roughly good schedules from the search space. Next, starting from the selected N roughly good schedules, a goal-softening procedure is applied to search for a good enough schedule. Finally, the proposed ESOO algorithm is applied to an SJSSP comprising 8 jobs on 8 machines with random processing times following truncated normal, uniform, and exponential distributions. The test results obtained by the proposed approach were compared with five typical dispatching rules, and they demonstrate that the obtained good enough schedule is superior in both solution quality and computational efficiency. © 2011 Elsevier Ltd. All rights reserved.

1. Introduction

In the last two decades, a great deal of research has been conducted on the job shop scheduling problem (JSSP). Many approaches are used to deal with JSSP, such as analytical techniques, rule-based approaches, and meta-heuristic algorithms (Chakraborty, 2009). Analytical techniques can obtain an optimal or near-optimal solution, but they cannot describe the stochastic characteristics of a manufacturing system (Fattahi, Mehrabad, & Jolai, 2007). Rule-based approaches are widely used in practice because of their simplicity and effectiveness (Sule, 2007); however, no single rule is best under all possible conditions. Meta-heuristic algorithms can obtain near-optimal solutions, but they consume considerable computation time as the scheduling problems become complex (Zhang & Wu, 2010). Most job shop scheduling problems in real manufacturing systems are stochastic. The stochastic job shop scheduling problem (SJSSP) is an NP-hard problem that reflects real-world situations and suffers from its uncertain nature. The literature on stochastic job shop scheduling is notably sparser than that on deterministic job shop scheduling.

This work was partially supported by the National Science Council in Taiwan, ROC under Grant NSC100-2221-E-324-006.
* Corresponding author. Address: No. 155, Sec. 2, Li-Nong St., Taipei 11221, Taiwan, ROC. Tel.: +886 2 28267281; fax: +886 2 28201095. E-mail addresses: [email protected] (S.-C. Horng), [email protected] (S.-S. Lin), [email protected] (F.-Y. Yang).

doi:10.1016/j.eswa.2011.09.050

Research on scheduling problems with random processing times is one of the newest issues to attract attention recently, driven by the needs of flexible manufacturing systems. A scheduling method based on variable neighborhood search (VNS) was introduced in Zandieh and Adibi (2010) to address a dynamic JSSP that considers random job arrivals and machine breakdowns. Renna (2010) created schedules with a pheromone-based approach carried out by a multi-agent architecture. Zhou et al. proposed an ant colony optimization (ACO) algorithm with three levels of machine utilization, three different processing time distributions, and three different performance measures for solving the dynamic JSSP (Zhou, Nee, & Lee, 2009). A hybrid method utilizing a neural network approach was developed in Tavakkoli-Moghaddam, Jolai, Vaziri, Ahmed, and Azaron (2005) to generate initial feasible solutions, after which a simulated annealing (SA) algorithm is applied to improve the quality of the initial solutions and obtain a near-optimal solution. In Lei (2010), an efficient decomposition-integration genetic algorithm was developed to minimize the maximum fuzzy completion time. Gu et al. proposed a competitive co-evolutionary quantum genetic algorithm (GA) for solving the SJSSP with the objective of minimizing the expected makespan (Gu, Gu, Cao, & Gu, 2010); however, drawing processing times from a normal distribution can produce negative or zero values, which may not best reflect the stochastic nature of processing times. Gholami and Zandieh integrated simulation into a genetic algorithm for the dynamic scheduling of a flexible job shop with the objective of minimizing the expected makespan and expected mean


tardiness (Gholami & Zandieh, 2009). In Gu, Gu, and Gu (2009), a parallel quantum genetic algorithm was proposed for the SJSSP with the objective of minimizing the expected makespan, where the processing times follow independent normal distributions. All of the above methods suffer from lengthy computation times, because evaluating the objective of a schedule is already very time-consuming, not to mention the extremely slow convergence of heuristic techniques when searching a huge search space. To overcome the drawback of excessive computation time for complex scheduling problems, we propose an evolutionary algorithm that embeds the evolutionary strategy (ES) in ordinal optimization (OO), abbreviated as ESOO, to obtain a good enough feasible schedule within a reasonable amount of time. The key idea of the ESOO algorithm is to narrow the search space step by step, gradually restricting the search through iterative use of ordinal optimization (Ho, Zhao, & Jia, 2007; Shen, Ho, & Zhao, 2009). It is in the spirit of traditional hill climbing, except that instead of moving from point to point in the search space, the ESOO algorithm moves from one subset to another. The SJSSP is formulated as a constrained stochastic simulation optimization problem that possesses a huge search space and is therefore well suited to demonstrating the validity of the proposed ESOO algorithm. To overcome the drawback of negative or zero processing times resulting from the probability distribution function, we consider an SJSSP with stochastic processing times following truncated normal, uniform, and exponential distributions. Because the use of both earliness and tardiness penalties gives rise to a nonregular performance measure, the sum of storage expenses (for earliness) and tardiness penalties serves as the objective function of our optimization problem.
Therefore, the purpose of this paper is to determine a good enough schedule with the objective of minimizing the expected sum of storage expenses and tardiness penalties within a limited computation time. The mathematical formulation developed here is the first contribution of this paper; it can be used for job shop manufacturing systems with random processing times and various cost penalties and expenses. The proposed ESOO algorithm is the second contribution; it consists of an exploration stage and an exploitation stage. The exploration stage uses ES to efficiently select N excellent solutions from the search space, where the fitness function is evaluated with a rough model. To ensure the generation of feasible schedules, a precedence-based permutation encoding and a repair operator are utilized within ES. The exploitation stage comprises multiple phases, which allocate the computing resource and budget by iteratively and adaptively selecting the candidate solution set. At each phase, the remaining solutions are evaluated and some of them are eliminated; the solution obtained in the last phase is the good enough schedule that we seek. The rest of this paper is organized as follows. In Section 2, we present a mathematical formulation of the general SJSSP and describe the difficulty of the problem. In Section 3, we illustrate the proposed ESOO algorithm for finding a good enough schedule from the search space. In Section 4, the proposed algorithm is applied to an SJSSP comprising 8 jobs on 8 machines with random processing times following truncated normal, uniform, and exponential distributions. We show the test results of the proposed approach and demonstrate the solution quality by comparison with five typical dispatching rules. We also provide a performance analysis to justify the global goodness of the obtained solution. Finally, conclusions are drawn in Section 5.

2. Stochastic job shop scheduling problem

2.1. Problem statement

We consider a problem of machine scheduling known as the general SJSSP. A set of n jobs is given, denoted as J_i, 1 ≤ i ≤ n. Every

job consists of a sequence of operations, each of which needs to be processed during an uninterrupted time period on a given machine. A set of m machines is given, denoted as M_k, 1 ≤ k ≤ m. Each machine can process at most one operation at a time. The operation of job J_i processed on machine M_k is denoted as O_{i,k}, 1 ≤ i ≤ n, 1 ≤ k ≤ m. We denote the index of the processing sequence of O_{i,k} by a_{i,k}, which is given in advance. The random processing time of O_{i,k} is denoted by p_{i,k}, a random variable following a given probability distribution with mean θ_{i,k} and variance σ²_{i,k}. For simplicity of expression, an operation environment vector, denoted as [a_{i,k}, θ_{i,k}, σ²_{i,k}], is used to describe the job shop environment. The due date d_i of each job J_i is deterministic. If one or more jobs are ready to be processed on a certain machine and that machine is free, a job is chosen and passed to the machine immediately. As a result of the scheduling decisions, job J_i is assigned a completion time C_i, where C_i is the completion time of its last operation. Let T_i = max{0, C_i − d_i} and A_i = max{0, d_i − C_i} denote the tardiness and earliness of job J_i, respectively. A schedule specifies the order in which operations are performed on the machines and can be represented by a random permutation of all O_{i,k}, 1 ≤ i ≤ n, 1 ≤ k ≤ m. Let Θ denote the set of feasible and infeasible schedules without idle times between jobs. A feasible schedule S must satisfy the precedence constraints arising from the order in which the operations of each job must be performed, as well as the sequencing constraints ensuring that at most one job is processed on a machine at any given time.
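As a minimal illustration of these definitions, the tardiness and earliness of a job follow directly from its completion time C_i and due date d_i (the sample values below are hypothetical, not taken from the paper's examples):

```python
def tardiness_earliness(completion, due):
    """T_i = max{0, C_i - d_i} and A_i = max{0, d_i - C_i}."""
    tardiness = max(0.0, completion - due)
    earliness = max(0.0, due - completion)
    return tardiness, earliness

# A job finishing at C = 15 against a due date d = 12 is 3 units tardy;
# finishing at C = 9 it would instead be 3 units early.
print(tardiness_earliness(15, 12))  # (3.0, 0.0)
print(tardiness_earliness(9, 12))   # (0.0, 3.0)
```

Note that exactly one of T_i and A_i is nonzero unless C_i = d_i.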
The goal of the SJSSP is to find a feasible schedule S ∈ Θ that minimizes the specified objective and respects the following conditions: (i) each machine can process at most one operation at a time; (ii) each job can be performed on only one machine at a time; (iii) there are no precedence constraints among operations of different jobs; (iv) operations cannot be preempted once started.

2.2. Mathematical formulation

In this section, a mathematical model of the general job shop manufacturing system in a stochastic environment is presented. If a job J_i is accomplished later than its due date d_i, a cost penalty α_i per time unit of delay is paid to the customer. If the job is accomplished before the due date, the customer does not accept it until the deadline; the system is thus compelled to store the job until the due date, spending β_i per time unit of storage. Taking the economic situation into account, such a model covers a broad spectrum of job shop manufacturing systems under random disturbances. The SJSSP for a given operation environment vector [a_{i,k}, θ_{i,k}, σ²_{i,k}] can be formulated as

\[
\min_{S \in \Theta} \; E\left\{ f(S) = \sum_{i=1}^{n} \left( \alpha_i T_i + \beta_i A_i \right) \right\} \tag{1}
\]

subject to

\[
O_{i,k} \text{ satisfies the } a_{i,k} \text{ processing sequence}, \quad \forall\, O_{i,k} \in S, \tag{2}
\]

where f(S) denotes the sum of tardiness penalties and storage expenses; the parameter α_i > 0 denotes the weight of the tardiness penalty of job J_i, so α_i T_i is the tardiness penalty of job J_i; the parameter β_i > 0 denotes the weight of the earliness penalty of job J_i, so β_i A_i is the storage expense of job J_i. The decision maker can determine the values of the tardiness and earliness weights based on the economic situation. In the case of high machine utilization, the storage cost is small relative to the cost of unused production capacity; for low machine utilization, however, deliberately delaying the completion of a job may be more economical. Therefore, the expected sum of tardiness penalties and storage expenses, E[f(S)], serves as the objective function of our optimization problem. Precedence


constraints (2) ensure that the processing sequence of operations in each job corresponds to the predetermined order. Any solution satisfying constraints (2) is called a feasible schedule. The objective of this optimization problem is to find a feasible schedule S that minimizes the expected sum of storage expenses and tardiness penalties over the search space Θ while satisfying the precedence constraints. The above problem is clearly a constrained stochastic simulation optimization problem with a huge search space Θ. Since simulation can effectively describe the characteristics of the operations, a simulation methodology can be used to evaluate the objective value; the principle of the simulation method is that the analytical objective function is replaced by simulation models. However, to evaluate the true objective value of a feasible schedule S ∈ Θ, a stochastic simulation with infinitely many replications would be needed. Although infinitely many replications would make the objective value of (1) stable, this is practically impossible. Therefore, depending on the number of replications, (1) can be reformulated as follows:

\[
\min_{S \in \Theta} \; F(S) = \frac{1}{L} \sum_{l=1}^{L} \sum_{i=1}^{n} \left[ \alpha_i T_i^l(S) + \beta_i A_i^l(S) \right], \tag{3}
\]

where L denotes the simulation length, i.e., the number of replications, and T_i^l(S) and A_i^l(S) denote the tardiness and earliness of job J_i in the l-th replication of a feasible schedule S, respectively. The sample mean of the objective value, F(S), is the average sum of tardiness penalties and storage expenses of a feasible schedule S when the simulation length is L. A sufficiently large L makes the sample mean F(S) sufficiently stable. Let L_e = 10^5 represent such a sufficiently large L. In the sequel, the exact model of (3) is defined as the case where the simulation length L = L_e. For simplicity of expression, let F_e(S) denote the sample mean of the objective value of a feasible schedule S computed by the exact model.

2.3. Difficulty of the problem

Since a job's processing time fluctuates stochastically at each machine of the SJSSP, the processing sequences of jobs on each machine cannot be predefined. Even if a sequence were predefined, it would become invalid owing to changes in the jobs' actual processing times. Analytical techniques can be applied only to specific cases of job shop scheduling problems with random processing times (Fattahi et al., 2007). Simulation optimization has powerful modeling and optimization capabilities, so it is suited to solving this constrained stochastic simulation optimization problem (Keskin, Melouk, & Meyer, 2010; Li, Sava, & Xie, 2009). Owing to the computational intractability of the objective value, it is unrealistic to solve an SJSSP optimally in reasonable time. Recently, several heuristic search methods, including the ant colony optimization (ACO) method (Zhou et al., 2009), the simulated annealing (SA) method (Tavakkoli-Moghaddam et al., 2005), and the genetic algorithm (GA) (Gholami & Zandieh, 2009; Gu et al., 2009, 2010; Lei, 2010), have been successfully applied to the SJSSP.
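The sample-mean objective (3) can be estimated by plain Monte Carlo replication. The sketch below is an illustrative assumption rather than the paper's implementation: `simulate` stands for any user-supplied routine that returns one realization of the completion times, and the truncated-normal sampler shows one simple way to avoid the negative or zero processing times discussed above.

```python
import random

def truncated_normal(mean, std, low=0.0):
    """Resample a Gaussian draw until it is strictly positive (> low)."""
    while True:
        x = random.gauss(mean, std)
        if x > low:
            return x

def estimate_F(simulate, due, alpha, beta, L):
    """Sample mean F(S) = (1/L) * sum_l sum_i [alpha_i*T_i^l + beta_i*A_i^l]."""
    total = 0.0
    for _ in range(L):
        completions = simulate()            # one replication of the schedule
        for i, c in enumerate(completions):
            T = max(0.0, c - due[i])        # tardiness of job i
            A = max(0.0, due[i] - c)        # earliness of job i
            total += alpha[i] * T + beta[i] * A
    return total / L
```

With L = L_r (say 368 replications) this estimator plays the role of the rough model F_r(S) introduced in Section 3; with L = L_e = 10^5 it is the exact model F_e(S).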
Despite the success of several applications of the above heuristic methods, many technical hurdles and barriers to broader application remain, as indicated in Dréo, Pétrowski, Siarry, and Taillard (2006). Chief among them is speed: using simulation to evaluate the objective value of a given schedule is already computationally expensive, not to mention searching for the best schedule in a huge search space. The difficulty of the SJSSP makes it very hard for conventional search-based methods to find a near-optimal schedule in reasonable time. Furthermore, simulation often faces situations where variability is an integral part of the problem; stochastic noise thus further complicates the simulation optimization problem. The purpose of this paper is to resolve this constrained stochastic simulation optimization problem efficiently and effectively.


Accordingly, an ESOO algorithm that combines the strengths of simulation and meta-heuristic algorithms is presented in the next section.

3. Embedding the evolutionary strategy in ordinal optimization

As indicated in OO theory (Ho et al., 2007; Shen et al., 2009), the computing resource and budget can be allocated by iteratively and adaptively sampling subspaces of a large search space in order to locate good designs with high probability. Using ES to iteratively select N excellent candidate solutions from a large search space is one way to perform iterative ordinal optimization. Heuristic methods for obtaining N excellent candidate solutions may depend on one's knowledge of the considered system. For instance, in a wafer testing process with discrete threshold variables, Horng and Lin proposed an algorithm based on OO theory and engineering intuition to select N excellent discrete threshold vectors (Horng & Lin, 2009). However, engineering intuition may work only for specific systems. Thus, to select N roughly good solutions from Θ without consuming much computation time, we need to construct a rough model that is computationally cheap and approximates E[f(S)] of (1) for a given schedule S, and an efficient method to select N roughly good schedules. The proposed rough model is based on a stochastic simulation with a short simulation length, and the selection method is the (μ + λ) evolution strategy, abbreviated as (μ + λ)-ES (Beyer & Schwefel, 2002; Mezura-Montes & Coello, 2008).

3.1. Selecting N roughly good schedules S from Θ

To avoid an accurate but lengthy stochastic simulation to approximate E[f(S)] for a given schedule S, a rough model is used to evaluate F(S). The rough model is based on a stochastic simulation with an appropriate basic number of replications L_r, say 368. For simplicity of expression, we let F_r(S) denote the sample mean of the objective value of a given S computed by the rough model.
With the aid of the above objective value (or "fitness" in ES terminology) evaluation model, we can efficiently select N excellent schedules from Θ using ES. Since ES improves a pool of individuals from iteration to iteration, it fits our needs well. ES is based on the mechanism of natural selection in biological systems and uses a structured but randomized way to exploit genetic information in finding a new search direction. The improvement process is accomplished using genetic operators such as recombination and mutation.

3.1.1. Representation

Individual representation is a key factor affecting the solution quality and efficiency of ES. In solving the SJSSP using ES, the first task is to represent a schedule as an individual. Because of the precedence constraints, not all random permutations of the operations define feasible schedules. Thus, a precedence-based permutation representation is utilized in ES, which encodes a schedule satisfying the processing sequence of the operations; each component corresponds to an operation. For an SJSSP comprising 3 jobs on 3 machines, the operation environment vector is given in Table 1. The due dates of jobs J1, J2 and J3 are 11, 12 and 8, respectively. From Table 1, the precedence constraints are that O1,1 precedes O1,2, which precedes O1,3; O2,3 precedes O2,2, which precedes O2,1; and O3,2 precedes O3,1, which precedes O3,3. An individual containing 9 components in ES terminology represents a feasible schedule S, and every schedule is encoded by a precedence-based permutation. Examples of the precedence-based permutation encoding of an individual are shown in Fig. 1.

Table 1
Operation environment vectors [a_{i,k}, θ_{i,k}, σ²_{i,k}] of the SJSSP with 3 jobs on 3 machines.

        M1         M2         M3
J1      1, 2, 0    2, 4, 0    3, 5, 0
J2      3, 3, 0    2, 5, 0    1, 2, 0
J3      2, 4, 0    1, 2, 0    3, 4, 0

Any individual encoded by a precedence-based permutation is feasible, since it satisfies the precedence constraints (2). One advantage of the precedence-based permutation encoding is that no infeasible solution is generated; another is that the size of the search space is reduced. For an SJSSP with n jobs and m machines, the size of the search space under random permutation encoding is (n·m)!, whereas under precedence-based permutation encoding it is reduced to (n·m)!/(m!)^n. Once the precedence-based permutation encoding of an individual is determined, the disjunctive graph model (Brucker & Knust, 2006, chap. 4) is used to compute the completion time C_i of each job J_i. The disjunctive graph has a vertex for each operation, plus two dummy vertices B and D representing the beginning and end of a schedule, respectively. Conjunctive arcs between operations represent the precedence constraints on the operations of the same job. Disjunctive arcs, determined by a schedule, represent the fact that each machine can handle at most one operation at a time. Each arc is labeled by a weight p_{i,k} corresponding to the processing time of operation O_{i,k}. For example, the disjunctive graph of the schedule S1 = [O2,3 O3,2 O3,1 O1,1 O1,2 O2,2 O1,3 O3,3 O2,1] is shown in Fig. 2. Without loss of generality, the processing time p_{i,k} of each operation O_{i,k} is taken to be the mean θ_{i,k} shown in Table 1. The first component of S1 is O2,3, whose processing time is 3. Since M2 is idle while job J2 is on M3, the next schedulable operation is the second component O3,2, whose processing time is 4. After operation O3,2 is finished, job J3 is passed to M1 immediately with a processing time of 2. After operation O3,1 finishes, the next schedulable operation is O1,1, whose processing time is 2. When O1,1 is finished, job J1 is passed to M2 immediately with a processing time of 4.
Because M2 and M3 are idle when O1,2 is finished, job J2 is passed to M2 and job J1 is passed to M3 immediately, both with a processing time of 5. Once O1,3 is finished, the completion time of job J1 is 17. At the same time, because M1 and M3 are idle, the next schedulable operations are the eighth component O3,3, whose processing time is 4, and the last component O2,1, whose processing time is 2. Once O3,3 and O2,1 are finished, the completion times of jobs J3 and J2 are 21 and 19, respectively. Fig. 3 shows the Gantt chart of the schedule S1. The next task of ES is to construct an initial population: randomly generate μ (say 1000) individuals by precedence-based permutation encoding. The fitness of each individual is set to the corresponding estimate F_r(S) based on the rough model. After the initial population has been produced and evaluated, genetic operations are used to evolve and alter the genetic composition of the individuals, so that new solutions can be introduced and evaluated. Genetic evolution takes place by means of three basic genetic operators: (a) recombination, (b) mutation, and (c) selection. A reproduction step applies operators such as recombination and mutation to these individuals to produce a new population that is fitter than the previous one, and a selection step selects individuals from the old population. A pair of individuals is chosen by random sampling without replacement from the population; these two individuals become the parents for recombination.
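The Gantt-chart walkthrough for S1 above amounts to a semi-active decoding of the permutation: scan the operations in schedule order and start each one as soon as both its machine and its job predecessor are free. A sketch of such a decoder, using the deterministic processing times quoted in the walkthrough, reproduces the completion times 17, 19 and 21:

```python
def decode(schedule, proc_time, n_jobs, n_machines):
    """Semi-active decoding of a precedence-based permutation:
    each operation starts once its machine and its job are both free."""
    machine_free = [0] * (n_machines + 1)   # 1-based machine indices
    job_free = [0] * (n_jobs + 1)           # 1-based job indices
    completion = [0] * (n_jobs + 1)
    for i, k in schedule:                   # operation O_{i,k}
        start = max(machine_free[k], job_free[i])
        end = start + proc_time[(i, k)]
        machine_free[k] = job_free[i] = completion[i] = end
    return completion[1:]                   # [C_1, ..., C_n]

# Schedule S1 with the processing times used in the walkthrough above.
p = {(2, 3): 3, (3, 2): 4, (3, 1): 2, (1, 1): 2, (1, 2): 4,
     (2, 2): 5, (1, 3): 5, (3, 3): 4, (2, 1): 2}
S1 = [(2, 3), (3, 2), (3, 1), (1, 1), (1, 2), (2, 2), (1, 3), (3, 3), (2, 1)]
print(decode(S1, p, 3, 3))  # [17, 19, 21]
```

For this 3-job, 3-machine instance the precedence-based encoding shrinks the search space from 9! = 362,880 arbitrary permutations to 9!/(3!)³ = 1680 feasible schedules.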

Fig. 1. Examples of the precedence-based permutation encoding of an individual.

Fig. 2. The disjunctive graph of schedule S1.

Fig. 3. The Gantt chart of schedule S1.

3.1.2. Recombination

The recombination operation is the primary process for exploration in ES: it recombines the information of two parents to provide a powerful exploration capability. There are many recombination operators, such as discrete recombination, intermediate recombination, line recombination, and binary-valued recombination. In our approach, discrete recombination (Beyer & Schwefel, 2002) is employed. Discrete recombination performs an exchange of elements between the individuals: for each position, the parent contributing its element to the offspring is chosen randomly with equal probability. The two selected parent individuals are combined to generate new individuals that inherit segments of the information stored in the parents. The offspring generated by discrete recombination conserve appropriate characteristics of their parents; however, the offspring generated after the exchange may be infeasible and must then be adjusted. When infeasible solutions appear frequently but can be adjusted to feasible ones without much computational effort, a repair operator is efficient and effective; we therefore use a repair operator to adjust infeasible individuals. The repair operator deletes duplicated components and replenishes missing components. Fig. 4 shows an example of discrete recombination and the repair operator for an SJSSP with 3 jobs on 3 machines. A pair of parent individuals P1 and P2 is chosen via random sampling without replacement from the population. Using discrete recombination, suppose the cut position is randomly selected at the 4th position; a number of elements are selected from one parent and copied to the offspring. Discrete recombination can properly preserve the relative position of each component between parents and their offspring. However, the resulting offspring Ĉ1 generated by discrete recombination does not satisfy the precedence constraints (2). After applying the repair operator, the infeasible offspring Ĉ1 is adjusted to a feasible offspring C1: the duplicated component O1,2 is deleted and the missing component O3,1 is replenished. The missing component O3,1 can be inserted into the position generated


Fig. 6. Example of insertion mutation operator.

Fig. 4. Example of discrete recombination and repair operator.

by randomly choosing between components O3,2 and O3,3. This procedure legalizes the resulting offspring C1. Fig. 5 shows the Gantt chart of offspring C1 after recombination. The completion times of jobs J1, J2 and J3 for offspring C1 are 11, 17 and 16, respectively.

3.1.3. Mutation

Although recombination is the main genetic operator exploring the information contained in the current generation, it does not produce new information. Mutation is the main source of variation, responsible for injecting new information and introducing characteristics that do not exist in the parent population. An insertion mutation operator is used in ES, described as follows. One job J_i is randomly selected for each individual, and the insertion mutation operator is applied to all components O_{i,k} of job J_i. First, all components O_{i,k} of job J_i are deleted while the relative positions of the components of the other jobs J_j, j ≠ i, are preserved. Then each component O_{i,k} is randomly inserted among the preserved components O_{j,k} according to the processing sequence of job J_i. Fig. 6 shows an example of the insertion mutation operator for an SJSSP with 3 jobs on 3 machines. Suppose the insertion mutation operator is applied to all components O1,k of job J1. First, all components O1,k of job J1 are deleted and the relative positions of the components O_{j,k} of the other jobs in individual C1 are preserved for j ≠ 1, giving the offspring Î1. Second, each component O1,k is randomly inserted into Î1 according to the processing sequence of job J1. This procedure legalizes the resulting offspring I1. Since the precedence constraints of all jobs are preserved, the resulting offspring is still feasible. Fig. 7 shows the Gantt chart of offspring I1 after mutation. The completion times of jobs J1, J2 and J3 for offspring I1 are 15, 15 and 10, respectively.

3.1.4. Selection and termination

The selection mechanism utilized in our approach is (μ + λ)-selection. Selection is based on the ranking of the approximate fitness values F_r(S), taking the best μ individuals for the next generation from both the μ old parents and the λ offspring generated from these parents. Schedules with good fitness have a high probability of contributing new offspring to the next generation.
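(μ + λ)-selection is truncation selection over the union of parents and offspring; a minimal sketch follows (for a minimization objective such as F_r, so "best" means smallest; the fitness values are hypothetical):

```python
def mu_plus_lambda_select(parents, offspring, fitness, mu):
    """Keep the mu best individuals (smallest fitness) from parents + offspring."""
    pool = parents + offspring
    return sorted(pool, key=fitness)[:mu]

# Hypothetical fitness values standing in for F_r(S) of five individuals.
survivors = mu_plus_lambda_select([5.0, 3.0], [4.0, 1.0, 6.0], lambda s: s, 2)
print(survivors)  # [1.0, 3.0]
```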

Fig. 7. The Gantt chart of offspring I1 after mutation.
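The insertion mutation of Section 3.1.3 can be sketched as follows. The detail of drawing random positions one at a time, left to right, is an assumption consistent with the description above rather than the paper's exact procedure:

```python
import random

def insertion_mutation(individual, job, rng=random):
    """Delete every operation of `job`, then re-insert them one at a time at
    random positions while preserving the job's own processing sequence."""
    kept = [op for op in individual if op[0] != job]      # other jobs, order preserved
    removed = [op for op in individual if op[0] == job]   # in processing-sequence order
    lo = 0                                                # earliest legal insertion slot
    for op in removed:
        pos = rng.randint(lo, len(kept))                  # at or after the previous insert
        kept.insert(pos, op)
        lo = pos + 1                                      # keep the O_{i,k} sequence intact
    return kept
```

Because the other jobs keep their relative order and the mutated job's operations are re-inserted in sequence, the result is always feasible, as noted above.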

In our approach, the ES is stopped when the number of generations exceeds k_max. After the ES has stopped, we pick the best N (=1000) individuals according to the approximate fitness values from both the μ parents and the λ offspring of the last generation; these are the N roughly good schedules that we seek.

3.2. The (μ + λ)-ES algorithm

Based on the above description, we can summarize the (μ + λ)-ES algorithm associated with the rough model for solving (3).

Algorithm I: The (μ + λ)-ES algorithm
Step 1. Initialization: Set k = 0 and choose k_max. Randomly generate S_i^(0), i = 1, …, μ, to form an initial population based on the precedence-based permutation encoding. Evaluate the approximate fitness value F_r(S_i^(0)) of each individual using the rough model.
Step 2. Recombination: Generate λ new offspring S_i^(k+1), i = 1, …, λ, from the μ parents S_i^(k), i = 1, …, μ, by discrete recombination. Apply the repair operator to any infeasible offspring.
Step 3. Mutation: For i = 1, …, λ, perform insertion mutation on each new offspring S_i^(k+1). Evaluate the approximate fitness value F_r(S_i^(k+1)) of each mutated offspring using the rough model.
Step 4. Selection: Select the best μ individuals according to the approximate fitness values from the union of the μ parents S_i^(k) and the λ mutated offspring S_i^(k+1).
Step 5. Termination: If k ≥ k_max, stop; otherwise set k = k + 1 and go to Step 2.

3.3. Finding the good enough schedule from the N

Fig. 5. The Gantt chart of offspring C1 after recombination.

Starting from the selected N roughly good schedules, we evaluate the sample mean of the objective value of each schedule using a model more refined than the rough model. This more refined model comprises multiple subphases, which allocate the computing


resource and budget by iteratively and adaptively shrinking the candidate solution set. The candidate solution set is reduced gradually: at each subphase, the remaining solutions are re-evaluated and some of them are eliminated, and the best one obtained in the last subphase is the good enough schedule that we look for. The more refined models for estimating F(S) of a schedule S employed in these subphases are stochastic simulations with various simulation lengths L, ranging from very small to very large. We let L_1 = e · L_r and set the simulation length in subphase i, denoted by L_i, to L_i = e · L_{i−1} (or L_i = e^i · L_r), i = 2, 3, .... We let N_1 = N and set the size of the selected estimated good enough subset in subphase i to N_i = N_{i−1}/e (or N_i = N/e^(i−1)), i = 2, 3, .... If the obtained value of N_i is not an integer, we round it to the nearest integer. We denote by n_s the total number of subphases, which is determined by

n_s = min{ n_s : L_r · e^(n_s−1) ≤ L_e < L_r · e^(n_s)  and  1 ≤ N/e^(n_s−1) < 10 },    (4)

where L_e = 10^5. The above formula determines n_s as the minimum value such that (i) the simulation length L_{n_s} exceeds the simulation length L_e of the exact model, and (ii) the size of the candidate solution set remaining in the last subphase n_s is small enough, i.e. N_{n_s} < 10. Once n_s is determined, from subphase i = 1 to n_s − 1 we use the stochastic simulation with simulation length L_i = e^i · L_r to estimate F(S) of the N_i = N/e^(i−1) candidates in subphase i; we then rank the N/e^(i−1) candidates based on their estimated F(S) and select the best N/e^i candidates as the candidate solution set for subphase i + 1. Finally, we use a stochastic simulation with simulation length L_e to compute F_e(S) of the N_{n_s} solutions in subphase n_s. In fact, the more refined model in the last subphase is the exact model of (3), and the solution with the smallest F_e(S) is the good enough solution that we look for.

3.4. The ESOO algorithm

The proposed ESOO algorithm can now be stated as follows:

Algorithm II: The ESOO algorithm

Step 1: Choose an appropriate basic simulation length L_r to evaluate F_r(S) for a given S, and determine the number of subphases n_s by (4).

Step 2: Randomly generate μ S's as the initial population and apply Algorithm I. After reaching the maximum generation k_max, rank all the final μ + λ individuals from both the μ parents and λ offspring based on their approximate fitness values, and select the best N individuals.

Step 3: From i = 1 to n_s − 1, use the stochastic simulation with simulation length L_i = e^i · L_r to estimate F(S) of the N_i = N/e^(i−1) candidates; rank these N/e^(i−1) candidates based on their estimated F(S) and select the best N/e^i candidates as the candidate solution set for subphase i + 1.

Step 4: Use the stochastic simulation with simulation length L_e to compute F_e(S) of the N/e^(n_s−1) candidates. The one with the smallest F_e(S) is the good enough schedule that we seek.
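The subphase bookkeeping of Steps 1 and 3 can be sketched directly from (4). The snippet below is an illustrative reading of the formulas, not the authors' code; note that the published L_i values (Table 4) appear to round L_i = e · L_{i−1} step by step, so they can differ from round(e^i · L_r) by a few units.

```python
import math

def subphase_schedule(L_r, L_e, N):
    """Number of subphases n_s per Eq. (4), candidate-set sizes
    N_i = round(N / e**(i-1)) and simulation lengths L_i ~ e**i * L_r,
    with the final subphase run at the exact-model length L_e."""
    e = math.e
    n_s = 1
    # Smallest n_s satisfying both conditions of Eq. (4).
    while not (L_r * e ** (n_s - 1) <= L_e < L_r * e ** n_s
               and 1 <= N / e ** (n_s - 1) < 10):
        n_s += 1
    sizes = [round(N / e ** (i - 1)) for i in range(1, n_s + 1)]
    lengths = [min(round(e ** i * L_r), L_e) for i in range(1, n_s + 1)]
    return n_s, sizes, lengths
```

With L_r = 368, L_e = 10^5 and N = 1000, this yields n_s = 6 and N_i = 1000, 368, 135, 50, 18, 7, matching Table 4.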

4. Test results and comparisons

4.1. Test example

To illustrate the computational efficiency of the ESOO algorithm, a numerical experiment comprising 8 jobs on 8 machines has been carried out. The size of the search space H for this example is |H| = 64! ≈ 1.27 × 10^89. The operation environment vector [a_{i,k}, θ_{i,k}, σ²_{i,k}] is given in Table 2. In each element (a_{i,k}, θ_{i,k}, σ²_{i,k}), a_{i,k} denotes the processing sequence of O_{i,k}, θ_{i,k} denotes the mean of the random processing time p_{i,k}, and σ²_{i,k} denotes the variance of p_{i,k}. The due dates d_i of each job J_i are given in Table 3. The delay cost parameter α_i and the holding cost parameter β_i are set to 1 for all jobs.

We test the computational efficiency and solution quality of our algorithm using three distributions of the random processing times on the machines. The first is a truncated normal distribution with parent mean θ_{i,k} and parent variance σ²_{i,k}. The second is a uniform distribution on the interval [θ_{i,k} − 3σ_{i,k}, θ_{i,k} + 3σ_{i,k}]. The third is an exponential distribution with mean θ_{i,k}. All the test results shown in this section were simulated on a Pentium IV PC.

The parameters employed in each step of the ESOO algorithm are as follows: L_r = 368 in Step 1; μ = 1000, λ = 2000, k_max = 100 and N = 1000 in Step 2; n_s = 6 in Step 3; L_e = 10^5 in Step 4. To apply Step 1 of Algorithm II, an appropriate basic simulation length L_r = 368 is chosen, and the number of subphases n_s = 6 is determined by (4). With the rough model for evaluating the objective value, we are ready to apply Step 2 of Algorithm II to select N (=1000) excellent schedules from H using Algorithm I. We start from μ (=1000) randomly generated schedules as the initial population. The fitness value of each schedule is calculated from the objective function of (3) based on the rough model.
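The rough model is a stochastic simulation with a short simulation length. A minimal sketch of such an evaluator is given below; it is a simplified stand-in, not the paper's objective (3): completion times come from a semi-active decoder over a precedence-feasible operation sequence, the "truncated normal" is crudely truncated at zero, and the cost is the sum of tardiness penalties α_i and holding (storage) costs β_i, mirroring the expected sum of storage expenses and tardiness penalties.

```python
import random

def decode(schedule, proc_time):
    """Semi-active decoder: `schedule` lists operations (job, machine) in a
    precedence-feasible order; each operation starts as soon as both its job
    and its machine are free. Returns each job's completion time."""
    job_ready, mach_ready, completion = {}, {}, {}
    for job, mach in schedule:
        finish = (max(job_ready.get(job, 0.0), mach_ready.get(mach, 0.0))
                  + proc_time[(job, mach)])
        job_ready[job] = mach_ready[mach] = completion[job] = finish
    return completion

def rough_fitness(schedule, mean, std, due, alpha, beta, L, seed=0):
    """Monte Carlo estimate of the expected tardiness-plus-holding cost over
    L replications, sampling each processing time from a normal distribution
    truncated at zero (a crude stand-in for the paper's truncated normal)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(L):
        p = {op: max(0.0, rng.gauss(mean[op], std[op])) for op in mean}
        c = decode(schedule, p)
        total += sum(alpha[j] * max(0.0, c[j] - due[j])    # tardiness penalty
                     + beta[j] * max(0.0, due[j] - c[j])   # storage (holding) cost
                     for j in c)
    return total / L
```

The same evaluator serves for the refined models by simply increasing the replication count L, which is exactly how the subphases trade accuracy for computing budget.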
After Algorithm I evolves for k_max = 100 generations, we rank all the final 3000 individuals based on their approximate fitness values and select the best N (=1000) individuals. Starting from the N (=1000) schedules obtained in Step 2, we apply Step 3 to compute the objective value of (3) for each schedule using a more refined model, namely a stochastic simulation with various simulation lengths L_i. The parameters in Step 3 of Algorithm II are set as follows: L_1 = 1000, n_s = 6, L_i = e^i · L_r, and N_i = N/e^(i−1) for i = 1, ..., 5. Table 4 shows the simulation length and the number of candidate solutions in each subphase of Step 3 of Algorithm II. In the last subphase, we apply Step 4, which uses the stochastic simulation with simulation length L_e = 10^5 to compute F_e(S) of the N_6 = 7 candidate solutions. The one with the smallest F_e(S) is the good enough schedule that we look for. Based on the above setup of parameters, the good enough schedule, denoted by S_g, and the corresponding objective value F(S_g) obtained by our algorithm for the three distributions are shown in Table 5. As can be observed, the CPU times consumed for the three distributions in this example are all within 6 min, which is short enough that we can apply our algorithm in real time. From Table 5, we find that the uniform distribution yields the

Table 2
Operation environment vector (a_{i,k}, θ_{i,k}, σ²_{i,k}) of the SJSSP with 8 jobs and 8 machines.

      M1        M2        M3        M4        M5        M6        M7        M8
J1    3,70,140  2,80,160  1,90,180  6,50,100  4,40,80   8,60,120  5,70,140  7,50,100
J2    1,80,160  2,40,80   3,50,100  5,90,180  4,40,80   7,50,100  6,60,120  8,40,80
J3    1,50,100  2,40,80   3,80,160  5,60,120  4,70,140  6,40,80   8,40,80   7,70,140
J4    2,60,120  1,50,100  3,60,120  4,70,140  7,80,160  5,40,80   6,50,100  8,80,160
J5    4,50,100  3,50,100  2,70,140  1,40,80   7,50,100  5,60,120  6,90,180  8,60,120
J6    2,60,120  3,80,160  1,90,180  5,70,140  6,50,100  4,40,80   8,80,160  7,90,180
J7    1,40,80   3,60,120  4,40,80   2,80,160  5,60,120  7,70,140  8,50,100  6,60,120
J8    2,90,180  1,70,140  3,50,100  4,60,120  5,90,180  7,80,160  6,40,80   8,40,80


Table 3
Due dates for each job.

J_i   J1    J2    J3    J4    J5    J6    J7    J8
d_i   490   510   540   500   540   470   530   560

Table 6
Comparison of F(S_g) and RP for the truncated normal distribution.

Method    ESOO    CR      EDD     SPT     LPT     FCFS
F(S_g)    2280    2780    3230    2530    4640    3130
RP (%)    0.001   0.003   0.054   0.002   8.663   0.021

Table 4
Number of candidate solutions and simulation length in Step 3 of the ESOO algorithm.

Subphase i   1      2      3      4        5        6
N_i          1000   368    135    50       18       7
L_i          1000   2718   7389   20,085   54,598   100,000

highest total cost. For the case of the truncated normal distribution, the total cost is lower than for the other probability distributions.

Table 5
The good enough schedule S_g and the corresponding F(S_g) obtained by the ESOO algorithm. For each distribution, S_g is the full 64-operation processing sequence.

Distribution        F(S_g)   CPU time (s)
Truncated normal    2280     352.23
Uniform             2778     356.84
Exponential         2638     347.68

4.2. Test comparisons and performance evaluation

In this section, we show the test results of applying the proposed algorithm and demonstrate the solution quality by comparison with typical dispatching rules. Five typical dispatching rules are considered: the critical ratio rule (CR), earliest due date (EDD), shortest processing time first (SPT), longest processing time first (LPT), and first come first served (FCFS) (Sule, 2007). The CR rule schedules the jobs according to the ratio of the time left until the due date to the remaining processing time. The EDD rule processes the jobs so that the job with the earliest due date is completed first. The SPT rule sequences the jobs so that the job that takes the shortest time to process is completed first. The LPT rule gives priority to the job with the longest processing time. The FCFS rule sequences the jobs starting with the current time period and working forward. We used the five dispatching rules to solve (3) with the same simulation length (L_e = 10^5) in the exact model, and compare the objective values for the three distributions.

We also provide a performance analysis of our approach to justify the global goodness of the obtained solution. To proceed with the proposed performance evaluation, it is of interest to know how good the obtained solution is within the search space H. Since it is impossible to evaluate the objective values of all solutions in the search space, we select a representative subset, denoted by X. Since the search space size of the considered problem
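The five benchmark dispatching rules amount to different priority keys for choosing which waiting job a freed machine processes next. The sketch below is illustrative only; the job fields (`due`, `remaining`, `next_p`, `arrival`) are hypothetical names, not identifiers from the paper.

```python
def priority_key(rule, job, now=0.0):
    """Priority key for the five benchmark rules; the job with the smallest
    key is scheduled first. `job` is a dict with hypothetical fields:
    'due' (due date), 'remaining' (remaining processing time),
    'next_p' (next operation's processing time), 'arrival' (release time)."""
    if rule == "CR":      # critical ratio: time until due date / remaining work
        return (job["due"] - now) / job["remaining"]
    if rule == "EDD":     # earliest due date first
        return job["due"]
    if rule == "SPT":     # shortest processing time first
        return job["next_p"]
    if rule == "LPT":     # longest processing time first
        return -job["next_p"]
    if rule == "FCFS":    # first come, first served
        return job["arrival"]
    raise ValueError(f"unknown rule: {rule}")

def dispatch(rule, queue, now=0.0):
    """Pick the waiting job a freed machine should process next."""
    return min(queue, key=lambda job: priority_key(rule, job, now))
```

Each rule is thus a one-line heuristic; unlike ESOO, none of them searches the schedule space, which is why they serve as a baseline here.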

Table 7
Comparison of F(S_g) and RP for the uniform distribution.

Method    ESOO    CR     EDD     SPT     LPT     FCFS
F(S_g)    2778    3734   3093    2823    4707    3660
RP (%)    0.001   0.07   0.003   0.002   2.363   0.052

Table 8
Comparison of F(S_g) and RP for the exponential distribution.

Method    ESOO    CR      EDD     SPT     LPT     FCFS
F(S_g)    2638    5584    3417    3365    6051    4262
RP (%)    0.001   1.203   0.004   0.003   2.912   0.039

is |H| = 1.27 × 10^89, we randomly sample |X| (=16,641) feasible schedules from H as the representative subset. The sample size |X| = 16,641 for the discrete solution space H with |H| = 1.27 × 10^89 is determined using the sample-size formula that takes the finite population correction factor into account, for a confidence level of 99% and a confidence interval of 1% (Devore, 2008). We used the same simulation length (L_e = 10^5) in the exact model to compare the objective values of these representative schedules for the three distributions. We perform a ranking percentage analysis to demonstrate how well the obtained good enough solution ranks within the representative subset X. We define the ranking percentage (RP) of a good enough solution in X as RP = (r/|X|) × 100%, where r is the rank of the good enough solution in X, which can easily be determined from its objective value. Comparisons of the objective value and ranking percentage between the five dispatching rules and our algorithm for the truncated normal, uniform, and exponential distributions are shown in Tables 6–8, respectively. As can be observed, our algorithm outperforms the five dispatching rules for all three distributions.

5. Conclusion

To cope with the computationally intractable SJSSP, an evolutionary algorithm embedding ES in OO is proposed to solve for a good enough schedule, with the objective of minimizing the expected sum of storage expenses and tardiness penalties using limited computation time. The proposed algorithm is applied to an SJSSP comprising 8 jobs and 8 machines with random processing times following truncated normal, uniform, and exponential distributions. The simulation test results obtained by the proposed algorithm are compared with five typical dispatching rules. The test results demonstrate that the good enough schedule obtained by our algorithm is very successful in the aspects of solution quality and computational efficiency.
Moreover, the proposed approach is not limited to the problem considered in this paper; it can be used to solve any combinatorial ordering optimization problem that requires lengthy computation time to evaluate the performance of an ordering solution.

References

Beyer, H. G., & Schwefel, H. P. (2002). Evolution strategies: A comprehensive introduction. Natural Computing, 1(1), 3–52.
Brucker, P., & Knust, S. (2006). Complex scheduling (pp. 189–193). Berlin, Heidelberg: Springer-Verlag.



Chakraborty, U. K. (Ed.). (2009). Computational intelligence in flow shop and job shop scheduling. Berlin: Springer.
Devore, J. L. (2008). Probability and statistics for engineering and science (7th ed.). CA: Thomson Brooks/Cole.
Dréo, J., Pétrowski, A., Siarry, P., & Taillard, E. (2006). Metaheuristics for hard optimization: Methods and case studies. Berlin, Heidelberg: Springer-Verlag.
Fattahi, P., Mehrabad, M. S., & Jolai, F. (2007). Mathematical modeling and heuristic approaches to flexible job shop scheduling problems. Journal of Intelligent Manufacturing, 18(3), 331–342.
Gholami, M., & Zandieh, M. (2009). Integrating simulation and genetic algorithm to schedule a dynamic flexible job shop. Journal of Intelligent Manufacturing, 20(4), 481–498.
Gu, J. W., Gu, M. Z., Cao, C. W., & Gu, X. S. (2010). A novel competitive co-evolutionary quantum genetic algorithm for stochastic job shop scheduling problem. Computers and Operations Research, 37(5), 927–937.
Gu, J. W., Gu, X. S., & Gu, M. Z. (2009). A novel parallel quantum genetic algorithm for stochastic job shop. Journal of Mathematical Analysis and Applications, 355(1), 63–81.
Ho, Y. C., Zhao, Q. C., & Jia, Q. S. (2007). Ordinal optimization: Soft optimization for hard problems. New York: Springer-Verlag.
Horng, S. C., & Lin, S. S. (2009). An ordinal optimization theory based algorithm for a class of simulation optimization problems and application. Expert Systems with Applications, 36(5), 9340–9349.
Keskin, B. B., Melouk, S. H., & Meyer, I. L. (2010). A simulation-optimization approach for integrated sourcing and inventory decisions. Computers and Operations Research, 37(9), 1648–1661.
Lei, D. M. (2010). A genetic algorithm for flexible job shop scheduling with fuzzy processing time. International Journal of Production Economics, 48(10), 2995–3013.
Li, J., Sava, A., & Xie, X. L. (2009). Simulation-based discrete optimization of stochastic discrete event systems subject to non closed-form constraints. IEEE Transactions on Automatic Control, 54(12), 2900–2904.
Mezura-Montes, E., & Coello, C. A. C. (2008). An empirical study about the usefulness of evolution strategies to solve constrained optimization problems. International Journal of General Systems, 37(4), 443–473.
Renna, P. (2010). Job shop scheduling by pheromone approach in a dynamic environment. International Journal of Computer Integrated Manufacturing, 23(5), 412–424.
Shen, Z., Ho, Y. C., & Zhao, Q. C. (2009). Ordinal optimization and quantification of heuristic designs. Discrete Event Dynamic Systems, 19(3), 317–345.
Sule, D. R. (2007). Production planning and industrial scheduling: Examples, case studies and applications (2nd ed.). Boston: PWS Publishing.
Tavakkoli-Moghaddam, R., Jolai, F., Vaziri, F., Ahmed, P. K., & Azaron, A. (2005). A hybrid method for solving stochastic job shop scheduling problems. Applied Mathematics and Computation, 170(1), 185–206.
Zandieh, M., & Adibi, M. A. (2010). Dynamic job shop scheduling using variable neighbourhood search. International Journal of Production Research, 48(8), 2449–2458.
Zhang, R., & Wu, C. (2010). A hybrid approach to large-scale job shop scheduling. Applied Intelligence, 32(1), 47–59.
Zhou, R., Nee, A. Y. C., & Lee, H. P. (2009). Performance of an ant colony optimisation algorithm in dynamic job shop scheduling problems. International Journal of Production Research, 47(11), 2903–2920.
Lei, D. M. (2010). A genetic algorithm for flexible job shop scheduling with fuzzy processing time. International Journal of Production Economics, 48(10), 2995–3013. Li, J., Sava, A., & Xie, X. L. (2009). Simulation-based discrete optimization of stochastic discrete event systems subject to non closed-form constraints. IEEE Transactions on Automatic Control, 54(12), 2900–2904. Mezura-Montes, E., & Coello, C. A. C. (2008). An empirical study about the usefulness of evolution strategies to solve constrained optimization problems. International Journal of General Systems, 37(4), 443–473. Renna, P. (2010). Job shop scheduling by pheromone approach in a dynamic environment. International Journal of Computer Integrated Manufacturing, 23(5), 412–424. Shen, Z., Ho, Y. C., & Zhao, Q. C. (2009). Ordinal optimization and quantification of heuristic designs. Discrete Event Dynamic Systems, 19(3), 317–345. Sule, D. R. (2007). Production planning and industrial scheduling: Examples case studies and applications (2nd ed.). Boston: PWS Publishing. Tavakkoli-Moghaddam, R., Jolai, F., Vaziri, F., Ahmed, P. K., & Azaron, A. (2005). A hybrid method for solving stochastic job shop scheduling problems. Applied Mathematics and Computation, 170(1), 185–206. Zandieh, M., & Adibi, M. A. (2010). Dynamic job shop scheduling using variable neighbourhood search. International Journal of Production Research, 48(8), 2449–2458. Zhang, R., & Wu, C. (2010). A hybrid approach to large-scale job shop scheduling. Applied Intelligence, 32(1), 47–59. Zhou, R., Nee, A. Y. C., & Lee, H. P. (2009). Performance of an ant colony optimisation algorithm in dynamic job shop scheduling problems. International Journal o Production Research, 47(11), 2903–2920.