Simplified Cuckoo Search: A robust metaheuristic for agent-based artificial markets

Martin Prause · Juergen Weigand

May, 2014

Abstract A metaheuristic, such as a genetic algorithm or a particle swarm algorithm, combines a guided local search process (intensification) with a global search process (diversification) for approximating solutions of optimization problems without explicit domain knowledge. Given these features, metaheuristics are a popular method for simulating the interaction of boundedly rational agents in an artificial market. However, regarding the economic interpretation of individual and social learning, the search process (1) depends on various algorithm parameters, (2) may be biased by its problem representation, or (3) has by default inherent mechanics with fuzzy economic meaning. Therefore, a common approach is to focus on the aggregate outcomes of such metaheuristics and compare them to human behaviour and decision making. We discuss a simplified cuckoo search algorithm, categorize it as an evolutionary strategy with minimal parameters and domain-problem representation bias, and compare it to standard genetic algorithms in various Cournot and Bertrand models used in the agent-based computational economics literature. Our findings indicate that this approach performs at least as well as genetic algorithms on the standard artificial market models in predicting the theoretical outcomes, but without the need to adjust any algorithm parameters. This qualifies the simplified version of cuckoo search as a generic and robust alternative to the established metaheuristics. It thus may have a wider application range (e.g. computerized business games) for simulating black-box individual and social learning.

Keywords Bertrand oligopoly · Cournot oligopoly · Evolutionary strategy · Genetic algorithm · Robustness · Social learning

Mathematics Subject Classification (2000) MSC 91-08

Martin Prause
WHU - Otto Beisheim School of Management
Burgplatz 2, 56179 Vallendar, Germany
Tel.: +49-261-6509-273
Fax: +49-261-6509-279
E-mail: [email protected]

Juergen Weigand
WHU - Otto Beisheim School of Management
Burgplatz 2, 56179 Vallendar, Germany
Tel.: +49-261-6509-270
Fax: +49-261-6509-279
E-mail: [email protected]

1 Introduction

Agent-based computational economics (ACE) is based on simulating the interactions of a set of autonomous agents in an economic setting to analyse decision-making and learning processes and the evolution of behavioural norms in a stylized economic environment [28]. One major research area in this field focuses on modelling the mindset of an agent, in particular the individual and social learning processes, to converge to specific equilibria. The popular methods for mimicking such learning processes in computational models emerge from the field of computational intelligence for single- and multi-objective optimization [11]. In ACE, these methods are applied in two ways: (1) by simulating the process of real human decision making and learning, and (2) by computing the same human decision or learning outcome, without focusing on an accurate model (the process) of human decision making and learning.

Prominent examples are evolutionary algorithms and particle swarm algorithms. Inspired by the best practices of survival and adaptation in nature, these methods translate the mechanics of biological information processing into algorithmic optimization methods [27]. These optimization methods are Monte Carlo algorithms benefitting from optimally balancing optimum approximation and time/space consumption without explicit problem-domain knowledge. They can be self-adaptive and fault tolerant in searching for optimal solutions locally and globally in static and changing environments [6]. Typically, these metaheuristics are based on a population of agents converging (in parallel and independently) stepwise to specific equilibria or optimal solutions. This convergence originates from a guided search process of intensification and diversification without specific domain knowledge [8]. The elements of the metaheuristic search processes of autonomous agents can be interpreted as proxies for human decision making and learning [10].
Examples include imitation and mutual exchange [7], simultaneously learning about the environment and maximizing the objective function [3], individual and social learning [31, 4], innovation [29], self-adaptation [23] in changing environments [14], and memorization [30]. However, the economic interpretation of the learning behaviour of autonomous agents should be undertaken cautiously. Although multiple approaches exist to compare an agent's learning (adaptation) behaviour with the results of human subject experiments and analyse their behavioural (dis)similarities [3, 17, 9], the economic interpretation remains difficult.


First, from a conceptual perspective, Duffy (2006) argues in [12] that (1) evaluating the fitness function is easy for agents but probably difficult for humans, (2) the learning of agents is not belief based, unlike human decision making, and (3) evolution itself is often a slow process and does not represent learning in a fast-paced environment. Second, from a technical perspective, the performance and search behaviour of a metaheuristic are shaped by its parameters, problem representation, and implementation of evolutionary mechanics. If the outcomes are significantly dependent on algorithm parameters and therefore lack robustness, the economic meaning of the inherent search process becomes fuzzy. Third, the problem mapping of an economic model may lead to representation biases. Waltman et al. (2011) show that a binary encoding of a genetic algorithm (GA) applied to a specific Cournot model leads to premature convergence whereas real-number encoding does not. Finally, specific mechanics, such as the selection mechanics of GAs, combine multiple learning steps, which should be separated so that each has a more reasonable economic meaning.

In an attempt to explain these discrepancies, Geisendorf (2011) suggests in [15] that the selection mechanics of a GA be split into internal and external selection to account for individual cognitive learning and for market evolution and competition. In [1, 2] Alkemade et al. (2009) argue that disentangling the algorithmic parameters from the economic model parameters would support better convergence properties and robustness. More radically, Duffy (2006) proposes an approach which differentiates between the economic interpretation of specific mechanics and the black-box modelling of individual and social learning based on its application. Specifically, in business simulation games, artificial agents should produce the same output as humans do, and the issue of whether the endogenous processes of learning have economic meaning is of less interest.
With regard to business simulation games, this article presents a robust metaheuristic for various artificial markets based on black-box modelling of social learning. We examine a promising nature-inspired, population-based algorithm from the field of computational optimization: the Cuckoo Search (CS) with Lévy flights introduced in [34]. Like most nature-inspired metaheuristics, CS imitates the evolutionary learning behaviour of a well-established biological system - the brood parasite behaviour of cuckoos seeking the best nest for their offspring. This behaviour is translated into a population-based guided search algorithm with a random walk step size based on a heavy-tailed probability distribution, the Lévy distribution [21]. Its potential has been demonstrated on several standard and stochastic test functions and in engineering design optimization problems [35]. From a conceptual perspective, we show that CS is a metaheuristic relying on an evolutionary strategy (ES). However, it has much fewer parameters, making it more generic and probably more robust for agent-based artificial markets [34]. We will proceed in two steps. First, the CS metaheuristic is introduced and compared at a conceptual level with the genetic algorithm. Then, for


a qualitative assessment, we apply and compare the two approaches in the standard oligopoly settings of Cournot quantity competition and Bertrand price competition.

2 Similarities and dissimilarities between GAs and CS

Evolution-based, nature-inspired search heuristics in ACE follow a common pattern of iterative, subsequent mechanics. In general, the process is as follows (cf. Algorithm 1): A population is a set of individuals representing solutions to a specific problem. Each solution (individual) can be evaluated (quantified) by an objective function which determines the so-called fitness of a solution, or simply the value of the solution, in a search space (fitness landscape). The evolutionary process consists of selection and variation operators applied to individuals, and is repeated until a specific stop criterion, typically the convergence of the population towards a specific fitness level, is reached. The first selection operator is called SELECTION FOR REPRODUCTION, and it selects the parent individuals for a child population. The child population's individuals are changed by VARIATION OPERATORS, that is, by recombination and/or mutation, to exploit and explore the fitness landscape. Finally, the SELECTION FOR REPLACEMENT mechanism determines which individuals from the parent and/or child population will survive the iteration (elitism) and build the population for the next generation.

The actual implementation of the core elements, that is, the representation of an individual, selection for reproduction, variation of the child population, and selection for replacement, defines the class of the evolutionary algorithm, such as a GA [16], genetic programming [20], an ES [26], or evolutionary programming [13]. In this study, we focus on the GA and ESs, because genetic programming and evolutionary programming are generally sub-groups which focus on optimizing computer programmes rather than analytical optimization problems.

Algorithm 1 Conceptual steps of an evolutionary algorithm
Require: Objective function
  INITIALIZE population
  EVALUATE population
  repeat
    SELECTION FOR REPRODUCTION
    VARIATION
    EVALUATE population
    SELECTION FOR REPLACEMENT
  until Stop criterion
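The conceptual loop of Algorithm 1 can be sketched as a minimal, library-free Python skeleton. The toy objective and all operator choices below (truncation selection, Gaussian mutation, full generational replacement) are illustrative placeholders only, not the configurations used in the experiments later in this article.

```python
import random

random.seed(0)  # reproducibility of the toy run below

def evolve(objective, init, select, vary, replace, generations):
    """Generic evolutionary loop following the steps of Algorithm 1."""
    population = init()                                    # INITIALIZE
    fitness = [objective(x) for x in population]           # EVALUATE
    for _ in range(generations):                           # repeat ...
        parents = select(population, fitness)              # SELECTION FOR REPRODUCTION
        children = vary(parents)                           # VARIATION
        child_fitness = [objective(x) for x in children]   # EVALUATE
        population, fitness = replace(                     # SELECTION FOR REPLACEMENT
            population, fitness, children, child_fitness)
    return population, fitness                             # until stop criterion

# Toy instantiation: maximize f(x) = -(x - 3)^2 with truncation selection,
# Gaussian mutation as the only variation, and full generational replacement.
f = lambda x: -(x - 3.0) ** 2
population, fitness = evolve(
    objective=f,
    init=lambda: [random.uniform(-10.0, 10.0) for _ in range(20)],
    select=lambda pop, fit: [x for _, x in
                             sorted(zip(fit, pop), reverse=True)[:10]],
    vary=lambda parents: [p + random.gauss(0.0, 0.5)
                          for p in parents for _ in (0, 1)],
    replace=lambda pop, fit, ch, chfit: (ch, chfit),
    generations=200,
)
best = max(population, key=f)
```

Passing the operators as functions mirrors the point made above: the *class* of evolutionary algorithm is determined entirely by how these four slots are filled.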


2.1 GAs: Conceptual background and implementation in ACE

A widely studied metaheuristic in ACE is the GA. A formal definition can be found in [18]. In essence, a GA is based on a binary encoding of the problem domain, which means that an individual of the population is a binary string (genotype) representing a solution of the problem domain (phenotype). For example, an n-bit string (e.g. n = 8: 00100101) represents the real-valued quantity decision of 37 of a firm in Cournot competition. The population size is determined by a parameter µ ∈ N. An objective function f : {0, 1}^n → R determines the fitness of each individual xi ∈ {0, 1}^n, i = 1, ..., µ, where f(x) could be the profit function of a firm based on the quantity represented by xi.

The selection operator for reproduction is roulette wheel selection, which picks µ/p, p ∈ N, pairs of parents to create a new population. Roulette wheel selection ensures a selection pressure (elitism) based on relative fitness but still picks some less good solutions for the new population. The variation splits into one major and one minor operator. The crossover operator is applied with a probability of pc ∈ [0, 1] to the µ/p pairs of parents to create the child population. A very common crossover operator is the one-point crossover, which takes two parents randomly, picks a position k ∈ {1, 2, ..., n}, and generates two children by taking the first k bits from parent one and the last n − k bits from parent two to create child one, and vice versa for child two (cf. Appendix A). With a probability of pm ∈ [0, 1], a bit-flip or mutation is performed on each child. Typically, a bit-flip or k-bit mutation flips each bit with a probability of pm, or flips k randomly picked bits with a probability of pm. The selection for replacement is not implemented explicitly, meaning that the child population replaces the parent population entirely. Elitism thus takes place in the selection at the reproduction stage.
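The three GA operators just described can be sketched in a few lines of Python. The parameter values and bit strings here are illustrative only; fitness is assumed to be positive so that roulette wheel probabilities are well defined.

```python
import random

random.seed(0)

def roulette_select(population, fitness):
    """Pick one parent with probability proportional to (positive) fitness."""
    total = sum(fitness)
    pick, acc = random.uniform(0.0, total), 0.0
    for individual, f in zip(population, fitness):
        acc += f
        if acc >= pick:
            return individual
    return population[-1]

def one_point_crossover(p1, p2):
    """Cut both n-bit strings at a random position k and swap the tails."""
    k = random.randint(1, len(p1) - 1)
    return p1[:k] + p2[k:], p2[:k] + p1[k:]

def bit_flip(bits, pm=0.01):
    """Flip each bit independently with probability pm."""
    return ''.join(str(1 - int(b)) if random.random() < pm else b
                   for b in bits)

# The 8-bit genotype '00100101' decodes to the quantity (phenotype) 37.
quantity = int('00100101', 2)
child1, child2 = one_point_crossover('00100101', '11110000')
```

Note that one-point crossover conserves the combined multiset of bits of the two parents; only their assignment to children changes.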
The parameters of a GA influence its convergence behaviour, and the robustness of the algorithm depends on its sensitivity to parameter changes. If the outcome and search processes are highly dependent on the algorithm parameters, an economic interpretation of the search mechanics is difficult, because the search behaviour then does not depend on problem-related or economic artefacts. If GAs are used in ACE, one typically has to determine the problem representation and objective function as well as six parameters: population size, elitism operator, crossover operator, crossover probability, mutation operator, and mutation probability.

The population size can be aligned with or independent of the problem domain. In [31] Vriend (2009) refers to this as individual or social learning. The search process is interpreted as individual learning if each individual of the population acts as an autonomous agent; for example, four firms in Cournot competition are represented as four individuals. Social learning happens if multiple individuals converge to a specific outcome; for example, in a model of four firms in Cournot competition, the population converges if the average fitness of the population of n individuals (with n > 4) is a Cournot-Nash outcome. The elitism operator determines the selection pressure: roulette wheel selection maintains some diversity in the population, whereas a pure rank selection allows only


the best solution for reproduction or replacement. Crossover is the main variation operator, interpreted as imitating best practices or conducting a local search. Typically, it is applied with a high probability. In contrast, the mutation operator, which mimics innovative behaviour, has a low probability and maintains diversity (global search) in the population. A list of common values is shown in Appendix A. The list indicates that the actual parameter values are diverse, but according to the No-Free-Lunch theorem proposed in [33], they have to be adjusted to the problem domain.

2.2 CS: Conceptual background and its general implementation

The CS algorithm abstracts the parasitic brood behaviour of some cuckoo species that look out for the best hosts in whose nests to lay their eggs. According to some studies [25, 19], the search process for the best nest to hatch an egg essentially consists of four steps. First, discover and observe a nest which is neither too close nor too far away, without making the host suspicious; adapt to the host's behaviour, and when the host leaves the nest, come back and lay eggs. Second, improve the breeding speed so as to lay eggs in the host's nest based on the observed timings, and ensure to infiltrate the host's nest immediately after its breeding time window, when the host is away. Third, adapt the cuckoo's own eggs to the colour of the host's eggs so as to minimize the chance of the foreign egg being revealed. Fourth, increase the hatching speed: as soon as a cuckoo chick is hatched, it starts evicting the other eggs to increase its share of food. However, if the host notices any type of infiltration, it will immediately turn out the eggs in the nest and abandon the place. During the course of evolution, several cuckoo species have specialized on one particular host type for their parasitic brood behaviour and even mimic its call. This parasitic brood behaviour is a type of evolutionary improvement.

In order to abstract meaningful mechanics for a guided optimal search process, we use the overall framework of a population-based evolutionary algorithm. The abstraction of the cuckoo's parasitic brood behaviour based on the evolutionary algorithm framework works as follows:

1. The nest represents a possible solution of the problem domain. A population consists of n ∈ N nests.
2. Selection for reproduction is based on a ranking of the population's nests according to an allotted fitness value.
3. The nest with the best fitness value performs a Lévy flight to become a new nest (obtain a new solution), imitating the flight behaviour of animals [34, 24]. If the new nest is better than a randomly picked old nest, the old one is abandoned and the new nest is included in the population. This mimics the behaviour of detecting an infiltrated nest and successfully hatching an egg in a new one.
4. Selection for replacement consists of removing a fraction pa of the worst nests from the population and adding random new nests.


The formalized pseudocode of the CS algorithm as proposed by Yang and Deb (2009) is outlined in Algorithm 2.

Algorithm 2 Pseudocode of a CS algorithm as proposed by Yang and Deb (2009)
Require: Objective function f(x), x = (x1, ..., xd)^T
  Generate initial population of n host nests xi, (i = 1, 2, ..., n)
  while (t < MaxGeneration) or (stop criterion) do
    Get a cuckoo randomly by Lévy flights and evaluate its fitness Fi
    Choose a nest j among n randomly
    if (Fi > Fj) then
      replace j by the new solution
    end if
    A fraction (pa) of worst nests are abandoned and new ones are built
    Keep the best solutions (or nests with quality solutions)
    Rank the solutions and find the current best solution
  end while
  Post-process results and visualization
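A minimal one-dimensional Python sketch of this pseudocode follows. Two assumptions are ours, not part of the original algorithm description: a Lévy(γ, 0) step is drawn as γ/Z² with Z standard normal (a standard sampling identity for the one-sided Lévy distribution), and the step is given a random sign so that the walk can move in both directions. The toy objective, bounds, and parameter values are illustrative only.

```python
import random

random.seed(1)

def levy_step(gamma=0.5):
    """Draw a Lévy(gamma, 0) step as gamma / Z^2 with Z standard normal,
    given a random sign (the symmetrization is our assumption; the
    one-sided Lévy density itself only produces positive steps)."""
    z = random.gauss(0.0, 1.0)
    while z == 0.0:
        z = random.gauss(0.0, 1.0)
    return random.choice((-1.0, 1.0)) * gamma / (z * z)

def cuckoo_search(f, low, high, n=15, pa=0.25, alpha=1.0, iters=500):
    """Minimal CS after Algorithm 2: a Levy-flight cuckoo, comparison
    against a randomly chosen nest, and abandonment of the worst nests."""
    nests = [random.uniform(low, high) for _ in range(n)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        best = max(range(n), key=lambda i: fit[i])
        # Get a cuckoo by a Levy flight from the current best nest
        new = min(max(nests[best] + alpha * levy_step(), low), high)
        j = random.randrange(n)              # choose a nest j among n randomly
        if f(new) > fit[j]:                  # if Fi > Fj, replace nest j
            nests[j], fit[j] = new, f(new)
        # A fraction pa of the worst nests is abandoned and rebuilt at random
        for i in sorted(range(n), key=lambda i: fit[i])[: int(pa * n)]:
            nests[i] = random.uniform(low, high)
            fit[i] = f(nests[i])
    return max(nests, key=f)

# Toy concave objective with its maximum at x = 50
x_star = cuckoo_search(lambda x: -(x - 50.0) ** 2, 1.0, 127.0)
```

Because only the worst pa fraction of nests is abandoned, the best solutions are implicitly kept, matching the elitism line of the pseudocode.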

2.3 Conceptual comparison of a GA and CS

Essentially, a CS algorithm is very similar to an (n + pa) ES, as presented in [5]. The problem can be represented by a vector of real numbers rather than a binary string. The selection for reproduction is based on a rank selection process. The variation operator concentrates only on a mutation with pm = 1 and omits recombination. The replacement selection operator is based on plus selection (n + pa), which means that the worst pa individuals are discarded from the total set of (n + pa) individuals (parents and children). The mutation operator is implemented as a Lévy flight, which mimics a random walk based on a heavy-tailed distribution:

L : R → R, L(x) = √(γ/(2π)) · e^(−γ/(2(x−δ))) / (x − δ)^(3/2), γ = 0.5, δ = 0.  (1)

A new individual xi^(t+1) is obtained from an old individual xi^t by performing a step based on

xi^(t+1) = xi^t + α ⊗ L(λ), α = 1, 1 ≤ λ ≤ 3,  (2)

where α ∈ R is a step width and ⊗ an entry-wise multiplication operator [34]. The heavy-tailed distribution of the step length ensures a search both locally and globally in one operator. This (along with its origin) is also the main element which differentiates the CS algorithm from a canonical ES as proposed by Rechenberg (1994). The similarities and dissimilarities between a GA and the CS algorithm are summarized in Table 1. By omitting the recombination


Table 1 Conceptual similarities and dissimilarities between a GA and CS

Operator                      Genetic Algorithm      Cuckoo Search
Representation                Binary string          Vector of real values
Population                    µ                      n
Selection for reproduction    Roulette wheel         Rank
Recombination probability     pc                     -
Recombination operator        One-point crossover    -
Mutation probability          pm                     1
Mutation operator             k-bit, bit-flip        Lévy distribution
Selection for replacement     -                      Plus selection (n + pa)

operator, we can reduce the number of algorithm parameters to three: population size, step width α, and elitism parameter pa. If the variation operator based on the Lévy distribution is the only operator needed to approximate optimal solutions, the robustness of the method increases. In addition, the solution can be expressed in terms of the domain space (real numbers), thus circumventing any binary representation bias.

In order to reduce the number of parameters even further, the CS algorithm is simplified as follows. Instead of abandoning pa old nests, the elitism operator is replaced by an even stricter selection pressure, by which each individual is mutated by L(x) and strictly replaces its parent if and only if its fitness is better than the parent's fitness. This mimics a (1 + 1) plus selection at the individual level. It reduces the number of parameters from six in the GA to three in CS, and finally to two, population size and step width α, in the simplified CS version. An outline of the pseudocode is shown in Algorithm 3. We refer to this algorithm as an ES.

Algorithm 3 Pseudocode of an evolutionary strategy
Require: Objective function f(x), x = (x1, ..., xd)^T
  Generate initial population of n individuals xi, (i = 1, 2, ..., n)
  EVALUATE population
  repeat
    for each xi do
      Perform a Lévy flight: xi^(t+1) = xi^t + α ⊗ L(λ)
      if f(xi^(t+1)) > f(xi^t) then
        Replace xi^t with xi^(t+1)
      end if
    end for
  until Stop criterion
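A compact Python sketch of this simplified version follows, again assuming (our assumption, as before) that a Lévy(γ, 0) variate can be drawn as γ/Z² with Z standard normal, randomly signed so that steps can move in either direction. The objective and bounds are placeholders.

```python
import random

random.seed(2)

def levy_step(gamma=0.5):
    """Levy(gamma, 0) variate via gamma / Z^2, randomly signed (the sign
    is our assumption so that steps can decrease as well as increase x)."""
    z = random.gauss(0.0, 1.0)
    while z == 0.0:
        z = random.gauss(0.0, 1.0)
    return random.choice((-1.0, 1.0)) * gamma / (z * z)

def simplified_cs(f, low, high, n=10, alpha=1.0, iters=500):
    """Algorithm 3: every individual takes a Levy step and strictly
    replaces its parent only if the offspring's fitness is better,
    i.e. a (1 + 1) plus selection at the individual level."""
    pop = [random.uniform(low, high) for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            trial = min(max(pop[i] + alpha * levy_step(), low), high)
            if f(trial) > f(pop[i]):      # strict parent replacement
                pop[i] = trial
    return pop

# Each individual hill-climbs the concave objective towards x = 40
pop = simplified_cs(lambda x: -(x - 40.0) ** 2, 1.0, 127.0)
```

Note the contrast with the full CS above: no nest abandonment, no cross-individual comparison, and only two free parameters (population size and step width α) remain.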

3 Test problems

In order to assess the capacity of the proposed GA and ES models to simulate the outcomes of standard economic models, we use the following common models: (1) a Cournot model with constant and linear marginal costs [32, 2, 31, 30, 11], and (2) a single-price Bertrand oligopoly model [9, 36] with simultaneous price and quantity setting.

The first test problem consists of two Cournot models. The first Cournot model (C1), with i = 1, ..., 4 firms and constant marginal costs, is based on a downward-sloping demand function

P : R → R, P(Q) = 256 − Q,  (3)

with Q ∈ R as the sum of the individual outputs qi of firms i,

Q = Σ_{i=1}^{n} qi.  (4)

The cost function of firm i is defined as

C : R → R, C(qi) = 56qi.  (5)

The second Cournot model (C2) is based on the same demand function as model C1; however, the firms' cost function has linear marginal costs, defined as

C : R → R, C(qi) = 56qi + qi^2.  (6)

Here, the market consists of i = 1, ..., 10 firms. According to Cournot theory, firms act rationally when they produce at the Cournot-Nash output level, which stabilizes at a quantity of 40 for the C1 model and 15.4 for the C2 model.

In the second test problem, we extend the C1 model with four firms by introducing two revenue-maximizing firms and two profit-maximizing firms. The profit-maximizing firms face the profit function

Π : R → R, Π(qi) = qi(256 − Q) − 56qi,  (7)

and the revenue-maximizing firms focus on the maximization of

Π : R → R, Π(qi) = qi(256 − Q).  (8)

By calculating the best-response functions, we obtain the equilibrium in which the profit-maximizing firms produce at the level qi = 17.6 and the revenue-maximizing firms at the level qi = 73.6.

In the third test problem, we use a Bertrand oligopoly, considering price as the independent variable. The Bertrand model works as follows: Each firm faces the same demand curve as in the C1 model and decides independently on its price. A firm with a constant marginal cost of 56 per unit can serve the complete market. The firms with the lowest price get the complete market demand based on the demand function (split equally), and the other firms are not able to serve the market (zero profits). According to Bertrand theory, this setup drives prices down to marginal cost.

The fourth problem set is based on a model by Kreps and Scheinkman (1983) in [22], which combines price and quantity setting in one oligopoly


model. The maximum demand in the market is determined by qmax = 100, and consumers prefer to buy from firms offering the lowest price. The setting consists of i = 1, ..., 4 firms, each deciding on its quantity qi ∈ [0, qmax] and price pi ∈ [1, 100] simultaneously. The firms' marginal cost is defined by c ∈ {0, 50}, and the costs occur directly after the decision on what quantity to produce, regardless of what will actually be sold later. The actual quantity sold is determined in the following manner: First, the firms offering lower prices sell their products before those offering higher prices, until the complete demand qmax is satisfied. Second, if multiple firms offer the same price, each of them gets a proportional share of the demand. The detailed demand distribution algorithm is explained in Appendix B. The model has two corner solutions: firms collude at the monopoly pricing level of pi = 100, or they fight at the perfect competition pricing level of pi = c. However, an equilibrium exists in mixed strategies. This model was introduced by Brandts and Guillen (2007), who conducted human subject experiments. Their experiments confirmed that collusion is far more common than price wars. Zhang and Brorsen (2011) repeated the experiment with an agent-based approach, namely, with particle swarm optimization and GAs, and found that particle swarm optimization explains the results of the experiments with human subjects better than GAs do.
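For illustration, the Cournot-Nash quantities quoted above for models C1 and C2 (40 and 15.4, respectively) can be verified numerically by iterating best responses. This check is a sketch of ours, independent of the metaheuristics; the starting output of 10 and the 200 rounds are arbitrary choices.

```python
def best_response_dynamics(n, quadratic=False, rounds=200):
    """Iterate each firm's best response to its rivals' total output for
    P(Q) = 256 - Q.  With C(q) = 56q the first-order condition gives
    q = (200 - Q_rivals) / 2; with C(q) = 56q + q^2 it gives
    q = (200 - Q_rivals) / 4."""
    q = [10.0] * n
    for _ in range(rounds):
        for i in range(n):
            rivals = sum(q) - q[i]
            q[i] = max((200.0 - rivals) / (4.0 if quadratic else 2.0), 0.0)
    return q

q_c1 = best_response_dynamics(n=4)                    # C1: each firm -> 40
q_c2 = best_response_dynamics(n=10, quadratic=True)   # C2: each firm -> 200/13 = 15.38...
```

Sequential (firm-by-firm) updating converges here because the underlying best-response system is symmetric positive definite; simultaneous updating of all firms would not.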

4 Simulation results and discussion

For all the test problems, we carry out GA and ES simulations. While we take the GA parameters from the literature, the ES parameters remain constant across all the problem sets. According to the No-Free-Lunch theorem proposed by Wolpert and Macready (1997), the algorithms, and specifically their parameters, should be tailored to the specific problem domain to perform better than average (in terms of convergence speed). However, in this study, we are interested in whether the GA and ES can obtain the same results as predicted by theory, rather than in convergence speed.

The GA and ES parameters for the first test problems (C1 and C2) are summarized in Table 2. In order to mimic the social learning aspect, each firm is represented by one individual of the population. The objective function for each individual is the profit function. All individuals are initialized randomly within their domains. Each algorithm is executed 1,000 times over 500 iterations (generations). The number of iterations is relatively small, because all the experiments have shown that both algorithms converge very quickly. The experimental results for model C1 with the GA replicate the results of Waltman et al. (2011): the premature convergence of the GA at the production levels of 47, 50, and 64 is due to the binary representation bias. Averaging the results of all the runs shows a convergence towards a quantity of 52 for model C1 and 17 for model C2, which is above the equilibrium quantity. In contrast to the GA, the ES stabilizes at the respective Cournot-Nash equilibria of 40 and 15.4. This is shown in Figure 1; the left-hand side graph shows the average market quantity over 1,000 runs with 500 iterations for the C1


Table 2 Algorithm parameters for the Cournot problem sets C1 and C2

Parameter               GA (C1)          ES (C1)          GA (C2)          ES (C2)
Representation          7-bit string     x ∈ [1...127]    7-bit string     x ∈ [1...127]
Population size         4                10               4                10
Elitism operator        Roulette wheel   Plus sel.        Roulette wheel   Plus sel.
Crossover operator      1-point          -                1-point          -
Crossover probability   1                -                1                -
Mutation operator       Bit-flip         Lévy             Bit-flip         Lévy
Mutation probability    0.01             1                0.01             1
Objective function      Profit           Profit           Profit           Profit
Number of iterations    500              500              500              500
Number of runs          1,000            1,000            1,000            1,000

Fig. 1 Algorithmic performance of the GA (upper line) and ES (lower line) for Cournot models C1 (left) and C2 (right)

model, and the right-hand side graph shows the same for the C2 model. The upper line represents the average market quantity obtained by the GA, and the lower line that obtained by the ES. Further simulations with more market players show that the discrepancy between the two algorithms' convergence towards the equilibrium does not decrease. With 20 firms in the market, the ES reaches the equilibrium at a quantity level of 8.6, whereas the GA ends up at a quantity level of 10.1. In a market with 40 firms, the ES converges to a quantity level of 4.4 and the GA to a quantity level of 5.9. The results underline that the premature convergence of the GA does not diminish with an increase in market players and that the average market quantity obtained by the GA is about 30 per cent above the equilibrium quantity for the corresponding model.

For the second problem set, two revenue-maximizing firms and two profit-maximizing firms are introduced into a four-seller market. The parameters of the algorithms remain the same as in the first problem set, and each algorithm is executed 1,000 times over 500 iterations. The results for the GA (left-hand side graph) and ES (right-hand side graph) are shown in Figure 2. Each line is associated with a firm (an individual) and represents the average quantity over 1,000 runs. The results show that the ES can maintain a diverse population, with two firms converging at the quantity level of 73.3 and two firms stabilizing at the quantity level of 17.6. The GA, as it is, cannot maintain such a diverse population.

The third problem set tests whether the algorithms can simulate


Fig. 2 Evolution of quantity strategies in a four-firm market with two revenue maximizers and two profit maximizers for the GA (left) and ES (right)

Fig. 3 The left-hand side graph shows the average strategy (price) of each individual in a four-player market in Bertrand competition simulated by a GA, while the right-hand side graph shows the results obtained by ES

the outcomes of a Bertrand model, that is, whether two firms competing on prices are enough to drive the market price down to the marginal cost level. The setup for this problem set is the same as in the C1 model, except that price, and not quantity, is the independent variable. The parameters of the algorithms remain the same, and each one is executed 1,000 times over 500 iterations. The results are shown in Figure 3. The left-hand side graph shows the average price (strategy) of each firm (individual) simulated by the GA; it is closer to the monopoly price level than to the perfect competition price level. The right-hand side graph shows the results of the ES; here, the market price is close to the marginal cost, and three out of four firms are in fierce price competition.

The results obtained from the Cournot oligopolies, with different numbers of firms, marginal costs, and types of firms (revenue-maximizing and profit-maximizing), and from the Bertrand oligopoly show that the ES is better at simulating the theoretical outcome than the GA. Specifically, the ES tends to be more robust than the GA, because it performs better than the GA on different models without the need to adjust any parameters.

The last problem set differs from the first three in two ways: (1) two decision variables are used instead of one, and (2) the results from human subject experiments based on the fourth model differ from the theoretical predictions. In their human subject experiments (with two and three players per market and a marginal cost of 50 per unit), Brandts and Guillen (2007) found that the majority of experiments (12 out of 14 for the duopoly and 6 out of


Table 3 Algorithm parameters for the price-quantity model

Operator                Genetic Algorithm            Cuckoo Search
Representation          7-bit strings for p and q    (p, q) ∈ [1...127]
Population size         1...4                        1...4
Elitism operator        Roulette wheel               Rank
Crossover probability   0.76                         -
Mutation operator       Bit-flip                     Lévy
Mutation probability    0.33                         1
Objective function      Profit                       Profit
Number of iterations    500                          500
Number of runs          1,000                        1,000

9 for the triopoly case) were dominated by collusion at the monopoly pricing level. Their results show that markets of two or three firms, when it comes to human subject experiments, are more likely to end up in (tacit) collusive agreements. This is typically not predicted by theory, because in theory the price mechanism dominates the quantity mechanism and therefore price competition is dominant.

Zhang and Brorsen (2011) extended the original experiment in the agent-based model domain by adding a fourth player, as well as by reducing the marginal cost to zero. The results they obtained are in line with those of Brandts and Guillen (2007) and match the human experiment results rather than the theoretical results. They found that markets with one or two firms set the price level close to or at the monopoly level, whereas markets with four firms set the price close to the perfect competition level regardless of marginal cost. In the triopoly case with marginal cost, the price level is close to the perfect competition price level, whereas without marginal cost it is between the perfect competition and monopoly price levels. Overall, they observe an overproduction of at least 20%, in agreement with Brandts and Guillen (2007), and show that the particle swarm algorithm can converge to the corner solutions (monopoly and perfect competition) better than the GA.

This model is simulated by a GA based on Zhang and Brorsen (2011) and by the ES. The algorithm parameters are outlined in Table 3. The simulation results are shown in Tables 4 and 5. For each instance, the GA and ES are executed 1,000 times over 500 iterations. In these tables, columns three, four, and five show the percentage of runs in which the algorithms end up in monopoly pricing, perfect competition pricing, and between monopoly and perfect competition pricing, respectively. For example, in the monopoly case, the ES converges towards the monopoly price at all times (100%), whereas the GA is indifferent.
With an increase in market players, a monopoly solution becomes less likely. At the same time, the GA tends to converge somewhere between monopoly and perfect competition pricing, in agreement with the previous results from the Bertrand model. The ES, however, is much more capable of reaching the corner solutions of monopoly and perfect competition pricing.
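The tallying of runs into these three outcome categories can be sketched as follows. This is a minimal sketch, not the authors' code: the function name and the tolerance band `tol` are our assumptions, since the paper does not state how closeness to a corner solution is measured.

```python
def classify_runs(final_prices, p_monopoly, p_competitive, tol=1.0):
    """Tally runs into the three outcome categories reported in the tables.

    final_prices holds one final market price per run; `tol` (an assumption)
    decides when a run counts as having reached a corner solution.
    Returns percentages (monopoly, competitive, variable).
    """
    counts = [0, 0, 0]
    for p in final_prices:
        if abs(p - p_monopoly) <= tol:
            counts[0] += 1   # run ended at monopoly pricing
        elif abs(p - p_competitive) <= tol:
            counts[1] += 1   # run ended at perfect competition pricing
        else:
            counts[2] += 1   # run ended between the two corner solutions
    n = len(final_prices)
    return tuple(100.0 * c / n for c in counts)
```

Applied to, say, four runs ending at 100, 100, 50, and 75 with a monopoly price of 100 and a competitive price of 50, this yields 50% monopoly, 25% competitive, and 25% variable pricing.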


Table 4 GA results for the Cournot problem sets C1 and C2: n=number of firms, MC=marginal cost, Monopoly pricing=monopoly pricing (M) in %, Competitive pricing=perfect competition pricing (PC) in %, Variable pricing=pricing between (M) and (PC) in %

n   MC   Monopoly pricing   Competitive pricing   Variable pricing
1   50   30.30              48.70                 21.00
2   50   33.40              40.00                 26.60
3   50    8.70              17.70                 73.60
4   50    5.60              13.60                 80.80
1    0   27.50              14.10                 58.40
2    0   16.50               5.70                 77.70
3    0    8.30               1.40                 90.30
4    0    5.70               0.60                 93.70

Table 5 ES results for the Cournot problem sets C1 and C2: n=number of firms, MC=marginal cost, Monopoly pricing=monopoly pricing (M) in %, Competitive pricing=perfect competition pricing (PC) in %, Variable pricing=pricing between (M) and (PC) in %

n   MC   Monopoly pricing   Competitive pricing   Variable pricing
1   50   100                 0                     0
2   50    81.10              5.40                 13.50
3   50    59.40             10.30                 30.30
4   50    44.30             11.90                 43.80
1    0   100                 0                     0
2    0    43.60              2.20                 54.20
3    0    15.90              2.80                 81.30
4    0     3.50              6.70                 89.80

Analysing the individual pricing behaviour of firms in the duopoly, triopoly, and quadropoly markets underlines the qualitative differences between the GA and ES. The results of the GA and ES are summarized in Table 6, which shows the average price levels with marginal costs of zero and of 50 per unit for the four scenarios (1, 2, 3, and 4 firms). In each market with a marginal cost of 50 per unit, the ES maintains the monopoly pricing level for two firms while the other firms move closer to the perfect competition pricing level. These results are in agreement with those observed in the human subject experiments. The GA is less capable of finding those corner solutions.

5 Conclusion

In ACE, population-based metaheuristics such as evolutionary algorithms are used to approximate optimal solutions and equilibria and to mimic human decision making and learning. Although the field of nature-inspired metaheuristics is diverse, most research in this field has focused on genetic algorithms (GAs). In this study, we took a different approach: the


Table 6 Summary of average prices (over 1,000 runs) of individual firms in the price-quantity model for the GA and ES. MC=marginal cost

Market       GA, MC=0   ES, MC=0   GA, MC=50   ES, MC=50
Duopoly
  Firm 1     43.46      64.81      60.50       90.30
  Firm 2     79.30      92.05      82.94       98.32
Triopoly
  Firm 1     34.24      35.25      55.36       78.09
  Firm 2     64.97      71.39      70.49       91.90
  Firm 3     88.57      94.26      89.54       98.55
Quadropoly
  Firm 1     28.20      15.27      52.84       68.35
  Firm 2     54.61      38.35      62.62       81.83
  Firm 3     77.26      70.24      78.56       93.94
  Firm 4     93.35      92.77      93.43       98.94

cuckoo search (CS) algorithm proposed by Yang and Deb (2009). This algorithm is based on the brood-parasite behaviour of cuckoo birds in optimizing their reproduction. A conceptual analysis and comparison of the CS algorithm with a GA shows that it can be interpreted as an evolutionary strategy (ES) as defined by Beyer and Schwefel (2002), with a real-valued problem representation, a specific mutation operator based on the Lévy distribution, and plus selection as the elitism operator. Typically, population-based metaheuristics have various parameters, and their performance, quantitative and qualitative, depends on the actual problem representation, the variation operators used, and the parameter settings. This makes the economic interpretation of the search mechanics difficult. Therefore, our approach focuses on obtaining the same results as theory predicts rather than on mimicking the actual learning behaviour. To render the algorithm more robust and its results less dependent on the parameter settings, we take the CS approach and construct a simplified ES by bringing the parameters down to two: population size and mutation operator. In the experimental setup, we test the ES against a GA on various Cournot and Bertrand models, taking the social learning approach: each player (firm) in the presented models is represented by one individual of the population. The results show that the ES stabilizes in all theoretical equilibria of the standard Cournot and Bertrand models and reproduces the results of human subject experiments in a simultaneous price and quantity setting. The ES performs qualitatively better than specific GAs without any adjustment of its parameters. This makes it a robust metaheuristic to be considered further in ACE and applied to a wider range of economic models.
Further research may focus on the effectiveness (convergence behaviour) of the mutation operator based on the Lévy distribution in comparison with the pure random walk of the original ES.
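For illustration, the simplified ES described above, with only population size and the Lévy mutation operator as parameters and plus selection as elitism, can be sketched as follows. This is a minimal sketch, not the authors' implementation: the Lévy step uses Mantegna's algorithm with index β = 1.5, and the toy profit function, bounds, and default parameter values are our assumptions.

```python
import math
import random

def levy_step(beta=1.5):
    """One step of a Levy flight via Mantegna's algorithm (index beta)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def simplified_es(fitness, lower, upper, pop_size=4, iterations=500):
    """Simplified ES: two parameters (population size, mutation operator),
    Levy-distributed mutation, and plus selection as the elitism operator."""
    pop = [random.uniform(lower, upper) for _ in range(pop_size)]
    for _ in range(iterations):
        # every parent produces one Levy-mutated offspring, clipped to bounds
        offspring = [min(upper, max(lower, x + levy_step())) for x in pop]
        # plus selection: the best pop_size of parents and offspring survive
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

# toy monopoly profit p * (100 - p), maximized at p = 50 (our assumption,
# not one of the paper's market models)
random.seed(1)
best = simplified_es(lambda p: p * (100 - p), 0, 100)
```

Plus selection guarantees that the best candidate is never lost, while the heavy-tailed Lévy steps occasionally jump far from the current population, combining intensification and diversification in a single operator.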


6 Appendix

6.1 Appendix A - List of the genetic algorithm parameters used in ACE for duopoly and oligopoly models

                  [3]               [11]        [31]        [2]
Model             Cournot           Cobweb      Cournot     Cournot
Representation    30 bits           10 bits     11 bits     n/a
Population size   30                1000        Variable    # of firms
Elitism           Roulette          Roulette    Variable    Roulette
Crossover         One-Point         One-Point   One-Point   One-Point
Crossover prob.   0.3 to 0.6        1           0.95        1
Mutation          Bit flip          Bit flip    Bit flip    Bit flip
Mutation prob.    0.0033 to 0.033   0.001       0.001       0.01

                  [23]              [36]        [32]          [30]
Model             Cournot           Bertrand    Cournot       Cournot
Representation    Binary/Gray       Vector      Binary/Real   n/a
Population size   100               100         Variable      Variable
Elitism           Roulette          Rank        Variable      Roulette
Crossover         One-Point         One-Point   One-Point     Imitation
Crossover prob.   0 to 0.6          0.76        0 or 1        0.005
Mutation          Bit flip          Bit flip    Variable      Bit flip
Mutation prob.    0 to 0.025        0.33        0.001         0 to 0.1

6.2 Appendix B - Algorithm of the price and quantity oligopoly model by Brandts and Guillen (2007) to determine the sales quantity


Algorithm 4 Algorithm of the price and quantity oligopoly model
Q = Σ_{i=1..n} q_i  (total produced quantity)
Let S be the list of firms ordered by price (ascending)
d = q_max  (demand to be served)
while d > 0 do
    if Q ≤ d then
        Each firm i sells its produced quantity q_i at its offered price p_i
        d = 0
    else
        Determine the k firms in S sharing the lowest price
        Q_S = Σ_{j=1..k} q_j
        if Q_S ≥ d then
            Each firm j = 1...k gets a share q_j^s = (q_j / Q_S) · d of the demand
            d = 0
        else
            Each firm j = 1...k sells its produced quantity q_j
            Remove the k firms from S
            d = d − Q_S
            Q = Q − Q_S
        end if
    end if
end while
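A runnable translation of Algorithm 4 might look as follows; the function and variable names are ours, and the `demand` argument corresponds to d = q_max in the pseudocode.

```python
def allocate_sales(prices, quantities, demand):
    """Sales allocation of the price-quantity oligopoly model (Algorithm 4).

    prices, quantities: per-firm offered price and produced quantity.
    Returns the quantity each firm actually sells. Direct translation of the
    pseudocode; names are our own choice.
    """
    n = len(prices)
    sales = [0.0] * n
    # firms ordered by ascending price; price ties are served together
    remaining = sorted(range(n), key=lambda i: prices[i])
    d = demand
    while d > 0 and remaining:
        if sum(quantities[i] for i in remaining) <= d:
            # remaining supply fits into demand: everyone sells what they produced
            for i in remaining:
                sales[i] = quantities[i]
            d = 0
        else:
            # the k firms sharing the lowest price
            low = prices[remaining[0]]
            k = [i for i in remaining if prices[i] == low]
            qs = sum(quantities[i] for i in k)
            if qs >= d:
                # ration the remaining demand proportionally to offered quantity
                for i in k:
                    sales[i] = quantities[i] / qs * d
                d = 0
            else:
                # lowest-price firms sell out; move on to the next price tier
                for i in k:
                    sales[i] = quantities[i]
                remaining = [i for i in remaining if i not in k]
                d -= qs
    return sales
```

For example, with prices (10, 10, 20), quantities (30, 30, 60), and a demand of 100, the two low-price firms sell their full output of 30 each, and the residual demand of 40 is served by the third firm.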

References

1. Alkemade, F., Poutré, H.L., Amman, H.: Robust evolutionary algorithm design for socio-economic simulation: A correction. Computational Economics 33(1), 99-101 (2009)
2. Alkemade, F., Poutré, H.L., Amman, H.M.: Robust evolutionary algorithm design for socio-economic simulation. Computational Economics 28(4), 355-370 (2006)
3. Arifovic, J.: Genetic algorithm learning and the cobweb model. Journal of Economic Dynamics and Control 18(1), 3-28 (1994)
4. Arifovic, J., Maschek, M.K.: Revisiting individual evolutionary learning in the cobweb model: an illustration of the virtual spite-effect. Computational Economics 28(4), 333-354 (2006)
5. Beyer, H.G., Schwefel, H.P.: Evolution strategies: A comprehensive introduction. Natural Computing, pp. 3-52 (2002)
6. Bezdek, J.: What is computational intelligence?, pp. 1-12. IEEE Press (1994)
7. Birchenhall, C., Kastrinos, N., Metcalfe, S.: Genetic algorithms in evolutionary modelling. Journal of Evolutionary Economics 7, 375-393 (1997)
8. Blum, C., Roli, A.: Metaheuristics in combinatorial optimization: Overview and conceptual comparison. ACM Computing Surveys 35(3), 268-308 (2003)
9. Brandts, J., Guillen, P.: Collusion and fights in an experiment with price-setting firms and production in advance. The Journal of Industrial Economics 55(3), 453-473 (2007)
10. Chen, S.H.: Varieties of agents in agent-based computational economics: A historical and an interdisciplinary perspective. Journal of Economic Dynamics and Control 36(1), 1-25 (2012)
11. Dawid, H., Kopel, M.: On economic applications of the genetic algorithm: a model of the cobweb type. Journal of Evolutionary Economics 8, 297-315 (1998)
12. Duffy, J.: Agent-Based Models and Human Subject Experiments, vol. 2, pp. 949-1011. Elsevier (2006)
13. Fogel, D.B.: Asymptotic convergence properties of genetic algorithms and evolutionary programming: Analysis and experiments. Cybernetics and Systems 25(3), 389-407 (1994)
14. Geisendorf, S.: The influence of innovation and imitation on economic performance. Economic Issues 14(1), 65-94 (2009)
15. Geisendorf, S.: Internal selection and market selection in economic genetic algorithms. Journal of Evolutionary Economics 21(5), 817-841 (2011)


16. Goldberg, D.: Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley Professional, Reading (1989)
17. Haruvy, E., Roth, A.E., Ünver, U.: The dynamics of law clerk matching: An experimental and computational investigation of proposals for reform of the market. Journal of Economic Dynamics and Control 30(3), 457-486 (2006)
18. Hoffmeister, F., Bäck, T.: Genetic algorithms and evolution strategies: Similarities and differences. Lecture Notes in Computer Science 496: Parallel Problem Solving from Nature, pp. 455-469 (1991)
19. Honza, M., Taborsky, B., Taborsky, M., Teuschl, Y., Vogl, W., Moksnes, A., Røskaft, E.: Behaviour of female common cuckoos, Cuculus canorus, in the vicinity of host nests before and during egg laying: a radiotelemetry study. Animal Behaviour 64(6), 861-868 (2002)
20. Koza, J.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press (1992)
21. Koziel, S., Yang, X.S. (eds.): Computational Optimization, Methods and Algorithms. Springer-Verlag, Berlin Heidelberg (2011)
22. Kreps, D.M., Scheinkman, J.A.: Quantity precommitment and Bertrand competition yield Cournot outcomes. The Bell Journal of Economics 14(2), 326-337 (1983)
23. Maschek, M.K.: Intelligent mutation rate control in an economic application of genetic algorithms. Computational Economics 35(1), 25-49 (2010)
24. Pavlyukevich, I.: Lévy flights, non-local search and simulated annealing. Journal of Computational Physics 226, 1830-1844 (2007)
25. Payne, R.B.: The Cuckoos. Oxford University Press (2005)
26. Rechenberg, I.: Evolutionsstrategie '94. Frommann-Holzboog (1994)
27. Schwefel, H.P., Wegener, I., Weinert, K.: Advances in Computational Intelligence - Theory and Practice. Springer Verlag GmbH (2002)
28. Tesfatsion, L.: Agent-based computational economics: modeling economies as complex adaptive systems. Information Sciences 149(4), 262-268 (2003)
29. Vallée, T., Yıldızoğlu, M.: Convergence in the finite Cournot oligopoly with social and individual learning. Journal of Economic Behavior & Organization 72(2), 670-690 (2009)
30. Vallée, T., Yıldızoğlu, M.: Can they beat the Cournot equilibrium? Learning with memory and convergence to equilibria in a Cournot oligopoly. Computational Economics 41(4), 493-516 (2013)
31. Vriend, N.J.: An illustration of the essential difference between individual and social learning, and its consequences for computational analyses. Journal of Economic Dynamics and Control 24(1), 1-19 (2000)
32. Waltman, L., van Eck, N.J., Dekker, R., Kaymak, U.: Economic modeling using evolutionary algorithms: the effect of a binary encoding of strategies. Journal of Evolutionary Economics 21(5), 737-756 (2011)
33. Wolpert, D.H., Macready, W.G.: No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation 1(1), 67-82 (1997)
34. Yang, X.S., Deb, S.: Cuckoo search via Lévy flights. Proceedings of the World Congress on Nature & Biologically Inspired Computing, pp. 210-214 (2009)
35. Yang, X.S., Deb, S.: Engineering optimization by cuckoo search. International Journal of Mathematical Modelling and Numerical Optimisation 1(4), 330-343 (2010)
36. Zhang, T., Brorsen, B.W.: Oligopoly firms with quantity-price strategic decisions. Journal of Economic Interaction and Coordination 6(2), 157-170 (2011)