
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 4, NO. 2, JULY 2000

An Evolutionary Algorithm for Fractal Coding of Binary Images

Dipankar Dasgupta, German Hernandez, and Fernando Niño

Abstract—An evolutionary algorithm is used to search for iterated function systems (IFS) that can encode black and white images. As the number of maps of the IFS that encodes an image cannot be known in advance, a variable-length genotype is used to represent candidate solutions. Accordingly, feasibility conditions of the maps are introduced, and special genetic operators that maintain and control their feasibility are defined. In addition, several similarity measures are used to define different fitness functions for experimentation. The performance of the proposed methods is tested on a set of binary images, and experimental results are reported.

Index Terms—Fractal compression, fractal encoding, iterated function systems.

I. INTRODUCTION

IMAGE compression is one of the most active fields of research because it can reduce the costs of storage and transmission of images over the Internet. Image or data compression [19], in general, is an optimization problem where the aim is to find the shortest description of some given data that satisfies certain quality constraints. Fractal image compression [10], based on fractal geometry, has been found to be one of the most promising compression techniques [1], [3]. In fractal compression [2], an "object" is viewed as the attractor of (iterating) a collection of contractive maps defined over a complete metric space.

Several approaches have been used to search for iterated function systems (IFS) to encode images, or more exactly, to search for the parameters that define such maps. These methods include the use of discrete cosine transforms [27], wavelet transforms [25], [26], heuristics [23], and partitioned IFS [10]. Evolutionary algorithms have also been used to perform fractal image compression [5], [11], [15], [18], [21]. For example, a genetic algorithm was used to evolve fixed-size one-dimensional affine iterated function systems [5]. More recently, a genetic algorithm was used to find partitions that encode gray-scale images using partitioned iterated function systems [21]. In this paper, the effectiveness of an evolutionary algorithm is investigated, incorporating some dynamical-system strategies to find the maps of IFS's that can encode black and white (BW) images.

Manuscript received June 8, 1998; revised March 16, 1999 and November 29, 1999. The work of D. Dasgupta was supported by the Office of Naval Research under Grant N00014-99-1-0721. The work of G. Hernandez and F. Niño was supported in part by a grant from COLCIENCIAS, Colombia. D. Dasgupta is with the Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152 USA (e-mail: [email protected]). G. Hernandez and F. Niño are with the Universidad Nacional de Colombia, Colombia, and the Department of Mathematical Sciences, University of Memphis, Memphis, TN 38152 USA. Publisher Item Identifier S 1089-778X(00)04474-X.

This evolutionary algorithm uses a variable-length genotype representation, i.e., each IFS is represented as a list of maps, and a map is represented as a set of real parameters. Special genetic operators that maintain and control the feasibility of the individuals in the population are adopted. A fitness function is defined that measures the similarity between the attractor and the image, and that penalizes a large number of maps and high contractivity factors.

The rest of the paper is organized as follows. In Section II, some principles of fractal image compression are reviewed. In Section III, the problem space, feasibility conditions of solutions, and the objective function are presented. Section IV discusses the elements of the evolutionary algorithm: the control parameters, the genetic representation, the fitness function, and the special genetic operators, along with some implementation details. Section V reports some experimental results, and the last section addresses the performance of this evolutionary approach.

II. FRACTAL IMAGE COMPRESSION

An IFS is a set of maps that act on a metric space [16]. The conditions on the space and the set of maps may vary in different approaches [22]. For this work, we assume that the space is complete, and the set of maps is a finite collection of contractive transformations [2]. Accordingly, an IFS can be denoted as
$$W = \{X;\, w_1, w_2, \ldots, w_k\} \tag{1}$$
where $X$ is a complete metric space with metric $d$, denoted by $(X, d)$, and each $w_i : X \to X$ is a contractive map, i.e.,
$$d\bigl(w_i(x), w_i(y)\bigr) \le s_i\, d(x, y) \quad \text{for all } x, y \in X, \text{ with } 0 \le s_i < 1.$$
Here, $s_i$ is called the contractivity factor of $w_i$, and
$$s = \max\{s_1, \ldots, s_k\}$$
is called the contractivity factor of the IFS.

Based on the space $X$, the metric space $(\mathcal{H}(X), h)$, "the space of fractals," is defined, where $\mathcal{H}(X)$ is the set of nonempty compact subsets of $X$, and $h$ is the Hausdorff metric on $\mathcal{H}(X)$ [2]. Given $A, B \in \mathcal{H}(X)$, the Hausdorff metric is defined as
$$h(A, B) = \max\bigl\{d(A, B),\, d(B, A)\bigr\}$$
where
$$d(A, B) = \max_{x \in A}\, \min_{y \in B}\, d(x, y)$$
with $d$ as the metric on $X$. It can be proved that, if $(X, d)$ is a complete metric space, then $(\mathcal{H}(X), h)$ is also a complete metric space [2].
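Since the Hausdorff metric drives everything that follows, a direct implementation for finite point sets (the only case arising for discrete images) is sketched below; the function names are ours, and the naive pairwise computation is exactly the quadratic cost that later motivates the cheaper similarity measures of Section III.

```python
import numpy as np

def directed_distance(A: np.ndarray, B: np.ndarray) -> float:
    """d(A, B) = max over x in A of the distance from x to the set B.

    A and B are (n, 2) and (m, 2) arrays of points in the plane.
    """
    # Pairwise Euclidean distances: entry (i, j) is |A[i] - B[j]|.
    diff = A[:, None, :] - B[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=2))
    return float(dists.min(axis=1).max())

def hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """h(A, B) = max{d(A, B), d(B, A)} -- the Hausdorff metric."""
    return max(directed_distance(A, B), directed_distance(B, A))
```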


An IFS defines a deterministic discrete dynamical system with the state space $\mathcal{H}(X)$ and the dynamics $W$, which can be defined as
$$W(B) = \bigcup_{i=1}^{k} w_i(B) \quad \text{for all } B \in \mathcal{H}(X).$$

The dynamics $W$ is called the Hutchinson operator, which is a contractive transformation on the complete metric space $(\mathcal{H}(X), h)$. Now, let $W^{\circ n}$ be the $n$th composition of $W$, for $n = 0, 1, 2,$ etc. Then a sequence of sets $\{B_n\}$ can be obtained by applying $W$ over $B$ iteratively as follows:
$$B_{n+1} = W(B_n), \qquad n = 0, 1, 2, \ldots$$
where $B_0 = B$, and $\{B_n\}$ is called the orbit [7] of $B$ in the dynamical system $(\mathcal{H}(X), W)$.

According to Banach's fixed-point theorem [2], a contractive transformation in a complete metric space has one and only one fixed point, and every orbit converges to such a fixed point. In other words, given a transformation (in this case, the $W$ that the IFS defines), there exists a compact subset $A$ of $X$, $A \in \mathcal{H}(X)$, that is a fixed point of $W$, i.e.,
$$W(A) = A.$$
In addition, for every $B \in \mathcal{H}(X)$, the orbit of $B$ converges to $A$, i.e.,
$$\lim_{n \to \infty} W^{\circ n}(B) = A$$
where $A$ is called the attractor of the dynamical system defined by the IFS, or a fractal generated by the IFS.

In practice, the chaos game [2] is used to generate the fractals associated with an IFS. In the chaos game, a probability vector $p = (p_1, \ldots, p_k)$, such that $p_i > 0$ and $p_1 + \cdots + p_k = 1$, is assigned to the set of maps $\{w_1, \ldots, w_k\}$. The IFS with the probability vector defines a random walk in $X$ in the following way. It starts with an arbitrary point $x_0$ in $X$. Choose integers $\sigma_1, \sigma_2, \ldots$ with $P(\sigma_n = i) = p_i$, i.e., the probability that $\sigma_n = i$ is equal to $p_i$. A random walk (orbit) is then defined as
$$x_{n+1} = w_{\sigma_{n+1}}(x_n), \qquad n = 0, 1, 2, \ldots$$
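A minimal chaos-game sketch in Python is shown below; the uniform map probabilities $p_i = 1/k$, the burn-in length, and the rasterization to an $n \times m$ pixel grid are our assumptions, since the text fixes only the random-walk scheme itself.

```python
import numpy as np

def chaos_game(maps, n_points=100_000, burn_in=100, seed=0):
    """Approximate the attractor of an IFS by a random walk (the chaos game).

    `maps` is a list of (A, t) pairs: A is a 2x2 contractive matrix and t a
    translation vector, so w(x) = A @ x + t.  Uniform probabilities p_i = 1/k
    are assumed; any positive vector summing to 1 works.
    """
    rng = np.random.default_rng(seed)
    x = np.array([0.5, 0.5])                   # arbitrary starting point in I^2
    points = []
    for n in range(n_points + burn_in):
        A, t = maps[rng.integers(len(maps))]   # pick w_sigma with prob 1/k
        x = A @ x + t                          # x_{n+1} = w_sigma(x_n)
        if n >= burn_in:                       # discard early transients
            points.append(x.copy())
    return np.array(points)

def rasterize(points, n=64, m=64):
    """Quantize chaos-game points in I^2 to an n x m binary image (1 = black):
    the y coordinate indexes rows, the x coordinate indexes columns."""
    img = np.zeros((n, m), dtype=np.uint8)
    rows = np.clip((points[:, 1] * (n - 1)).round().astype(int), 0, n - 1)
    cols = np.clip((points[:, 0] * (m - 1)).round().astype(int), 0, m - 1)
    img[rows, cols] = 1
    return img
```

For instance, Sierpinski's triangle of Fig. 6(c) is the attractor of the three maps whose linear part is $\frac{1}{2}I$ and whose translations are $(0, 0)$, $(1/2, 0)$, and $(0, 1/2)$.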

Elton’s ergodic theorem [8] establishes that, for this random , the set of walk starting at a point , i.e., for points that form the random walk converges to the attractor with the Hausdorff metric in the following way:

where denotes the closure of . In this paper, the chaos game approach will be used to generate the fractals. The problem in fractal compression consists of finding an IFS whose attractor approximates a given compact set . The method to solve this problem is based on Barnsley’s collage theorem, which states that, if one wants to encode an

with as the attractor of . Therefore, the problem of finding an IFS whose attractor encodes can be solved by minimizing the expression on the right-hand side of (2), i.e., by finding an IFS such that the Hausand is minimized, and the dorff distance between factor of contractivity is also minimized simultaneously. By doing this, the distance between the attractor and the set is minimized. Additionally, it is important to minimize the number of maps in the IFS because, in practice, it will determine the amount of storage space needed. The space that will be used in the fractal encoding of binary images is defined next. A. Space of Binary Images A continuous binary image may be considered as a function , where . Fig. 1 shows a binary image in which 0 corresponds to white and 1 to black. The image is the indicator function of a set . Accordingly, in Fig. 1, is the set of black points that form a cat. The paper deals with discrete binary images, which are approximations of continuous binary images. Such images take values in the points of a regular mesh on , with and positive integers, and for . The points on the mesh are called pixels. Formally, a discrete binary , and image is a function of the form image. Alternatively, it can be considered is called an as the set

i.e., the set of black pixels in the image.

In the continuous and the discrete case, if $f$ is different from the constant function 0, the set $B$ is a compact subset of $I^2$. Therefore, the problem of encoding a binary image consists of finding an IFS defined on $I^2$ whose attractor is $B$.

III. PROBLEM SPACE

In this paper, the maps included in the IFS are constrained to be affine contractive maps on $I^2$ because it is easy to determine whether or not an affine map is contractive. An affine map $w$ on $\mathbb{R}^2$ is defined as
$$w\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} e \\ f \end{pmatrix} \tag{3}$$
with $a, b, c, d, e, f \in \mathbb{R}$. Therefore, an affine map in an IFS must satisfy the following constraints in order to be a feasible map.



1) $w$ must be defined on $I^2$, i.e., $w(I^2) \subseteq I^2$. Since $w$ is affine and $I^2$ is convex, this holds exactly when the images of the four corners of $I^2$ lie in $I^2$, which implies the following constraints over the parameters $a$, $b$, $c$, $d$, $e$, and $f$:
a) $0 \le e \le 1$ and $0 \le f \le 1$, because $w(0, 0) = (e, f)$;
b) $0 \le b + e \le 1$ and $0 \le d + f \le 1$, because $w(0, 1) = (b + e, d + f)$;
c) $0 \le a + e \le 1$ and $0 \le c + f \le 1$, because $w(1, 0) = (a + e, c + f)$;
d) $0 \le a + b + e \le 1$ and $0 \le c + d + f \le 1$, because $w(1, 1) = (a + b + e, c + d + f)$.
2) $w$ must be contractive; in general, an affine function is contractive if and only if its linear part is contractive, i.e., if the moduli of the eigenvalues of the matrix that defines the linear part of the affine transformation are less than 1.

Then, the search space $\Omega$ is the set of IFS's composed of affine contractive maps on $I^2$. This space is uncountably infinite, which implies that all of the possible IFS's cannot be examined exhaustively. Since the problem of finding an optimal fractal encoding has been proved to be NP-hard [19], an evolutionary algorithm seems suitable to search for a solution to this problem.

A. Objective Function

Given an image $B$, the value of the function that measures how well an IFS $W = \{X; w_1, \ldots, w_k\}$ encodes the image can be determined by two main factors.
1) Similarity between the attractor of $W$ and the image $B$. One way of determining the similarity is by the Hausdorff distance between the attractor of $W$ and the image $B$, which will be defined later.
2) Compression ratio, which is the quotient between the amount of storage space necessary to store the original image $B$ and the amount of storage space necessary to store the IFS $W$. In this case, the compression ratio is the quotient between the storage size of $B$ and $k \cdot c$, where $k$ is the number of maps in $W$ and $c$ is the amount of space needed to store a contractive affine map.

However, in fractal compression, the above-mentioned ratio is not a good measure of compression because, once an image is encoded, it can be reproduced at any scale, which makes the compression rate variable for the same input image. For that reason, we will consider the number of maps $k$ in the IFS as the compression factor.

In addition, since the calculation of the Hausdorff distance is computationally expensive, other similarity measures that are computationally less expensive will be used. These similarity measures and their relation to the Hausdorff distance are discussed next.

B. Similarity Measures

Intuitively, a similarity measure $S$ is a function
$$S : \mathcal{H}(I^2) \times \mathcal{H}(I^2) \to [0, 1]$$
such that, for two binary images $B_1$ and $B_2$, $S(B_1, B_2)$ approaches 1 when the images are very similar, and is close to 0 when the images are very different. Four similarity measures for binary images are used to define different fitness functions for the fractal encoding problem.

1) Hausdorff Similarity $S_h$: Let $B_1$ and $B_2$ be two binary images. The Hausdorff similarity $S_h$ between $B_1$ and $B_2$ is defined as
$$S_h(B_1, B_2) = 1 - \hat{h}(B_1, B_2)$$
with $h$ the Hausdorff distance; the second term on the right-hand side, $\hat{h}$, is the Hausdorff distance normalized to [0, 1]. This measure is related to the human-vision notion of similarity between images [20], and it can be applied to both continuous and discrete binary images.

2) Hamming Similarity $S_{Ham}$: Let $B_1$ and $B_2$ be two continuous binary images. The Hamming similarity $S_{Ham}$ between $B_1$ and $B_2$ is defined as
$$S_{Ham}(B_1, B_2) = 1 - \int_{I^2} \bigl| f_{B_1}(x, y) - f_{B_2}(x, y) \bigr| \, dx \, dy$$
where $f_{B_1}$ and $f_{B_2}$ are the indicator functions of $B_1$ and $B_2$, respectively. If $B_1$ and $B_2$ are discrete $n \times m$ binary images, the Hamming similarity $S_{Ham}$ is defined as
$$S_{Ham}(B_1, B_2) = 1 - \frac{\#\{(x, y) : f_{B_1}(x, y) \ne f_{B_2}(x, y)\}}{nm}.$$
Here, $S_{Ham}(B_1, B_2)$ is close to 1 if and only if $B_1$ and $B_2$ differ in a few pixels. When this similarity is close to 1, the two images are similar in the sense of the Hausdorff distance.

3) Intersection Similarity $S_I$: Given two continuous binary images $B_1$ and $B_2$, the intersection similarity $S_I$ is defined as
$$S_I(B_1, B_2) = \frac{\text{area of } (B_1 \cap B_2)}{\text{area of } (B_1 \cup B_2)}.$$
On the other hand, if $B_1$ and $B_2$ are discrete binary images, $S_I$ is defined as

$$S_I(B_1, B_2) = \frac{\#(B_1 \cap B_2)}{\#(B_1 \cup B_2)}$$
where $\#(\cdot)$ gives the number of black points of a discrete binary image.


4) Difference Similarity $S_D$: Given two continuous binary images $B_1$ and $B_2$, the difference similarity $S_D$ is defined as
$$S_D(B_1, B_2) = 1 - \frac{\text{area of } (B_1 \triangle B_2)}{\text{area of } (B_1 \cup B_2)}$$
with $\triangle$ the symmetric difference between sets. If $B_1$ and $B_2$ are discrete binary images, then $S_D$ is defined as
$$S_D(B_1, B_2) = 1 - \frac{\#(B_1 \triangle B_2)}{\#(B_1 \cup B_2)}.$$
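On Boolean pixel arrays, the three cheap measures reduce to a few vectorized operations; below is a sketch (an array-based rendering of the definitions above; the function names are ours).

```python
import numpy as np

def hamming_similarity(B1: np.ndarray, B2: np.ndarray) -> float:
    """S_Ham: 1 minus the fraction of pixels where the two images differ."""
    return 1.0 - float(np.mean(B1 != B2))

def intersection_similarity(B1: np.ndarray, B2: np.ndarray) -> float:
    """S_I: #(B1 and B2) / #(B1 or B2), the overlap of the black pixels."""
    union = int(np.logical_or(B1, B2).sum())
    if union == 0:                  # two all-white images are identical
        return 1.0
    return float(np.logical_and(B1, B2).sum()) / union

def difference_similarity(B1: np.ndarray, B2: np.ndarray) -> float:
    """S_D: 1 - #(B1 xor B2) / #(B1 or B2), via the symmetric difference."""
    union = int(np.logical_or(B1, B2).sum())
    if union == 0:
        return 1.0
    return 1.0 - float(np.logical_xor(B1, B2).sum()) / union
```

Each function runs in $O(nm)$ time, matching the complexity claim below.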

The last three similarity measures have the advantage of having a worst-case time complexity of $O(nm)$; that is, linear in the number of pixels, which makes them a feasible choice.

IV. ELEMENTS OF THE EVOLUTIONARY ALGORITHM

A. Search Space Representation

The first step to implement the evolutionary algorithm is to define a representation of the search space, i.e., the way every possible solution in the search space will be represented. Each IFS [as in (1)] will be represented as a dynamic linked list, as shown in Fig. 2. This representation allows handling a variable number of maps by adding nodes to or removing nodes from the list. Each contractive affine map $w_i$ is represented by its six real parameters $a$, $b$, $c$, $d$, $e$, and $f$ [see (3)], as shown in Fig. 2.

Fig. 2. Each node $w_i$ of the list represents an affine contractive map of an IFS. Each map is represented by its six real parameters $a$, $b$, $c$, $d$, $e$, and $f$ [see (3)].
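In a modern implementation, the dynamic linked list of Fig. 2 can simply be a variable-length list of six-parameter records; a hypothetical sketch, reused by the later examples:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AffineMap:
    """One node of Fig. 2: w(x, y) = (a*x + b*y + e, c*x + d*y + f), as in (3)."""
    a: float
    b: float
    c: float
    d: float
    e: float
    f: float

# An IFS genotype is a variable-length list of maps; inserting or deleting a
# node of the linked list becomes a list insert or remove.
IFS = List[AffineMap]
```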

Fig. 3. Function $P$ used to penalize the values of the contractivity factor $s$. The graphs of $P$ for $\sigma = 0.1$, $0.3$, and $0.5$ are shown.

B. Fitness Function

Given a binary image $B$, the goal of the evolutionary algorithm is to find an IFS $W$ [given in (1)], with a minimum number of maps $k$, that renders the image with good quality. In this case, the quality is determined by the similarity between the attractor of $W$ and the image $B$. If $S$ is a good approximation of the Hausdorff similarity measure $S_h$, then, due to the collage theorem, the attractor of $W$ [see (1)] and the image $B$ are similar if $S(W(B), B)$ is close to 1 and $s$ is very small [as in (2)]. Therefore, based on this approximation, the problem has become a multiobjective optimization problem:
1) maximize $S(W(B), B)$;
2) minimize $s$;
3) minimize $k$.

In order to define an appropriate fitness measure, the functions $P$ and $R$ are introduced to penalize the values of $s$ and $k$, respectively. First, the function $P$ that penalizes the values of the contractivity factor $s$ is defined on $[0, 1)$ with values in $(0, 1]$, where $\sigma$ is a parameter that specifies the form of the curve. Fig. 3 shows the graph of $P$ for $\sigma = 0.1$, $0.3$, and $0.5$. This function "rewards" small values of $s$ and highly "penalizes" those values of $s$ close to 1. The value $\sigma$ is an expected value of $s$ that determines the point at which the strong penalization begins.

Next, the penalization function $R$ is defined for the number of maps $k \ge 1$, taking values in $[0, 1]$. In this case, the parameter $\kappa$ determines the form of the curve. Fig. 4 shows the graph of $R$ for $\kappa = 2$ and $3$. Similar to $P$, $R$ "rewards" values of $k$ that are less than $\kappa$, and highly "penalizes" those values of $k$ that are much greater than $\kappa$. As before, the value $\kappa$ is an expected value of $k$, and determines the point at which the strong penalization begins.

Fig. 4. Function $R$ used to penalize the values of $k$, the size of the IFS. The graphs of $R$ for $\kappa = 2$ and $3$ are shown.

Using both penalization functions defined above, the fitness function $F_B$, for a given image $B$, is defined as $F_B : \Omega \to [0, 1]$ such that
$$F_B(W) = S\bigl(W(B), B\bigr)\, P(s)\, R(k)$$
for all $W \in \Omega$, where $S$ is one of the similarity measures $S_h$, $S_{Ham}$, $S_I$, or $S_D$ defined above. Accordingly, the fitness function $F_B$ is defined in such a way that it measures the similarity between the image $B$ and the approximation given by $W(B)$, highly penalizes $W$ when its contractivity factor is close to 1, and highly penalizes $W$ if it has a large number of maps $k$.
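The exact expressions for $P$ and $R$ are parameterized by $\sigma$ and $\kappa$ as described above; the sketch below substitutes stand-in forms with the same qualitative behavior (flat up to the expected value, decaying sharply beyond), which are our assumptions rather than the paper's formulas. It also measures contractivity by the operator norm of the linear part instead of the eigenvalue test, a conservative variant, and computes the collage $W(B)$ by applying the Hutchinson operator to the pixel set.

```python
import numpy as np

def map_contractivity(w) -> float:
    """Contractivity of one affine map, here the operator norm (largest
    singular value) of its linear part [[a, b], [c, d]]."""
    return float(np.linalg.norm(np.array([[w.a, w.b], [w.c, w.d]]), 2))

def P(s: float, sigma: float = 0.3) -> float:
    """Contractivity penalty: ~1 for s <= sigma, decaying sharply near s = 1.
    Assumed functional form (feasible maps guarantee s < 1)."""
    return 1.0 if s <= sigma else float(np.exp(-(s - sigma) / (1.0 - s)))

def R(k: int, kappa: int = 20) -> float:
    """Size penalty: ~1 for k <= kappa maps, decaying for larger IFS's.
    Assumed functional form, with kappa the expected number of maps."""
    return 1.0 if k <= kappa else float(np.exp(-(k - kappa) / kappa))

def collage(ifs, image: np.ndarray) -> np.ndarray:
    """Hutchinson operator on a discrete image: the union of w_i(B) over all
    maps, with B the set of black pixels taken in I^2 coordinates."""
    n, m = image.shape
    ys, xs = np.nonzero(image)
    pts = np.stack([xs / (m - 1), ys / (n - 1)], axis=1)
    out = np.zeros_like(image)
    for w in ifs:
        q = pts @ np.array([[w.a, w.c], [w.b, w.d]]) + np.array([w.e, w.f])
        j = np.clip((q[:, 0] * (m - 1)).round().astype(int), 0, m - 1)
        i = np.clip((q[:, 1] * (n - 1)).round().astype(int), 0, n - 1)
        out[i, j] = 1
    return out

def fitness(ifs, image: np.ndarray, similarity) -> float:
    """F_B(W) = S(W(B), B) * P(s) * R(k), per the collage theorem; `similarity`
    is one of the measures sketched in Section III."""
    s = max(map_contractivity(w) for w in ifs)
    return similarity(collage(ifs, image), image) * P(s) * R(len(ifs))
```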


C. Evolutionary Operators

Considering the special properties of the problem, some genetic operators are designed and used in combination with the standard operators in order to improve the effectiveness of the evolutionary algorithm. These operators include random feasible initialization, steady-state selection, IFS crossover, global mutation, IFS mutation, map transformation, and map perturbation.

1) Random Feasible Initialization: In order to generate the initial population, the algorithm receives two parameters, pop_size and $\kappa$, where pop_size is the number of individuals in the population, and $\kappa$ is the maximum expected number of maps in any IFS. Each IFS in the initial population is a feasible IFS with $\kappa$ or fewer maps generated at random. The following procedural steps are used to generate feasible maps:
1) Generate $a$ and $b$ at random in [0, 1].
2) Generate $e$ at random in [0, 1].
3) If $a + b + e > 1$, then make $a + b + e \le 1$ (e.g., by rescaling $a$, $b$, and $e$).
4) Generate $c$, $d$, and $f$ at random in [0, 1].
5) If $c + d + f > 1$, then make $c + d + f \le 1$ (e.g., by rescaling $c$, $d$, and $f$).
Notice that the equations used to generate the values of $c$, $d$, and $f$ are similar to the equations used to generate the values of $a$, $b$, and $e$. The maps generated through this procedure satisfy the feasibility conditions for the maps defined in Section III under items 1) and 2).
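A sketch of this generation procedure, reusing the AffineMap container from Section IV-A; the exact rescaling rule in steps 3) and 5) is our assumption, chosen so that the corner constraints a)–d) of Section III hold.

```python
import random

def random_feasible_map() -> AffineMap:
    """Steps 1)-5): draw nonnegative parameters in [0, 1] and rescale each
    coordinate's parameters so that w(I^2) stays inside I^2."""
    a, b = random.random(), random.random()
    e = random.random()
    s = a + b + e
    if s > 1:                       # corner w(1, 1) would leave I^2 in x
        a, b, e = a / s, b / s, e / s
    c, d = random.random(), random.random()
    f = random.random()
    t = c + d + f
    if t > 1:                       # corner w(1, 1) would leave I^2 in y
        c, d, f = c / t, d / t, f / t
    return AffineMap(a, b, c, d, e, f)

def random_ifs(k_max: int) -> list:
    """A random initial individual with k_max (= kappa) or fewer maps."""
    return [random_feasible_map() for _ in range(random.randint(1, k_max))]
```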

2) Steady-State Approach: We considered an overlapping population along with the fitness-proportionate selection mechanism for reproduction. The choice of this selection scheme is arbitrary; other selection schemes can also be used for the problem. A number (gap) of the best individuals in the population are passed to the next generation directly. If gap $\ge 1$, the fitness of the best individual evolved in the population will be nondecreasing in time.

3) IFS Crossover: The IFS crossover operator receives a parameter $p_c$ that defines the probability of crossover at the IFS level. This operator uses $p_c$ to choose two IFS in the population out of the generational gap, and applies single-point crossover to the dynamic lists based on cut-and-splice operators (as shown in Fig. 5). The offspring, which replace the parents, are still feasible solutions in the population because the maps that form each of the offspring are still contractive affine maps.

Fig. 5. The IFS crossover operator is based on cut-and-splice operators. The two parent lists are cut at the crossover points, and the tail of each list is spliced to the head of the list of the respective mate.
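A cut-and-splice sketch of the operator in Fig. 5; the cut-point conventions (each child keeps at least one map) are ours.

```python
import random

def ifs_crossover(parent1: list, parent2: list):
    """Single-point cut-and-splice on two variable-length map lists (Fig. 5):
    each list is cut, and the tails are exchanged.  Offspring remain feasible,
    since every map is still a contractive affine map."""
    cut1 = random.randint(1, len(parent1))   # cut points chosen so that each
    cut2 = random.randint(1, len(parent2))   # child keeps at least one map
    child1 = parent1[:cut1] + parent2[cut2:]
    child2 = parent2[:cut2] + parent1[cut1:]
    return child1, child2
```

Note that the offspring lengths generally differ from the parents' lengths, which is one way the search explores IFS's of different sizes.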

4) Mutation: An individual IFS in the population, out of the generational gap, is selected to be mutated with probability $p_m$. If a specific IFS was chosen to be mutated, then the operator needs to decide what kind of mutation to apply, either an IFS mutation or a map mutation. The IFS mutation is applied with probability $p_I$; therefore, the map mutation is applied with probability $1 - p_I$. These mutation operators are explained next.

5) IFS Mutation: The IFS mutation operator modifies an IFS by either removing a map from an IFS or inserting a map into an IFS. The probability of removing a map, given that an IFS was chosen to be mutated, is 1/2. This probability is used in order to add and remove maps in the same proportion, and to keep the length of the IFS's close to the average length of the IFS's in the initial population. In other words, this helps to prevent uncontrolled growth in the length of the IFS. As in IFS crossover, this operator does not affect the feasibility of the IFS's in the population.

6) Map Mutations: The map mutation operators are applied over the parameters of the maps. Two types of map mutations, map transformation and map perturbation, are applied with probability $p_t$ and $p_p$, respectively.

1) A map transformation of a map $w$ consists of generating an affine map $m$ and composing it with $w$, i.e., the mutated map is equal to $m \circ w$. The map $m$ is of one of the four specific types of affine maps: rotation, scaling, skew, and translation. First, the mutation operator selects at random one type of affine map. Each type of map transformation may be selected with the same probability 1/4. Specifically, the affine maps used to mutate a particular contractive affine map are explained next. In the four cases, each affine map can be produced by randomly generating only one parameter.

a) A rotation map $r_\theta$ is defined by the matrix
$$R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$$
and it is produced by randomly generating an arbitrary value of $\theta$. The mutated map is then given by $r_\theta \circ w$.


b) There are two kinds of scaling maps. Scaling maps of each class can be generated with the same probability 1/2. A scaling map in the first class defines a scaling in the $x$ coordinate, and it is given by the matrix
$$\begin{pmatrix} s & 0 \\ 0 & 1 \end{pmatrix}$$
with $s$ an arbitrary number generated randomly. Similarly, a map in the second class of scaling maps is defined by the matrix
$$\begin{pmatrix} 1 & 0 \\ 0 & s \end{pmatrix}$$
which defines a scaling in the $y$ coordinate. Then, the mutated map is given by composing the chosen scaling map with $w$, depending on in which coordinate the scaling is applied.

c) Similar to scaling maps, there are two classes of skew maps, which are defined by matrices of the form
$$\begin{pmatrix} 1 & s \\ 0 & 1 \end{pmatrix} \quad \text{and} \quad \begin{pmatrix} 1 & 0 \\ s & 1 \end{pmatrix}$$
respectively. Each skew map is determined by only one parameter $s$, distributed uniformly in [0, 1]. Then, the mutated map is given by composing the chosen skew map with $w$, depending on in which coordinate the skew is applied.

d) Translation can be applied in two ways: in the $x$ or the $y$ coordinate. As in the two previous cases, first the type of translation is decided at random, and then a random parameter $t$ determines a vector of the form $(t, 0)$ or $(0, t)$, depending on the type of translation. The mutated map is then given by adding the translation vector to $w$. The translation value is generated with a uniform distribution.

These map transformations usually do not affect the feasibility of the maps, and they introduce mutations that explore the search space in a way that is related to the goal.

2) A map perturbation consists of modifying the value of one of the parameters of the map. The parameters are uniformly mutated, i.e., the probability of selecting a specific parameter to be mutated is 1/6. If $p$ is the parameter chosen to be mutated, then the new value of $p$ is given by
$$p' = p + \delta\,\bigl(1 - F_B(W)\bigr)$$
where $p$ is the current value of the parameter, $W$ is the IFS that contains the map currently being mutated, $F_B(W)$ is the fitness value of the IFS $W$, and $\delta$ is a real number generated at random with a uniform distribution. If the fitness value $F_B(W)$ is close to 1, which means that the attractor of $W$ is a good approximation of $B$, then the map is mutated only slightly. Therefore, this mutation operator is a self-adaptive operator. The reason to introduce this mutation is the continuous dependency on the parameters [2] of the Hausdorff distance between the attractor of an IFS and the image to encode.

7) Repair Procedure: Even though the mutation operator is designed to minimize the possibility of affecting the feasibility of a solution, in some particular cases, it can produce nonfeasible solutions. Therefore, after applying mutation, a repairing procedure is performed to guarantee the feasibility of the solutions. The repairing procedure checks the values of the parameters that define a map in the same order as in the random feasible initialization explained in Section IV-C-1, and changes them when necessary.
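The three pieces below sketch one transformation (rotation, item 1a), the self-adaptive perturbation (item 2, with the range of $\delta$ as our assumption), and the repair of item 7); a full implementation would cover scaling, skew, and translation analogously.

```python
import math
import random

def perturb_parameter(w, fitness_value: float):
    """Map perturbation: nudge one of the six parameters by an amount scaled
    by (1 - F_B(W)), so near-optimal IFS's mutate only slightly.  The delta
    range [-1, 1] is our assumption."""
    name = random.choice(["a", "b", "c", "d", "e", "f"])   # probability 1/6
    delta = random.uniform(-1.0, 1.0)
    setattr(w, name, getattr(w, name) + delta * (1.0 - fitness_value))

def rotate_map(w):
    """Map transformation, rotation case: replace w by r_theta o w, i.e.,
    left-multiply both the linear part and the translation by R_theta."""
    th = random.uniform(0.0, 2.0 * math.pi)
    ct, st = math.cos(th), math.sin(th)
    a, b, c, d, e, f = w.a, w.b, w.c, w.d, w.e, w.f
    w.a, w.b, w.e = ct * a - st * c, ct * b - st * d, ct * e - st * f
    w.c, w.d, w.f = st * a + ct * c, st * b + ct * d, st * e + ct * f

def repair(w):
    """Repair procedure: re-check the feasibility constraints in the same
    order as random initialization and rescale when violated.  (A fuller
    repair would also handle negative parameters; omitted for brevity.)"""
    s = w.a + w.b + w.e
    if s > 1:
        w.a, w.b, w.e = w.a / s, w.b / s, w.e / s
    t = w.c + w.d + w.f
    if t > 1:
        w.c, w.d, w.f = w.c / t, w.d / t, w.f / t
```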

D. Proposed Evolutionary Approach

Let $P(t)$ be the population in generation $t$. The specific evolutionary approach is presented next.

t := 0
P(t) := Random_initialization(pop_size)
Evaluate_fitness(P(t))
max_fitness := best fitness in P(t)
while (max_fitness < target_fitness and t < max_generations) do
    P(t+1) := Select(P(t), gap)
    IFS_crossover(P(t+1))
    Mutation(P(t+1))
    Repair(P(t+1))
    Evaluate_fitness(P(t+1))
    max_fitness := best fitness in P(t+1)
    t := t + 1
end-while

The procedure IFS_crossover selects an IFS with probability $p_c$ and crosses it with another IFS. Only IFS's out of the gap are allowed to cross. This procedure is as follows.

IFS_crossover:
for i := 1 to pop_size do
    pick $W_i$ to be crossed with probability $p_c$
    if $W_i$ is picked, then pick $W_j$, $j \ne i$, and cross $W_i$ with $W_j$
end-for

The procedure Mutation selects an IFS with probability $p_m$ and applies an IFS mutation, a map transformation, or a map perturbation, depending on the probabilities $p_I$, $p_t$, and $p_p$. Only IFS's out of the gap are allowed to get mutated.

Mutation:
for i := 1 to pop_size do
    pick $W_i$ to be mutated with probability $p_m$
    select IFS mutation or map mutation with probability $p_I$ and $1 - p_I$
    if IFS mutation, then insert or remove a map with probability 1/2
    if map mutation, then apply map transformation or map perturbation with probability $p_t$ and $p_p$
end-for
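Putting the pieces together, a compact rendering of the loop above might read as follows; selection is simplified here to truncation plus the gap of elites (the paper uses fitness-proportionate selection), and all defaults are illustrative rather than Table I's values. It reuses random_ifs, ifs_crossover, rotate_map, perturb_parameter, repair, and fitness from the earlier sketches.

```python
import random

def evolve(image, similarity, pop_size=50, k_max=20, p_c=0.5, p_m=0.8,
           p_I=0.3, p_t=0.5, gap=5, target_fitness=0.95, max_gen=10_000):
    """Steady-state EA of Section IV-D (a sketch under the stated assumptions)."""
    pop = [random_ifs(k_max) for _ in range(pop_size)]
    for gen in range(max_gen):
        pop.sort(key=lambda W: fitness(W, image, similarity), reverse=True)
        best = fitness(pop[0], image, similarity)
        if best >= target_fitness:
            break
        next_pop = pop[:gap]                      # gap elites pass unchanged
        for W in pop[gap:]:
            if random.random() < p_c:             # IFS crossover
                W, _ = ifs_crossover(W, random.choice(pop[gap:]))
            if random.random() < p_m:             # global mutation
                if random.random() < p_I:         # IFS mutation: +/- one map
                    if random.random() < 0.5 and len(W) > 1:
                        W = W[:-1]
                    else:
                        W = W + [random_feasible_map()]
                else:                             # map mutation
                    w = random.choice(W)
                    if random.random() < p_t:
                        rotate_map(w)             # one of the transformations
                    else:
                        perturb_parameter(w, best)
                    repair(w)
            next_pop.append(W)
        pop = next_pop
    return max(pop, key=lambda W: fitness(W, image, similarity))
```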


V. EXPERIMENTAL RESULTS

Two images of size 64 × 64 pixels and one image of size 128 × 128 pixels were used in the experiments. Fig. 6 shows these three source images: a cat, a star, and Sierpinski's triangle.

Fig. 6. Images used in the experiments. (a) 64 × 64 image of a cat. (b) 64 × 64 image of a star. (c) 128 × 128 image of Sierpinski's triangle.

Three of the four similarity measures were tested; the Hausdorff measure was abandoned because of its high computational cost. The evolutionary algorithm was capable of finding good encodings for the source images using the Hamming, intersection, and difference similarity measures. Fig. 7 shows the collage images (left) and attractor images (right) obtained for the image of the cat using the similarity measures $S_{Ham}$, $S_I$, and $S_D$ [shown in Fig. 7(a)–(c), respectively] for particular runs of the EA after 10 000 generations. In general, $S_I$ was the similarity measure that performed best in all of the experiments. The images obtained using $S_I$ [as in Fig. 7(b)] are better approximations to the image of the cat than the images shown in Fig. 7(a) and (c), obtained using $S_{Ham}$ and $S_D$, respectively.

Some results of the experiments using the similarity measure $S_I$ to encode the images in Fig. 6 are presented next. Table I shows the parameters used for the different experiments. All of the experiments shown here used a high probability of mutation $p_m$, between 0.7 and 0.9, to avoid premature convergence. However, the disruptive effect of the high probability of mutation is compensated by the gap in the selection mechanism. The combination of a high probability of mutation and a gap of around 10% of the population was shown to be the most effective combination of parameters. The value of the probability of crossover did not show a strong effect on the performance of the EA; $p_c$ was used in the range between 0.2 and 0.9. Using parameter values within this range, the EA found a good encoding in more than 95% of runs within the allowed number of generations.

Table II shows the results for the encodings of these three images, for the best IFS found by the evolutionary algorithm in three trials and after 10 000 generations. The results obtained in the three trials were very similar, which implies a high reliability of the evolutionary search. In these experiments, the intersection similarity $S_I$ was used. The IFS fitness was obtained by multiplying the similarity $S_I$ and the values of the penalization functions $P$ and $R$ for the IFS. In this case, the time complexity of the calculation of the fitness function is linear in the number of pixels, which makes the evolutionary search feasible.

Fig. 7. The images on the left-hand side are the collage images found by the EA for the image of the cat, using different similarity measures. On the right, the corresponding attractors generated by the chaos game are shown. (a) Collage image and attractor image using the Hamming similarity measure $S_{Ham}$. (b) Collage image and attractor image using the intersection similarity measure $S_I$. (c) Collage image and attractor image using the difference similarity measure $S_D$.

Fig. 8 shows a typical run, which exhibits gradual improvement in the best individual fitness during the test run. Fig. 9 shows the iterated function systems found by the EA for each image. The IFS that encodes the image of the cat, which is formed by 20 contractive affine maps, is presented in Table III. The first column corresponds to the number of the map, and the other six columns ($a$, $b$, $c$, $d$, $e$, and $f$) correspond to the parameters that define each map. The original image, the collage image, and the attractor image of the cat are shown in Fig. 10. The attractor image is produced by the chaos game using the IFS found by the EA. As can be seen, the collage image is very similar to the attractor image computed by the chaos game. In order to show the independence of scale of the fractal encoding, the attractor image of the cat is restored at a 128 × 128 pixel resolution and is shown in Fig. 11. As mentioned above, the IFS that encodes the cat image can be used to restore the image at any resolution.

The original image, the collage image, and the attractor image for Sierpinski's triangle and the star are shown in Figs. 12 and 13, respectively. In order to show the independence of scale of the fractal encoding, the image of the star was also restored at a 128 × 128 pixel resolution, as shown in Fig. 14.

VI. DISCUSSION

The paper describes a variable-length evolutionary algorithm for evolving IFS's to encode binary images. This approach is mainly based on the collage theorem (mentioned in Section II), which was used to define a general objective function. In practice, the use of the Hausdorff similarity [as in Section III-B-1] is unfeasible because of its high computational complexity.


TABLE I
PARAMETERS FOR THE EVOLUTIONARY ALGORITHM

pop_size is the number of individuals in the population, $p_c$ is the probability of IFS crossover, $p_m$ is the probability of mutation, $\sigma$ is the maximum "expected value" of the contractivity factor, $\kappa$ is the maximum "expected value" of the number of maps, $p_I$ is the probability of mutating an IFS, and $p_{map}$ is the probability of mutating a map in an IFS that has been selected to get mutated.

TABLE II
RESULTS FOR THE BEST IFS FOUND BY THE EVOLUTIONARY ALGORITHM IN THREE TRIALS AND AFTER 10 000 GENERATIONS

The results obtained in the three trials were very similar, which implies a high reliability of the evolutionary search. In these experiments, $S_I$, the intersection similarity, was used. The IFS fitness is obtained by multiplying the similarity $S_I$ and the values of the penalization functions $P$ and $R$ for the IFS.

Fig. 8. The graph shows the values of the fitness of the best solution in each generation for one of the experiments with the cat image as shown in Figs. 9 and 10 and Table III.

We introduced three other similarity measures (Hamming, intersection, and difference similarities) in this work. Our experimental results showed that the evolutionary algorithm performed well with all three similarity measures for the images used. However, the best performance was achieved with the intersection similarity measure. This similarity measure has the additional advantage of having linear time complexity. Moreover, since the EA used a variable-length genotype, different fitness measures were included to penalize IFS's with a large number of maps, and thus control their growth. We also used specific properties of the problem to design some special genetic operators to improve the performance of the evolutionary algorithm. These operators include random feasible initialization, IFS crossover, global mutation, IFS mutation, map transformation, and map perturbation.

Fig. 9. Iterated function systems found by the EA. (a) IFS that encodes the image of the cat. (b) IFS that encodes the image of the star. (c) IFS that encodes Sierpinski’s triangle.

Additionally, after applying mutation, a repairing procedure is performed to guarantee the feasibility of the solutions.

The results obtained for encoding images with IFS's are very encouraging. However, further improvement may be possible by using IFS's that also include nonlinear functions, as investigated in [12]. Another possible direction of future research is to use extensions of the collage theorem, such as the one presented in [13], which seems to produce better bounds on the similarity between the collage image and the attractor. It is to be noted that the approach considered in this paper encodes images independently of scale, which makes it difficult to compare it (in a meaningful way) with other compression methods, which are, in general, scale dependent.


TABLE III
IFS THAT ENCODES THE IMAGE OF THE CAT, FOUND BY THE EVOLUTIONARY ALGORITHM WITH THE PARAMETERS SHOWN IN TABLE II

The IFS is formed by 20 contractive affine maps. Each affine map is defined by the parameters $a$, $b$, $c$, $d$, $e$, and $f$.
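For completeness, decoding from a stored parameter table like Table III is just the chaos game at any desired resolution, which is the scale independence shown in Fig. 11; a hypothetical sketch reusing chaos_game and rasterize from Section II, with uniform map probabilities assumed:

```python
import numpy as np

def decode(params: np.ndarray, n: int = 128, m: int = 128) -> np.ndarray:
    """Render a binary image from a k x 6 table of rows (a, b, c, d, e, f)."""
    maps = [(np.array([[a, b], [c, d]]), np.array([e, f]))
            for a, b, c, d, e, f in params]
    return rasterize(chaos_game(maps), n=n, m=m)
```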

Fig. 10. Results for the encoding of the cat. (a) Original image. (b) Collage image found by the EA. (c) Attractor image generated by the chaos game for the IFS found by the EA.

Fig. 11. Attractor for the image of the cat, generated by the chaos game at a 128 × 128 pixel resolution.

Fig. 12. Results for the encoding of the image of Sierpinski's triangle. (a) Original image. (b) Collage image found by the EA. (c) Attractor image generated by the chaos game for the IFS found by the EA.


Fig. 13. Results for the encoding of the star image. (a) Original image. (b) Collage image found by the EA. (c) Attractor image generated by the chaos game for the IFS found by the EA.

Fig. 14. Attractor for the image of the star. This image was generated by the chaos game at a 128 × 128 pixel resolution.

ACKNOWLEDGMENT

The authors would like to thank A. Quas, F. Botelho, D. B. Fogel, and the anonymous reviewers for their comments on an earlier version of the paper.

REFERENCES

[1] L. F. Anson, "Fractal image compression," Byte, Oct. 1993.
[2] M. F. Barnsley, Fractals Everywhere. San Diego, CA: Academic, 1988.
[3] M. F. Barnsley and A. Sloan, "A better way to compress images," Byte, Jan. 1988.
[4] C. Bandt, S. Graf, and M. Zähle, Fractal Geometry and Stochastics. Birkhäuser Verlag, 1985.
[5] M. Coli, P. Palazzari, and A. Viola, "Image fractal coding through genetic algorithms," in Proc. IEEE Workshop Nonlinear Signal and Image Processing, 1995.
[6] D. Dasgupta and Z. Michalewicz, Eds., Evolutionary Algorithms in Engineering Applications. Berlin: Springer-Verlag, 1997.
[7] R. Devaney, An Introduction to Chaotic Dynamical Systems. Reading, MA: Addison-Wesley, 1989.


[8] J. Elton, "An ergodic theorem for iterated maps," J. Ergodic Theory Dyn. Syst., vol. 7, pp. 481–488, 1987.
[9] D. Fogel, Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. New York: IEEE Press, 1995.
[10] Y. Fisher, Ed., Fractal Image Compression: Theory and Applications. Berlin: Springer-Verlag, 1995.
[11] B. Goertzel, H. Miyamoto, and Y. Awata, "Fractal image compression with genetic algorithms," Complexity International, vol. 1, 1994.
[12] E. Gröller, "Modeling and rendering of nonlinear iterated function systems," Comput. Graphics, vol. 18, no. 5, 1994.
[13] H. Honda, M. Haseyama, H. Kitajima, and S. Matsumoto, "Extension of the collage theorem," in Proc. IEEE Int. Conf. Image Processing (ICIP-97), Santa Barbara, CA, Oct. 1997.
[14] G. Hernandez, G. Narasimhan, and F. Niño, "Evolutionary set matching," in Smart Engineering Systems: Neural Networks, Fuzzy Logic, Evolutionary Programming, and Rough Sets, C. H. Dagli et al., Eds. ASME Press, 1998, vol. 8.
[15] D. A. Hoskins, "An iterated function systems approach to emergence," in Evolutionary Programming IV: Proc. 4th Annu. Conf. Evolutionary Programming, J. R. McDonnell, R. G. Reynolds, and D. B. Fogel, Eds. Cambridge, MA: MIT Press, 1995.
[16] J. E. Hutchinson, "Fractals and self-similarity," Indiana Univ. Math. J., vol. 30, no. 5, 1981.
[17] S. K. Mitra, C. A. Murthy, and M. K. Kundu, "Technique for fractal image compression using genetic algorithm," IEEE Trans. Image Processing, vol. 7, pp. 586–592, Apr. 1998.
[18] D. J. Nettleton and R. Garigliano, "Evolutionary algorithms and a fractal inverse problem," Biosystems, no. 33, pp. 221–231, 1994.
[19] M. Ruhl and H. Hartenstein, "Optimal fractal encoding is NP-hard," in Proc. DCC'97 Data Compression Conf., J. A. Storer and M. Cohn, Eds. IEEE Computer Society Press, 1997.
[20] W. J. Rucklidge, "Efficient computation of the minimum Hausdorff distance for visual recognition," Ph.D. dissertation, Cornell Univ., Ithaca, NY, Jan. 1995.
[21] D. Saupe and M. Ruhl, "Evolutionary fractal image compression," in Proc. IEEE Int. Conf. Image Processing, vol. 1, Sept. 1996.
[22] K. Shelton, "An introduction to iterated function systems," preprint, Univ. North Carolina, 1996.
[23] J. Signes, "Geometrical interpretation of IFS based image coding," in Proc. NATO ASI Fractal Image Encoding and Analysis, Norway, July 1995.
[24] L. Vences and I. Rudomin, "Fractal compression of single images and image sequences using genetic algorithms," Tech. Rep., Inst. Technol., Univ. Monterrey, 1994.
[25] A. Van de Walle, "Merging fractal image compression and wavelet transform methods," in Proc. NATO ASI Fractal Image Compression and Analysis, Norway, July 1995.
[26] P. D. Wakefield, D. M. Bethel, and D. M. Monro, "Hybrid image compression with implicit fractal terms," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing (ICASSP), vol. 4, Germany, Apr. 1997.
[27] B. E. Wohlberg and G. de Jager, "Fast image domain fractal compression by DCT domain block matching," Electron. Lett., vol. 31, pp. 869–870, May 1995.