Appl Intell DOI 10.1007/s10489-014-0552-y

Template matching using an improved electromagnetism-like algorithm

Diego Oliva · Erik Cuevas · Gonzalo Pajares · Daniel Zaldivar

© Springer Science+Business Media New York 2014

Abstract Template matching (TM) plays an important role in several image-processing applications such as feature tracking, object recognition, stereo matching, and remote sensing. The TM approach seeks the best-possible resemblance between a subimage, known as the template, and its coincident region within a source image. TM involves two critical aspects: similarity measurement and search strategy. The simplest available TM method aims for the best-possible coincidence between the images through an exhaustive computation of the normalized cross-correlation (NCC) values (similarity measurement) for all elements of the source image (search strategy). Recently, several TM algorithms based on evolutionary approaches have been proposed to reduce the number of NCC operations by calculating only a

D. Oliva · G. Pajares
Departamento Ingeniería del Software e Inteligencia Artificial, Facultad Informática, Universidad Complutense, 28040, Madrid, Spain
D. Oliva e-mail: [email protected]
G. Pajares e-mail: [email protected]
E. Cuevas () · D. Zaldivar
Departamento de Electrónica, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, Guadalajara, Jal., México
e-mail: [email protected]
D. Zaldivar e-mail: [email protected]
E. Cuevas · D. Zaldivar
Unidad de Investigación Centro Universitario Azteca, Juárez 340, Guadalajara, Jal., México

subset of search locations. In this paper, a new algorithm based on the electromagnetism-like algorithm (EMO) is proposed to reduce the number of search locations in the TM process. The algorithm uses an enhanced EMO version, which incorporates a modification of the local search procedure to accelerate the exploitation process. As a result, the new EMO algorithm can substantially reduce the number of fitness function evaluations while preserving the good search capabilities of the original EMO. In the proposed approach, particles represent search locations, which move throughout the positions of the source image. The NCC coefficient, considered as the fitness value (charge extent), evaluates the matching quality between the template image and the coincident region of the source image for a given search position (particle). The number of NCC evaluations is also reduced by considering a memory, which stores the NCC values previously computed to avoid the re-evaluation of the same search locations (particles). Guided by the fitness values (NCC coefficients), the set of candidate positions is evolved through EMO operators until the best-possible resemblance is determined. The conducted simulations show that the proposed method achieves the best balance with respect to other TM algorithms in terms of estimation accuracy and computational cost.

Keywords Template matching · Electromagnetism-like algorithm · Evolutionary algorithms

1 Introduction Object localization and recognition in digital images are important fields of research in computer vision and image processing [1–3]. Such tasks are applied to many areas


including industrial inspection, remote sensing, target classification, and other important processes [4–8]. Template matching (TM) is an image-processing technique that aims to find objects in images by determining the best-possible resemblance between a subimage, known as the template, and its coincident region within a source image. In general, TM involves two critical points: the similarity measurement and the search strategy [9]. Although several metrics are known to evaluate the similarity between two images, the most important are the sum of absolute differences (SAD), the sum of squared differences (SSD), and the normalized cross-correlation (NCC). The calculation of such metrics is computationally expensive and represents the most time-consuming operation in the TM process [10]. Although NCC, SAD, and SSD allow adequate assessment of the similarity between two images, the NCC coefficient is the most widely used due to its robustness [10]. In this paper, the NCC computation corresponds to the fitness function in the context of the optimization methodology. On the other hand, the full search algorithm is the simplest search strategy that can deliver the optimal detection with respect to a maximal NCC coefficient, as it checks all pixel candidates one by one. Unfortunately, such an exhaustive search, with an NCC calculation at each checking point, yields an extremely computationally expensive TM method that seriously constrains its use in image-processing applications. Therefore, the goal in TM is to find the best similarity using the fewest NCC calculations (fitness function evaluations). Recently, several TM algorithms based on evolutionary approaches have been proposed to reduce the number of NCC operations (fitness evaluations) by calculating only a subset of search locations.
Such approaches have produced several robust detectors that employ different optimization methods such as genetic algorithms (GA) [11], particle swarm optimization (PSO) [12, 13], and the imperialist competitive algorithm (ICA) [14]. Although such algorithms allow a reduction in the number of search locations, they do not explore the whole region effectively and often suffer premature convergence, which leads to suboptimal detections. The reason for these problems is the set of operators used for modifying the individual positions. In such algorithms, new solutions are generated without considering an exploitation mechanism [15, 16]. As a consequence, as the algorithm evolves, the entire population either concentrates around the best particle or diverges without control, favoring premature convergence and damaging the exploration–exploitation equilibrium [17–19]. The electromagnetism-like optimization (EMO) algorithm is a relatively new population-based evolutionary method, first introduced by Birbil and Fang [20] to solve unconstrained optimization problems. The algorithm emulates the attraction–repulsion mechanism between

charged particles within an electromagnetic field. Each particle represents a solution and carries a certain amount of charge, which is proportional to the solution quality. In turn, solutions are defined by position vectors, which give real positions for particles within a multidimensional space; the objective function values of the particles are calculated by considering such position vectors. Each particle exerts repulsion or attraction forces over the other members of the population, and the resultant force acting over a particle is used to update its position. Clearly, the idea behind the EMO methodology is to move particles towards the optimum solution by exerting attraction or repulsion forces among them. Different from GA, PSO, and ICA, EMO exhibits interesting search capabilities such as fast convergence while still keeping its ability to avoid local minima [15–21]. Under the EMO methodology, each particle is influenced by all other particles of the population; this allows each individual to exchange information not only with the best element so far, but also with the rest. The interaction among particles increases the exploration and the diversity of the population, avoiding entrapment in a local optimum [15–21]. EMO has been successfully applied to solve different sorts of engineering problems such as flow-shop scheduling [22], communications [23], vehicle routing [24], array pattern optimization in circuits [25], neural network training [26], control systems [27], and image processing [28]. Although the EMO algorithm has the capability to find the optimal values in complex optimization problems, it presents a critical problem in the local search (LS) stage [16–29]. Such a procedure is the most time-consuming step in the overall approach because each particle is modified through a considerable number of iterations to locally improve its quality.
One particular difficulty in applying an evolutionary algorithm such as EMO to discrete optimization problems, such as TM, is the multiple evaluation of the same individual. In discrete optimization problems, the search space is composed of a finite set of solutions, and new individuals are computed by means of random numbers; hence, individuals may land on the same solutions (repetitions) that were already visited by other individuals at previous iterations, particularly when individuals are confined to a small area [30]. Evidently, such a fact seriously constrains performance, considering that each fitness evaluation is computationally expensive. In this paper, a new algorithm based on EMO is proposed to reduce the number of search locations in the TM process. The algorithm uses an enhanced EMO version in which a modification of the LS procedure is incorporated to accelerate the local improvement process. Such a modification reduces the number of perturbations around each particle to a compact number of random samples. As a


result, the new EMO algorithm can substantially reduce the number of fitness function evaluations while preserving the good search capabilities of the original EMO. In the proposed approach, particles represent pixel positions, which move through several positions within the source image. The NCC coefficient, used as the fitness value (charge extent), evaluates the matching quality between the template image and the coincident region of the source image for a given search position (particle). The number of NCC evaluations (fitness function evaluations) is also reduced by considering a memory, which stores the NCC values previously computed to avoid the re-evaluation of the same particles. Guided by the fitness values (NCC coefficients), the set of encoded candidate solutions is evolved through the EMO operators until the best-possible resemblance is determined. The proposed method achieves the best balance with respect to other TM algorithms in terms of both estimation accuracy and computational cost. The remainder of this paper is organized as follows: Section 2 provides a description of the EMO algorithm. Section 3 provides an introduction to the TM process, while Section 4 explains the modifications of EMO to face the TM problem. In Section 5, the modified EMO algorithm is applied to the TM problem. Section 6 presents experimental results for the proposed approach over standard test images, and some conclusions are drawn in Section 7.

2 Electromagnetism-like optimization algorithm (EMO)

The EMO algorithm is a simple population-based search algorithm inspired by the electromagnetism phenomenon. Compared with GA, it does not use crossover or mutation operators to explore feasible regions; instead, it implements a collective attraction–repulsion mechanism, yielding a reduced computational cost with respect to memory allocation and execution time. Moreover, no gradient information is required, as it employs a decimal system, which clearly contrasts with GA. Few particles are required to reach convergence, as has already been demonstrated in [15]. EMO was initially designed for unconstrained optimization problems [15–20]. The method utilizes N n-dimensional points x_{i,t}, i = 1, 2, ..., N, where each point x_{i,t} = (x_{i,t}^1, ..., x_{i,t}^n) is an n-dimensional vector containing the parameter values to be optimized, and t denotes the iteration (or generation) number. Therefore, the feasible search space is defined as S = {x ∈ R^n | lb_d ≤ x^d ≤ ub_d, d = 1, ..., n}, where lb_d and ub_d correspond to the lower and upper bounds for dimension d, respectively. The initial population X_t = {x_{1,t}, x_{2,t}, ..., x_{N,t}} (where t = 1)

is taken from uniformly distributed samples of the search region S. We denote the population set at the t-th generation by X_t, because members of X_t change with t. After the initialization of X_t, EMO continues its iterative process until a stopping condition (e.g., the maximum number of generations) is met. An iteration of EMO consists of two steps. In the first step, each point in X_t moves to a different location by using the attraction–repulsion mechanism of electromagnetism theory [31]. In the second step, points moved by the electromagnetism principle are further moved locally by an LS and then become members of X_{t+1} in the (t+1)-th generation. Both the attraction–repulsion mechanism and the LS in EMO are responsible for driving the members x_{i,t} of X_t to the proximity of the global optimum. Following the principles of electromagnetism theory for charged particles, each point x_{i,t} ∈ X_t in the search space S is assumed to be a charged particle whose charge depends on its objective function value f(x_{i,t}). Points holding a better objective function value have higher charges than other points. In the attraction–repulsion mechanism, points with a high charge attract other points in X_t, and points with less charge repel other points. Then, the total force vector F_i^t exerted on each particle, with F^t = (F_1^t, F_2^t, ..., F_N^t), is calculated by adding these attraction–repulsion forces. The notation F_i^t (rather than f_i^t) is used for the force vectors to avoid confusion with the fitness function value. Afterwards, each particle x_{i,t} ∈ X_t is moved in the direction of its total force to a new location y_{i,t}, forming Y_t = {y_{1,t}, y_{2,t}, ..., y_{N,t}}. Finally, the so-called LS procedure is applied to each particle y_{i,t} from Y_t to build the next population X_{t+1}. The motivation behind the utilization of LS is to explore the possibility of finding a solution with a better objective function value within the neighborhood of y_{i,t}.
LS is an iterative method, which is executed until either a better solution is found or a determined maximum number of iterations ITER has been reached. At each iteration, the particle yi,t is modified in two steps. In the first step, one of the n different coordinates of yi,t is randomly chosen. Then, in the second step, a modified position zi,t is built by adding a random number within the interval (-δ, δ) to the selected coordinate of yi,t at step one. If the objective value of zi,t is better than yi,t (f (zi,t ) > f (yi,t )), the LS procedure ends and the new particle xi,t +1 of the next population Xt +1 assumes the position zi,t ; otherwise, the next iteration is executed. On the other hand, if the maximum number of iterations ITER has been reached with no improvement on the fitness value of yi,t , then the new particle xi,t +1 adopts the unaltered value of yi,t . Since a large number of iterations (fitness evaluations) are necessary before delivering a satisfying result, the LS procedure is the main drawback of the EMO algorithm.
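As a point of reference, the original LS loop just described can be sketched as follows (a minimal illustration assuming a maximization objective; the function and variable names are ours, not from the paper):

```python
import random

def local_search(y, f, delta, max_iter, rng=random.Random(0)):
    """Original EMO LS: perturb one randomly chosen coordinate of y within
    (-delta, delta) until the objective improves or max_iter iterations pass."""
    fy = f(y)
    for _ in range(max_iter):
        d = rng.randrange(len(y))             # step 1: pick a coordinate at random
        z = list(y)
        z[d] += rng.uniform(-delta, delta)    # step 2: perturb that coordinate
        if f(z) > fy:                         # improvement found: stop early
            return z
    return list(y)                            # ITER exhausted: keep y unchanged

# toy usage: maximize -(x - 1)^2 starting from x = 0
best = local_search([0.0], lambda p: -(p[0] - 1.0) ** 2, 0.5, 200)
```

The early exit on the first improving perturbation is what makes the worst case (no improvement) cost the full ITER fitness evaluations per particle, which is the drawback the paper targets.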


Algorithm 1 shows the general scheme of EMO. In the following paragraphs, a description of each step is provided.

Input parameters (Line 1) The EMO algorithm is executed for N_g generations. In the LS phase, ITER is the maximum number of iterations used in the improvement process, whereas δ represents the maximum perturbation distance employed by LS.

Initialize (Line 2) An initial set X_1 of N particles is created. Each initial particle x_{i,1} = (x_{i,1}^1, ..., x_{i,1}^n) of X_1 is randomly generated from a uniform distribution between the prespecified lower initial parameter bound lb_d and the upper initial parameter bound ub_d, as described by the following expression:

x_{i,1}^d = lb_d + rand(0, 1) · (ub_d − lb_d)    (1)

The objective function values f(x_{i,1}) are computed, and the best point x_t^B is identified. Such a best point is calculated as follows:

x_t^B = arg max_{x_{i,t} ∈ X_t} {f(x_{i,t})}    (2)
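Equations (1)–(2) amount to uniform sampling inside the box bounds followed by an arg max; a minimal sketch (names are ours):

```python
import random

def initialize(n_particles, lb, ub, f, rng=random.Random(1)):
    """Eq. (1): x_{i,1}^d = lb_d + rand(0,1) * (ub_d - lb_d) for every
    dimension d, followed by Eq. (2): pick the best point by arg max f."""
    X = [[lo + rng.random() * (hi - lo) for lo, hi in zip(lb, ub)]
         for _ in range(n_particles)]
    best = max(X, key=f)                     # Eq. (2): arg max over the population
    return X, best

# toy usage: 5 particles in [-3, 3]^2 with a spherical objective
X, xb = initialize(5, [-3.0, -3.0], [3.0, 3.0],
                   lambda p: -(p[0] ** 2 + p[1] ** 2))
```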

Calculate force (Line 4) In the EMO algorithm, the force computation is inspired by the superposition principle taken from electromagnetism theory, which states: "the force exerted on a point via other points is inversely proportional to the distance between the points and directly proportional to the product of their charges" [24]. Figure 1 shows the superposition principle adapted to the EMO approach. In Fig. 1, each black point represents a charged particle whose charge value is represented by q_i, and F_{i,j} is the force exerted between particles i and j. The Coulomb law is another principle used in EMO to determine whether a particle is attracted or repelled depending on its resultant force direction (Fig. 2). To calculate the force exerted among particles, it is first necessary to compute the charge q_{i,t} of each particle x_{i,t} from X_t. The charge q_{i,t} of x_{i,t} depends on its fitness value f(x_{i,t}): points holding a better objective function value possess more charge than others. Therefore, the charge q_{i,t} of a given particle x_{i,t} is computed as follows:

q_{i,t} = exp( −n · (f(x_{i,t}) − f(x_t^B)) / Σ_{j=1}^{N} (f(x_{j,t}) − f(x_t^B)) )    (3)

where x_t^B is the best particle of X_t. Then, the force F_{i,j}^t exerted between the points x_{i,t} and x_{j,t} is calculated by using:

F_{i,j}^t = (x_{j,t} − x_{i,t}) · q_{i,t} q_{j,t} / ‖x_{j,t} − x_{i,t}‖²    if f(x_{j,t}) > f(x_{i,t})  (attraction)
F_{i,j}^t = (x_{i,t} − x_{j,t}) · q_{i,t} q_{j,t} / ‖x_{j,t} − x_{i,t}‖²    if f(x_{j,t}) ≤ f(x_{i,t})  (repulsion)    (4)

Fig. 1 The superposition principle

Fig. 2 Coulomb law: α represents the distance between the charged particles, q_1 and q_2 are the charges, and F is the force generated by the charge interaction
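Equations (3)–(4) can be sketched as follows (a hypothetical rendering; the charge formula is written in the algebraically equivalent form exp(−n(f_B − f_i)/Σ_j(f_B − f_j)), and small epsilons guard against division by zero):

```python
import math

def charges(fvals, n_dim):
    """Eq. (3): the best particle gets charge exp(0) = 1; worse particles
    get exponentially smaller charges (maximization convention)."""
    f_best = max(fvals)
    denom = sum(f_best - fv for fv in fvals) or 1e-12   # guard: identical fitnesses
    return [math.exp(-n_dim * (f_best - fv) / denom) for fv in fvals]

def pairwise_force(xi, xj, qi, qj, fi, fj):
    """Eq. (4): x_i is attracted toward a better particle x_j and repelled
    by a worse one; magnitude follows q_i * q_j / ||x_j - x_i||^2."""
    diff = [b - a for a, b in zip(xi, xj)]              # x_j - x_i
    dist2 = sum(d * d for d in diff) or 1e-12           # guard: coincident points
    sign = 1.0 if fj > fi else -1.0                     # attraction vs repulsion
    return [sign * d * qi * qj / dist2 for d in diff]

q = charges([1.0, 0.5, 0.0], n_dim=2)
```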


Finally, the resultant force F_i^t acting over x_{i,t} is calculated as follows:

F_i^t = Σ_{j=1, j≠i}^{N} F_{i,j}^t    (5)

Move the point x_{i,t} along F_i^t (Line 5) In this step, each point x_{i,t} of the population X_t is moved to a new position y_{i,t} in the direction of the resultant force F_i^t. Such a movement is applied to all elements of X_t except for the best element x_t^B. Therefore, the new position y_{i,t} is calculated as follows:

y_{i,t} = x_{i,t} + λ · (F_i^t / ‖F_i^t‖) · (RNG),    i = 1, 2, ..., N; i ≠ B    (6)

where λ is a uniformly distributed random number drawn for each coordinate of x_{i,t}, B is the index of the best element x_t^B of X_t, and RNG denotes the allowed range of movement toward the lower (lb_d) or upper (ub_d) bound for the corresponding dimension d (d ∈ [1, ..., n]).

Local search (Line 6) In this stage, an LS procedure is applied to each particle y_{i,t} from Y_t to build the next population X_{t+1}. The motivation behind the utilization of LS is to explore the possibility of finding a solution with a better objective function value in the neighborhood of y_{i,t}. In LS, first, one of the n different coordinates of y_{i,t} is randomly chosen. Then, a modified position z_{i,t} is built by adding a random number within the interval (−δ, δ) to the selected coordinate of y_{i,t}. If the objective value of z_{i,t} is better than that of y_{i,t} (f(z_{i,t}) > f(y_{i,t})), the LS procedure ends and the new particle x_{i,t+1} of the next population X_{t+1} assumes the position z_{i,t}; otherwise, the procedure is repeated. After ITER iterations, if the fitness value of y_{i,t} has not been improved, the new particle x_{i,t+1} adopts the unaltered value of y_{i,t}. Since the EMO LS procedure needs a large number of fitness evaluations before delivering a satisfying result, its use is prohibitive for applications in which computationally expensive objective functions are involved. One example of such applications is TM. Under such circumstances, our approach employs an enhanced EMO version in which a modification of the LS procedure is proposed to accelerate the local improvement process. All evolutionary methods have been designed so that, regardless of the starting point, there exists a good probability of finding either the global optimum or a good-enough suboptimal solution. However, most of the approaches lack a formal proof of such convergence. One exception is the EMO algorithm, for which a complete convergence analysis has been developed in [15].
Based on a Markov model and properties of the EMO operators, such a study demonstrates that, with high probability, at least one particle of the population X_t moves closer to the set

of optimal solutions after a few iterations. Therefore, the EMO method can effectively deliver the solution for complex optimization problems while requiring a low number of iterations in comparison with other evolutionary methods. Such a fact has been demonstrated through several experimental studies for EMO [18–21] where its computational cost and its iteration number have been compared with other evolutionary methods for the case of several engineering problems.
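The move step of Eqs. (5)–(6) can be sketched as follows (an illustrative reading in which RNG is taken as the feasible distance toward the bound in the force direction; names are ours):

```python
import math
import random

def move(x, forces, lb, ub, rng=random.Random(2)):
    """Eqs. (5)-(6): sum the pairwise forces into the resultant (Eq. 5),
    normalize it, and move each coordinate a random fraction of the
    feasible range (RNG) in the force direction (Eq. 6)."""
    F = [sum(c) for c in zip(*forces)]                  # Eq. (5): resultant force
    norm = math.sqrt(sum(fd * fd for fd in F)) or 1e-12
    y = []
    for fd, xd, lo, hi in zip(F, x, lb, ub):
        lam = rng.random()                              # lambda ~ U(0, 1) per coordinate
        step = fd / norm                                # unit force component
        feasible = (hi - xd) if step > 0 else (xd - lo) # RNG: room toward the bound
        y.append(xd + lam * step * feasible)
    return y

# one 1-D particle pushed by two positive pairwise forces inside [-1, 1]
y = move([0.0], [[1.0], [0.5]], [-1.0], [1.0])
```

Because λ < 1 and the step is scaled by the remaining room toward the bound, the moved point always stays inside the search space.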

3 TM process

Starting from the problem of locating a given reference image (template) R within a larger intensity image I, the task is to find those positions in I whose coincident region matches R, or at least is the most similar. Let R_{u,v}(x, y) = R(x − u, y − v), where x and y are the position coordinates of R and the reference image R is shifted by the distance (u, v) in the horizontal and vertical directions, respectively. The matching problem (illustrated in Fig. 3) can then be summarized as follows: considering the source image I and the reference image R, find the offset (u, v) within the search region S such that the similarity between the shifted reference image R_{u,v}(x, y) and the corresponding subimage of I is maximal. To successfully solve this task, two issues need to be addressed: first, the determination of an appropriate similarity measure to validate that a match has occurred; second, the development of an efficient search strategy to find the optimal displacement. Although several metrics are known

Fig. 3 Geometry of template matching. The reference image R is shifted across the search image I by an offset (u, v), using the origins of the two images as reference points. The dimensions of the source image (M × N) and the reference image (m × n) determine the maximal search region S for this comparison


to evaluate the similarity between two images, the most important are the sum of absolute differences (SAD), the sum of squared differences (SSD), and the normalized cross-correlation (NCC). The calculation of such metrics is computationally expensive and represents the most time-consuming operation in the TM process [10]. Although NCC, SAD, and SSD allow adequate assessment of the similarity between two images, the NCC coefficient is the most widely used due to its robustness [10]. The NCC value between a given image I of size M × N and a template image R of size m × n, at the displacement (u, v), is given by:

NCC(u, v) = [ Σ_{i=1}^{m} Σ_{j=1}^{n} (I(u + i, v + j) − Ī(u, v)) · (R(i, j) − R̄) ] / sqrt( Σ_{i=1}^{m} Σ_{j=1}^{n} (I(u + i, v + j) − Ī(u, v))² · Σ_{i=1}^{m} Σ_{j=1}^{n} (R(i, j) − R̄)² )    (7)

where Ī(u, v) is the gray-scale average intensity of the source image over the region coincident with the template R, and R̄ is the gray-scale average intensity of the template image. These values are defined as follows:

Ī(u, v) = (1 / (m·n)) Σ_{i=1}^{m} Σ_{j=1}^{n} I(u + i, v + j),    R̄ = (1 / (m·n)) Σ_{i=1}^{m} Σ_{j=1}^{n} R(i, j)    (8)

The NCC operation delivers values in the interval [−1, 1]. Thus, if NCC = 1, the similarity is the best possible, whereas if NCC = −1, the template and the corresponding image region are completely different. A special case occurs when the template is compared with a coincident black region (a region containing only pixels with value 0) in the source image. In such cases, to avoid an indetermination in Eq. 7, the NCC value is set by default to −1. According to the NCC value, the point (u, v) that presents the best-possible resemblance between R and I is defined as follows:

(u, v) = arg max_{(û, v̂) ∈ S} NCC(û, v̂)    (9)

where S = {(û, v̂) | 1 ≤ û ≤ M − m, 1 ≤ v̂ ≤ N − n}. To determine the position (u, v) where the maximum similarity (maximum NCC value) occurs, a search strategy is necessary. The full search algorithm is the simplest search strategy that can deliver the optimal detection with respect to a maximal NCC coefficient because it checks all pixel candidates one by one. Unfortunately, such an exhaustive search, with an NCC calculation at each checking point, yields an extremely computationally expensive TM method that seriously constrains its use in image-processing applications. Figure 4 illustrates the TM process considering Figs. 4a and b as the source and template image, respectively. It is important to point out that the template image (4b) is similar but not equal to the coincident pattern contained

in the source image (4a). Figure 4c shows the NCC values (color-encoded) calculated at all locations of the search region S (full search strategy). On the other hand, Fig. 4d presents the NCC surface, which exhibits the highly multimodal nature of the TM problem. Figures 4c and d show that the surface of the NCC values holds several local maxima and only one global maximum. For this reason, classical optimization methods, in particular those based on gradient techniques, can become trapped in local optima.
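For reference, the NCC of Eqs. (7)–(8) can be computed directly as follows (a straightforward sketch without the usual running-sum optimizations; 0-based offsets and all names are ours):

```python
def ncc(I, R, u, v):
    """NCC (Eq. 7) between template R (m x n) and the subimage of the
    source I anchored at offset (u, v)."""
    m, n = len(R), len(R[0])
    win = [I[u + i][v + j] for i in range(m) for j in range(n)]
    tpl = [R[i][j] for i in range(m) for j in range(n)]
    mean_i = sum(win) / (m * n)                    # Eq. (8): source window mean
    mean_r = sum(tpl) / (m * n)                    # Eq. (8): template mean
    num = sum((a - mean_i) * (b - mean_r) for a, b in zip(win, tpl))
    den = (sum((a - mean_i) ** 2 for a in win)
           * sum((b - mean_r) ** 2 for b in tpl)) ** 0.5
    return num / den if den else -1.0              # flat/black region: -1 by convention

I = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
R = [[4, 5], [7, 8]]
score = ncc(I, R, 1, 1)                            # region matches R exactly
```

A full search would simply evaluate `ncc` at every valid offset and keep the arg max (Eq. 9); the approach proposed in this paper replaces that exhaustive sweep with an evolutionary search.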

4 Modifications of EMO to face the TM problem

In this paper, the use of a modified EMO method is proposed as a search strategy to solve the TM task. The new implementation avoids two traditional drawbacks of the original EMO algorithm: the use of a large number of fitness evaluations and the re-evaluation of already-visited individuals. In this section, two modifications are described to enhance the EMO performance. Such modifications involve the construction of a new LS procedure and the incorporation of a memory that stores the fitness values already calculated. As a result, the new EMO algorithm can substantially reduce the number of fitness function (NCC, under the TM approach) evaluations while preserving the good search capabilities of the original EMO.

4.1 The new LS procedure for EMO

The LS procedure represents the exploitation phase of the EMO algorithm. Exploitation is the process of refining existing individuals within a small neighborhood to improve their solution quality. In the literature [15, 32], two LS approaches for EMO are proposed: the LS applied to all points and the LS applied only to the current best point. For the former, it has been proved in [15] that EMO presents the most convenient convergence properties while still achieving global optimization. However, under such circumstances, experimental studies [16, 32] show that the LS procedure spends more than 80 % of the computational time of the overall EMO process. On the other hand, if the LS is applied only to the current best particle, experimental results provided in [32, 33] demonstrate that the EMO search capabilities are seriously weakened, mainly when it faces complex optimization functions. Under such circumstances, to reduce the computational time (number of fitness evaluations, or NCC computations) while preserving the good search capabilities of the EMO algorithm, a new LS procedure is proposed in this work.
Different from other approaches [15, 32, 33], the new LS method considers a trade-off between using all particles and the best one. Such a balance is


Fig. 4 Template matching process. a Example source image, b template image, c color-encoded NCC values, and d NCC multimodal surface

obtained by perturbing only the most promising solutions of the population. The new procedure is a selective operation, applied only to a subset E_t of the modified population Y_t (where E_t ⊆ Y_t). In the new LS approach, it is first necessary to sort Y_t according to the fitness values and store the sorted elements in a temporal population B = {b_1, b_2, ..., b_N}. The idea is to consider only the promising positions. Under the new LS mechanism, a subspace C_j is created around each selected particle b_j = (b_j^1, ..., b_j^n) ∈ B. The size of C_j depends on the distance ed, which is defined as follows:

ed = ( Σ_{q=1}^{n} (ub_q − lb_q) / n ) · β    (10)

where ub_q and lb_q are the upper and lower bounds in the q-th dimension, n is the number of dimensions of the optimization problem, and β ∈ [0, 1] is a tuning factor. Therefore, the limits of C_j are modeled as follows:

uss_j^q = b_j^q + ed,    lss_j^q = b_j^q − ed    (11)

where uss_j^q and lss_j^q are the upper and lower bounds of the q-th dimension for the subspace C_j, respectively. Considering the subspace C_j around each element b_j of B, a set of h new particles P_j^h = {p_j^1, p_j^2, ..., p_j^h} is randomly generated inside the bounds defined by (11). Once the h samples are generated, the particle x_{j,t+1} of the next population X_{t+1} must be created. To calculate x_{j,t+1}, the best particle p_j^best of the h samples, in terms of fitness value (where p_j^best ∈ {p_j^1, p_j^2, ..., p_j^h}), is compared to


b_j. If p_j^best is better than b_j according to their fitness values, x_{j,t+1} is updated with p_j^best; otherwise, b_j is selected. The elements of B ⊆ Y_t that have not been processed by the new LS procedure transfer their corresponding values to X_{t+1} with no change. The new LS procedure is a selective operation, applied only to a subset E_t of the modified population Y_t (where E_t ⊆ Y_t). The elements of E_t must fulfill two conditions. First, each particle b_j ∈ B must be part of the first half of the temporal population B, which is produced after sorting Y_t. Second, no particle holding a better fitness value than b_j may be located inside C_j. The first condition ensures that only individuals holding a high fitness value are considered for exploitation. Since the EMO algorithm tends to concentrate around a given solution as the method evolves, according to the second condition the number of particles to be exploited decreases at each iteration. Such behavior reflects the fact that a high concentration of particles around a solution represents an extensive exploitation in itself; under such conditions, it is not necessary to apply the LS again. The proposed LS approach is used to exploit prominent solutions and reduces the computational effort produced by the traditional LS proposed in [20]. The new procedure sorts the population according to the fitness values. Only the best elements of the population are used; however, they must be sufficiently separated from each other to be selected. The distance ed has a double purpose: first, it is used to determine which particles are taken from the sorted population, since the selected elements must be separated by at least twice the exploitation distance ed; second, it is used to create the subspaces from which new particles are taken. Inside each subspace, h random samples are selected.
From all the samples, the best one according to its fitness value is selected. To update the positions of the EMO particles, the best sample from each subspace is compared with its respective original particle from the sorted population. If the sample has a better fitness value than its original particle, the position is updated; otherwise, the original particle is kept unmodified. Since the number of selected samples in each subspace is very small (h < 4), the use of the new LS substantially reduces the number of fitness function evaluations. On the other hand, the use of the distance rule to discard particles within subspaces guarantees that, during the evolution process, fewer subspaces are created and consequently fewer samples are evaluated. At the end of the optimization process, only the best particle is exploited, since the other particles are discarded by their close distance. The complete new LS procedure is described by Algorithm 2.
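The selective LS just described can be sketched as follows (our own compact rendering of the procedure, with the text's distance rule standing in for the subspace condition; names and the default values of β and h are illustrative):

```python
import random

def new_local_search(Y, f, lb, ub, beta=0.05, h=3, rng=random.Random(3)):
    """Selective LS sketch: only particles in the best half of the sorted
    population that are at least 2*ed apart are exploited; each is replaced
    by the best of h random samples from its subspace (Eqs. 10-11).
    Returns the updated population (in sorted order)."""
    n = len(lb)
    ed = sum(hi - lo for lo, hi in zip(lb, ub)) / n * beta      # Eq. (10)
    B = sorted(Y, key=f, reverse=True)                          # best first
    selected = []
    for b in B[:len(B) // 2]:                                   # condition 1: best half
        if all(any(abs(a - c) > 2 * ed for a, c in zip(b, s))
               for s in selected):                              # condition 2: distance rule
            selected.append(b)
    out = []
    for b in B:
        if b in selected:
            samples = [[min(max(bq + rng.uniform(-ed, ed), lo), hi)
                        for bq, lo, hi in zip(b, lb, ub)]       # Eq. (11) bounds
                       for _ in range(h)]
            best = max(samples, key=f)
            out.append(best if f(best) > f(b) else b)           # keep b on no improvement
        else:
            out.append(b)                                       # unprocessed: unchanged
    return out

# toy usage: four 1-D particles, maximizing -x^2 on [-5, 5]
Y = [[2.0], [0.5], [-1.0], [3.0]]
out = new_local_search(Y, lambda p: -p[0] ** 2, [-5.0], [5.0])
```

Since each exploited particle costs at most h extra fitness evaluations, and the distance rule shrinks the exploited set as the population concentrates, the per-iteration cost is far below the ITER evaluations per particle of the original LS.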

To demonstrate the operation of the new LS procedure, a numerical example is presented by applying the proposed process to a simple function. The function is defined over the interval −3 ≤ d1, d2 ≤ 3 and possesses one global maximum of value 8.1 at (0, 1.6). Notice that d1 and d2 correspond to the axis coordinates (commonly x and y). For this example, a modified population Y_t of six two-dimensional members (N = 6) is assumed. Figure 5a shows the initial configuration of the proposed example; the black points represent the half of the particles with the best fitness values (the first half of B), whereas the gray points correspond to the remaining particles. From Fig. 5a, it can be seen that the new LS procedure is applied to all black particles (y_{1,1} = b_1, y_{3,1} = b_2, and y_{5,1} = b_3), yielding two new random particles (characterized by the white points P_1^2, P_3^2, and P_5^2) for each black one inside their corresponding subspaces (C_1, C_3, and C_5). Such an operation is executed over y_{1,1}, y_{3,1}, and y_{5,1} because they fulfil the two necessary conditions. Consider particle y_{3,1} in Fig. 5a. The yellow particle corresponds to the best particle (p_3^best) of the two randomly generated particles within C_3, according to their fitness values. The particle p_3^best will substitute y_{3,1} in the individual x_{3,2} for the next generation, since it holds a better fitness value than y_{3,1} (f(y_{3,1}) < f(p_3^best)). Figure 5b shows the particle configuration after 15 iterations. Under such a configuration, the elements of Y_15 hold the following fitness values: f(y_{1,15}) = 2.98, f(y_{2,15}) = 1.13, f(y_{3,15}) = 8.03, f(y_{4,15}) = 3.37, f(y_{5,15}) = 7.21, and f(y_{6,15}) = 0.84. The particles that have been moved to new positions reduce the number of times that the new LS is executed. From Fig. 5b, it is

Fig. 5 Operation of the new LS procedure: a operation considering the initial modified population Y_1, b operation considering the 15th modified population Y_15, and c operation considering the 25th modified population Y_25


evident that particles y_{3,15} and y_{5,15} are so close that they share positions inside the subspaces C_3 and C_5. Since the subspace C_5 contains another element (y_{3,15}) with a better fitness value than y_{5,15}, the latter does not fulfil the necessary conditions to be processed by the new LS. Therefore, only the point y_{3,15} is processed, generating two new random points inside C_3. As the iteration number increases, the particles tend to concentrate around a solution. Figure 5c shows the particle configuration at the 25th iteration. As can be seen, all particles are grouped around the optimal value. Under such circumstances, the new LS procedure is executed only once, since such a concentration works as a kind of exploitation process in itself, with several particles refining points of the search space within the neighborhood of a well-known solution.

4.2 Memory incorporation

The TM approach seeks the best-possible resemblance between R and I within the search space S, which gathers a finite set of positions. However, since the EMO algorithm employs random numbers for the calculation of new elements, particles may encounter the same solutions (repetition), i.e. locations that have already been visited by other individuals at previous iterations. This fact seriously constrains the EMO performance, since the fitness evaluation (NCC calculation) is computationally expensive. To enhance the performance of the search strategy, the number of NCC (fitness function) evaluations is reduced by incorporating a fitness memory (FM), which stores the NCC values of previously visited positions and thereby avoids the re-evaluation of the same particle positions. The FM contains a list of search positions and their corresponding NCC values. Therefore, for a given search position, the FM is checked in advance to verify whether that position is already contained, in which case no further evaluation is required. Otherwise, the NCC of the search position is calculated and stored in the FM for later use. Such a memory mechanism may appear to be a disadvantage, since the image size suggests that many fitness values must be stored. However, because the FM keeps only the positions that have actually been computed, the number of necessary memory allocations is small in comparison with the image size; its implementation is therefore feasible even in systems with modest computational resources.
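The memory mechanism can be sketched as a simple cache keyed by integer search position. This is an illustrative Python sketch; the class and parameter names (FitnessMemory, ncc_fn) are assumptions, not the paper's API.

```python
class FitnessMemory:
    """Sketch of the FM: caches NCC values by search position (u, v).

    `ncc_fn` stands in for the NCC computation between the template and
    the source-image region anchored at (u, v).
    """

    def __init__(self, ncc_fn):
        self.ncc_fn = ncc_fn
        self.table = {}        # (u, v) -> cached NCC value
        self.evaluations = 0   # true NCC computations actually performed

    def evaluate(self, u, v):
        key = (u, v)
        if key not in self.table:          # only compute unseen positions
            self.table[key] = self.ncc_fn(u, v)
            self.evaluations += 1
        return self.table[key]
```

A dictionary keyed by position grows only with the number of distinct locations actually visited, which is why the storage cost stays far below the image size.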

5 TM using EMO as search strategy

In the proposed algorithm, particles represent search positions, defined by (u, v), which move throughout the search space S. The NCC coefficient is used as a fitness value to evaluate the matching quality between the template image R and the source image I for a determined search position (individual). The number of NCC (fitness function) evaluations is reduced by using an enhanced version of the EMO algorithm as the search strategy. Guided by the fitness values (NCC coefficients), the set of encoded candidate positions is evolved through the modified EMO operators until the best-possible resemblance is found.

In the algorithm, the search space S consists of a set of 2D search positions, with û and v̂ representing the components of each search location. Considering that the size of I is M × N and the size of R is m × n, the limits of the search space S are set by the size difference between the source image and the template: the upper bounds correspond to ub_1 = M − m and ub_2 = N − n. Therefore, each particle is encoded as:

x_i = { (x_i^1 = û_i, x_i^2 = v̂_i) | 1 ≤ û_i ≤ ub_1, 1 ≤ v̂_i ≤ ub_2 },   (12)

where x_i^1 and x_i^2 represent the variables (positions) to be modified during the optimization process. Considering that each particle x_i symbolizes a location (û_i, v̂_i) inside the search space S and NCC(û_i, v̂_i) represents the similarity quality (fitness value) for this position, the proposed approach involves the following procedure. First, the fitness memory (FM) is initialized as an empty array. Next, an initial population of particles is created within the search space. Then, all particles are evaluated by the fitness function (NCC), and the values are stored at the respective positions of the FM. Under the EMO approach, the charge of each particle is emulated by its NCC value. Afterwards, the total force vector is calculated and the particle positions are updated. Each time a new particle must be evaluated, the FM is checked to verify whether it has already been evaluated; if the particle exists in the FM, it is not necessary to compute it again, otherwise it is evaluated and its value is stored in the FM. Once the particle positions have been updated, the new LS procedure is applied to improve their solution quality. This procedure is repeated until the iteration limit is reached. Therefore, the proposed EMO-TM algorithm can be summarized as follows:

Step 1: Read the gray-scale image I.
Step 2: Select the template R.
Step 3: Initialize the fitness memory (FM) as an empty array.
Step 4: Initialize a set X_1 of N particles within the search space S (1).
Step 5: Evaluate the NCC coefficient of the entire population, (7) and (8), and store the values in the FM.
Step 6: Compute the charge q_{i,t} and the total force vector F_{i,j}^t of the particles using (3), (4), and (5).
Step 7: Move the particles to new positions based on the force (6).
Step 8: Before evaluating the NCC values for the modified particles, check the memory FM to verify which positions have already been calculated.
Step 9: Apply the new LS procedure (see Section 4.1: Algorithm 2).
Step 10: Select the best particle x_t^B, i.e. the one holding the highest NCC value (2).
Step 11: If the iteration limit has been reached, determine the best individual (matching position) of the final population, (û_best, v̂_best); otherwise, go to Step 6.

Fig. 6 Search pattern generated by the EMO-TM algorithm. Green points represent the evaluated search positions, whereas blue points indicate the already visited locations that have been stored in the fitness memory. The red point exhibits the optimal match detection
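The steps above can be organized into a compact skeleton. The sketch below is illustrative Python, not the authors' implementation: the charge/force computations of (3)-(6) are abbreviated to a simple pull toward the best particle, and `ncc` is a stand-in for the true NCC fitness of (7)-(8).

```python
import random

def emo_tm(ncc, M, N, m, n, pop_size=50, iters=300, seed=0):
    """Simplified skeleton of the EMO-TM loop (Steps 1-11).

    ncc(u, v) -> NCC coefficient for a candidate position (assumed given).
    M, N : source image size; m, n : template size.
    Returns the best (u, v) found and the number of true NCC computations.
    """
    rng = random.Random(seed)
    ub1, ub2 = M - m, N - n                  # search-space limits (12)
    fm = {}                                  # fitness memory (FM), Step 3

    def fit(p):                              # memoized NCC evaluation
        if p not in fm:
            fm[p] = ncc(*p)
        return fm[p]

    # Step 4: initial population of integer positions inside S.
    pop = [(rng.randint(1, ub1), rng.randint(1, ub2)) for _ in range(pop_size)]

    for _ in range(iters):                   # Steps 6-11
        best = max(pop, key=fit)
        new_pop = []
        for p in pop:
            if p == best:
                new_pop.append(p)
                continue
            # Simplified "total force": a random step toward the best particle.
            step = rng.random()
            u = min(max(round(p[0] + step * (best[0] - p[0])), 1), ub1)
            v = min(max(round(p[1] + step * (best[1] - p[1])), 1), ub2)
            new_pop.append((u, v))           # FM reused automatically in fit()
        pop = new_pop

    best = max(pop, key=fit)
    return best, len(fm)                     # match position, true NCC calls
```

Even in this reduced form, the memoization in `fit` makes the number of true NCC computations far smaller than the full-search count of (M − m)(N − n) evaluations.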

The proposed EMO-TM algorithm considers multiple search locations during the complete optimization process; however, only a few of them are evaluated using the true fitness function, whereas all remaining positions are simply taken from the memory FM. Figure 6 shows a section of the search pattern generated by the EMO-TM approach for the problem presented in Fig. 5. The pattern exhibits the evaluated search locations in green cells, whereas the maximum location is marked in red. Blue cells represent positions that have been repeatedly chosen, whereas cells featuring other gray intensity levels were not visited at all during the optimization process.
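For reference, the NCC fitness evaluated at each visited cell can be sketched as below. This follows the textbook normalized cross-correlation; the paper's exact equations (7)-(8) are not reproduced here and may differ in detail.

```python
import math

def ncc(image, template, u, v):
    """Normalized cross-correlation between `template` and the window of
    `image` whose top-left corner is anchored at (u, v).

    `image` and `template` are 2-D lists of gray levels. Returns a value
    in [-1, 1]; 1.0 indicates a perfect match.
    """
    m, n = len(template), len(template[0])
    win = [[image[u + i][v + j] for j in range(n)] for i in range(m)]
    mw = sum(map(sum, win)) / (m * n)        # window mean
    mt = sum(map(sum, template)) / (m * n)   # template mean
    num = sum((win[i][j] - mw) * (template[i][j] - mt)
              for i in range(m) for j in range(n))
    dw = sum((win[i][j] - mw) ** 2 for i in range(m) for j in range(n))
    dt = sum((template[i][j] - mt) ** 2 for i in range(m) for j in range(n))
    return num / math.sqrt(dw * dt) if dw and dt else 0.0
```

Because the means and variances are recomputed per window, each call costs O(mn), which is why avoiding repeated evaluations through the FM matters.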

6 Experimental results

To verify the feasibility and effectiveness of the proposed algorithm, a set of comparative experiments with other TM algorithms has been conducted. Such experiments have

considered a set of images, shown in Table 1 with their respective templates and sizes. To illustrate the complexity of TM as an optimization problem, Table 1 also presents the optimization surfaces (NCC surfaces) produced by the full search approach. All experiments have been performed in MATLAB on the same computer with an Intel Core 2 Duo 1.6 GHz processor, running the Windows Vista operating system with 2 GB of memory. The results obtained by the proposed algorithm have been compared to those produced by similar works reported in the literature, such as the ICA-TM method [14], GA-TM [11], PSO-TM [13], and the original EMO algorithm [20]. The first three approaches are considered state-of-the-art algorithms whose results have been recently published, whereas the original EMO method has been included only to validate the performance of the enhanced EMO version. The maximum generation number for the experimental set has been set to 300. Such a stop criterion has been selected to maintain compatibility with the works used in the comparison [11–14]. The parameter setting for each algorithm in the comparison is described as follows:

1. ICA-TM [14]: NumOfCountries = 100, NumOfImper = 10, NumOfColony = 90, Tmax = 300, ξ = 0.1, ε1 = 0.15, and ε2 = 0.9. Such values are the best parameter set for this algorithm according to [14].
2. PSO-TM [13]: swarm size = 50, inertia weight = 0.3925, particle best weight = 2.55, swarm best weight = 1.33, and Max iter = 300. Such values, according to [34], represent the best-possible configuration.
3. GA-TM [11]: the population size is 70, the crossover probability is 0.55, the mutation probability is 0.10, the number of elite individuals is 2, and the generation number is 300. Roulette wheel selection and 1-point crossover are also applied.

Table 1 EMO-TM applied to different kinds of images

Table 2 Performance comparison of ICA-TM, PSO, GA, O-EMO, and the proposed approach for the experimental set shown in Table 1

Image   | Algorithm | Average NCC value (ANcc) | Success rate (Sr) % | Average number of search locations (AsL) | Computational time (CT), s | Number of iterations (NI)
--------|-----------|--------------------------|---------------------|------------------------------------------|----------------------------|--------------------------
Dog     | ICA-TM    | 0.8856 | 70.21 | 29500 | 74.69  | 150
        | PSO       | 0.6681 | 34.28 | 35680 | 73.43  | 180
        | GA        | 0.5631 | 45.58 | 32697 | 101.23 | 172
        | O-EMO     | 0.8281 | 90.54 | 46140 | 78.99  | 109
        | EMO-TM    | 1.0000 | 100   | 13800 | 34.92  | 72
Soccer  | ICA-TM    | 0.5054 | 14.65 | 29621 | 10.09  | 165
        | PSO       | 0.4050 | 2.85  | 35240 | 12.31  | 85
        | GA        | 0.3753 | 22.98 | 30987 | 19.30  | 139
        | O-EMO     | 0.7150 | 20.34 | 45000 | 11.96  | 144
        | EMO-TM    | 1.0000 | 100   | 16920 | 3.80   | 57
Waldo   | ICA-TM    | 0.6587 | 60.43 | 28786 | 10.85  | 128
        | PSO       | 0.2154 | 2.05  | 32169 | 10.54  | 99
        | GA        | 0.2057 | 21.56 | 31875 | 17.80  | 141
        | O-EMO     | 0.6422 | 70.33 | 44775 | 12.36  | 188
        | EMO-TM    | 0.9598 | 98.00 | 16044 | 7.10   | 48
Airport | ICA-TM    | 0.6959 | 54.42 | 29177 | 55.23  | 67
        | PSO       | 0.5655 | 2.85  | 28978 | 62.59  | 85
        | GA        | 0.5676 | 28.56 | 25921 | 78.96  | 174
        | O-EMO     | 0.9170 | 59.51 | 45580 | 61.38  | 142
        | EMO-TM    | 1.0000 | 100   | 17650 | 47.70  | 35
City    | ICA-TM    | 0.2656 | 46.21 | 29399 | 8.84   | 61
        | PSO       | 0.1777 | 2.00  | 31213 | 5.90   | 102
        | GA        | 0.1583 | 20.56 | 30578 | 11.53  | 114
        | O-EMO     | 0.6166 | 74.75 | 44651 | 9.21   | 127
        | EMO-TM    | 0.9843 | 97.00 | 15830 | 3.70   | 35
PCB     | ICA-TM    | 0.3136 | 51.04 | 28985 | 69.38  | 135
        | PSO       | 0.2090 | 2.00  | 30459 | 75.96  | 120
        | GA        | 0.2015 | 18.26 | 36987 | 120.69 | 229
        | O-EMO     | 0.6921 | 74.81 | 47689 | 89.74  | 195
        | EMO-TM    | 0.9067 | 92.65 | 16990 | 58.98  | 151

4. Original EMO (O-EMO) [15–20]: particle number = 50, δ = 0.001, LISTER = 4, and MaxIter = 300. Such values, according to [15–20], represent the best-possible configuration.
5. EMO-TM: N = 50, m = 5, β = 0.05, and ITER = 300. These values have been found to deliver the best-possible performance for the TM tasks.

The configuration parameters of all algorithms are kept unmodified throughout the experimental work. The comparisons have been analyzed considering five performance indexes: the average NCC value (ANcc), the success rate (Sr), the average number of search locations (AsL), the number of iterations (NI), and the averaged computational time (CT). The average NCC value (ANcc) is the mean NCC value over the total number of executions. The success rate (Sr) represents the percentage of executions in which the algorithm successfully finds the optimal detection point. The average number of search locations (AsL) reports the number of locations checked during a single experiment; this index corresponds to the average number of NCC (fitness function) evaluations. The number of iterations (NI) indicates the iteration at which the best match has been found. Finally, the computational time (CT) registers the time spent, in

Fig. 7 Comparison of the NCC values for the images: a Dog, b Soccer game, c Waldo, d Airport, e City, and f PCB, using the five different algorithms

Table 3 p-values produced by Wilcoxon's test comparing EMO-TM vs. ICA-TM, EMO-TM vs. PSO, EMO-TM vs. GA, and EMO-TM vs. O-EMO over the average number of search locations (AsL) values from Table 2

Image   | EMO-TM vs. ICA-TM | EMO-TM vs. PSO | EMO-TM vs. GA | EMO-TM vs. O-EMO
--------|-------------------|----------------|---------------|------------------
Dog     | 6.5304e-13 | 6.4894e-08 | 7.0548e-13 | 6.5154e-13
Soccer  | 6.5335e-13 | 8.5675e-11 | 2.9973e-12 | 6.5094e-13
Waldo   | 6.5304e-13 | 5.7977e-12 | 8.3789e-13 | 6.5034e-13
Airport | 6.5304e-13 | 4.2026e-12 | 9.5761e-12 | 6.5064e-13
City    | 6.5154e-13 | 1.6791e-11 | 7.7189e-13 | 6.4825e-13
PCB     | 6.5304e-13 | 8.5921e-12 | 4.3589e-12 | 6.4462e-13
seconds, for each algorithm. To ensure statistical consistency, all these performance indexes are calculated over 35 independent executions. The results for the 35 runs are reported in Table 2, where the best outcome for each image is boldfaced. According to this table, EMO-TM delivers better results than ICA-TM, PSO, GA, and O-EMO for all images. In particular, the test highlights the large differences in the success rate (Sr), the average number of search locations (AsL), the computational time (CT), and the number of iterations (NI). Table 2 demonstrates that EMO-TM presents better precision than its counterparts, reaching the best average NCC values (near 1). Likewise, EMO-TM presents better effectiveness than the other algorithms, since it detects the optimal point in practically all experiments. On the other hand, the proposed EMO-TM algorithm drastically reduces the number of search locations (represented by the number of NCC evaluations), the computational time, and the number of iterations. It is important to recall that the NCC evaluation represents the main computational cost of the TM process. The outstanding performance of EMO-TM is directly related to the better trade-off between convergence and computational overhead provided by the incorporation of the new LS and the FM memory. Figure 7 presents the matching evolution curve for each image, considering the average best NCC value seen so far by each of the algorithms employed in the comparison. Such evolution graphs have been computed from a single execution. A nonparametric statistical significance proof known as the Wilcoxon rank-sum test for independent samples [35, 36] has been conducted over the average number of search locations (AsL) data of Table 2, with a 5 % significance level.
Table 3 reports the p-values produced by Wilcoxon's test for the pairwise comparison of the average number of search locations (AsL) of two groups. Such groups are formed by EMO-TM vs. ICA-TM, EMO-TM vs. PSO, EMO-TM vs. GA, and EMO-TM vs. O-EMO. As the null hypothesis, it is assumed that there is no significant difference

between the mean values of the two algorithms. The alternative hypothesis considers a significant difference between the AsL values of the two approaches. All p-values reported in Table 3 are less than 0.05 (5 % significance level), which is strong evidence against the null hypothesis. Therefore, such evidence indicates that the EMO-TM results are statistically significant and have not occurred by coincidence (i.e. due to common noise contained in the process).
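The significance check can be reproduced with a stdlib-only sketch of the two-sided rank-sum test using the normal approximation; for small samples or heavy ties, an exact test (e.g. `scipy.stats.ranksums` or `mannwhitneyu`) is preferable.

```python
import math
from itertools import chain

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum test, normal approximation.

    Returns the p-value for the null hypothesis that samples `a` and `b`
    (e.g. the AsL values of two algorithms over 35 runs) come from the
    same distribution.
    """
    # Pool both samples, tagging each value with its group (0 = a, 1 = b).
    data = sorted(chain(((x, 0) for x in a), ((x, 1) for x in b)))

    # Assign average ranks to tied values.
    ranks = {}
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j][0] == data[i][0]:
            j += 1
        avg = (i + j + 1) / 2.0          # mean of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j

    # Rank sum of group a, compared against its null mean and variance.
    r_a = sum(ranks[k] for k, (_, g) in enumerate(data) if g == 0)
    n1, n2 = len(a), len(b)
    mu = n1 * (n1 + n2 + 1) / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r_a - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
```

With two samples of 35 runs each, a p-value below 0.05 rejects the null hypothesis of equal AsL distributions, matching the criterion applied in Table 3.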

7 Conclusions

In this paper, a new algorithm based on the electromagnetism-like algorithm (EMO) has been proposed to reduce the number of search locations in the TM process. The algorithm uses an enhanced EMO version, which includes a modification of the LS procedure to accelerate the exploitation process. Such a modification reduces the number of perturbations around each particle to a compact number of random samples. As a result, the new EMO algorithm substantially reduces the number of fitness function evaluations while preserving the good search capabilities of the original EMO. In the proposed approach, particles represent search positions, which move throughout the positions of the source image. The NCC coefficient, used as a fitness value (charge extent), evaluates the matching quality between the template image and the coincident region of the source image for a given search position (particle). The number of NCC (fitness function) evaluations is also reduced by considering a memory that stores the NCC values previously visited, to avoid re-evaluation of the same particles. Guided by the fitness values (NCC coefficients), the set of candidate positions is evolved using the EMO operators until the best-possible resemblance is found. The proposed method achieves the best balance among the compared TM algorithms in terms of both estimation accuracy and computational cost. The performance of the proposed approach has been compared to that of other existing TM algorithms considering different images, which present a great variety of


formats and complexities. Experimental results demonstrate the high performance of the proposed method in terms of precision and the number of NCC evaluations.

Acknowledgments The first author acknowledges the National Council of Science and Technology of Mexico (CONACyT) for doctoral grant number 215517, and the Ministry of Education (SEP) and the Mexican Government for partially supporting this research.

References

1. Ramík DM, Sabourin C, Moreno R, Madani K (2014) A machine learning based intelligent vision system for autonomous object detection and recognition. Appl Intell 40(2):358–375
2. Julius Hossain M, Ali Akber Dewan M, Oksam Chae A (2012) Flexible edge matching technique for object detection in dynamic environment. Appl Intell 36(3):638–648
3. Cuevas E, González M (2013) Multi-circle detection on images inspired by collective animal behaviour. Appl Intell 39(1):101–120
4. Brunelli R (2009) Template matching techniques in computer vision: theory and practice. Wiley
5. Crispin AJ, Rankov V (2007) Automated inspection of PCB components using a genetic algorithm template-matching approach. Int J Adv Manuf Technol 35:293–300
6. Li J, Yan J, Guo C (2011) Research and implementation of image correlation matching based on evolutionary algorithm. In: International Conference on Future Computer Science and Education (ICFCSE), 20–21 Aug 2011, pp 499–501
7. Wang Y, Qi Y (2013) Memory-based cognitive modeling for robust object extraction and tracking. Appl Intell 39(3):614–629
8. Matei O, Pop PC, Vălean H (2013) Optical character recognition in real environments using neural networks and k-nearest neighbor. Appl Intell 39(4):739–748
9. Hadi G, Mojtaba L, Hadi SY (2009) An improved pattern matching technique for lossy/lossless compression of binary printed Farsi and Arabic textual images. Int J Intell Comput Cybernet 2(1):120–147
10. Krattenthaler W, Mayer KJ, Zeiler M (1994) Point correlation: a reduced-cost template matching technique. In: Proceedings of the First IEEE International Conference on Image Processing, pp 208–212
11. Dong N, Wu C-H, Ip W-H, Chen Z-Q, Chan C-Y, Yung K-L (2011) An improved species based genetic algorithm and its application in multiple template matching for embroidered pattern inspection. Expert Syst Appl 38:15172–15182
12. Fang L, Haibin D, Yimin D (2012) A chaotic quantum-behaved particle swarm optimization based on lateral inhibition for image matching. Optik 123:1955–1960
13. Wu C-H, Wang D-Z, Ip A, Wang D-W, Chan C-Y, Wang H-F (2009) A particle swarm optimization approach for components placement inspection on printed circuit boards. J Intell Manuf 20:535–549
14. Haibin D, Chunfang X, Senqi L, Shan S (2010) Template matching using chaotic imperialist competitive algorithm. Pattern Recogn Lett 31:1868–1875
15. Birbil SI, Fang SC, Sheu RL (2004) On the convergence of a population-based global optimization algorithm. J Glob Optim 30(2):301–318
16. Rocha A, Fernandes E (2009) Hybridizing the electromagnetism-like algorithm with descent search for solving engineering design problems. Int J Comput Math 86:1932–1946
17. Afonso LD, Mariani VC, Coelho L (2013) Modified imperialist competitive algorithm based on attraction and repulsion concepts for reliability-redundancy optimization. Expert Syst Appl 40(9):3794–3802
18. Arani BO, Mirzabeygi P, Panahi MS (2013) An improved PSO algorithm with a territorial diversity-preserving scheme and enhanced exploration–exploitation balance. Swarm Evol Comput 11:1–15
19. Cuevas E, Echavarría A, Ramírez-Ortegón (2014) An optimization algorithm inspired by the States of Matter that improves the balance between exploration and exploitation. Appl Intell 40(2):256–272
20. Ilker B, Birbil S, Shu-Cherng F (2003) An electromagnetism-like mechanism for global optimization. J Glob Optim 25:263–282
21. Rocha A, Fernandes E (2009) Modified movement force vector in an electromagnetism-like mechanism for global optimization. Optim Methods Softw 24:253–270
22. Naderi B, Tavakkoli-Moghaddam R, Khalili M (2010) Electromagnetism-like mechanism and simulated annealing algorithms for flowshop scheduling problems minimizing the total weighted tardiness and makespan. Knowl-Based Syst 23:77–85
23. Hung H-L, Huang Y-F (2011) Peak to average power ratio reduction of multicarrier transmission systems using electromagnetism-like method. Int J Innov Comput Inf Control 7(5A):2037–2050
24. Yurtkuran A, Emel E (2010) A new hybrid electromagnetism-like algorithm for capacitated vehicle routing problems. Expert Syst Appl 37:3427–3433
25. Jhen-Yan J, Kun-Chou L (2009) Array pattern optimization using electromagnetism-like algorithm. AEU Int J Electron Commun 63:491–496
26. Wu P, Wen-Hung Y, Nai-Chieh W (2004) An electromagnetism algorithm of neural network analysis: an application to textile retail operation. J Chin Inst Ind Eng 21:59–67
27. Lee CH, Chang FK (2010) Fractional-order PID controller optimization via improved electromagnetism-like algorithm. Expert Syst Appl 37:8871–8878
28. Cuevas E, Oliva D, Zaldivar D, Pérez-Cisneros M, Sossa H (2012) Circle detection using electro-magnetism optimization. Inf Sci 182(1):40–55
29. Guan X, Dai X, Li J (2011) Revised electromagnetism-like mechanism for flow path design of unidirectional AGV systems. Int J Prod Res 49(2):401–429
30. Cuevas E (2013) Block-matching algorithm based on harmony search optimization for motion estimation. Appl Intell 39(1):165–183
31. Cowan EW (1968) Basic electromagnetism. Academic Press, New York
32. Lee CH, Chang FK (2010) Fractional-order PID controller optimization via improved electromagnetism-like algorithm. Expert Syst Appl 37:8871–8878
33. Chunjiang Z, Xinyu L, Liang G, Qing W (2013) An improved electromagnetism-like mechanism algorithm for constrained optimization. Expert Syst Appl 40(14):5621–5634
34. Pedersen MEH (2010) Good parameters for particle swarm optimization. Technical report HL1001, Hvass Laboratories
35. Wilcoxon F (1945) Individual comparisons by ranking methods. Biometrics 1:80–83
36. Garcia S, Molina D, Lozano M (2008) A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC'2005 special session on real parameter optimization. J Heurist. doi:10.1007/s10732-008-9080-4

Diego Oliva received the B.S. degree in Electronics and Computer Engineering from the Industrial Technical Education Center (CETI) of Guadalajara, Mexico, in 2007, and the M.Sc. degree in Electronic Engineering and Computer Sciences from the University of Guadalajara, Mexico, in 2010. He is currently a Ph.D. student at the Complutense University of Madrid. His current research interests include computer vision, image processing, artificial intelligence, and metaheuristic optimization algorithms.

Erik Cuevas received the B.S. degree with distinction in Electronics and Communications Engineering from the University of Guadalajara, Mexico, in 1995, the M.Sc. degree in Industrial Electronics from ITESO, Mexico, in 2000, and the Ph.D. degree from Freie Universität Berlin, Germany, in 2005. From 2001 he held a scholarship from the German Academic Exchange Service (DAAD) as a full-time researcher. Since 2007 he has been with the University of Guadalajara, where he is currently a full-time Professor in the Department of Electronics. Since 2008, he has been a member of the Mexican National Research System (SNI). His current research interests include computer vision and artificial intelligence.

Gonzalo Pajares received the M.Sc. and Ph.D. degrees in Physics from UNED (the Spanish distance-learning university) in 1987 and 1995, respectively, with a thesis on the application of pattern recognition techniques to stereovision. He worked at Indra Space and INTA developing remote sensing applications. He joined the Complutense University of Madrid in 1995 as an associate professor and has been a full-time professor since 2004 in the Faculty of Computer Science, Department of Software Engineering and Artificial Intelligence. The areas covered are computer vision and artificial intelligence. His current research interests include machine visual perception, pattern recognition, and neural networks.

Daniel Zaldivar received the B.S. degree with distinction in Electronics and Communications Engineering from the University of Guadalajara, Mexico, in 1995, the M.Sc. degree in Industrial Electronics from ITESO, Mexico, in 2000, and the Ph.D. degree from Freie Universität Berlin, Germany, in 2005. From 2001 he held a scholarship from the German Academic Exchange Service (DAAD) as a full-time researcher. Since 2006 he has been with the University of Guadalajara, where he is currently a Professor in the Department of Computer Science. Since 2008, he has been a member of the Mexican National Research System (SNI). His current research interests include biped robot design, humanoid walking control, and artificial vision.