Ear Recognition Using Neural Networks

Natheer A. AbuJarir
Department of Computer Engineering, Islamic University of Gaza
Palestine, Gaza Strip
[email protected]

Abstract

In this paper, the basics of using the ear as a biometric for person identification and authentication are presented. In addition, the error rate and application scenarios of ear biometrics are introduced. A neural network (NN) for ear recognition is presented. A set of 10 people, with six images each, has been used for the experiments. The data are taken from the IIT Delhi database. The correct recognition rate ranges between 75% and 90% for artificial neural network matching, depending on the neural network training parameters.

Keywords: Biometrics, Ear Recognition, Neural Networks, Training Parameters

I. Introduction

Biometrics has been used more and more widely in recent years owing to the irreproducible characteristics of the human body. As one kind of biometric, the ear has its own characteristics: the structure of the ear is rich and stable, as shown in Figure 1, and does not change radically over time; the ear varies less with expression, and has a more uniform distribution of color than the face. These unique characteristics make it possible for the ear to compensate for the drawbacks of other biometrics and to enrich biometric identification technology [1].

II. Background

First, we have the input images. Then, several processing algorithms are defined to extract features and attributes from each image and map them into a dataset. The extracted features have numerical values and are usually stored in arrays. With these values, a neural network can be trained to produce good end results. Neural networks can be used if we have a suitable dataset for training and learning purposes; the dataset is one of the most important things when constructing a new neural network [1].
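As a rough illustration of this pipeline (input images → feature extraction → numeric feature arrays stored in a dataset), a minimal Python sketch is given below. The function and array names are hypothetical, and the "features" here are simple block means, a stand-in for the PCA features described later in the paper.

```python
import numpy as np

def extract_features(image, grid=(4, 4)):
    """Map a 2-D image to a small numeric feature vector.

    The features here are just the mean intensity of each cell in a
    grid -- a placeholder for the PCA features used in the paper.
    """
    h, w = image.shape
    gh, gw = grid
    feats = []
    for i in range(gh):
        for j in range(gw):
            block = image[i * h // gh:(i + 1) * h // gh,
                          j * w // gw:(j + 1) * w // gw]
            feats.append(block.mean())
    return np.array(feats)

# Build a dataset: one feature row per input image (random stand-ins here).
images = [np.random.rand(180, 92) for _ in range(6)]
dataset = np.stack([extract_features(img) for img in images])
print(dataset.shape)  # (6, 16): 6 images, 16 features each
```

A network can then be trained on the rows of `dataset`, with one row per image.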

III. Feature Extraction Method

A) Why Feature Extraction with PCA

The original images cannot be used directly as the input of the ear recognition module because of the large 1-D vector that comes from each 2-D image. The extracted features should represent the unique identity of each image while remaining small in size. Principal Component Analysis (PCA) can generate a set of eigenfaces from a large database of images. Each ear image in a particular database can then be represented as a linear combination of the eigenfaces of that database [7].

B) PCA Mathematical Principle and Algorithm

1. Import the database into a matrix T: convert each image to a 1-D vector and store it as a column of T. The size of T is a × b, where a is the number of pixels in each ear image and b is the number of images in the database.
2. Compute the average ear and the ear differences: calculate the mean of all ear images to get T_mean (a × 1), which represents the average ear of this database, then subtract T_mean from each image.
3. Calculate the eigenvectors and eigenvalues of the covariance matrix.
4. Project the ear differences onto the subset of eigenvectors.
5. Keep the projections onto the eigenvectors with the S largest eigenvalues to obtain the signature vector of each image [7].

Fig 1: Ear structure
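The five steps above can be sketched in Python with NumPy (the paper's own implementation is in MATLAB). The names T, T_mean, and S follow the description; the image size and count are made up for the sketch, and the small surrogate matrix trick is the standard eigenface shortcut rather than something stated in the paper.

```python
import numpy as np

a, b, S = 64 * 64, 12, 5        # pixels per image, images in database, kept eigenvectors
rng = np.random.default_rng(0)

# Step 1: each column of T is one ear image flattened to a 1-D vector.
T = rng.random((a, b))

# Step 2: average ear and per-image differences.
T_mean = T.mean(axis=1, keepdims=True)   # shape (a, 1)
diff = T - T_mean

# Step 3: eigenvectors/eigenvalues of the covariance matrix.
# The small (b x b) surrogate diff.T @ diff is diagonalized instead of
# the huge (a x a) covariance matrix, as is standard for eigenfaces.
vals, vecs = np.linalg.eigh(diff.T @ diff)
order = np.argsort(vals)[::-1]            # largest eigenvalues first
eigenears = diff @ vecs[:, order[:S]]     # back-project to image space

# Steps 4-5: project each difference image onto the S leading
# eigenvectors to obtain its signature vector.
image_sign = eigenears.T @ diff           # shape (S, b): one signature per image
print(image_sign.shape)
```

Each column of `image_sign` is the compact signature used as the network input in place of the raw pixels.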

C) Coding Implementation

The code is implemented in MATLAB owing to the convenience of its matrix operations. The function Feature_Extraction_PCA imports all the ear images in the database and exports a matrix (image_sign) of the calculated eigenvectors of the images; each row represents the eigenvector of one image, as shown in the code excerpt below (the indices i and x are set by an outer loop not shown).

for j = 1:8
    for k = 1:4
        % Select one 18x23 block of the image M (the last row band is 4 pixels taller).
        if j == 8
            temp = M((j-1)*18+1 : j*18+4, (k-1)*23+1 : k*23);
        else
            temp = M((j-1)*18+1 : j*18, (k-1)*23+1 : k*23);
        end
        % The largest singular value of the block is used as the feature value.
        [u, temp1, v] = svd(temp);
        temp2 = temp1(1,1);
        feature((i-1)*4+x, (j-1)*4+k) = temp2;
    end
end
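For readers without MATLAB, the same block-wise SVD feature extraction can be sketched in Python. The 148 × 92 image size and the loop bounds are inferred from the MATLAB excerpt's indexing; the single-image layout (one flat feature vector rather than the two outer indices i and x) is a simplification for the sketch.

```python
import numpy as np

M = np.random.rand(148, 92)   # stand-in for one 148 x 92 ear image
feature = np.zeros(32)        # 8 x 4 blocks -> 32 feature values

for j in range(8):
    for k in range(4):
        # The last row band is 4 pixels taller, matching the MATLAB excerpt.
        r_end = (j + 1) * 18 + (4 if j == 7 else 0)
        block = M[j * 18:r_end, k * 23:(k + 1) * 23]
        # The largest singular value of the block is the feature value.
        feature[j * 4 + k] = np.linalg.svd(block, compute_uv=False)[0]

print(feature.shape)  # (32,)
```

Singular values are returned in descending order, so index 0 is the largest, matching temp1(1,1) in the MATLAB code.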

A) Topology

1. Architecture: multi-layer feedforward backpropagation network.
2. Number of input samples: 40 for training and 20 for testing.
3. Number of hidden neurons in the hidden layer: 10, chosen by guesswork.
4. Number of output neurons: depends on the number of classes (here, persons), which is 10.
5. Activation function: log-sigmoid, 'logsig' in MATLAB [5].
6. Training method: scaled conjugate gradient, 'trainscg' in MATLAB.
7. Error rate measure: sum of squared errors, 'sse' in MATLAB.
8. Stopping criteria: either the error reaches 0.001 or the number of training epochs reaches the maximum of 5000.

Table 1: The Pseudo Code of Feature_Extraction_PCA
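As a hedged sketch of the topology above (input layer → 10 log-sigmoid hidden neurons → 10 output neurons, one per person), a plain NumPy forward pass is shown below. The weights are random stand-ins for what training would produce, and the input dimension (the feature-vector length) is an assumption, not a value stated in the paper.

```python
import numpy as np

def logsig(x):
    """Log-sigmoid activation, MATLAB's 'logsig'."""
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hidden, n_out = 32, 10, 10   # feature length (assumed), hidden neurons, persons
rng = np.random.default_rng(0)

# Random stand-in weights; training (e.g. scaled conjugate gradient,
# 'trainscg') would adjust these to minimize the sum of squared errors.
W1, b1 = rng.standard_normal((n_hidden, n_in)), np.zeros(n_hidden)
W2, b2 = rng.standard_normal((n_out, n_hidden)), np.zeros(n_out)

def forward(x):
    h = logsig(W1 @ x + b1)          # hidden layer
    return logsig(W2 @ h + b2)       # output layer: one score per person

x = rng.standard_normal(n_in)        # one feature vector
scores = forward(x)
predicted_person = int(np.argmax(scores))
print(scores.shape, predicted_person)
```

The predicted identity is simply the output neuron with the highest score.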

IV. Back Propagation Algorithm

The Back Propagation Neural Network (BPNN) is a multilayered, feedforward neural network (NN) and is by far the most widely used. The basic structure of a BPNN includes one input layer, at least one hidden layer (there may be a single layer or multiple layers), followed by an output layer. The network receives input information through the neurons in the input layer, and the output information is given by the neurons in the output layer. This allows the network to learn nonlinear and linear relationships between input and output vectors. The interconnected neurons are organized in layers and send their signals forward, and the errors are then propagated backward. Back propagation works by adjusting the weight values during training in order to reduce the error between the actual and desired output patterns [1][2]. There are generally four steps in the training process:

1. Assemble the training data.
2. Create the network object.
3. Train the network.
4. Simulate the network response to new inputs.

Fig 2: Feedforward backpropagation architecture

B) Hardware and Software

Our project is implemented in MATLAB 8.1 (R2013a) on a laptop with an Intel i7 processor and 8 GB of RAM.

V. Experimental Results

The IIT Delhi database is provided by the Hong Kong Polytechnic University [6]. It contains ear images that were collected between October 2006 and June 2007 at the Indian Institute of Technology Delhi in New Delhi. A set of 10 people, 9 male and 1 female, has been used for the experiments, with six images each: 4 images per person are used for training and 2 per person for testing, giving 60 images in all. A sample is shown in Fig 3.

Fig 3: Image sample
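The experimental protocol above (10 people, 6 images each, 4 for training and 2 for testing) can be sketched as follows. The feature vectors are synthetic and the classifier is a trivial nearest-mean stand-in for the trained network, so the printed rate illustrates the protocol only, not the paper's reported results.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_train, n_test = 10, 4, 2

# Hypothetical feature vectors: each person's images cluster around a centroid.
centroids = rng.standard_normal((n_people, 32)) * 5
train = {p: centroids[p] + rng.standard_normal((n_train, 32)) for p in range(n_people)}
test = {p: centroids[p] + rng.standard_normal((n_test, 32)) for p in range(n_people)}

# Nearest-mean classifier as a stand-in for the trained network.
means = np.stack([train[p].mean(axis=0) for p in range(n_people)])

correct = total = 0
for p in range(n_people):
    for x in test[p]:
        pred = int(np.argmin(np.linalg.norm(means - x, axis=1)))
        correct += (pred == p)
        total += 1

print(f"recognition rate: {correct / total:.0%} on {total} test images")
```

With 10 people and 2 test images each, the test set has 20 images, matching the split described above.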

The correct recognition rate is 75% when the momentum gradient descent training function, 'traingdx', is used, and 90% when the scaled conjugate gradient training function, 'trainscg', is used, with a learning rate of 0.1.

VI. Conclusion

An ear recognition algorithm based on neural networks has been presented. A recognition rate between 75% and 90% has been achieved by performing neural matching. We conclude that the recognition rate depends on the neural network training parameters. The proposed algorithm can be used efficiently for personal identification.

VII. References

[1] Hazem M. El-Bakry and Nikos Mastorakis, "Ear Recognition by Using Neural Networks," Faculty of Computer Science & Information Systems, Mansoura University, Egypt.
[2] Samuel Adebayo Daramola and Oladejo Daniel Oluwaninyo, "Automatic Ear Recognition System Using Back Propagation Neural Network," Department of Electrical and Information Engineering, Covenant University, Ota, Ogun State, Nigeria.

Fig. 4: Back propagation during training. Training function: scaled conjugate gradient, 'trainscg'. Error rate measure: sum of squared errors, 'sse'.

[3] Ajay Kumar and Chenye Wu, "Automated Human Identification Using Ear Imaging," Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong.
[4] Primoz Potocnik, "Neural Networks: MATLAB Examples," Neural Networks course (practical examples), 2012.
[5] http://www.mathworks.com/help/nnet/ref/logsig.html
[6] http://www4.comp.polyu.edu.hk/~csajaykr/IITD/Database_Ear.htm
[7] https://sites.google.com/a/mtu.edu/face-detection-and-recognition/home/feature-extraction-module-bojun/feature-extraction-final

Fig. 5: Back propagation during training. Training function: momentum gradient descent, 'traingdx'. Error rate measure: sum of squared errors, 'sse'. Stopping criterion: minimum gradient reached.