Journal of Emerging Trends in Engineering and Management

PERSON VERIFICATION USING MULTIMODAL BIOMETRIC SYSTEM & INFORMATION FUSION

Sakshi Kapuria
ME ECE Student, UIET, Panjab University, Chandigarh, India
[email protected]

Naresh Kumar
Assistant Prof., UIET, Panjab University, Chandigarh, India
[email protected]

Abstract—This paper presents a person verification system based on a multimodal biometric system integrating palm print, hand geometry and face. A uni-modal biometric identification system is often unable to meet system performance requirements, since it suffers from many limitations such as noise in sensed data, intra-class variations, lack of distinctiveness, non-universality and spoof attacks. There is therefore a need to combine multiple traits of a person for efficient performance. In this paper, a multimodal biometric system is proposed that integrates three biometric traits, namely palm print, face and hand geometry. The verification process is organized as follows: image capture; pre-processing; feature extraction; template matching; score normalization; fusion at score level; and finally decision making. The system is tested on a database of 100 persons, and the experimental results indicate that the combination is effective.

Keywords—face recognition, palm print recognition, hand geometry recognition, score level fusion, multimodal biometrics

I. Introduction

Biometric identification has the potential of becoming an increasingly powerful tool for public safety. Biometric-based person identity verification is attracting a lot of attention, because biometric traits are inherent to the person and cannot be lost, stolen, shared, or forgotten [1]. However, it is challenged by the imperfect nature of the image data, non-universality, intra-subject variation, inter-subject similarities and subject-dependent characteristics [8], especially when the biometric data is from a single source. Some of these problems can be relieved by using multimodal biometric systems that consolidate evidence from multiple biometric sources [7]. Generally speaking, there are three major approaches to biometric fusion: multi-modal, multi-sample, and multi-algorithm [7]. In multi-modal biometric fusion, recognition is performed on multiple biometric samples acquired from different biometric sources of a subject (e.g. face, fingerprint, iris, hand geometry) or from different sensor types (e.g. optical sensor and thermal sensor) [7]. Also, a biometric system often consists of three levels: data/feature extraction, match score, and decision making; and fusion can occur at any level [8]. According to Sanderson and Paliwal [14], the various levels of fusion can be classified into two broad categories: fusion before matching and fusion after matching. Fusion prior to matching includes fusion at the sensor and feature extraction levels; fusion after matching includes fusion at the match score and decision levels.

A. Information/Data fusion

Data fusion is divided into data level, feature level, matching level, and decision-making level [16][17]. Data-level fusion is the lowest level of integration; its main advantage is that it can provide information on minor details which the other fusion levels cannot. It directly analyses and integrates the raw data obtained from the sensors. Because no information is lost, it achieves high fusion performance. However, the integrated data is very large, costly and slow to process, which limits its field of application. In feature-level fusion, the data obtained from feature extraction is analysed and processed. The fused feature can be a higher-dimensional feature vector formed by concatenating the feature vectors of each group, or a new type of feature vector built from a group of feature vectors. Because the extracted feature data is directly concerned with decision-making analysis, this fusion method can provide maximal information for decision making. Of all the fusion levels, matching-level fusion is the most commonly used: the individual characteristics enter their recognition models to generate the matching score of each module, and the fusion integrates all the scores into a result. In this work, we use this level of fusion. Decision-making-level fusion is a high-level fusion: the different features are processed independently to obtain a number of identification results, and the fusion then combines them into a final result.
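The two broad categories (fusion before matching and fusion after matching) can be illustrated with a minimal Python sketch; the feature vectors and scores below are made up for illustration, and this paper itself fuses at the match score level:

```python
import numpy as np

def feature_level_fusion(feat_a, feat_b):
    """Fusion before matching: concatenate the feature vectors of two
    modalities into one higher-dimensional vector, matched as a whole."""
    return np.concatenate([feat_a, feat_b])

def score_level_fusion(match_scores):
    """Fusion after matching: each modality is matched separately and
    only the resulting scores are combined (simple sum rule here)."""
    return float(sum(match_scores))

face_feat = np.array([0.2, 0.7, 0.1])   # made-up face feature vector
palm_feat = np.array([0.5, 0.3])        # made-up palm print feature vector

fused_feat = feature_level_fusion(face_feat, palm_feat)
print(fused_feat.shape)                  # (5,)
print(score_level_fusion([0.8, 0.6, 0.9]))
```

Note the trade-off visible even in this sketch: feature-level fusion preserves more information but raises the dimensionality, while score-level fusion only needs one number per modality.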

B. Biometric features

The main applications of biometrics have been security areas such as criminal investigation in public security, access control, attendance, financial services, airports, government, military services and so on. The use of different modalities is based on their need and performance in a specific application [18][19]. The modalities studied include fingerprint, face, palm print, hand geometry, ear, voice, and 3D face; a detailed overview of multi-modal biometric fusion approaches can be found in [7] and [8]. Our work uses face, hand geometry and palm print as inputs to the multi-modal biometric verification system.

II. System description

There are two modes in a biometric-based authentication system [2], namely enrollment and authentication (identification). In the enrollment mode, the user's biometric features (such as name, gender, hand geometry, face, palm surface etc.) are acquired and stored in a database as a template file. Each template contains feature vectors of hand geometry, face and palm print. In verification mode, the input image is matched to the image stored in the template database and a final decision is made about granting the individual access.

In this paper we propose a fused palm-face-hand geometry verification system. The three traits are combined at the match score level, and the final decision about the person being a genuine client or an imposter is made.

III. Image acquisition and Feature extraction

A. Palm print Verification

Palm print is a relatively new physiological biometric trait owing to its stable and unique characteristics. The area of the palm is much larger than the area of a finger and, as a result, palm prints are expected to be even more distinctive than fingerprints. Biometric palm print recognition identifies a person based on the principal lines, wrinkles and ridges on the surface of the palm. These line structures are stable and remain unchanged throughout the life of a person (see Fig. 1).

The PCA technique is used, which transforms the palm print image into specific transformation domains to find useful image information in a lower-dimensional subspace. It computes a set of basis vectors from a set of palm print images, and the images are projected into a lower-dimensional subspace to obtain a set of coefficients. Test images are then projected onto these basis vectors and matched to find the closest coefficients in the subspace. The basis vectors generated from a set of palm print images are called Eigenpalm vectors [15]. Recognition is done by projecting a new image into the subspace spanned by the Eigenpalm vectors; based on the minimum Euclidean distance [15] between two palm print feature vectors, a matching score is generated, denoted MSpalm.

Fig. 1 Steps for palm print feature extraction

B. Hand geometry Verification

Hand geometry is one of the reliable biometric traits used in many applications. Hand geometry verification is one of the simplest as far as image acquisition and pre-processing are concerned; therefore, we have used it as one of the biometric traits for our system. Once a high-quality image is captured, several steps are required to convert its distinctive features into a compact template; this process is known as feature extraction (see Fig. 2).

Fig. 2 Steps for hand geometry feature extraction

Here, the PCA technique is used for extracting useful features from the image. It computes a set of basis vectors from a set of hand geometry images, and the images are projected into a lower-dimensional subspace to obtain a set of coefficients. Test images are then projected onto these basis vectors and matched to find the closest coefficients in the subspace. The basis vectors generated from a set of hand geometry images are called Eigenhand vectors. Recognition is done by projecting a new image into the subspace spanned by the Eigenhand vectors; based on the minimum Euclidean distance [15] between two hand geometry feature vectors, a matching score is generated, denoted MShand, which is then passed to the decision module.

C. Face Verification

To recognize human faces, the prominent characteristics on the face like the eyes, nose and mouth are extracted together with

their geometry distribution and the shape of the face [6]. A number of face-recognition techniques have been proposed, including:
• principal component analysis (PCA) [10], [5], [12],
• linear discriminant analysis (LDA) [11],
• singular value decomposition (SVD) [4], and
• a variety of neural network-based techniques [13].
The performance of these approaches is impressive. Generally, there are two major tasks in face recognition: 1. locating faces in input images and 2. recognizing the located faces. In our system, the eigenface approach is used because:
1. the eigenface approach has a compact representation: a facial image can be concisely represented by a feature vector with a few elements,
2. it is feasible to index an eigenface-based template database using different indexing techniques, so that retrieval can be conducted efficiently [11],
3. the eigenface approach is a generalized template-matching approach which was demonstrated to be more accurate than the attribute-based approach in one study [3].
In this approach a set of images that span a lower-dimensional subspace is computed. The feature vector of a face image is the projection of the original face image onto the reduced eigenspace. The matching score, denoted MSface, is generated by computing the minimum Euclidean distance between the eigenface coefficients of the template and the detected face.

IV. Proposed Multimodal System

Figure 3 shows the block diagram of the proposed multimodal biometric verification system integrating palm print, hand geometry and face. All individual modules involve image preprocessing, feature extraction, matching and decision-making. In the operational mode, the three biometric sensors capture the three biometric characteristics individually from the person to be identified and convert them to a raw digital format, which is further processed by the three feature extraction modules individually to produce a compact representation in the same format as the templates stored in the corresponding databases during the enrollment phase. The three resulting representations are then fed to the three corresponding matching modules, where they are matched against the templates in the corresponding databases to find the similarity between the two feature sets. The matching scores generated by the individual recognizers are then passed to the fusion module. Finally, the fused matching score MSfinal is passed to the decision module, where the person is declared a genuine client or an imposter.

Fig 3. Block diagram of the proposed multimodal biometric verification system

A. Fusion

Scores generated from the individual biometric traits are combined at the matching score level using the sum rule. MSpalm, MShand and MSface are the matching scores generated by the palm print, hand geometry and face recognizers respectively. Score normalization is done to map all the scores from their different domains into a common domain; min-max normalization transforms all the scores into the common range [0, 1]. The normalization of the three scores is done by the min-max rule as follows:

Npalm = (MSpalm - minpalm) / (maxpalm - minpalm)   (1)

Nhand = (MShand - minhand) / (maxhand - minhand)   (2)

Nface = (MSface - minface) / (maxface - minface)   (3)

where [minpalm, maxpalm], [minhand, maxhand] and [minface, maxface] are the minimum and maximum scores for palm print recognition, hand geometry recognition and face recognition, and Npalm, Nhand and Nface are the normalized matching scores of palm print, hand geometry and face respectively. Finally, the three normalized similarity scores Npalm, Nhand and Nface are fused using the sum rule [16] to generate the final matching score as follows:

MSfinal = A*Npalm + B*Nhand + C*Nface   (4)

where A, B and C are the weights assigned to the three biometric traits. The final matching score MSfinal is passed to the decision module and compared against a threshold value to recognize the person as genuine or an imposter.
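Equations (1)-(4) and the decision step can be sketched as follows. This is an illustrative Python sketch rather than the paper's Matlab code; the score ranges, input scores and threshold are made-up values, while the default weights 0.5, 0.1 and 0.4 are the ones assumed for A, B and C elsewhere in the paper:

```python
def min_max_norm(score, s_min, s_max):
    """Eqs. (1)-(3): map a raw matching score into the common range [0, 1]."""
    return (score - s_min) / (s_max - s_min)

def fuse_scores(ms_palm, ms_hand, ms_face, ranges, weights=(0.5, 0.1, 0.4)):
    """Eq. (4): weighted sum rule over the normalized scores.
    `ranges` holds the (min, max) score pair observed for each modality."""
    n_palm, n_hand, n_face = (
        min_max_norm(s, lo, hi)
        for s, (lo, hi) in zip((ms_palm, ms_hand, ms_face), ranges)
    )
    a, b, c = weights
    return a * n_palm + b * n_hand + c * n_face

def decide(ms_final, threshold):
    """Decision module: declare genuine client or imposter."""
    return "genuine" if ms_final >= threshold else "imposter"

# Made-up score ranges and a made-up probe, for illustration only.
ranges = [(0.0, 10.0), (0.0, 8.0), (0.0, 12.0)]
ms_final = fuse_scores(9.0, 6.0, 10.0, ranges)
print(round(ms_final, 3), decide(ms_final, threshold=0.7))  # 0.858 genuine
```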



B. Weight assumption for the individual biometric trait

Each biometric matcher produces a match score based on the comparison of the input feature set with the template stored in the database. These scores are weighted according to the biometric traits used, increasing the influence of the more reliable traits and reducing the importance of the less reliable ones. Weights indicate the importance of the individual biometric matchers in a multibiometric framework. The set of weights is determined for a specific user such that the total error rate corresponding to that user is minimized.

TABLE I. Comparison of the biometric traits (from [8])

Biometric trait   Universality  Distinctiveness  Permanence  Collectibility  Performance  Acceptability  Circumvention
Palm print        M             H                H           M               H            M              L
Face              H             L                M           H               L            H              M
Hand geometry     M             M                M           H               M            M              M

From Table I, obtained from [8], it is clear that the palm print biometric trait is more reliable than the hand geometry and face biometric traits. Hence, in the proposed work, we increase the weight of palm print and accordingly decrease the weights of hand geometry and face. We have assumed the values of the weights A, B and C as 0.5, 0.1 and 0.4 respectively, computed the final matching score MSfinal by using (4) with these weights, and plotted the ROC (Receiver Operating Characteristic) curve shown in Figure 4.

Fig 4. ROC curve
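Determining weights that minimize a user's total error rate could, for example, be done by an exhaustive search over candidate weight triples on validation scores. The following is a hypothetical Python sketch, not the procedure used in the paper (which does not specify one); the helper names, toy normalized scores and fixed threshold are made up:

```python
import itertools

def total_error(weights, genuine, imposter, threshold=0.5):
    """Total error = false rejects of genuine users + false accepts of
    imposters, for fused scores under a fixed decision threshold.
    Each entry is a (n_palm, n_hand, n_face) tuple of normalized scores."""
    a, b, c = weights
    fuse = lambda s: a * s[0] + b * s[1] + c * s[2]
    false_rejects = sum(fuse(s) < threshold for s in genuine)
    false_accepts = sum(fuse(s) >= threshold for s in imposter)
    return false_rejects + false_accepts

def best_weights(genuine, imposter, step=0.1):
    """Exhaustive search over weight triples (A, B, C) summing to 1."""
    grid = [round(i * step, 10) for i in range(int(1 / step) + 1)]
    candidates = [(a, b, round(1.0 - a - b, 10))
                  for a, b in itertools.product(grid, grid) if a + b <= 1.0]
    return min(candidates,
               key=lambda w: total_error(w, genuine, imposter))

# Toy validation scores: the palm print score separates the classes well.
genuine = [(0.9, 0.4, 0.8), (0.8, 0.5, 0.9)]
imposter = [(0.2, 0.6, 0.3), (0.3, 0.7, 0.2)]
w = best_weights(genuine, imposter)
print(w, total_error(w, genuine, imposter))
```

With a 0.1 grid this search is only 66 candidates, so exhaustive enumeration is cheap for three traits.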

V. Experimental results

The experiments discussed in this paper were realized using Matlab R2009b. The hand geometry and palm print images are taken from the CASIA database, and the face images were obtained with a digital camera having a resolution of 3 megapixels. There are 100 persons, and for each person we take 10 images. In the enrollment phase, all image templates are stored in our database. We randomly chose five images per person for training and the rest for testing. All the images are of a fixed size of 200*180 pixels. The experiment proceeded as follows: for the ith person, his face and his corresponding palm print and hand geometry images are selected and compared with the ith template, and the results are recorded.

From the graph in Fig. 4, it is clear that the integration/fusion of multiple biometric traits gives far better performance than uni-modal biometric traits. When all three traits are combined with score-level fusion, the performance jumps to about 93%, and it reaches 100% as the FAR is increased from 0 to 10%. The performance of our proposed system is compared with the system proposed in [21], which uses face, fingerprint and palm print as the biometric traits for verification. Experimental results in the range of 0-10% FAR are compared, as shown by the colored portion of Fig. 5. It can be observed from Fig. 4 and Fig. 5 that our proposed system gives a GAR of 93 to 100% as the FAR increases from 0 to 10%, while the system proposed in [21] gives a GAR of 95 to 96%. The comparison thus shows that our system's performance increases steadily to about 100%, while the latter system's performance remains at about 96% over the same range of FAR.

Fig 5. ROC curve taken from [21]
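The GAR and FAR pairs plotted in an ROC curve such as Fig. 4 are obtained by sweeping a decision threshold over the fused scores; the sketch below shows the computation with made-up scores (not the paper's experimental data):

```python
def roc_points(genuine, imposter, thresholds):
    """For each threshold, FAR = fraction of imposter scores accepted and
    GAR = fraction of genuine scores accepted; these pairs trace the ROC."""
    points = []
    for t in thresholds:
        far = sum(s >= t for s in imposter) / len(imposter)
        gar = sum(s >= t for s in genuine) / len(genuine)
        points.append((far, gar))
    return points

genuine = [0.95, 0.9, 0.85, 0.7, 0.6]    # made-up fused genuine scores
imposter = [0.4, 0.35, 0.3, 0.2, 0.1]    # made-up fused imposter scores
for far, gar in roc_points(genuine, imposter, [0.5, 0.3, 0.15]):
    print(f"FAR={far:.0%}  GAR={gar:.0%}")  # first line: FAR=0%  GAR=100%
```

Lowering the threshold accepts more imposters (higher FAR) while keeping or raising the GAR, which is exactly the trade-off the ROC curve visualizes.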


References

[1] A. K. Jain, A. Ross, and S. Prabhakar, "An Introduction to Biometric Recognition," IEEE Transactions on Circuits and Systems for Video Technology, Special Issue on Image- and Video-Based Biometrics, vol. 14, no. 1, pp. 4-20, January 2004.
[2] A. Ross and A. K. Jain, "Information Fusion in Biometrics," Pattern Recognition Letters, vol. 24, pp. 2115-2125, 2003.
[3] R. Brunelli and T. Poggio, "Face Recognition: Features Versus Templates," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 10, pp. 1042-1052, Oct. 1993.
[4] Z. Hong, "Algebraic Feature Extraction of Image for Recognition," Pattern Recognition, vol. 24, no. 2, pp. 211-219, 1991.
[5] M. Kirby and L. Sirovich, "Application of the Karhunen-Loeve Procedure for the Characterization of Human Faces," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 103-108, Jan. 1990.
[6] J. Kittler, Y. Li, J. Matas, and M. U. Sanchez, "Combining Evidence in Multimodal Personal Identity Recognition Systems," Proc. First Int'l Conf. Audio- and Video-Based Personal Authentication, pp. 327-334, Crans-Montana, Switzerland, Mar. 1997.
[7] K. W. Bowyer, K. I. Chang, P. Yan, P. J. Flynn, E. Hansley, and S. Sarkar, "Multi-Modal Biometrics: An Overview," Second Workshop on Multi-Modal User Authentication, 2006.
[8] A. Ross and A. K. Jain, "Multimodal Biometrics: An Overview," Proc. Twelfth European Signal Processing Conference, pp. 1221-1224, 2004.
[9] L. Hong, A. K. Jain, and S. Pankanti, "Can Multibiometrics Improve Performance?" Proc. IEEE Workshop on Automatic Identification Advanced Technologies, pp. 59-64, New Jersey, USA, October 1999.
[10] L. Sirovich and M. Kirby, "Low Dimensional Procedure for the Characterization of Human Faces," J. Optical Soc. Am., vol. 4, no. 3, pp. 519-524, 1987.
[11] D. L. Swets and J. Weng, "Using Discriminant Eigenfeatures for Image Retrieval," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 18, no. 8, pp. 831-836, Aug. 1996.
[12] M. Turk and A. Pentland, "Eigenfaces for Recognition," J. Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.
[13] D. Valentin, H. Abdi, A. J. O'Toole, and G. Cottrell, "Connectionist Models of Face Processing: A Survey," Pattern Recognition, vol. 27, no. 9, pp. 1209-1230, 1994.
[14] C. Sanderson and K. K. Paliwal, "Information Fusion and Person Verification Using Face and Speech," IDIAP Research Report 02-33, IDIAP, 2002.
[15] Jiuqiang Han and Fenghua Wang, "Robust Multimodal Biometric Authentication Integrating Iris, Face and Palmprint," Information Technology and Control, vol. 37, no. 4, 2008.
[16] D. L. Hall and J. Llinas, "An Introduction to Multisensor Data Fusion," Proceedings of the IEEE, vol. 85, no. 1, pp. 6-23, 1997.
[17] H. Durrant-Whyte, "Sensor Models and Multisensor Integration," The International Journal of Robotics Research, vol. 7, no. 6, pp. 97-113, 1988.
[18] Shahram Latifi and Nimalan Solayappan, "A Survey of Unimodal Biometric Methods," Security and Management 2006, Las Vegas, Nevada, USA, pp. 57-63, 2006.
[19] Tony Mansfield, Gavin Kelly, and David Chandler, "Biometric Product Testing Final Report," Centre for Mathematics and Scientific Computing, National Physical Laboratory, 2001.
[20] www.wikipedia.com
[21] Sheetal Chaudhary and Rajender Nath, "A Multimodal Biometric Recognition System Based on Fusion of Palmprint, Fingerprint and Face," International Conference on Advances in Recent Technologies in Communication and Computing, pp. 596-600, IEEE, 2009.
