Intramodal Palmprint Authentication

Munaga V. N. K. Prasad 1, P. Manoj 1, D. Sudhir Kumar 1 and Atul Negi 2

1 IDRBT, Castle Hills, Road No 1, Masab Tank, Hyderabad
  [email protected], [email protected], [email protected]
2 Dept. of CIS, University of Hyderabad, Hyderabad
  [email protected]

Abstract. Palmprint technology is one of the biometric techniques used to identify an individual. Recognition of palmprints is based on features such as palm lines, texture and ridges. Several line and texture extraction techniques have already been proposed. In this paper we propose a novel technique, filiformity, to extract the line-like features from the palm. We also extract texture features from the palmprint image using a Gabor filter. The performance of the two techniques is determined individually, and a sum rule is applied to combine the features obtained from the two techniques into an intramodal system. Fusion is applied at both the feature level and the matching level of the authentication mechanism. In the intramodal system, performance improves in both False Acceptance Rate (FAR) and Genuine Acceptance Rate (GAR).

Keywords: Palmprint, Filiformity, Gabor filter, FAR, GAR, Intramodal.

1 Introduction

Biometrics is considered one of the most robust, reliable, efficient and user-friendly secure mechanisms in the present automated world. Biometrics can provide security to a wide variety of applications, including access to buildings, computer systems and ATMs [1]. Fingerprints, iris, voice, face and palmprints are the different physiological characteristics used for identifying an individual.

The palm is the inner surface of the hand between the wrist and the fingers [2]. The palm offers several extractable features: principal lines, wrinkles, ridges, singular points, texture and minutiae. Principal lines are the darker lines present on the palm; there are three of them, namely the heart line, head line and life line. Wrinkles are the thinner lines concentrated all over the palm. A palmprint image with principal lines and wrinkles marked is shown in Fig 1.

Different line extraction techniques have already been proposed, including edge detection and line tracing techniques. Wu et al. [2] used directional line detectors to extract the principal lines. Chih-Lung et al. [9] used Sobel edge detection and performed directional decomposition. Han et al. [4] used morphological operations to enhance the palm lines and employed the magnitude of the image to compute the line-like features. Wu et al. [5] performed morphological operations to extract the palm lines explicitly and employed a coarse-to-fine strategy.

2-9525435-1 © SITIS 2006


Algorithms such as the stack filter [10] are able to extract the principal lines. However, these principal lines are not sufficient to represent the uniqueness of each individual’s palmprint because different people may have similar principal lines on their palmprints [7].

Fig 1. Palmprint with principal lines and wrinkles

This paper presents the extraction of features such as palm lines and texture from the palm. The filiformity technique [6] is used for extracting the line features; it can extract lines even from low-contrast images. Texture features are extracted using the Gabor filter technique [7]. Two fusion strategies are employed on the features extracted by the filiformity and Gabor filter techniques. The rest of the paper is organized as follows. Section 2 deals with image acquisition, preprocessing and segmentation. Section 3 deals with the filiformity technique of line extraction. Texture feature extraction is discussed in Section 4. Section 5 presents the feature vector construction and Section 6 the matching technique used. Experimental results are provided in Section 7. Information fusion is discussed in Section 8 and Section 9 gives the conclusions.

2 Image Acquisition, Pre-Processing and Segmentation

Any biometric recognition system goes through four stages: image acquisition, pre-processing and segmentation, feature extraction, and matching. The basic setup of the verification system is depicted in Fig 2. Palmprints of an individual can be captured in different ways: as inked images [3], scanned images with a pegged setup [7], or scanned images with a peg-free setup [4]. In this paper we work on scanned images with a pegged setup, as shown in Fig 3.

Fig 2. Palmprint verification system block diagram

The palmprint obtained during image acquisition is processed to smooth the image and reduce noise. An adaptive median filter [8] is used to smooth the image before extracting the ROI and its features. This filter has the advantage of handling impulse noise of large spatial density. An additional benefit of the adaptive median filter is that it seeks to preserve detail while smoothing non-impulse noise, something the traditional median filter does not do.
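The adaptive median filter referenced above can be sketched as follows. This is a plain NumPy rendering of the textbook algorithm in [8], not the authors' code; the maximum window size `s_max` is an illustrative choice.

```python
import numpy as np

def adaptive_median_filter(img, s_max=7):
    """Adaptive median filter: grow the window at each pixel until the
    median is not itself an impulse, then keep the original pixel value
    unless it is an impulse (in which case output the median)."""
    pad = s_max // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            s = 3
            ci, cj = i + pad, j + pad          # centre in the padded image
            while True:
                r = s // 2
                win = padded[ci - r:ci + r + 1, cj - r:cj + r + 1]
                z_min, z_med, z_max = win.min(), np.median(win), win.max()
                z_xy = padded[ci, cj]
                if z_min < z_med < z_max:      # median is not an impulse
                    out[i, j] = z_xy if z_min < z_xy < z_max else z_med
                    break
                s += 2
                if s > s_max:                  # window limit reached: use median
                    out[i, j] = z_med
                    break
    return out
```

On a smooth gradient corrupted by a single salt impulse, the filter replaces the impulse with the local median while leaving uncorrupted detail untouched.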

Fig 3. Palmprint image captured

Fig 4. (a) Border pixels collected and the point Wm; (b) distance distribution diagram of the distances between Wm and the border pixels; (c) finger web locations Fw1, Fw2, Fw3 and the extracted square region.

The captured images have to be segmented to extract the Region of Interest (ROI). The image is rotated 90° clockwise as shown in Fig 4. We follow the technique of [9] to extract a square ROI from the captured palmprint image. The starting point Ps of the bottom line is found by scanning the image from the bottom-left-most pixel. The boundary pixels of the palm are traced and collected in a vector, the Border Pixel Vector, as shown in Fig 4(a). The midpoint Wm of the bottom line is found and the distance from Wm to every border pixel is calculated, giving the distance distribution diagram shown in Fig 4(b). The local minima of this diagram are exactly the finger web locations. A square region is extracted using the finger web locations as shown in Fig 4(c). As the size of the square region differs from person to person, we resize the ROI to 150×150 pixels.
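The finger-web step can be sketched as below. How exactly the local minima are selected is not fixed by [9], so the neighbour test and the `min_separation` suppression here are illustrative assumptions.

```python
import numpy as np

def finger_webs(border, w_m, min_separation=20):
    """Locate finger-web points as local minima of the distance profile
    from the wrist mid-point w_m to the traced border pixels.
    `border` is an (N, 2) array of (row, col) boundary coordinates in
    tracing order; min_separation suppresses spurious nearby minima."""
    d = np.linalg.norm(border - np.asarray(w_m, dtype=float), axis=1)
    minima = []
    for k in range(1, len(d) - 1):
        if d[k] < d[k - 1] and d[k] <= d[k + 1]:
            # keep a minimum only if it is far enough from the previous one
            if not minima or k - minima[-1] >= min_separation:
                minima.append(k)
    return [tuple(border[k]) for k in minima]
```

Applied to a real border trace, the returned points correspond to Fw1, Fw2, Fw3 of Fig 4(c), from which the square ROI corners follow.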

3. Filiformity

Salim Djeziri et al. [6] proposed a method based on human visual perception, defining a topological criterion specific to handwritten lines called 'filiformity'. The method was originally proposed for extracting signatures from check images. Filiformity is a pixel-level operation: it provides a local measure at each pixel, which in [6] is then followed by global thresholding that takes the whole image into account. For our problem we are interested only in the lines present on the palmprint image. A line is characterized by the following heuristic: a line is generally darker than the background on which it is drawn. In our approach we use ring-level filiformity because it is richer in information and, through an adequate choice of measurement function, better interprets the local shape of a line, whereas the surface measure expresses only an occupation rate without taking the local shape into account. We do not perform the global processing step after obtaining the local values: global processing would retain only the lines whose pixels exceed a preset threshold, while we are interested in all the lines present on the palm. Instead, for every pixel of the palmprint image a degree of perceptibility is computed, and this measure is used to build the feature vector. The filiformity technique is applied to the 150×150 ROI to obtain a local measure for each pixel. Fig 5 shows the ROI after the application of filiformity.

Fig 5. ROI after the application of filiformity
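The paper relies on the ring-level measure of [6] but does not reproduce its exact measurement function. The sketch below is therefore only a simplified stand-in built on the stated heuristic (a line is darker than its background): each pixel is scored by how much darker it is than the mean of a surrounding square ring. The radius and the clipping to non-negative scores are our assumptions, not the measure of [6].

```python
import numpy as np

def ring_perceptibility(img, radius=3):
    """Simplified per-pixel 'degree of perceptibility': mean grey level of
    the square ring at the given radius minus the centre pixel, clipped so
    that only darker-than-background pixels score positively."""
    img = img.astype(float)
    padded = np.pad(img, radius, mode="edge")
    h, w = img.shape
    # offsets belonging to the square ring at the chosen radius
    ring = [(dy, dx) for dy in range(-radius, radius + 1)
            for dx in range(-radius, radius + 1)
            if max(abs(dy), abs(dx)) == radius]
    score = np.zeros_like(img)
    for dy, dx in ring:
        score += padded[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
    score = score / len(ring) - img        # mean ring value minus centre
    return np.clip(score, 0.0, None)       # only "darker than background" counts
```

On a synthetic image containing a dark vertical line on a bright background, pixels on the line score high and background pixels score zero, which is the behaviour the feature vector of Section 5 builds on.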


4. Gabor Filters for Texture Extraction

Palmprints have unique features such as principal lines, wrinkles and ridges. The principal lines extracted from the palmprint may not be sufficient to represent the uniqueness of an individual, because the principal lines of different persons' palms can be similar [7]. To improve uniqueness and make the feature vector more robust, we also extract texture features from the images; these capture not only the principal lines but also the wrinkles and ridges. We use a Gabor filter to extract the texture features. Gabor filters have already been used to extract features in fingerprint and iris recognition [11] and in palmprint recognition [7][12]. They extract texture by capturing frequency and orientation information from the image. The 2-D Gabor filter used for palmprint verification is defined in spatial coordinates as

    G(x, y, θ, u, σ) = (1 / (2πσ²)) · exp{−(x² + y²) / (2σ²)} · exp{2πiu(x cos θ + y sin θ)}    (1)

where x and y are the coordinates of the filter, u is the frequency of the sinusoidal wave, σ is the scale of the Gaussian envelope, θ is the orientation of the function, and i = √−1. The optimized parameter values u = 0.0925 and σ = 7.1 were chosen after testing different values at an orientation of 45°. The filter is applied by convolution:

    Iθ(i, j) = Σ(x=1..w) Σ(y=1..w) Gθ(x, y) · I(i − x, j − y)    (2)

where I is the input image, Iθ is the image at orientation θ, and w × w is the size of the Gabor filter mask. As in the filiformity technique, the Gabor filter is applied to the 150×150 ROI.
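Equations (1) and (2) can be implemented directly as below. The zero-mean (DC-removal) step is a common normalisation that the paper does not state, and the mask size is an illustrative choice; both are assumptions.

```python
import numpy as np

def gabor_kernel(size, theta, u=0.0925, sigma=7.1):
    """2-D Gabor filter of Eq. (1): a Gaussian envelope modulating a
    complex sinusoid of frequency u along orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)
    carrier = np.exp(2j * np.pi * u * (x * np.cos(theta) + y * np.sin(theta)))
    g = envelope * carrier
    return g - g.mean()          # remove the DC component (assumed normalisation)

def apply_gabor(img, kernel):
    """Convolution of Eq. (2); returns the magnitude of the complex
    response, same size as the input, using edge padding."""
    k = kernel[::-1, ::-1]       # flip so the sliding product is a true convolution
    w = k.shape[0]
    half = w // 2
    padded = np.pad(img.astype(float), half, mode="edge")
    h, wd = img.shape
    out = np.zeros(img.shape, dtype=complex)
    for dy in range(w):
        for dx in range(w):
            out += k[dy, dx] * padded[dy:dy + h, dx:dx + wd]
    return np.abs(out)
```

A quick sanity check: a uniform image produces (numerically) zero response because the kernel has no DC component, while a sinusoidal grating at frequency u along θ produces a strong response.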

5. Feature Extraction

After extracting reliable features from the palmprint image, we need to build a feature vector that represents the individual. In our approach we divide the extracted ROI into n non-overlapping blocks and calculate the standard deviation of the local values obtained in both the line extraction and the texture extraction cases. A feature vector of size 1 × n is built from the computed standard deviation values:

    FV = [SD(1), SD(2), ..., SD(n)]

where FV is the feature vector and SD(j) is the standard deviation of the j-th block.
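With n = 36 blocks on a 150×150 ROI (6 × 6 blocks of 25×25 pixels each, matching Fig 6), the feature vector above can be sketched as:

```python
import numpy as np

def block_std_features(response, blocks_per_side=6):
    """Build the 1 x n feature vector of Section 5: split the filtered ROI
    into non-overlapping blocks (6 x 6 = 36 for a 150x150 ROI) and take
    the standard deviation of the response values inside each block."""
    h, w = response.shape
    bh, bw = h // blocks_per_side, w // blocks_per_side
    return np.array([response[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].std()
                     for r in range(blocks_per_side)
                     for c in range(blocks_per_side)])
```

The same function serves both pipelines: it is applied once to the filiformity response and once to the Gabor magnitude image, yielding two 1×36 vectors per palmprint.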


6. Matching

A matching algorithm determines the similarity between two given data sets. Palmprint verification is done by applying the matching algorithm to the input palmprint image and an image stored in the database; the palmprint is declared authentic if the matching result exceeds a preset threshold. In our approach we employ the Pearson correlation coefficient to measure the similarity between two palmprint images. The linear, or Pearson, correlation coefficient is the most widely used measure of association between two vectors. Let x and y be n-component vectors whose degree of association we want to calculate. For the pairs (xi, yi), i = 1, ..., n, the linear correlation coefficient r is given by

    r = Σ(i=1..n) (xi − x̄)(yi − ȳ) / √( Σ(i=1..n) (xi − x̄)² · Σ(i=1..n) (yi − ȳ)² )

where x̄ is the mean of the vector x and ȳ is the mean of the vector y. The value r lies between −1 and 1, inclusive: 1 means the two series are identical, 0 means they are uncorrelated, and −1 means they are perfect opposites. The correlation coefficient is invariant under linear transformations of the data (adding, subtracting, or multiplying the vectors by a constant factor).
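The matcher follows directly from the formula. The at-least-one-match acceptance rule and the threshold value mirror Section 7; the helper names are ours.

```python
import numpy as np

def pearson_r(x, y):
    """Linear (Pearson) correlation coefficient between two feature vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd * yd).sum() / np.sqrt((xd**2).sum() * (yd**2).sum()))

def is_authentic(probe_fv, enrolled_fvs, threshold=0.88):
    """Accept the probe if it matches at least one of the person's
    enrolled feature vectors above the preset threshold."""
    return any(pearson_r(probe_fv, fv) >= threshold for fv in enrolled_fvs)
```

The scale invariance noted in the text is easy to verify: correlating a vector with a scaled copy of itself still yields r = 1.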

7. Experimental Results

We evaluated our approach on the Hong Kong Polytechnic University palmprint database [13]. A total of 600 images were collected from 100 different persons. The images are of size 384×284, captured at 75 dpi resolution. The segmented ROIs are resized to 150×150 so that all palmprints are the same size. We applied filiformity and the Gabor filter to the ROI independently. The resulting image is then divided into 36 non-overlapping square blocks as shown in Fig 6. The standard deviation of the values obtained is computed for each block and a feature vector of size 1×36 is built in both cases. The feature vectors of the six images of a person are stored in the database as that person's identity. FAR and GAR are calculated in both cases to evaluate the performance of the system [14]. A table is built for each person in the database, as shown in Table 1 and Table 2: Table 1 gives the correlation values between the different images of the same person computed with the filiformity method, and Table 2 the corresponding values computed with the Gabor filter. The database is tested at different threshold values to calculate the GAR; a person is declared genuine if at least one value in the table is above the threshold. FAR is calculated by matching each image of a person against all images of the remaining persons in the database, again at different threshold values. Table 3 and Table 4 present the GAR and FAR at different threshold values for the filiformity and Gabor filter techniques respectively. Correlation between identical images always yields a matching score of one.

Fig 6. ROI divided into 36 blocks

Table 1. Matching scores for different images of a person computed using the filiformity technique

        2        3        4        5        6
1    0.8792   0.8189   0.8412   0.8173   0.8485
2             0.8672   0.8277   0.8910   0.8007
3                      0.8070   0.7543   0.7663
4                               0.8583   0.9187
5                                        0.9028

Table 2. Matching scores for different images of a person computed using the Gabor filter

        2        3        4        5        6
1    0.7202   0.7557   0.6333   0.6957   0.6152
2             0.6362   0.4547   0.6186   0.4894
3                      0.7418   0.7757   0.8193
4                               0.7623   0.7986
5                                        0.8418

Table 3. GAR and FAR rates at different threshold values for the filiformity technique

Threshold    GAR     FAR (%)
0.84         100%    0.22
0.88          96%    0.024
0.89          95%    0.011
0.90          89%    0.0033
0.92          74%    0

Table 4. GAR and FAR rates at different threshold values for the Gabor filter technique

Threshold    GAR     FAR (%)
0.84         100%    6.17
0.88          99%    1.54
0.89          98%    0.96
0.90          98%    0.57
0.92          93%    0.12

With the filiformity technique the FAR figures are better than the GAR figures, which implies that a genuine person may sometimes be denied but an impostor is almost never accepted. With the Gabor filter technique the GAR figures are better than the FAR figures, which implies that a genuine person is rarely denied, even though an impostor may occasionally be accepted.

8. Information Fusion

Fusing the two palmprint representations improves the performance of the verification system [15]. Representations can be combined at different levels: at the feature level, at the score level, and at the decision level. In this paper we combine the representations using a sum rule at both the feature level and the score level, as shown in Fig 7 and Fig 8 respectively. According to the experiments conducted by Arun Ross et al. [16], the sum rule performs better than the decision tree and linear discriminant analysis. GAR and FAR were calculated from the feature vectors obtained by fusion at the feature level; the results are shown in Table 5.

Table 5. GAR and FAR rates at feature-level fusion

Threshold    GAR (%)    FAR (%)
0.80         100        0.318
0.81          98        0.218
0.82          96        0.142
0.83          96        0.085
0.84          95        0.050

Table 6. GAR and FAR rates at matching-level fusion

Threshold    GAR (%)    FAR (%)
1.68         100        0.32
1.76          97        0.25
1.78          97        0.249
1.80          96        0.248
1.84          88        0.248

We also performed fusion at the matching level by applying the sum rule to the matching scores obtained from the two techniques and calculated GAR and FAR; the results are shown in Table 6. By combining the two techniques through these fusion strategies, performance improves in both GAR and FAR.

Fig 7. Block diagram for feature level fusion

Fig 8. Block Diagram for score level fusion
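The two sum-rule schemes of Fig 7 and Fig 8 can be sketched as below. Reading "sum rule" as an unweighted sum is our interpretation; the paper does not say whether the summed feature vectors are normalised first.

```python
import numpy as np

def fuse_feature_level(fv_filiformity, fv_gabor):
    """Feature-level fusion (Fig 7): combine the two 1 x 36 feature
    vectors with a sum rule into one vector, which is then matched once
    by correlation."""
    return np.asarray(fv_filiformity, dtype=float) + np.asarray(fv_gabor, dtype=float)

def fuse_score_level(r_filiformity, r_gabor):
    """Score-level fusion (Fig 8): sum the two correlation scores, so the
    fused score lies in [-2, 2] and is compared against thresholds such
    as the 1.68-1.84 range of Table 6."""
    return r_filiformity + r_gabor
```

Note that score-level fusion needs no re-matching: the two per-technique correlation scores are simply added before the threshold test.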

9. Conclusions

The objective of this work was to investigate the performance of an intramodal palmprint system built from reliable features extracted from the palmprint. We extracted line-like features as well as texture features from the palmprint image and calculated performance rates (GAR and FAR) for each technique. With the filiformity technique the FAR figures are better than the GAR figures, so a genuine person may sometimes be denied but an impostor is almost never accepted; with the Gabor filter technique the GAR figures are better than the FAR figures, so a genuine person is rarely denied even though an impostor may occasionally be accepted. By combining the two techniques through fusion strategies, performance improves in both respects: GAR increases while FAR decreases. This shows that using multiple features of a single biometric yields better results than using a single feature.


References

1. S. Nanavati, M. Thieme, R. Nanavati (eds.), Biometrics: Identity Verification in a Networked World, John Wiley & Sons, 2002.
2. X. Wu, D. Zhang, K. Wang, B. Huang, "Palmprint classification using principal lines", Pattern Recognition 37, 2004, pp. 1987-1998.
3. D. Zhang, W. Shu, "Two novel characteristics in palmprint verification: datum point invariance and line feature matching", Pattern Recognition 32, 1999, pp. 691-702.
4. C.C. Han, H.L. Chen, C.L. Lin, K.C. Fan, "Personal authentication using palmprint features", Pattern Recognition 36, 2003, pp. 371-381.
5. X. Wu, K. Wang, "A Novel Approach of Palm-line Extraction", Proc. Third International Conference on Image and Graphics (ICIG'04), 2004, pp. 230-233.
6. R. Plamondon, S. Djeziri, F. Nouboud, "Extraction of signatures from check background based on a filiformity criterion", IEEE Transactions on Image Processing, Vol. 7, No. 10, October 1998, pp. 1425-1438.
7. D. Zhang, W.-K. Kong, J. You, M. Wong, "Online palmprint identification", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 25, No. 9, Sep 2003, pp. 1041-1050.
8. R.C. Gonzalez, R.E. Woods (eds.), Digital Image Processing, 2nd edition, 2003.
9. C.-L. Lin, T.C. Chuang, K.-C. Fan, "Palmprint verification using hierarchical decomposition", Pattern Recognition 38, No. 12, December 2005, pp. 2639-2652.
10. P.S. Wu, M. Li, "Pyramid edge detection based on stack filter", Pattern Recognition 18, 1997, pp. 237-248.
11. J.G. Daugman, "High confidence visual recognition of persons by a test of statistical independence", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 15, No. 11, Nov 1993, pp. 1148-1161.
12. W.K. Kong, D. Zhang, "Palmprint texture analysis based on low resolution images for personal authentication", ICPR 2002, pp. 807-810.
13. PolyU Palmprint Database, Hong Kong Polytechnic University, http://www.comp.polyu.edu.hk/~biometrics
14. M. Golfarelli, D. Maio, D. Maltoni, "On the error-reject trade-off in biometric verification systems", IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. 19, No. 7, 1997, pp. 786-796.
15. A. Kumar, D. Zhang, "Personal authentication using multiple palmprint representations", Pattern Recognition 38, No. 10, October 2005, pp. 1695-1704.
16. A. Ross, A. Jain, "Information fusion in biometrics", Pattern Recognition Letters 24, 2003, pp. 2115-2125.
