Pattern Recognition 40 (2007) 1057–1065

An improved BioHashing for human authentication

Alessandra Lumini, Loris Nanni∗
DEIS, IEIIT – CNR, Università di Bologna, Viale Risorgimento 2, 40136 Bologna, Italy

Received 3 November 2005; received in revised form 24 March 2006; accepted 25 May 2006

Abstract

Given the recent explosion of interest in human authentication, verification based on tokenized pseudo-random numbers and user-specific biometric features (BioHashing) has received much attention. These methods have significant functional advantages over sole biometrics, i.e. a zero equal error rate. The main drawback of the base BioHashing method proposed in the literature lies in its low performance when an "impostor" B steals the pseudo-random numbers of A and tries to authenticate as A. In this paper, we introduce some ideas to improve the base BioHashing approach in order to maintain a very low equal error rate when nobody steals the Hash key, and to reach good performance also when an "impostor" steals the Hash key.
© 2006 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.

Keywords: Biometric verification; BioHashing

1. Introduction

The main advantage of biometrics is that it bases recognition on an intrinsic aspect of a human being, and the usage of biometrics requires the person to be authenticated to be physically present at the point of authentication. High false rejection of valid users, which results in low false acceptance, is often largely neglected in the evaluation of biometric systems. Denial of access in biometric systems greatly impacts the usability of the system by failing to identify a genuine user, and hence the public acceptance of biometrics as an emerging technology. Multimodal biometrics can reduce the probability of denial of access without sacrificing the false acceptance performance. In Ref. [1] the authors, in order to solve the problem of high false rejection, proposed a novel two-factor authenticator based on iterated inner products between tokenized pseudo-random numbers (generated from a Hash key) and the user-specific fingerprint features; in this way, a set of user-specific compact codes, named the "BioHash code", can be produced.

∗ Corresponding author. Tel.: +39 3493511673.

E-mail address: [email protected] (L. Nanni).

Direct mixing of pseudo-random numbers and biometric data is an extremely convenient mechanism with which to incorporate physical tokens, such as a smart card or a USB token. The main drawback of this method, which has been applied to various biometric characteristics (face, fingerprint, palmprint) [1–5], is its low performance when an "impostor" B steals the Hash key or the pseudo-random numbers of A and tries to authenticate as A. When this problem occurs, the performance of BioHashing can be lower than that obtained using only the biometric data [6]. In a recent paper [7] the authors highlighted the anomalies of the base BioHashing approach and concluded that the claim of having achieved a zero EER is based upon the impractical hidden assumption that the Hash key is never stolen. Moreover, they proved that in a more realistic scenario, where an impostor steals the Hash key, the results are worse than when using the biometrics alone. However, no possible solutions were suggested. In this paper, we propose an improved BioHashing approach which is more robust than the base method even in the worst test case (when an "impostor" always steals the Hash key). After an accurate experimental analysis we identified the main weakness of the base approach in the length of



the BioHash code (which is bounded by the dimension of the feature space), and we propose some ideas to overcome this problem. The paper is organized as follows: in Section 2 we briefly review the basic BioHashing method, in Section 3 we present our experimental findings about the base approach and the ideas upon which our improved BioHashing is founded, in Section 4 we detail the new approach and in Section 5 we present and discuss some experimental results. Finally, we draw our conclusions in Section 6.

2. Base BioHashing

BioHashing generates a vector of bits starting from the biometric feature set and a seed which represents the "Hash key" (see Fig. 1). The base procedure, presented in Ref. [1] and illustrated in Fig. 2, operates as follows. The biometric vector data x ∈ Rn (extracted from the selected biometrics by a feature extraction procedure, e.g. the n DCT coefficients with higher variance from a face image) is reduced down to a bit vector b ∈ {0, 1}m, with m the length of the bit string (m ≤ n), via uniformly distributed pseudo-random numbers generated from a secret seed (the Hash key). The algorithm for the creation of the vector b can be detailed as follows:

(1) Given a secret seed K (the Hash key), generate a sequence of real numbers to produce a set of pseudo-random vectors ri ∈ Rn, i = 1, . . . , m. Since the vectors have to be a basis of the projection space, we check that they are linearly independent, discarding dependent ones if necessary. A variety of pseudo-random bit/number algorithms are publicly available; we adopt the Blum–Blum–Shub method [8].
(2) Apply the Gram–Schmidt ortho-normalization procedure to transform the basis ri into an orthonormal set of vectors ori, i = 1, . . . , m.

Fig. 1. The BioHashing idea: a feature extraction step (e.g. DCT coefficients) maps the biometric characteristic (e.g. a face image) to a feature vector x ∈ Rn, which BioHashing combines with the Hash key K to produce the bit vector b ∈ {0,1}m.

Fig. 2. The base BioHashing method: from the Hash key K, random generation (e.g. by the Blum–Blum–Shub method) produces m linearly independent random vectors ri ∈ Rn; Gram–Schmidt ortho-normalization yields the orthonormal set ori ∈ Rn; scalar products with the feature vector x and threshold binarization at τ give the BioHash code, a bit vector b ∈ {0,1}m.



Fig. 3. EER obtained by the base BioHashing method varying the parameter m: the test problem is face verification on the ORL data set represented by the 100 DCT coefficients with higher variance (see Section 5). This test has been conducted under the worst hypothesis: an impostor always steals the Hash key.

Fig. 4. EER obtained by the base BioHashing method varying the parameter τ in the range [−300, 300] with a step of 50. The test problem is face verification on the ORL data set represented by the 100 DCT coefficients with higher variance. This test has been conducted under the worst hypothesis: an impostor always steals the Hash key.

(3) Compute the inner product ⟨x|ori⟩ between the biometric feature vector x and ori, i = 1, . . . , m, and compute bi (i = 1, . . . , m) as

bi = 0 if ⟨x|ori⟩ ≤ τ,  1 if ⟨x|ori⟩ > τ,

where τ is a preset threshold. The Hash key K is given to a user during the enrollment and is different for different users and different applications. The resulting bit vector b, which we name the "BioHash code", is compared by the Hamming distance for similarity matching.
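To make the three steps concrete, the following is a minimal sketch in Python/NumPy. It is illustrative only: a seeded NumPy generator stands in for the Blum–Blum–Shub generator used in the paper, QR decomposition is used as an equivalent of Gram–Schmidt ortho-normalization, and the function names (biohash, hamming) are our own.

```python
import numpy as np

def biohash(x, key, m, tau=0.0):
    """Base BioHashing sketch: project x onto m pseudo-random
    orthonormal directions derived from the Hash key, then binarize at tau."""
    n = x.shape[0]
    assert m <= n, "the code length m is bounded by the feature dimension n"
    rng = np.random.default_rng(key)           # stand-in for the Blum-Blum-Shub generator
    R = rng.uniform(-1.0, 1.0, size=(m, n))    # (1) m pseudo-random vectors r_i in R^n
    OR, _ = np.linalg.qr(R.T)                  # (2) Gram-Schmidt via QR: columns are or_i
    return (OR.T @ x > tau).astype(np.uint8)   # (3) inner products + threshold binarization

def hamming(b1, b2):
    """Similarity matching between two BioHash codes."""
    return int(np.count_nonzero(b1 != b2))
```

At verification time, the code stored at enrollment would be compared with biohash(x_query, key, m) through hamming(), accepting the claim when the distance is below a decision threshold.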

3. Error analysis and rationale behind the new approach

An experimental study helped us to identify some points of weakness of the base BioHashing method:

• the parameter m is critical to maximize the performance. The performance improves by increasing m [6]: an experimental validation of this conjecture is reported in Fig. 3, where the performance obtained by the base BioHashing method as a function of m is reported. Yet, the parameter m cannot be increased at will, since it is bounded by the intrinsic dimension n of the biometric feature vector, which cannot be augmented without including useless data or noise (e.g. if we extract the discrete cosine transform coefficients with higher variance for face verification, a reasonable value for n is 100);
• differently from what was assumed by previous works [1,2], the performance of a BioHashing system depends on the value of the parameter τ. Our preliminary tests (reported in Fig. 4) show a high variance of the results when changing τ. Moreover, the range of variation of τ is application dependent, since it is determined by the domain of the biometric vectors (e.g. for the face verification problem cited above, τ can reasonably range between −300 and 300).

Our ideas to boost the performance of the base BioHashing approach are based on these two considerations: (1) the system security improves by increasing the dimension of the Hash code; (2) base BioHashing is an unstable classifier. The first statement is clear, and our solutions are proposed and explained in Section 4. The second statement needs more explanation. The instability of classification algorithms means that changes in the input training samples or in the parameters of the classifier may cause dramatic changes in the output classification. Several solutions have been discussed in the field of multiclassifiers to deal with this problem, and some works [9–11] have shown that an ensemble of classifiers (based on aggregating methods such as bagging, boosting or random subspace) may dramatically reduce the error rate. We investigated the relationship between parameter changes and output changes using the average Q-statistic [10] among the classification results obtained by varying the parameter τ (in a biometric verification problem, the two classes are: genuine and impostor). Yule's Q-statistic is a measure for evaluating the independence of classifiers: for two classifiers Di and Dk the Q-statistic is defined as

Qi,k = (ad − bc) / (ad + bc),

where a is the probability of both classifiers being correct, d is the probability of both classifiers being incorrect, b is the probability that the first classifier is correct and the second is incorrect,


and c is the probability that the second classifier is correct and the first is incorrect. Qi,k varies between −1 and 1 and it is 0 for statistically independent classifiers. We tested the face verification problem (ORL data set represented by the 100 DCT coefficients with higher variance), where τ ranges between −300 and 300 (the same experiment reported in Fig. 4). The average Q-statistic among these results is 0.73. This value is quite low, confirming that, on average, the classifiers are not strongly correlated; therefore the instability of the classification approach when varying the parameter τ is high. We suggest exploiting this property to improve the classification performance by adopting a fusion of classifiers: in fact, it is well known in the literature that combining "independent" classifiers [10,12] makes it possible to dramatically reduce the error rate obtained by a "stand-alone" classifier. It is interesting to note (see Fig. 4) that the classifiers obtained by varying τ reach similar performance and are sufficiently independent of each other; for these reasons we argued that a fusion may be useful to drastically decrease the error rate. As a further experiment, we calculated, in the above face verification problem, the average Q-statistic among k = 15 classifiers obtained with the same value of τ but a different Hash key; these classifiers have an average Q-statistic of 0.8, which confirms the instability of the method also without varying the parameter τ.
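As an illustration, the Q-statistic between two classifiers can be computed from their per-decision correctness as in the following sketch (our own helper, not code from the paper); the average Q-statistic over an ensemble is the mean of this value over all classifier pairs.

```python
import numpy as np

def q_statistic(correct_i, correct_k):
    """Yule's Q-statistic between two classifiers, given boolean arrays that
    mark which verification attempts each classifier decided correctly."""
    ci = np.asarray(correct_i, dtype=bool)
    ck = np.asarray(correct_k, dtype=bool)
    a = np.count_nonzero(ci & ck)    # both correct
    b = np.count_nonzero(ci & ~ck)   # first correct, second incorrect
    c = np.count_nonzero(~ci & ck)   # second correct, first incorrect
    d = np.count_nonzero(~ci & ~ck)  # both incorrect
    return (a * d - b * c) / (a * d + b * c)   # assumes a*d + b*c > 0
```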

4. Improved BioHashing

To deal with the problems reported in the previous section and to make the BioHashing procedure more robust against the possible stealing of the Hash key by an impostor, we propose several solutions leading to an improved version of the BioHashing method (Fig. 5):

• NORMALIZATION: We normalize the biometric vectors by their norm before applying the BioHashing procedure, such that the scalar product ⟨x|ori⟩ is within the range [−1, 1];
• τ VARIATION: Instead of using a fixed value for τ, we use several values of τ and combine with the "SUM rule" the scores obtained by varying τ between τmin and τmax, in p steps of τstep = (τmax − τmin)/p;
• SPACES AUGMENTATION: Since the dimension of the projection space m cannot be increased at will, we suggest using several projection spaces to generate more BioHash codes per user. Let k be the selected number of projection spaces to be used; the BioHashing method is iterated k times on the same biometric vector in order to obtain k bit vectors bi, i = 1, . . . , k. The verification is then carried out by combining the classification scores obtained by each bit vector (BioHash code). The random generation can be performed in an iterative manner, thus requiring a single Hash key K: the random generator is not reinitialized with a new key until the generation of all k bases is complete.

• FEATURES PERMUTATION: Another way to generate more BioHash codes, without creating more projection spaces, is to use several permutations of the feature coefficients in x during the projection calculation: we use q permutations of x, obtained by round-shifting the coefficients by a fixed amount, thus obtaining q bit vectors. As above, the verification is carried out by combining the classification scores obtained by each bit vector.

The result of our improved BioHashing procedure, if all the above solutions are exploited, is a set of k · p · q BioHash codes bi, which are compared by the Hamming distance. The verification task is performed by training a classifier for each BioHash code and finally combining these classifiers by a fusion rule (we suggest the SUM rule); a sketch of the combined procedure is given below.
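The following is a minimal sketch of how the four solutions combine, under our own naming and with an illustrative choice of the round-shift amount (the paper does not fix it); the final decision would normally be taken by fusing per-code classifier scores with the SUM rule, shown here in its simplest form as a sum of Hamming similarities.

```python
import numpy as np

def improved_biohash_codes(x, key, m, k=5, p=5, q=5, tau_min=-0.1, tau_max=0.1):
    """Improved BioHashing sketch: NORMALIZATION, tau VARIATION (p thresholds),
    SPACES AUGMENTATION (k projection spaces) and FEATURES PERMUTATION (q shifts).
    Returns the k*q*p BioHash codes of the feature vector x."""
    n = x.shape[0]
    x = x / np.linalg.norm(x)                  # NORMALIZATION: <x|or_i> falls in [-1, 1]
    taus = np.linspace(tau_min, tau_max, p)    # tau VARIATION
    rng = np.random.default_rng(key)           # single Hash key, iterated over the k spaces
    codes = []
    for _ in range(k):                         # SPACES AUGMENTATION
        R = rng.uniform(-1.0, 1.0, size=(m, n))
        OR, _ = np.linalg.qr(R.T)              # orthonormal basis or_i of this space
        for shift in range(q):                 # FEATURES PERMUTATION (round shifts of x)
            proj = OR.T @ np.roll(x, shift)
            for tau in taus:
                codes.append((proj > tau).astype(np.uint8))
    return codes

def sum_rule_score(codes_a, codes_b):
    """SUM-rule fusion sketch: sum of Hamming similarities over all code pairs."""
    return sum(int(np.count_nonzero(a == b)) for a, b in zip(codes_a, codes_b))
```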

Fig. 5. The improved BioHashing method: the normalized feature vector x ∈ Rn is projected onto k orthonormal sets of vectors (generated from the single Hash key K and Gram–Schmidt ortho-normalization), using q permutations of x and inner-product thresholding for p values of τ, yielding k · q · p bit vectors bi ∈ {0,1}m.

5. Experiments

We test our method on several biometric characteristics, in order to evaluate how the performance depends on the features extracted:

• Faces: The tests have been conducted on the ORL [13] and Yale-B [14] data sets, which are two of the most used benchmarks in this field. We extract two different feature vectors from each face image, in order to study the dependence of the BioHashing method on the features extracted. We call DCT the data set obtained by extracting the n = 100 DCT coefficients with higher variance [15] and KL the data set obtained by extracting the first n = 100 dimensions obtained by the Karhunen–Loève transform [16]. For the verification task based on sole biometric data (denoted as BIO) we adopt a 1-nearest neighbour classifier [16]. To minimize the possible misleading results caused by the training data, the classification results have been averaged over five experiments, all conducted using the same parameters. For each experiment we randomly resampled the learning and the test sets (containing, respectively, half of the patterns), maintaining the distribution of the patterns in the classes.
  ◦ ORL: It consists of 400 different images related to 40 individuals.
  ◦ Yale-B: It contains 5760 single light source images of 10 subjects, each seen under 576 viewing conditions (9 poses × 64 illumination conditions). For every subject in a particular pose, an image with ambient (background) illumination was also captured. We use only the frontal poses (108 images for each individual). In particular, we use only the images with the light source direction between −35° and 35° azimuth and between −20° and 20° elevation.
• Signatures: The 100 signers of the SUBCORPUS-100 MCYT database [17] have been used for the experiments (100 signers with 25 genuine signatures and 25 skilled forgeries per signer—forgers are provided the signature images of the clients to be forged and, after training with

them several times, they are asked to imitate the shape with natural dynamics, i.e., without breaks or slowdowns). The signature corpus is divided into training and test sets. In the case of skilled forgeries (SKILLED), the training set comprises the first 5 genuine signatures and the test set consists of the remaining samples (i.e., 100 ∗ 5 client and 100 ∗ 25 impostor similarity test scores, respectively). In the case of random forgeries (RANDOM) (i.e., impostors claiming others' identities using their own signatures), client similarity scores are as above and we use one signature of every other user as impostor data, so the number of impostor similarity scores is 100 ∗ 99. We use a feature vector consisting of n = 100 features extracted as proposed in Refs. [18,19]. For the verification task based on sole biometric data we adopt a Parzen window classifier, as proposed in Ref. [19].
• Fingerprints: The four fingerprint databases (Db1, Db2, Db3 and Db4) provided in FVC2002 and publicly available in the DVD included in Ref. [20] have been used for the experiments. We adopt the procedure named FingerCode [21] as the feature extraction algorithm, which requires determining a reference point and a region of interest for the fingerprint image, tessellating the region of interest around the reference point, filtering the region of interest in eight different directions using a bank of Gabor filters, and computing the average absolute deviation from the mean of the grey values in the individual sectors of the filtered images to define the feature vector. The result is a feature vector of a fixed size n = 640, which can be compared by the Euclidean distance. For the verification task based

on sole biometric data we adopt the method proposed in Ref. [22]. The reason for adopting an image-based approach instead of a minutiae-based one is related to our requirement of a fixed-length feature vector to calculate the BioHash code. In our verification stage, the comparison of two fingerprints must be based on the same core point. However, the comparison can only be done if both fingerprint images contain their respective core points, and 2 out of 8 impressions for each finger in FVC2002 [20] have an exaggerated displacement. In our experiments, as in Ref. [22], these two impressions were excluded, and hence there are only 6 impressions per finger, yielding 600 fingerprint images in total for each database.

For the performance evaluation we adopt the equal error rate (EER) [16]. EER is the error rate at which the frequency of fraudulent accesses (FAR, false acceptance rate) and the frequency of rejections of people who should be correctly verified (FRR, false rejection rate) assume the same value; it can be adopted as a unique measure for characterizing the security level of a biometric system. The tests reported in Tables 1–5 aim to compare the improved BioHashing method with the base BioHashing (BASE) and the simple verification method based on sole biometric data (BIO). We test three configurations of the improved method, where the NORMALIZATION is always present: with "VAR" we denote the simple τ VARIATION solution described in Section 4, and with "SPAUG" and "FEATPERM" the other two solutions of Section 4 coupled with τ VARIATION. Since the last two solutions are quite similar to each other and do not produce significant variations in performance, we did not test them together. The last


Table 1
EER obtained on the face data sets using the following parameters: m = 100, τmax = 0.1, τmin = −0.1, p = 5, k = 5, q = 5 (WORST hypothesis)

Face verification (WORST)
Methods        ORL-KL   ORL-DCT   YALE-KL   YALE-DCT
BIO            3        2.8       3.2       3.8
BASE           14       15        16        26
VAR            6.5      6         11        14
SPAUG          4.3      4.9       10        12.4
FEATPERM       4.2      5         10        12.7
BIO + SPAUG    2.4      3.5       4.5       6

Table 5
EER obtained on the fingerprint data sets using the following parameters: m = 100, τmax = 0.1, τmin = −0.1, p = 5, k = 5, q = 5 (WORST hypothesis)

Fingerprint verification (WORST)
Methods        DB1    DB2    DB3    DB4
BIO            5.5    5.2    18.3   8.4
BASE           15     15     27     20
VAR            13     12     27     15.5
SPAUG          11     10     25     14
FEATPERM       11     10     25.1   14.4
BIO + SPAUG    7      6.8    22     9.1

Table 2
EER obtained on the signature data set (considering skilled and random forgeries) using the following parameters: m = 100, τmax = 0.1, τmin = −0.1, p = 5, k = 5, q = 5 (BEST hypothesis)

Signature verification (BEST)
Methods        RANDOM   SKILLED
BIO            3        8.6
BASE           0.5      0.4
VAR            0.13     0.1
SPAUG          0        0
FEATPERM       0        0
BIO + SPAUG    0.5      1.2

Table 3
EER obtained on the signature data set (considering skilled and random forgeries) using the following parameters: m = 100, τmax = 0.1, τmin = −0.1, p = 5, k = 5, q = 5 (WORST hypothesis)

Signature verification (WORST)
Methods        RANDOM   SKILLED
BIO            3        8.6
BASE           2.5      14.2
VAR            2        12.2
SPAUG          1.7      12
FEATPERM       1.8      12.7
BIO + SPAUG    1.5      7.2

Table 4
EER obtained on the fingerprint data sets using the following parameters: m = 100, τmax = 0.1, τmin = −0.1, p = 5, k = 5, q = 5 (BEST hypothesis)

Fingerprint verification (BEST)
Methods        DB1    DB2    DB3    DB4
BIO            5.5    5.2    18.3   8.4
BASE           1      0.8    5.4    1.4
VAR            0.9    0.6    4.6    1.4
SPAUG          0.5    0.4    3      0.8
FEATPERM       0.5    0.4    3      1
BIO + SPAUG    1      1      5      1.5

configuration we tested (BIO + SPAUG) is a combination, by the SUM rule, of the method based solely on the biometric features and our best improved BioHashing. We perform experiments both under the best hypothesis (BEST), in which an impostor never steals the Hash key, and under the worst (WORST) and very unlikely hypothesis that an impostor always (in each match) steals the Hash key. Results are not reported for the best hypothesis on the face data sets since the EER of all the BioHashing methods (thus excluding BIO) for these data sets was always 0. The following tables refer to the face (Table 1), signature (Tables 2 and 3) and fingerprint (Tables 4 and 5) verification problems, respectively. Our improved BioHashing approach (in all the proposed variants, and in particular in the best configuration SPAUG) dramatically improves the performance of the base method in the worst case (when an impostor always steals the Hash key). Moreover, the fusion of the non-hashing verification approach and our SPAUG method (BIO + SPAUG) allows the performance of SPAUG to be further improved: in such a case the performance is only slightly lower than BIO in the worst case, with the advantage of a remarkable zero EER in the more likely hypothesis that the Hash key is not stolen. In order to confirm the benefit of our method, the DET curve has also been considered. The DET curve [23] is a two-dimensional measure of classification performance that plots the probability of false acceptance against the rate of false rejection. In Fig. 6 the DET curves obtained by the BASE, SPAUG, BIO and BIO + SPAUG methods on the FACE data sets are plotted. The graph shows that the BIO + SPAUG approach has a behaviour very similar to the BIO approach even in the worst case in which the Hash key is always stolen. SPAUG is the only method that reaches a zero EER on the signature data set under the best hypothesis; differently from the base BioHashing approach, our improved method SPAUG + BIO outperforms the simple verification method based on biometric data (BIO) also in the worst test case, when the Hash key is always stolen. Please note that an EER of 7.2 for the method SPAUG + BIO under the worst hypothesis and in the skilled test modality is a very good result, considering that it has been achieved in the most adverse case, in which an impostor steals the key and tries to imitate the signature.


Fig. 6. The DET curve: (a) ORL-DCT, (b) YALE-DCT (both in the WORST hypothesis) obtained by BASE (black line), by SPAUG (red line), by BIO (green line), and by BIO + SPAUG (blue line).
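For completeness, a small sketch (our own helper functions, not part of the paper) of how the EER reported in the tables and the points of a DET curve can be computed from sets of genuine and impostor scores; it assumes similarity scores, where higher means a better match.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR/FRR at one operating point, assuming similarity scores
    (a claim is accepted when its score is >= threshold)."""
    far = float(np.mean(np.asarray(impostor) >= threshold))   # fraudulent accesses accepted
    frr = float(np.mean(np.asarray(genuine) < threshold))     # genuine users rejected
    return far, frr

def det_points_and_eer(genuine, impostor):
    """Sweep the threshold over all observed scores: the (FAR, FRR) pairs are the
    DET curve points, and the EER is taken where FAR and FRR are closest."""
    thresholds = np.unique(np.concatenate([genuine, impostor]))
    points = np.array([far_frr(genuine, impostor, t) for t in thresholds])
    i = int(np.argmin(np.abs(points[:, 0] - points[:, 1])))
    eer = points[i].mean()          # average of FAR and FRR at the crossing point
    return points, eer
```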

Table 6
EER obtained on the fingerprint data sets using the following parameters: m = 100, τmax = 0.06, τmin = 0, p = 5, k = 5, q = 5 (BEST hypothesis)

Fingerprint verification (BEST)
Methods        DB1    DB2    DB3    DB4
BIO            5.5    5.2    18.3   8.4
BASE           1.5    1      6      2
VAR            0.6    0.5    4      1
SPAUG          0.4    0.2    2.2    0.5
FEATPERM       0.4    0.2    2.4    1
BIO + SPAUG    1      1      3      1

Fig. 7. EER obtained by the base BioHashing method varying the parameter τ: the test problem is fingerprint verification on the DB2 data set (WORST hypothesis).

The performance improvements obtained on the fingerprint data sets by the SPAUG and BIO + SPAUG methods with respect to BIO and BASE are lower than those obtained on the face data sets. Our error analysis revealed that the distribution of the biometric features extracted from the fingerprint data sets is such that the range of variation of the parameter τ requires a more accurate tuning. In Fig. 7 the performance of the BASE BioHashing approach as τ changes is reported. We argued that by shifting the range of variation of τ the performance should improve. Tables 6 and 7 report the EER obtained on the fingerprint data sets by our methods using the following parameter setting: m = 100, τmax = 0.06, τmin = 0, p = 5, k = 5, q = 5. These results show a slight improvement of the performance for the fusion method (BIO + SPAUG), which proves to be quite stable with respect to parameter variations, while

Table 7
EER obtained on the fingerprint data sets using the following parameters: m = 100, τmax = 0.06, τmin = 0, p = 5, k = 5, q = 5 (WORST hypothesis)

Fingerprint verification (WORST)
Methods        DB1    DB2    DB3    DB4
BIO            5.5    5.2    18.3   8.4
BASE           15     15.5   28     17
VAR            10     8.9    24     15
SPAUG          8      7.5    21     14
FEATPERM       8.2    8      21.5   15
BIO + SPAUG    7      6.5    19     8

a considerable improvement can be noted in the performance of our improved BioHashing methods (VAR, SPAUG, FEATPERM), even if the tuning of τmax and τmin is performed considering only the DB2 data set. In conclusion, our experiments show that:

• Our improved BioHashing approaches (in all the proposed variants, and in particular in the best configuration SPAUG) dramatically improve the performance of the base



Fig. 8. EER obtained by the improved BioHashing method (SPAUG) varying the parameters k, p, τmax (the other parameters have been fixed to our best values m = 100, τmax = 0.1, τmin = −τmax, p = 5, k = 5). The test problem is face verification on the ORL-DCT data set under the WORST hypothesis.

BioHashing method, mainly in the worst case in which an impostor always steals the Hash key.
• The fusion of the non-hashing verification approach and our SPAUG method (BIO + SPAUG) can be a good compromise: it reaches a nearly zero EER in the best and most probable hypothesis that no Hash key is stolen, and it improves the performance of a pure "BioHashing method" in the worst hypothesis of key stealing.

Other experiments have been conducted in order to evaluate the dependence of the methods on the parameters. In particular, having fixed m to the maximum allowed value (m = n), we evaluate the performance of our best BioHashing approach (SPAUG) as a function of k, p and τmax (fixing τmin = −τmax). The results, reported in Fig. 8, confirm that the performance tends to improve with k and p; however, our choice (k = 5 and p = 5) represents a good trade-off between computational time and performance. Furthermore, we selected the value of τmax which gives the lowest EER. Finally, the stability of the SPAUG approach has been evaluated by means of the Q-statistic, by evaluating the average Q-statistic among k = 15 classifiers obtained with the same parameter setting but a different Hash key. The comparison of this result (0.98) with that obtained by the test carried out for the base BioHashing (0.8, see Section 3) confirms the higher stability of our approach.

6. Conclusions

Biometrics has the great advantage of basing recognition on an intrinsic aspect of a human being and thus requiring the person to be authenticated to be physically present. Unfortunately, biometrics also suffers from some inherent limitations: high false rejection of valid users when the system works at a low false acceptance rate. We have proposed a modified BioHashing approach, based on several solutions for augmenting the length of the hash code, which gains a performance improvement also in the worst case in which an impostor always steals the Hash key. Moreover, we have shown that the fusion between our improved BioHashing and a method trained using "only" the biometric data allows the performance to be further improved.

As future work we plan to study some solutions to adapt the BioHashing approach to other biometric characteristics whose extracted features belong to an intrinsically high-dimensional space (e.g. iris). In such a case the pseudo-random vector generation and the BioHash code calculation can be very time consuming and thus unacceptable for a real-time application. The approach named FEATPERM can be a feasible solution for the first problem; the second problem will be the object of further investigation.

Acknowledgements

This work has been supported by the Italian PRIN prot. 2004098034 and by the European Commission IST-2002-507634 Biosecure NoE projects. The authors would like to thank J. Fierrez-Aguilar and J. Ortega-Garcia for sharing the SUBCORPUS-100 MCYT data set.

References

[1] A.T.B. Jin, D.N.C. Ling, A. Goh, Biohashing: two factor authentication featuring fingerprint data and tokenised random number, Pattern Recognition 37 (11) (2004) 2245–2255.
[2] A.T.B. Jin, D.N.C. Ling, Cancelable biometrics featuring with tokenised random number, Pattern Recognition Lett. 26 (10) (2005) 1454–1460.
[3] A.T.B. Jin, D.N.C. Ling, A. Goh, Personalised cryptographic key generation based on faceHashing, Comput. Secur. J. 23 (7) (2004) 606–614.
[4] T. Connie, A. Teoh, M. Goh, D. Ngo, PalmHashing: a novel approach for dual factor authentication, Pattern Anal. Appl. 7 (3) (2004) 255–268.
[5] Y.H. Pang, A.T.B. Jin, D.N.C. Ling, Cancelable palmprint authentication system, Int. J. Signal Process. 1 (2) (2005) 98–104.
[6] D. Maio, L. Nanni, MultiHashing, human authentication featuring biometrics data and tokenised random number: a case study FVC2004, Neurocomputing (2006), to appear.
[7] B. Kong, K. Cheung, D. Zhang, M. Kamel, J. You, An analysis of BioHashing and its variants, Pattern Recognition, to appear.
[8] L. Blum, M. Blum, M. Shub, A simple unpredictable pseudo-random number generator, SIAM J. Comput. 15 (1986) 364–383.
[9] G. Bologna, R.D. Appel, A comparison study on protein fold recognition, in: Proceedings of the Ninth International Conference on Neural Information Processing, Singapore, 2002, pp. 2492–2496.
[10] L. Nanni, A. Lumini, Ensemble of Parzen window classifiers for on-line signature verification, Neurocomputing 68 (2005) 217–224.
[11] L.I. Kuncheva, C.J. Whitaker, Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy, Mach. Learn. 51 (2003) 181–207.
[12] L. Nanni, Comparison among feature extraction methods for HIV-1 protease cleavage site prediction, Pattern Recognition 39 (4) (2006) 711–713.
[13] http://www.uk.research.att.com/facedatabase.html.
[14] P.N. Belhumeur, J.P. Hespanha, D.J. Kriegman, Eigenfaces vs. fisherfaces: recognition using class specific linear projection, IEEE Trans. Pattern Anal. Mach. Intell. 19 (7) (1997) 711–720.
[15] Z. Pan, A. Rust, H. Bolouri, Image redundancy reduction for neural network classification using discrete cosine transforms, in: Proceedings of the International Joint Conference on Neural Networks, vol. 3, Como, Italy, 24–27 July 2000, pp. 149–154.


[16] R.O. Duda, P.E. Hart, D.G. Stork, Pattern Classification, second ed., Wiley, New York, 2000.
[17] J. Ortega-Garcia, J. Fierrez-Aguilar, D. Simon, et al., MCYT baseline corpus: a bimodal biometric database, IEE Proc. Vision Image Signal Process. 150 (6) (2003) 395–401.
[18] J. Fierrez-Aguilar, L. Nanni, J. Lopez-Penalba, J. Ortega-Garcia, D. Maltoni, An on-line signature verification system based on fusion of local and global information, in: Audio- and Video-Based Biometric Person Authentication (AVBPA 2005), New York, USA, July 20–22, 2005.
[19] L. Nanni, A. Lumini, Ensemble of Parzen window classifiers for on-line signature verification, Neurocomputing 68 (2005) 217–224.
[20] D. Maltoni, D. Maio, A. Jain, S. Prabhakar, Handbook of Fingerprint Recognition, Springer, New York, 2003.
[21] A.K. Jain, S. Prabhakar, L. Hong, S. Pankanti, Filterbank-based fingerprint matching, IEEE Trans. Image Process. 9 (5) (2000) 846–859.
[22] L. Nanni, A. Lumini, A novel method for fingerprint verification that approaches the problem as a two-class pattern recognition problem, Neurocomputing (2006), to appear.
[23] A. Martin, et al., The DET curve in assessment of detection task performance, in: Proceedings of EuroSpeech, 1997, pp. 1895–1898.