G. AnnaPoorani et al. / International Journal of Engineering Science and Technology Vol. 2(6), 2010, 1492-1499

Accurate and Fast Iris Segmentation

G. ANNAPOORANI, R. KRISHNAMOORTHI, P. GIFTY JEYA, S. PETCHIAMMAL@SUDHA
Image Research and Information Science Laboratory, Department of Computer Science and Engineering, Bharathidasan Institute of Technology, Anna University Tiruchirappalli, India.

Abstract

A novel iris segmentation approach for noisy iris images is proposed in this paper. The proposed approach comprises specular reflection removal, pupil localization, iris localization and eyelid localization. A reflection map is computed to obtain the reflection ROI of the eye image using an adaptive threshold technique, and bilinear interpolation is used to fill the reflection points in the eye image. A variant of edge-based segmentation is adopted to detect the pupil boundary, and a gradient-based heuristic is devised to detect the iris boundary. Eyelid localization detects the eyelids using edge detection and curve fitting. The resulting feature sequence, combined in the spatial domain, segments the iris texture patterns properly. Empirical results show that the proposed approach is effective and well suited to noisy eye images.

Keywords: Iris segmentation, Specular reflection removal, Pupil localization, Iris localization, Eyelid localization.

1. Introduction

Biometrics identifies a human being by patterns unique to that individual. The uniqueness of the pattern is the advantage of biometrics, since the pattern is almost impossible for others to imitate. The main concern in biometrics is finding a pattern that uniquely identifies an individual; another is that the pattern must not be easily changed by natural causes such as temperature, climate or age. Biometric patterns currently in use include the fingerprint [1], the palm print [2] and the iris [3-5]. An advantage of the iris is that it is protected by the cornea and aqueous humor.
Iris segmentation is not a simple task under real-life conditions: the iris is partially occluded by eyelids and eyelashes, and specular reflections, pupil dilation, low contrast and unfocused images are common problems that must be solved before mass-scale deployment. Furthermore, the computational cost must remain low enough for the method to be practical for recognition. This paper addresses some of these problems by presenting a fast and robust approach that segments the iris region using suitable, low-cost techniques.

1.1 Related Works

In 1936, the ophthalmologist Frank Burch proposed the idea of using iris patterns for personal identification, although this was only documented by James Doggart in 1949. In 1968, Kronfeld [3] provided evidence of the particular characteristics of the iris. Although the general structure of the iris is genetically determined, the uniqueness of its minutiae depends on developmental circumstances; as a result, replication is almost impossible. It has also been noted that, after adolescence, the iris remains stable and varies little for the rest of a person's life. In 1987, the ophthalmologists Flom and Safir [4] obtained a patent for an unimplemented conceptual design of an automated iris biometrics system, and commissioned John Daugman to develop the fundamental algorithms. In 1990, the physiologist Davson [5] noted, in the course of clinical observations, that the irises of individuals are highly distinctive; this extends to the left and right eye of the same person. From the viewpoint of feature extraction, there are six main approaches to iris representation: phase-based methods [6-9], texture analysis [10-16], zero-crossing representation [17-20], intensity variation analysis [21-22], fractal dimension analysis [23] and neural networks [24].
Daugman [6-9] proposed an integro-differential operator for locating the circular iris and pupil regions, as well as the arcs of the upper and lower eyelids. He employed 2-D Gabor wavelets to extract the phase structure of the iris, generating a 2048-bit iris code, and compared pairs of iris representations by computing the Hamming distance. The approach concentrated on ensuring that repeated image captures

  ISSN: 0975-5462


produced irises at the same location within the image, had the same resolution, and were glare-free under fixed illumination. Daugman patented his iris recognition algorithms in 1994; they now form the basis of all current commercial iris recognition systems and are owned by Iridian Technologies. Wildes [10] adopted a two-step method to localize the iris: edge-map detection followed by edge-based segmentation. This segmentation approach is based on the fact that the pupil is typically darker than the iris and the iris darker than the sclera. He employed normalized correlation and Laplacian-of-Gaussian filters at different resolutions for pattern matching. Yong Zhu et al. [11] adopted a thresholding technique for the pupil boundary and a virtual-concentric-circle concept for the iris boundary; they devised an algorithm to extract global texture features using multichannel Gabor filtering and the wavelet transform, with a weighted Euclidean distance classifier for matching. Lim et al. [12] used a bisection method to find the pupil center point and virtual circles to find the pupil and iris boundaries. They used the Haar wavelet transform to extract features from iris images and a competitive learning method, with weight-vector initialization and winner selection, for classification; comparing the Gabor transform and the Haar wavelet transform, they showed that the recognition rate of the Haar wavelet transform is slightly better. To reduce the region for edge detection, Ma et al. [13] approximated the pupil position before edge detection, and proposed a texture analysis method with multichannel Gabor filtering to capture both global and local details of the iris. In [14], Bhola Ram Meena used a high-thresholding technique for pupil detection and the major intensity change for iris detection.
She developed five different algorithms for iris recognition, based on circular and radial features, the Fourier transform along the radial direction, circular Mellin filters, corner detection and local binary patterns, and also reported fusing the circular-Mellin and corner detection algorithms to extract features for iris pattern recognition. Vatsa et al. [15] used thresholding and Freeman's chain-code algorithm to detect the pupil and a linear contrast filter to detect the iris, with a 1-D log-polar Gabor wavelet and Euler numbers to extract texture features for iris pattern recognition. Nabti et al. [16] employed multiscale edge-based segmentation and proposed an iris recognition method based on a combination of multiscale feature extraction techniques, using special Gabor filters and wavelet maxima components; finally, a promising feature-vector representation using moment invariants is proposed. Boles and Boashash [17] extracted a set of one-dimensional signals from the iris image using the intensity values on a set of circular contours centered at the pupil center, obtained with edge detection techniques; the signals are then encoded by a zero-crossing transformation at different resolution levels for iris recognition. Sanchez-Avila and Sanchez-Reillo [18-19] adopted a histogram and an integro-differential operator for iris segmentation. In [18], they represented the iris feature by a multiscale analysis of the corresponding discrete dyadic wavelet transform zero-crossing representation. In [19], they used Gabor filters for iris-code generation and the Hamming distance for matching; in addition, they worked with the zero-crossing representation of the dyadic wavelet transform applied to two different iris signatures, one based on a single virtual circle of the iris, the other on an annular region.
They used different measures, such as the Euclidean distance and the Hamming distance, for matching. Donald M. Monro et al. [20] used edge-based segmentation for pupillary detection; they then scanned the horizontal line through the pupil center and found jumps in gray level on either side of the pupil to detect the iris boundary. They developed an iris feature extraction method based on the discrete cosine transform (DCT), applied to overlapping rectangular image patches rotated 45 degrees from the radial axis. The differences between the DCT coefficients of adjacent patch vectors are calculated and a binary code is generated from their crossings. To increase matching speed, the three most discriminating binarized DCT coefficients are kept and the remaining coefficients discarded. To reduce computational complexity, Ya-Ping Huang et al. [21] segmented the iris boundary in a rescaled image and the pupil boundary from the iris region; they adopted Independent Component Analysis to extract the iris pattern and a competitive learning mechanism for classification. To reduce the region for edge detection and the search space, Ma et al. [22] approximated the iris position before edge detection, and developed an iris recognition system that characterizes local sharp variations: a one-dimensional intensity signal is constructed and a particular class of wavelet is used, with the position sequence of local sharp variation points as the feature vector. In [23], Wen-Shiung Chen et al. segmented a proper red-component image with pure iris pattern information from the captured color image, and extracted iris features using fractal dimension analysis. Lye Liam et al. [24] proposed a thresholding technique that converts the captured grayscale image to binary; the threshold is calculated so as to join the pupil and iris together into one dark region. Based on the assumption that both components have


circular form, a ring mask is created and applied to the whole image to search for the iris/sclera border, with a self-organizing neural network used for iris pattern recognition. Ritter et al. [25] used active contour models to localize the pupil-iris border in eye images. Theodore A. Camus and Richard Wildes [26] proposed a multi-resolution approach to detect the boundaries of an eye in a close-up image. Libor Masek [27] proposed a method that begins with construction of a binary edge map, using the Kovesi edge detector. Proenca et al. [30] proposed a segmentation methodology that produces a clustered image, which is then used for edge-map construction. Xiaofu He et al. [31] adopted a three-step method to localize the iris: a geometrical method to locate the pupil, then edge-map detection and edge-based segmentation to find the limbic boundary. Jiali Cui et al. [28] proposed an iris localization algorithm based on the low-frequency information of the wavelet transform for pupil segmentation, and localized the iris with a differential integral operator. Bachoo and Tapamo [29] proposed eyelash detection using gray-level co-occurrence matrix (GLCM) pattern analysis: the GLCM is computed for 21 x 21 windows of the image using the 64 most significant gray levels, and a fuzzy C-means algorithm clusters the windows into two to five types (skin, eyelash, sclera, pupil and iris) based on GLCM features. Daugman [32] proposed an active contour model based on a Fourier series expansion for iris segmentation. He et al. [33] proposed an Adaboost-cascade learning and Pulling-and-Pushing method for iris segmentation.

1.2 Outline

This paper is organized as follows: in Section 2, we present the proposed iris segmentation technique.
In Section 3, we present empirical results for the proposed design. The last section presents conclusions and future work.

2. Iris Segmentation

This section presents the proposed iris segmentation method. Its steps are: (i) reflection removal using bilinear interpolation; (ii) pupil center and boundary localization using edge detection and distance measurements; (iii) iris boundary localization by selecting the largest gradient change to the left and right of the pupil; and (iv) eyelid localization using a rank filter and a histogram filter.

2.1 Specular Reflection Removal

To obtain proper illumination, most iris cameras use infrared illuminators, which introduce specular reflections into the iris images, as shown in Fig. 2. These reflections inevitably corrupt the structure of the iris. To address this problem, a novel reflection removal method is proposed.


Fig. 2. (a)-(c) Three sample iris images with different specular reflections.

Most iris databases contain specular reflections in the images. To perform robust iris segmentation, it is important to mitigate the effect of these reflections on the segmentation process; they are most harmful when they fall on the pupil and iris regions.


Fig. 3. Specular reflection removal (a) An original iris image occluded by reflections. (b) The reflection map and the envelope points. (c) The result of reflection removal.

To reduce reflection noise, the original image I(x,y) is divided into 8 x 8 blocks and the average of each block is computed. The mean of the five highest block averages is then taken as the adaptive threshold Tref, and I(x,y) is binarized with Tref to produce a binary reflection map R(x,y), as shown in Fig. 3b. To interpolate a reflection point P0(x0, y0), four envelope points {Pl, Pr, Pt, Pd} are defined; these are pixels surrounding the reflected area. The algorithm iterates through R(x,y) until it finds a white pixel (x, y), and then establishes the four envelope points as:

Pt = (x, y - 3),  Pl = (x - 3, y),  Pr = (x + 3, y),  Pd = (x, y + 3)

For each reflection point, the method interpolates over the area formed by the envelope points; the interpolated pixel I(P0) is defined as:

I(P0) = [I(Pl)(xr - x0) + I(Pr)(x0 - xl)] / [2(xr - xl)] + [I(Pt)(yd - y0) + I(Pd)(y0 - yt)] / [2(yd - yt)]
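The detection and filling steps above can be sketched in NumPy as follows. This is an illustrative implementation under our own assumptions: the function names, the handling of partial border blocks, and the skipping of points too close to the image edge are ours, not the paper's.

```python
import numpy as np

def reflection_map(img, block=8, k=5):
    """Binary reflection map via the paper's adaptive threshold: Tref is
    the mean of the k highest block-average values of the image."""
    h, w = img.shape
    means = [img[y:y + block, x:x + block].mean()
             for y in range(0, h - block + 1, block)
             for x in range(0, w - block + 1, block)]
    t_ref = float(np.mean(sorted(means, reverse=True)[:k]))
    return (img >= t_ref).astype(np.uint8)

def fill_reflections(img, refl, d=3):
    """Fill each flagged pixel by averaging a horizontal and a vertical
    linear interpolation between the four envelope points at offset d."""
    out = img.astype(float).copy()
    for y0, x0 in zip(*np.nonzero(refl)):
        xl, xr, yt, yd = x0 - d, x0 + d, y0 - d, y0 + d
        if xl < 0 or yt < 0 or xr >= img.shape[1] or yd >= img.shape[0]:
            continue  # naive border handling: skip points near the edge
        out[y0, x0] = ((out[y0, xl] * (xr - x0) + out[y0, xr] * (x0 - xl))
                       / (2.0 * (xr - xl))
                       + (out[yt, x0] * (yd - y0) + out[yd, x0] * (y0 - yt))
                       / (2.0 * (yd - yt)))
    return out
```

With the symmetric offset d the formula reduces to the mean of the four envelope values, which is why a bright spot surrounded by iris texture is replaced by iris-like intensities.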

The interpolated value is written into the original image to fill the reflection points, as shown in Fig. 3c.

2.2 Pupil Boundary Localization

The iris is the annular region between the pupil (inner boundary) and the sclera (outer boundary); both boundaries can be approximated as non-concentric circles. Finding the pupil is an important step because the iris is localized relative to it. The pupil has one important property: it is darker than the rest of the eye, even in brown or dark eyes. Based on this heuristic, a simple but effective method to find the pupil is used. Edge detection is performed with the Canny edge detector, which gives excellent results: it has a very low error rate and almost zero response to non-edges given an appropriate threshold. The algorithm uses horizontal and vertical gradients to deduce edges in the image. After running Canny edge detection, a circle is clearly present along the pupil boundary; Fig. 4a shows the edge-detected image. Next, the Euclidean distance from every non-zero point to the nearest zero-valued point is computed, yielding a distance map (Fig. 4b). This map measures the largest filled circle that can be formed within a set of pixels; since the pupil is the largest filled circle in the image, the map peaks inside it. Within the pupil, the exact center has the highest value, because the center is the one point inside the circle that is farthest from its edges. Thus the maximum of the map corresponds to the pupil center, and the value at that maximum equals the pupil radius. With this center and radius, the Hough transform is applied.
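The distance-map idea can be illustrated with a brute-force Euclidean distance transform. The helper name and the use of a filled binary pupil mask are our assumptions for the sketch; a real implementation would use a fast distance transform on the edge image.

```python
import numpy as np

def pupil_center_radius(mask):
    """For every foreground pixel, compute the Euclidean distance to the
    nearest background pixel (brute force, fine for a sketch). The map's
    maximum sits at the centre of the largest inscribed circle and its
    value is that circle's radius -- here, the pupil centre and radius."""
    fg = np.argwhere(mask != 0)
    bg = np.argwhere(mask == 0)
    best_d, best_p = -1.0, None
    for p in fg:
        d = np.sqrt(((bg - p) ** 2).sum(axis=1)).min()  # dist to nearest background
        if d > best_d:
            best_d, best_p = d, (int(p[0]), int(p[1]))
    return best_p, float(best_d)
```

On a synthetic disc the maximum lands on the disc centre and its value matches the radius to within a pixel, which is exactly the property the text exploits.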
The Hough transform is a high-cost method for finding circles in an image, and it would be prohibitive to apply it to the entire image; providing the estimated pupil center and radius dramatically reduces the computational cost. A 5 x 5 pixel region around the estimated center is established in which the Hough transform looks for the circle center, because the pupil is not always a perfect circle and the estimated center is sometimes not the exact center. Finally, the boundary is drawn on the original image from the center and radius found by the Hough transform, using the midpoint circle drawing technique. Fig. 4c shows the localized pupil boundary.
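The drawing step uses the classic midpoint circle algorithm; a textbook sketch (not the authors' code) that generates the integer boundary pixels from a centre and radius:

```python
def midpoint_circle(cx, cy, r):
    """Integer boundary pixels of the circle of radius r centred at
    (cx, cy), via the midpoint circle algorithm."""
    pts = set()
    x, y, d = r, 0, 1 - r
    while x >= y:
        # mirror the computed octant point into all eight octants
        for dx, dy in ((x, y), (y, x), (-y, x), (-x, y),
                       (-x, -y), (-y, -x), (y, -x), (x, -y)):
            pts.add((cx + dx, cy + dy))
        y += 1
        if d < 0:
            d += 2 * y + 1
        else:
            x -= 1
            d += 2 * (y - x) + 1
    return pts
```

Because the algorithm uses only integer additions per step, drawing the boundary is negligible next to the Hough search.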


Fig. 4. Pupil boundary and center localization (a) Image after Canny edge detection. (b) Image after computing the minimal Euclidean distance to a non-white pixel. (c) The computed pupil center and boundary, highlighted.

2.3 Iris Boundary Localization

The next step is to segment the outer iris boundary, a more complicated task than pupil segmentation. Since the top and bottom of the iris are often covered by eyelashes, the iris boundary is sought along the horizontal line starting from the pupil-iris boundary. Starting from the pupil center (c1, c2), two rectangular regions Wr and Wl, to the right and left of the pupil, are defined in which to look for jumps in gray level.


Wr and Wl are rectangles defined by two coordinates in the original image. As shown in Fig. 5, each region captures the pupil-iris and iris-sclera gradient changes [35], although the iris pattern introduces some noise that can confuse the gradient locations.

Fig. 5 shows the original image with the projected right region (in white). Zone A is the pupil area, with a dramatic gradient change (the pupil-iris gradient); Zone B contains the lighter pixels of the iris pattern; Zone C is the gradient change between iris and sclera, followed by the much whiter pixels of the sclera in Zone D.
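The largest-gradient search on a horizontal scan line can be sketched as below. The box-smoothing window is our addition, meant to suppress the iris-texture noise the text mentions; the function name and parameter are illustrative.

```python
import numpy as np

def iris_edge(profile, smooth=5):
    """Given a 1D intensity profile scanned outward from the pupil
    boundary (Zones B-D of Fig. 5), box-smooth it to suppress iris
    texture, then return the index of the largest positive gradient,
    i.e. the dark-iris to bright-sclera transition."""
    p = np.convolve(profile, np.ones(smooth) / smooth, mode="valid")
    g = np.diff(p)  # first-order gradient along the scan line
    return int(np.argmax(g)) + smooth // 2  # compensate smoothing offset
```

Running the same scan to the left and right of the pupil gives the two boundary points from which the iris radius is estimated.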


Fig. 6. Iris boundary localization (a) sample CASIA image (b) iris and pupil boundary

The heuristic is that there are only two important gradients in the region (pupil-iris and iris-sclera), with pupil pixels the darkest, iris pixels intermediate and sclera pixels the brightest; we can therefore look for the second gradient and take its position as the estimated iris radius. The left and right iris boundaries are found by selecting the largest gradient change to the left and right of the pupil. With this radius estimate, the Hough transform is performed inside the pupillary area in order to reduce the number of pixels allowed to vote for the iris boundary. Fig. 6b shows the localized iris boundary.

2.4 Eyelash Removal and Eyelid Localization

An even harder problem in iris segmentation is locating the upper and lower eyelids. The shape of the eyelids is so irregular that it cannot be fitted under simple shape assumptions, and the upper eyelid tends to be partially covered by eyelashes, making localization more difficult. Fortunately, these problems can be addressed by a 1D rank filter and a histogram filter: the rank filter removes the eyelashes, while the histogram filter handles the shape irregularity. The steps of the proposed eyelid localization method are depicted in Fig. 7.
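A compact sketch of this pipeline is given below, with illustrative parameters of our own choosing; the paper's histogram filter is simplified here to a per-column strongest-edge pick before the parabolic fit.

```python
import numpy as np

def localize_eyelid(roi, rank=3, win=7):
    """(1) 1D horizontal rank filter to suppress thin vertical
    eyelashes; (2) one edge point per column, taken at the strongest
    vertical gradient; (3) least-squares parabola y = a*x^2 + b*x + c
    through those points. Returns the parabola coefficients."""
    h, w = roi.shape
    ranked = roi.astype(float).copy()
    half = win // 2
    for y in range(h):
        for x in range(half, w - half):
            window = np.sort(roi[y, x - half:x + half + 1])
            ranked[y, x] = window[rank]  # rank-p order statistic (median here)
    grad = np.abs(np.diff(ranked, axis=0))  # vertical gradient magnitude
    ys = grad.argmax(axis=0)                # one edge point per column
    a, b, c = np.polyfit(np.arange(w), ys, 2)
    return a, b, c
```

A one-pixel-wide vertical "eyelash" is entirely absorbed by the rank filter (one dark value among seven bright ones never reaches the median), so the fitted curve tracks only the eyelid boundary.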

Fig. 7. The steps involved in eyelid localization.

First, IROI is filtered with a 1D horizontal rank filter. Observing that eyelashes are mostly thin, dark, vertical lines, IROI is horizontally filtered with a 1D rank filter. A rank (or order-statistic) filter is a nonlinear filter whose response is based on ranking (ordering) the pixels in the image area covered by the filter; for example, a rank-p filter replaces the value of a pixel by the p-th gray level in the neighborhood encompassed by the filter. After horizontal rank filtering, most eyelashes are weakened or even eliminated in the resulting image Iranked, depending on their width, leaving a clearer eyelid boundary. Second, a raw eyelid edge map is calculated: edge detection is performed on the upper region of Iranked along the vertical direction, producing a raw edge map Eraw, and only one edge point is kept per column so that most noisy edges are avoided. Third, the eyelid is fitted with a parabolic curve, giving its exact shape. A similar procedure is repeated to locate the lower eyelid.

3. Experimental Results

Experiments were carried out to evaluate the efficiency of the proposed method on two challenging iris image databases: CASIA v1.0 and CASIA-IrisV3-Lamp. The CASIA-IrisV3-Lamp database, released by the Institute of Automation, Chinese Academy of Sciences, contains 16,213 iris images from 819 eyes, all from Chinese subjects. It was collected in an indoor environment with illumination changes and contains many poor images with heavy occlusion, poor contrast, pupil deformation, etc. To the best of our knowledge, this is the largest iris image database in the public domain. The


images from CASIA-IrisV3-Lamp are 8-bit intensity images with a resolution of 640 x 480. The input images are first treated for reflection removal as described in Section 2.1. Specular reflections inevitably corrupt the structure of the iris, especially when the user wears glasses, so they must be removed first to preserve the iris structure. An interesting property of our method is its insensitivity to the binary threshold Tref: even when some non-reflection points (e.g., a bright eyelid region due to oversaturation) are detected, little or no harm is done to the iris structure, because their envelope points are also bright. This property allows a loose value to be chosen for Tref. Specular reflection removal can be performed before pupil center estimation to improve the accuracy of the estimated center.


Fig. 8. (a) CASIA iris image with reflections (b) reflection-removed image (c) pupil localization (d) iris boundary localization (e) curve fitting (f) segmented output

As shown in Fig. 8b, manual evaluation confirms that specular reflections are successfully detected and properly filled; the iris structure is preserved even though several non-reflection points are mistakenly filled as reflection points. The pupil boundary is localized as described in Section 2.2 (Fig. 8c). In previous work by Wildes [10], the pupil boundary is localized via edge detection followed by the Hough transform. After edge detection, some extraneous data appear along with the pupil circle, as shown in Fig. 4a; in our method this extraneous data is reduced by dilating the detected edges. Next, the Euclidean distance from every non-zero point to the nearest zero-valued point is computed, yielding a distance map that reveals the largest filled circle within the set of pixels. In this work, the Hough search space is reduced by restricting the search to a small region, so the computational cost is low. The iris boundary is localized by the gradient-based approach of Section 2.3: the left and right boundaries of the iris are found by selecting the largest gradient change to the left and right of the pupil. The output of iris boundary localization is shown in Fig. 8d. The proposed method outperforms Canny edge detection followed by the Hough transform, because the Hough transform requires an exhaustive search over a large N^3 parameter space, which makes the procedure time-consuming even with a coarse-to-fine strategy. Moreover, the Hough transform assumes a circle model, so only points lying on a circle are taken into account, which leads to local maxima for non-circular iris boundaries. In our method, the gradient is computed at every edge point, whether or not it lies on a circle, which ensures a more global solution while also reducing the search space.
The Hough transform is performed inside the pupillary area to reduce the number of pixels allowed to vote for the iris boundary. The iris-sclera boundary is the least successfully segmented area, owing to the extremely low contrast between iris and sclera, especially in light-colored eyes. Even in these cases the algorithm performs well, segmenting a smaller area of the iris while still leaving enough information for feature extraction. Eyelash removal and eyelid segmentation are performed with the rank filter followed by parabolic curve fitting, as described in Section 2.4. This is a much harder problem, and several considerations must be taken into account to obtain an acceptable segmentation; parabolic fitting is robust enough for some eyes (especially in older people).


Fig. 9. Segmentation results of different algorithms: (a) circle fitting and eyelid localization by Daugman's integro-differential operator; (b) results of Wildes's Hough transform; (c) results of the proposed method.


In terms of eyelid localization accuracy, Figs. 9a, 9b and 9c show that all three methods achieve almost the same result; one reason is that the example iris image has a relatively simple eyelid. On more challenging eyelids, the proposed method outperforms Daugman's integro-differential operator [9] and Wildes's Hough transform [10]: when localizing eyelids, the integro-differential operator tends to be sensitive to local intensity changes, while the Hough transform can be brittle to noisy edge points (e.g., those due to eyelashes). These drawbacks often lead both methods to inferior optima. In contrast, under the proposed framework the 1D rank filter removes most of the eyelash noise and the histogram filter handles the shape irregularity well, which together ensure accurate eyelid localization. Using the pupil, iris and eyelid boundary points, the region between these boundaries is displayed and the remaining region is masked; as shown in Fig. 8f, a noise-free iris is segmented from the eye image.

4. Conclusions

In this paper, an accurate and fast iris segmentation method for iris biometrics is proposed, combining several techniques to achieve fast and robust segmentation. There are four major contributions. First, a novel reflection removal method excludes specularities from the input images: four envelope points are calculated to bilinearly interpolate the specular reflections, paving the way for efficient iris detection. Second, edge detection is performed with the Canny edge detector and the Euclidean distance from every non-zero point to the nearest zero-valued point is computed.
The pupil center is identified as the maximum of this distance map, the Hough transform is then applied to the pupil region, and the pupil boundary is localized from the center and radius found by the Hough transform. Third, the iris is localized by selecting the largest gradient to the left and right of the pupil. Fourth, an efficient method to localize the eyelids is developed: the rank filter eliminates most of the eyelashes, while the histogram filter deals well with the shape irregularity. All of this is done at very low computational cost, with excellent results.

References

[1] X. P. Luo, J. Jain, "Knowledge based fingerprint image enhancement", Proc. International Conference on Pattern Recognition (ICPR), Barcelona, Spain, vol. 4, pp. 783-786, September 2000.
[2] D. Zhang, W. K. Kong, J. You and M. Wong, "Online palmprint identification", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, pp. 1041-1050, September 2003.
[3] P. Kronfeld, "The gross embryology of the eye", The Eye, vol. 1, pp. 1-66, 1968.
[4] L. Flom and A. Safir, "Iris Recognition System", U.S. Patent 4,641,349, 1987.
[5] H. Davson, "Davson's Physiology of the Eye", MacMillan, London, 1990.
[6] J. Daugman, "High confidence visual recognition of persons by a test of statistical independence", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15, no. 11, pp. 1148-1161, 1993.
[7] J. Daugman, "Iris recognition", American Scientist, vol. 89, pp. 326-333, July-August 2001.
[8] J. Daugman, "Demodulation by complex-valued wavelets for stochastic pattern recognition", International Journal of Wavelets, Multiresolution and Information Processing, vol. 1, no. 1, pp. 1-17, 2003.
[9] J. Daugman, "How iris recognition works", IEEE Transactions on Circuits and Systems for Video Technology, vol. 14, no. 1, pp. 21-30, 2004.
[10] R. P. Wildes, "Iris recognition: an emerging biometric technology", Proceedings of the IEEE, vol. 85, no. 9, pp. 1348-1363, 1997.
[11] Yong Zhu, Tieniu Tan and Yunhong Wang, "Biometric personal identification based on iris patterns", Proceedings of the 15th International Conference on Pattern Recognition, vol. 2, pp. 805-808, 2000.
[12] S. Lim, K. Lee, O. Byeon and T. Kim, "Efficient iris recognition through improvement of feature vector and classifier", ETRI Journal, vol. 23, no. 2, pp. 61-70, 2001.
[13] L. Ma, T. Tan, Y. Wang and D. Zhang, "Personal identification based on iris texture analysis", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 12, pp. 1519-1533, 2003.
[14] Bhola Ram Meena, "Personal Identification based on Iris Patterns", Ph.D. thesis, Department of Computer Science and Engineering, Indian Institute of Technology, Kanpur, 2004.
[15] M. Vatsa, R. Singh and A. Noore, "Reducing the false rejection rate of iris recognition using textural and topological features", International Journal of Signal Processing, vol. 2, no. 2, pp. 66-72, 2005.
[16] M. Nabti and A. Bouridane, "An effective and fast iris recognition system based on a combined multiscale feature extraction technique", Pattern Recognition, vol. 41, pp. 868-879, 2008.
[17] W. W. Boles and B. Boashash, "A human identification technique using images of the iris and wavelet transform", IEEE Transactions on Signal Processing, vol. 46, no. 4, pp. 1185-1188, 1998.
[18] C. Sanchez-Avila and R. Sanchez-Reillo, "Multiscale analysis for iris biometrics", Proceedings of the 36th International Conference on Security Technology, vol. 1, pp. 35-38, 2002.
[19] C. Sanchez-Avila and R. Sanchez-Reillo, "Two different approaches for iris recognition using Gabor filters and multiscale zero-crossing representation", Pattern Recognition, vol. 38, pp. 231-240, 2005.
[20] Donald M. Monro, Soumyadip Rakshit and Dexin Zhang, "DCT-based iris recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 4, pp. 586-595, 2007.
[21] Ya-Ping Huang, Si-Wei Luo and En-Yi Chen, "An efficient iris recognition system", Proceedings of the First International Conference on Machine Learning and Cybernetics, Beijing, pp. 450-454, November 2002.


[22] L. Ma, T. Tan, Y. Wang and D. Zhang, "Efficient iris recognition by characterizing key local variations", IEEE Transactions on Image Processing, vol. 13, no. 6, pp. 739-750, 2004.
[23] Wen-Shiung Chen and Shang-Yuan Yuan, "A novel personal biometric authentication technique using human iris based on fractal dimension features", Proceedings of ICASSP, vol. 3, pp. 201-204, 2003.
[24] Lye Liam, Ali Chekima, Liau Fan and Jamal Dargham, "Iris recognition using self-organizing neural network", Proceedings of the IEEE Student Conference on Research and Development, pp. 169-172, 2002.
[25] N. Ritter, J. Cooper, R. Owens and P. P. Van Saarloos, "Location of the pupil-iris border in slit-lamp images of the cornea", Proceedings of the International Conference on Image Analysis and Processing, 1999.
[26] Theodore A. Camus and Richard Wildes, "Reliable and fast eye finding in close-up images", Proceedings of the IEEE International Conference on Pattern Recognition, 2002.
[27] Libor Masek, "Recognition of human iris patterns for biometric identification", http://www.csse.uwa.au/~pk/studentprojects/libor, November 2003.
[28] Jiali Cui, Yunhong Wang, Tieniu Tan, Li Ma and Zhenan Sun, "A fast and robust iris localization method based on texture segmentation", Biometric Technology for Human Identification, Proceedings of SPIE, vol. 5404, pp. 401-408, 2004.
[29] Asheer Kasar Bachoo and Jules-Raymond Tapamo, "Texture detection for segmentation of iris images", Proceedings of the Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists, pp. 236-243, 2005.
[30] H. Proenca and L. A. Alexandre, "Iris segmentation methodology for non-cooperative recognition", IEE Proceedings - Vision, Image and Signal Processing, vol. 153, no. 2, pp. 199-205, April 2006.
[31] Xiaofu He and Pengfei Shi, "A new segmentation approach for iris recognition based on hand-held capture device", Pattern Recognition, vol. 40, pp. 1326-1333, 2007.
[32] J. Daugman, "New methods in iris recognition", IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, vol. 37, no. 5, pp. 1167-1175, October 2007.
[33] Zhaofeng He, Tieniu Tan, Zhenan Sun and Xianchao Qiu, "Towards accurate and fast iris segmentation for iris biometrics", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 9, pp. 1670-1684, 2009.
