Benchmarking Asymmetric 3D-2D Face Recognition Systems

Xi Zhao, Wuming Zhang, Georgios Evangelopoulos, Di Huang, Shishir K. Shah, Yunhong Wang, Ioannis A. Kakadiaris and Liming Chen

Abstract— Asymmetric 3D-2D face recognition (FR) aims to recognize individuals from 2D face images using textured 3D face models in the gallery (or vice versa). This new FR scenario has the potential to be readily deployable in field applications while keeping the advantage of 3D FR solutions of being more robust to pose and lighting variations. In this paper, we propose a new experimental protocol based on the UHDB11 dataset for benchmarking 3D-2D FR algorithms. This protocol allows for the study of the performance of a 3D-2D FR solution under pose and/or lighting variations. Furthermore, we also benchmark two state of the art 3D-2D FR algorithms: one is based on the Annotated Deformable Model and uses manually labeled landmarks, whereas the other makes use of Oriented Gradient Maps along with automatic pose estimation through random forests.

I. INTRODUCTION

Face recognition (FR) has wide application potential and poses scientific challenges, and it has thus become a popular topic in both the computer vision and biometrics communities. Recognition based on two-dimensional facial images, the typical input in FR [19], faces many challenges, including variations in illumination, pose, facial expression, occlusion, and image resolution. Recently, 3D-based FR systems have demonstrated their superiority in handling these problems [5], especially illumination and pose variations. However, these approaches are not directly deployable due to the difficulty and cost of deploying 3D acquisition devices. More recently, asymmetric 3D-2D FR has gained increasing interest. In this scenario, enrollment is performed using textured 3D faces whereas identification is performed using only 2D facial images (or vice versa). The benefits of asymmetric FR lie in two aspects. First, a 3D scanner is not required during identification, which avoids deploying high-cost equipment in the field. Second, 2D facial signatures can gain robustness and discriminative power through the use of 3D models [13][23]. This paper presents a novel experimental protocol on the UHDB11 dataset [24] for benchmarking 3D-2D FR solutions. It allows for the study of the behavior of a 3D-2D algorithm under illumination and/or pose variations. Using the proposed protocol, we further evaluated two state of the art 3D-2D algorithms, which we obtained from the Computational Biomedicine Laboratory and the Laboratoire d'InfoRmatique en Image et Systèmes d'information (LIRIS) groups.

X. Zhao, G. Evangelopoulos, S. K. Shah, and I. A. Kakadiaris are with the Computational Biomedicine Lab, Department of Computer Science, University of Houston, Houston, TX 77204-3010, USA. W. Zhang and L. Chen are with the LIRIS Lab, Ecole Centrale de Lyon, 69134 Cedex, France. D. Huang and Y. Wang are with the Laboratory of Intelligent Recognition and Image Processing, School of Computer Science and Engineering, Beihang University, Beijing, China.

The first 3D-2D framework, named UR2D, is based on modality synergy, in which a 3D model is used for registration, alignment, and pose and light normalization of 2D image and texture data [13]. Specifically, for each gallery entry, a subject-specific, non-parametric 3D facial model is built by fitting an Annotated Face Model (AFM) [8] to the raw 3D data. The texture corresponding to the 3D model is warped from the 2D texture image into the 2D UV space using the UV surface parametrization of the AFM. In the recognition phase, point-landmark correspondences are first used to estimate the 3D-2D projection transformation between a fitted model and the input texture. Then a pose-normalized texture image is generated in UV space with the aid of the model. Both probe and gallery textures, lifted using the same 3D model, are normalized with respect to their illumination using a bidirectional relighting approach [1]. Finally, the gradient orientation correlation based matching score [14] between the relit gallery and probe texture pair is computed.

Instead of lifting both gallery and probe textures into a canonical 2D UV space, the second method aligns the pose for each pair of gallery and probe sets. It first estimates the face pose using a random regression forest, and then rotates the 3D face models in the gallery set to the estimated probe pose [20]. A set of Oriented Gradient Maps [21] is generated as distinctiveness-enhanced intermediate facial descriptions, and their LBP histogram-based features are then fed into a Sparse Representation Classifier (SRC) [16].

The paper is organized as follows: Section II overviews related work. The two methods are detailed in Section III. Section IV describes the novel experimental protocol and presents the experimental results and comparison. Section V concludes the paper.

II. LITERATURE REVIEW

The literature in 3D and 2D+3D FR has rapidly increased in recent years [2]. A survey of methods that combine the two modalities is provided by Bowyer et al. [5]. However, compared to the large number of investigations within the field of 3D FR, only a few works in the literature have addressed the problem of asymmetric 3D-2D FR. Yin and Yourst [18] constructed shape models from frontal and profile 2D images to conduct 3D FR. Rama et al. [10] presented Partial Principal Component Analysis (P2CA) for feature extraction and dimensionality reduction on both the cylindrical texture representation (3D) in the gallery and 2D images in the probe. In their work, pose estimation and 2D FR are performed simultaneously with the aid of 3D data.

Riccio and Dugelay [11] established a correspondence between the 3D gallery face and the 2D probe using geometric invariants on the face. However, several control points are employed during this process, which requires an accurate landmarking method. Yang et al. [17] implemented a patch-based Kernel CCA to learn the mapping between facial range and texture images in the gallery and probe sets, respectively. However, their gallery set only contained shape information, and the original intensity and depth information (pixel values in the facial range and texture images) cannot comprehensively describe variations of facial appearance. Blanz and Vetter [4] introduced the 3D morphable model and fit this deformable face model to 2D images by minimizing the residual of the image difference. This framework was adopted in various methods for 2D FR under pose and illumination variations based on 3D models [12]. Wang et al. [15] used a spherical harmonic representation [3] to extend this model so that the illumination difference between 3D face texture and 2D face texture can be aligned. Morphable model based methods use a 2D image to synthesize its 3D model by deforming the 3D statistical model with prior variation knowledge learned from training data. This is different from the UR2D system, which uses 2D+3D data to build a 3D subject-specific model for the gallery. In contrast, methods for asymmetric recognition learn a mapping between 3D and 2D data. Huang et al. proposed an asymmetric 3D-2D FR approach [23] which performs not only 2D-2D matching between 2D facial texture images in the gallery and probe sets, but also 3D-2D matching between facial depth maps (3D information) and facial texture images in the gallery and probe sets, respectively. Both similarity measurements are further combined at the score level for final decision making. They further enhanced their method with a preprocessing pipeline for illumination normalization and pose correction [20], as well as an Oriented Gradient Maps (OGMs) based facial representation [21], and reported a rank-one recognition rate of 95.4% on the entire FRGC v2.0 database, which is better than that of 2D FR techniques and not far behind the best state of the art performance in 3D FR on FRGC v2.0. Toderici et al. [13] used an AFM-based [8] asymmetric FR pipeline which is capable of handling images that have variations in head pose and illumination.

All these 3D-2D algorithms need to be benchmarked and compared for further improvement. While FRGC v2.0 offers such a possibility with a significant gallery size, it only contains nearly frontal 3D face models with rather mild illumination changes. This paper fills this gap and proposes a novel experimental protocol using UHDB11 for the evaluation of 3D-2D FR algorithms under lighting and/or pose changes.

III. METHODS

In 3D-2D FR, 3D face data are 3D shape meshes (triangulated 3D surface points) along with the associated 2D textures.

Two-dimensional facial images generally contain an unaligned face of unknown subject identity. In this section, two approaches that tackle the asymmetric facial data in gallery and probe from different perspectives are presented. The main idea of the UR2D pipeline is the registration of the 3D mesh's texture and a 2D image into a two-dimensional, pose- and illumination-normalized UV space with the aid of the 3D shape. The LIRIS pipeline, on the other hand, computes the correspondence between the OGM response of the rotated gallery model and the OGM response of the 2D texture.

A. UR2D Pipeline

Enrollment creates the metadata for each gallery subject. For the 2D+3D enrollment, each 3D mesh is first fitted with the AFM so that a set of 3D landmarks is detected and a UV parametrization is computed to build a point-to-point correspondence between the 3D coordinate system and the 2D texture. The enrollment process is depicted in Alg. 1.

Algorithm 1 Enrollment with 3D+2D data (UR2D-E)
Input: 3D facial mesh, 2D facial image, 3D model (AFM).
Output: fitted mesh, texture image, visibility map.
1. Pre-process the 3D facial mesh.
2. Locate 3D landmarks on the raw mesh data.
3. Register (align) the AFM to the 3D facial mesh.
4. Fit the AFM to the 3D facial mesh.
5. Lift texture from the 2D facial image using the fitted AFM and the estimated 3D-2D projection.
6. Store the fitted AFM, alignment transform, 3D landmarks, lifted texture, and visibility map as metadata.
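For concreteness, a minimal Python sketch of the texture lifting in step 5 of Alg. 1 is given below. It assumes inputs that the paper does not expose explicitly: uv_to_xyz, a hypothetical precomputed map from each UV grid cell to the 3D point interpolated on the fitted AFM, and projection, a stand-in for the estimated 3D-2D projection. Visibility is approximated here by image bounds only; the actual pipeline also handles self-occlusion.

```python
import numpy as np

def lift_texture(uv_to_xyz, projection, image):
    """Warp image colors into the UV (geometry image) space of a fitted AFM.

    uv_to_xyz  : (H, W, 3) array; 3D point on the fitted mesh for each UV cell.
    projection : callable mapping (N, 3) 3D points to (N, 2) image pixels (x, y).
    """
    h, w, _ = uv_to_xyz.shape
    pts3d = uv_to_xyz.reshape(-1, 3)
    pix = np.rint(projection(pts3d)).astype(int)        # project mesh points
    texture = np.zeros((h * w, 3), dtype=image.dtype)
    visible = ((pix[:, 0] >= 0) & (pix[:, 0] < image.shape[1]) &
               (pix[:, 1] >= 0) & (pix[:, 1] < image.shape[0]))
    texture[visible] = image[pix[visible, 1], pix[visible, 0]]  # sample colors
    return texture.reshape(h, w, 3), visible.reshape(h, w)      # texture + mask
```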

During verification or identification (Alg. 2), the input to the pipeline is a 2D image. Nine facial landmarks, which correspond to the nine 3D landmarks stored for each gallery dataset, are located on the image. Using these two sets of corresponding landmarks, the facial pose and camera parameters are estimated based on a full perspective projection model. The intermediate results are the probe image texture mapped onto the 2D UV space, along with a visibility map under the estimated face pose and projection. In order to match the illumination of the lifted probe texture, a bidirectional reflectance distribution function (BRDF) based relighting scheme is used to align the gallery texture to the lighting condition of the probe texture using the fitted mesh [1]. Finally, a global, correlation-based similarity metric is extracted from local gradient orientations of the pose- and light-normalized textures [14]. As depicted in Fig. 1, the system contains six major modules: face detection, AFM fitting, pose estimation, texture lifting, illumination normalization, and score computation. The face detection and landmarking module detects the face ROI and landmarks on the probe images.

Fig. 1. Flowchart of the UR2D pipeline.

Algorithm 2 Recognition using 2D images (UR2D-R)
Input: 2D facial image and candidate gallery subject ID.
Output: similarity score.
1. Retrieve the candidate fitted AFM data from the gallery.
2. Locate nine reference landmarks on the 2D facial image.
3. Register the AFM to the 2D facial image using 3D-2D landmark correspondences (pose estimation).
4. Lift texture from the 2D image using the 3D-2D projection of the fitted AFM.
5. Bidirectionally relight the gallery texture to match the probe 2D facial texture.
6. Compute the gradient orientation correlation between the relit gallery and probe textures.
7. Threshold the score for a verification decision, or retrieve the closest gallery match for identification.

The PittPatt face detector [9] is used to detect the face ROI and obtain a coarse estimate of the head pose. A set of nine landmarks is then located based on polygonal sub-shapes and aggregated fern regression [6].

The AFM fitting module performs a non-rigid registration of a 3D face scan to the generic AFM model through a continuous, global UV parametrization. The model parametrization has the properties of being injective and area-preserving. Through the UV parametrization from the 3D mesh to the geometry image plane, any 3D surface point is associated with a location in the 2D UV space. More importantly, any mesh fitted to the AFM model inherits this predefined parametrization and, by using the same reference 2D triangulation, the same geometry image grid.

The pose estimation module takes the set of landmarks on a 3D gallery mesh and the set of landmarks on a probe image as input. It estimates the projection matrix and camera parameters by minimizing the distances between the 2D landmarks and the projections of the 3D landmarks using the Levenberg-Marquardt (LM) algorithm [13].
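As a sketch of this step, the snippet below fits a simplified pinhole camera (axis-angle rotation, translation, single focal length) to the nine landmark correspondences with SciPy's Levenberg-Marquardt solver. The parametrization and initial values are illustrative; the actual module estimates a full perspective projection matrix and camera parameters.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, pts3d):
    # params = [3 axis-angle rotation, 3 translation, 1 focal length]
    rvec, tvec, f = params[:3], params[3:6], params[6]
    cam = Rotation.from_rotvec(rvec).apply(pts3d) + tvec
    return f * cam[:, :2] / cam[:, 2:3]          # pinhole perspective division

def estimate_pose(landmarks3d, landmarks2d):
    """LM fit of projection parameters to 3D-2D landmark correspondences."""
    def residuals(p):
        return (project(p, landmarks3d) - landmarks2d).ravel()
    x0 = np.zeros(7)
    x0[5], x0[6] = 2000.0, 1000.0                # camera in front of face (assumed units)
    return least_squares(residuals, x0, method="lm").x
```

With nine landmarks there are 18 residuals for 7 parameters, so the fit is well over-determined.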

The texture lifting module takes the 3D-2D alignment parameters (projection matrix and camera parameters), the gallery mesh, and the probe image as input, and projects the mesh onto the image using the alignment parameters. It then takes advantage of the predefined point-to-point correspondence between the 3D mesh and the 2D UV space to warp the texture from the image into the UV space. In the UV texture, each pixel is associated with a 3D coordinate and a normal interpolated from the input mesh, which facilitates the subsequent lighting computation.

The illumination normalization module takes the mesh and the textures as input and brings the illumination of the probe texture and the gallery texture into the same condition, making them suitable for comparison with the effect of lighting suppressed. The Lambertian BRDF is used to model the diffuse component $I_d$, the Phong BRDF models the specular component $I_s$, and the ambient component $I_a$ is modeled as a single scalar value. In order to match the lighting conditions of a gallery texture to those of a probe texture, the following objective is minimized:

$$D = \left\| M_T - I_s - (I_d + I_a)\,\frac{M_T' - I_s'}{I_d' + I_a'} \right\|, \qquad (1)$$

where $I_a$, $I_d$, and $I_s$ are the lighting components of the gallery; $I_a'$, $I_d'$, and $I_s'$ are the lighting components of the probe; and $M_T$, $M_T'$ are the gallery and probe textures. The optimization minimizes the difference $D$ between the gallery and probe textures by varying the parameters of $I_a$, $I_d$, $I_s$, $I_a'$, $I_d'$, and $I_s'$.

The score computation module takes a pair of lifted textures as input and computes a global similarity metric from the correlation coefficient on image gradient orientations [14]. We chose this metric due to its insensitivity to serious mismatches in parts of the two images. Toderici et al. [13] relight the texture under each gallery lighting condition to each probe lighting condition.
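The sketch below conveys the core of such a score under stated assumptions: it correlates local gradient orientations using doubled angles, so that contrast reversals do not flip an orientation, and ignores near-flat pixels. The actual metric in [14] is an FFT-based, scale-invariant formulation.

```python
import numpy as np

def gradient_orientation_score(a, b, eps=1e-6):
    """Correlation of image gradient orientations between two aligned textures."""
    def orientations(img):
        gy, gx = np.gradient(img.astype(float))
        return np.arctan2(gy, gx), np.hypot(gx, gy) > eps   # angle + non-flat mask
    pa, ma = orientations(a)
    pb, mb = orientations(b)
    m = ma & mb                                  # compare only informative pixels
    return float(np.cos(2 * (pa[m] - pb[m])).mean())  # near 1: aligned textures
```

Because each pixel contributes at most a bounded cosine term, a serious mismatch in one face region cannot dominate the global score, which is the insensitivity property mentioned above.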

B. LIRIS Pipeline

Fig. 3. Training of random forests for pose estimation.

Fig. 2. Flowchart of the LIRIS pipeline.

As illustrated in Fig. 2, the pipeline proposed by LIRIS encompasses the following modules: retina-inspired lighting normalization, random forest-based pose estimation, aligned 2D face generation, and OGM-LBP-based face representation and matching. The key idea is to align a 3D gallery face model with an input 2D probe face image using the pose estimated from the latter through random forests.

1) Retina-inspired lighting normalization: As probe face images are plain 2D face images with pose and lighting variations, the very first step is to perform lighting normalization. In this work, we simply made use of a state of the art retina-inspired technique which applies Gaussian filtering in two successive stages [22].

2) Random forest-based pose estimation: To correctly match gallery face models with a 2D probe face image, the pose of the latter first needs to be estimated for face alignment. For this purpose, a set of 3D face models is used to generate 2D texture images under various viewpoints; these images are then used to train random regression forests which, given an input 2D texture face image, perform the pose estimation [20] (see Fig. 3).
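A loose Python sketch of this kind of normalization is shown below, under a simplified model: two Naka-Rushton-style light-adaptation passes, each driven by a Gaussian-smoothed local mean, followed by a Difference-of-Gaussians whitening step. The actual retina model of [22] has additional stages and tuned parameters, so this is only indicative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retina_normalize(img, s1=1.0, s2=3.0):
    """Simplified retina-style lighting normalization (sketch, assumed constants)."""
    x = img.astype(float) / 255.0
    for sigma in (s1, s2):                       # two light-adaptation passes
        local_mean = gaussian_filter(x, sigma) + x.mean()
        x = (x.max() + local_mean) * x / (x + local_mean + 1e-6)  # compression
    dog = gaussian_filter(x, s1) - gaussian_filter(x, s2)  # band-pass whitening
    return (dog - dog.mean()) / (dog.std() + 1e-6)         # zero mean, unit variance
```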


3) Aligned 2D face generation: In this baseline, a gallery 3D face model is first rotated using the pose estimated from the input 2D probe face image for alignment. A 2D texture face image is then generated from the rotated 3D gallery face model for matching.

4) Generation of OGMs: Instead of performing 2D-2D matching directly on the pair of aligned 2D texture face images, we first generate OGM features to highlight the distinctiveness of the 2D texture face images. OGMs [21], namely Oriented Gradient Maps, simulate the response of complex neurons to gradient information within a given neighborhood, and are able to describe local texture changes of 2D facial maps and local shape changes of 3D facial maps at the same time. In this process, given a raw input image $I$ and a quantized direction $o$, we first compute its gradient map:

$$G_o = \left( \partial I / \partial o \right)^{+}, \qquad (2)$$

where the $+$ sign denotes that only positive values are kept to preserve the polarity of the intensity changes, while negative values are set to zero. We then simulate the response of complex neurons by convolving the gradient maps with a Gaussian kernel $G_R$ whose standard deviation is proportional to the radius of the given neighborhood area $R$:

$$\rho_o^R = G_R * G_o. \qquad (3)$$

After combining and normalizing the gradients of all quantized directions at each point, the response vector of the OGMs can be constructed.
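A direct Python transcription of Eqs. (2)-(3) might look as follows; the number of quantized orientations, the neighborhood radius, and the sigma-to-radius ratio are assumptions, since the paper leaves these details to [21].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def oriented_gradient_maps(img, n_orientations=8, radius=4):
    """OGMs: rectified directional gradients (Eq. 2) blurred by a Gaussian (Eq. 3)."""
    gy, gx = np.gradient(img.astype(float))
    maps = []
    for k in range(n_orientations):
        theta = 2 * np.pi * k / n_orientations
        directional = gx * np.cos(theta) + gy * np.sin(theta)  # dI/do for direction o
        g_o = np.maximum(directional, 0.0)                     # keep positive part (Eq. 2)
        maps.append(gaussian_filter(g_o, sigma=radius / 2.0))  # complex-neuron pooling (Eq. 3)
    rho = np.stack(maps, axis=-1)
    norm = np.linalg.norm(rho, axis=-1, keepdims=True) + 1e-6
    return rho / norm                           # normalized per-pixel response vector
```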

5) 2D LBP-based facial representation: The next module of the pipeline generates LBP-based facial representations from the various OGMs before matching. Specifically, we first divide the whole face into several sub-regions, from which LBP-based histograms are extracted; all these local histograms are then concatenated to form a global one as the final facial representation.

6) SRC matching: For matching a gallery face with a probe face, we chose SRC and used the reconstruction error as the similarity score. The LBP operator is applied at three different scales, namely (8,1), (16,2), and (24,3), and their scores are then fused through a simple sum rule to deliver the final similarity scores.
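The following sketch combines the two modules for a single LBP scale, with assumed grid and regularization settings; the l1 sparse coding of SRC [16] is approximated here with scikit-learn's Lasso, and classes are scored by reconstruction error as described above.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.linear_model import Lasso

def lbp_feature(img, grid=(8, 8), P=8, R=1):
    """Concatenated uniform-LBP histograms over a grid of face sub-regions."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    n_bins = P + 2                                # uniform LBP label count
    hs, ws = img.shape[0] // grid[0], img.shape[1] // grid[1]
    hists = [np.histogram(codes[i*hs:(i+1)*hs, j*ws:(j+1)*ws],
                          bins=n_bins, range=(0, n_bins))[0]
             for i in range(grid[0]) for j in range(grid[1])]
    f = np.concatenate(hists).astype(float)
    return f / (np.linalg.norm(f) + 1e-6)

def src_match(gallery, labels, probe, alpha=0.01):
    """SRC-style matching: sparse-code the probe over gallery features, then
    score each class by the reconstruction error of its own coefficients."""
    coef = Lasso(alpha=alpha, max_iter=5000).fit(gallery.T, probe).coef_
    errors = {c: np.linalg.norm(probe - gallery.T @ np.where(labels == c, coef, 0.0))
              for c in np.unique(labels)}
    return min(errors, key=errors.get), errors    # predicted class, all errors
```

Here gallery is an (n_gallery, d) matrix of lbp_feature vectors and labels holds the corresponding subject IDs; in the full pipeline, the errors from the three LBP scales would be sum-fused before the final decision.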


IV. EXPERIMENTAL RESULTS

We first introduce the UHDB11 dataset and then describe the experimental protocol along with the results of the two respective pipelines. The UR2D pipeline made use of manually labeled landmarks whereas the LIRIS pipeline automatically estimated the pose of a 2D face image through random forests.

A. 3D-2D database with pose and illumination

Existing textured 3D face databases were primarily captured and used for 3D-3D or 3D+2D experiments. To account for the illumination and pose challenges in 3D-2D experiments, we use the UHDB11 database. 3D data were captured using a 3dMD two-pod optical scanner and 2D data using a commercial Canon DSLR camera. The database contains 3D facial scans and 2D images from 23 different subjects captured in an indoor scenario. Facial data were acquired under six illumination conditions, four yaw rotations, and three roll rotations per subject; the head pose varies up to ±50 degrees in yaw and ±30 degrees in roll. Figure 4 depicts the variability within samples from a subject probe set of the database.

B. 3D-2D FR experimental protocol and results

Exp. 1: 3Dvs2D-single-1. The first experiment is designed to evaluate FR systems under the full scope of the head pose and illumination variability within the dataset, with a single instance per subject in the gallery. The gallery contains only one textured 3D mesh per subject, under a moderate pose and illumination condition (23 textured 3D meshes in total). All the other 2D images are included in the probe set (1,602 images in total). As depicted in Fig. 5, the UR2D pipeline achieved a 68.35% verification rate (VR) at 10^-3 false alarm rate (FAR), while the LIRIS pipeline achieved a 53.26% VR at 10^-3 FAR.

Exp. 2: 3Dvs2D-multiple-2. The second experiment is designed to evaluate FR systems under the full scope of the head pose and illumination variability with multiple instances per subject in the gallery. The gallery contains six textured 3D meshes per subject, under different pose and illumination conditions (138 textured 3D meshes in total). All the other 2D images are included in the probe set (1,487 images in total).

Fig. 5. ROC curves for Exp. 1 from (a) the UR2D pipeline; (b) the LIRIS pipeline.

Figure 6 depicts the ROC curves obtained from both pipelines. The UR2D pipeline achieved a 75.91% VR at 10^-3 FAR, while the LIRIS pipeline achieved a 54.80% VR at 10^-3 FAR.

Exp. 3: 3Dvs2D-lighting-3. The third experiment is designed to evaluate the influence of the lighting condition on 3D-2D FR systems. It uses the same gallery as in experiment 3Dvs2D-single-1 (23 textured 3D meshes in total). All the other 2D images with the same head pose as the gallery are included in the probe set (250 images in total). As depicted in Fig. 7, the UR2D pipeline achieved an 89.02% VR at 10^-3 FAR, while the LIRIS pipeline achieved a 61.75% VR at 10^-3 FAR.

Exp. 4: 3Dvs2D-pose-4. The fourth experiment is designed to evaluate the influence of the head pose on 3D-2D FR systems. The gallery is again the same as in experiment 3Dvs2D-single-1. All the other 2D images with the same lighting condition as the gallery are included in the probe set (115 images in total). Figure 8 depicts the ROC curves obtained from both pipelines. The UR2D pipeline achieved a 68.07% VR at 10^-3 FAR, while the LIRIS pipeline achieved a 57.58% VR at 10^-3 FAR.
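The paper does not spell out how this operating point is computed; a common recipe, sketched below, thresholds at the (1 - FAR) quantile of the impostor scores of an (n_probe, n_gallery) similarity matrix and reports the fraction of genuine pairs accepted.

```python
import numpy as np

def vr_at_far(scores, genuine_mask, far=1e-3):
    """Verification rate at a fixed false alarm rate from a similarity matrix."""
    s = np.asarray(scores, dtype=float)
    impostor = s[~genuine_mask]                     # non-matching probe/gallery pairs
    threshold = np.quantile(impostor, 1.0 - far)    # admits a 'far' fraction of impostors
    return float((s[genuine_mask] >= threshold).mean())
```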

Fig. 4. Examples from the UHDB11 database with variation of lighting and pose.


Fig. 6. ROC curves for Exp. 2 from (a) the UR2D pipeline; (b) the LIRIS pipeline.

Fig. 7. ROC curves for Exp. 3 from (a) the UR2D pipeline; (b) the LIRIS pipeline.

Table I summarizes the verification rates of the two pipelines at 10^-3 FAR for all four experiments. Table II reports the rank-one identification rates of the two pipelines in all the experiments.

C. Discussion

In general, the UR2D pipeline performs better than the LIRIS pipeline in all experiments, in terms of both identification and verification rates.

The main reason is that the UR2D pipeline uses nine manually labeled landmarks as the input to its pose estimation module, while the LIRIS pipeline does not use any manual information in its random forest-based pose estimation. Using the results of Exp. 1 as a baseline, both pipelines improve their performance when more scans are included in the gallery, as in the setup for Exp. 2. Comparing the results of Exp. 3 with those of Exp. 1, the UR2D pipeline demonstrates an advantage in normalizing illumination variations: over 20% improvement in VR is achieved by the UR2D pipeline, while around 10% improvement in VR is achieved by the LIRIS pipeline. Meanwhile, the LIRIS pipeline demonstrates an advantage in handling pose variations when comparing the results of Exp. 4 and Exp. 1: it achieves around 8% improvement in VR, versus no improvement by the UR2D pipeline in either VR or IR.

TABLE II
IDENTIFICATION RATE AT RANK ONE OF THE TWO PIPELINES

IR (%)            Exp. 1   Exp. 2   Exp. 3   Exp. 4
UR2D Pipeline      85.33    98.05    94.08    85.22
LIRIS Pipeline     73.66    76.90    83.33    80.22



Fig. 8. ROC curves for Exp. 4 from (a) the UR2D pipeline; (b) the LIRIS pipeline.

TABLE I
VERIFICATION RATE AT 10^-3 FAR OF THE TWO PIPELINES

VR (%)            Exp. 1   Exp. 2   Exp. 3   Exp. 4
UR2D Pipeline      68.35    75.91    89.02    68.07
LIRIS Pipeline     53.26    54.80    61.75    57.58

V. CONCLUSIONS

In this paper, we introduced a novel experimental protocol on UHDB11 which permits the study of the behavior of 3D-2D FR algorithms under lighting and/or pose changes. Using the proposed experimental protocol, we benchmarked two state of the art 3D-2D systems and discussed their performance. The experimental results show that there is still much room for improvement, both in terms of lighting normalization and pose correction with the aid of 3D face data.

REFERENCES

[1] G. Toderici, G. Passalis, T. Theoharis, and I. A. Kakadiaris. An automated method for human face modeling and relighting with application to face recognition. In Proc. Workshop of Photometric Analysis for Computer Vision, Rio de Janeiro, Brazil, Oct. 14 2007.
[2] A. Abate, M. Nappi, D. Riccio, and G. Sabatino. 2D and 3D face recognition: A survey. Pattern Recognition Letters, 28(14):1885–1906, 2007.
[3] R. Basri and D. Jacobs. Lambertian reflectance and linear subspaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(2):218–233, Feb. 2003.
[4] V. Blanz and T. Vetter. Face recognition based on fitting a 3D morphable model. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(9):1063–1074, 2003.
[5] K. Bowyer, K. Chang, and P. Flynn. A survey of approaches and challenges in 3D and multi-modal 3D+2D face recognition. Computer Vision and Image Understanding, 101(1):1–15, January 2006.
[6] B. Efraty, C. Huang, S. Shah, and I. Kakadiaris. Facial landmark detection in uncontrolled conditions. In Proc. International Joint Conference on Biometrics, pages 370–376, Washington, DC, Oct. 11-13 2011.
[7] D. Huang, M. Ardabilian, Y. Wang, and L. Chen. Automatic asymmetric 3D-2D face recognition. In Proc. 20th International Conference on Pattern Recognition, pages 1225–1228, Istanbul, Turkey, August 23-26 2010.
[8] I. Kakadiaris, G. Passalis, G. Toderici, M. N. Murtuza, Y. Lu, N. Karampatziakis, and T. Theoharis. Three-dimensional face recognition in the presence of facial expressions: An annotated deformable model approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(4):640–649, 2007.
[9] Pittsburgh Pattern Recognition. PittPatt face recognition software development kit (PittPatt SDK) v5.2. http://www.pittpatt.com, March 2011.
[10] A. Rama, F. Tarres, D. Onofrio, and S. Tubaro. Mixed 2D-3D information for pose estimation and face recognition. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, pages III-776–III-779, Sept. 11-14 2006.
[11] D. Riccio and J.-L. Dugelay. Geometric invariants for 2D/3D face recognition. Pattern Recognition Letters, 28(14):1907–1914, 2007.
[12] S. Romdhani, J. Ho, T. Vetter, and D. J. Kriegman. Face recognition using 3-D models: Pose and illumination. Proceedings of the IEEE, 94(11):1977–1999, Nov. 2006.
[13] G. Toderici, G. Passalis, S. Zafeiriou, G. Tzimiropoulos, M. Petrou, T. Theoharis, and I. Kakadiaris. Bidirectional relighting for 3D-aided 2D face recognition. In Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 2721–2728, San Francisco, CA, June 13-18 2010.
[14] G. Tzimiropoulos, V. Argyriou, S. Zafeiriou, and T. Stathaki. Robust FFT-based scale-invariant image registration with image gradients. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(10):1899–1906, 2010.
[15] Y. Wang, L. Zhang, Z. Liu, G. Hua, Z. Wen, Z. Zhang, and D. Samaras. Face relighting from a single image under arbitrary unknown lighting conditions. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(11):1968–1984, 2009.
[16] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma. Robust face recognition via sparse representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(2):210–227, February 2009.
[17] W. Yang, D. Yi, Z. Lei, J. Sang, and S. Li. 2D-3D face matching using CCA. In Proc. IEEE International Conference on Automatic Face and Gesture Recognition, pages 1–6, Sept. 2008.
[18] L. Yin and M. Yourst. 3D face recognition based on high-resolution 3D face modeling from frontal and profile views. In Proc. ACM SIGMM Workshop on Biometrics Methods and Applications, pages 1–8, New York, NY, Nov. 8 2003.
[19] W. Zhao, R. Chellappa, P. Phillips, and A. Rosenfeld. Face recognition: A literature survey. ACM Computing Surveys, 35(4):399–458, 2003.
[20] W. Zhang, D. Huang, Y. Wang, and L. Chen. 3D-aided face recognition across pose variations. In Proc. Chinese Conference on Biometric Recognition, 2012.
[21] D. Huang, M. Ardabilian, Y. Wang, and L. Chen. Oriented gradient maps based automatic asymmetric 3D-2D face recognition. In Proc. International Conference on Biometrics, 2012.
[22] N.-S. Vu and A. Caplier. Illumination-robust face recognition using retina modeling. In Proc. International Conference on Image Processing, Cairo, Egypt, 2009.
[23] D. Huang, M. Ardabilian, Y. Wang, and L. Chen. Asymmetric 3D/2D face recognition based on LBP facial representation and canonical correlation analysis. In Proc. International Conference on Image Processing, pages 3325–3328, Cairo, Egypt, 2009.
[24] UH Computational Biomedicine Lab. UHDB11 face database. http://cbl.uh.edu/URxD/datasets/, 2009.