2012 - International Conference on Emerging Trends in Science, Engineering and Technology 143

A Novel Approach to Face Recognition under Various Facial Expressions, Occlusion and Tilt Angles

Madhu
Research Scholar, Sri Chandrasekharendra Saraswathi Viswa Mahavidyalaya University, Kanchipuram, India
[email protected]

R.Amutha
Professor, SSN College of Engineering, Chennai, India
[email protected]

Abstract—The proposed face recognition method combines the color co-occurrence matrix (CCM) approach with principal component analysis (PCA), applying PCA only to CCM-classified images. Traditionally, PCA is performed on the whole facial image database to recognize a human face. In the proposed method, the color co-occurrence matrix is instead used to extract certain features of an image at different attribute levels, and PCA is applied only to the classified images to recognize face images under various facial expressions, occlusions and tilt angles. Compared with the traditional use of PCA, the proposed method gives better recognition accuracy and lower computational time for different facial expressions. The proposed method is further analyzed with different orientations of face angles, which reduces the computational load significantly when the image database is large.

Keywords—Face recognition; Principal Component Analysis; False Acceptance Rate; False Rejection Rate

I. INTRODUCTION

With growing interest in the development of human-computer interfaces and biometric identification, human face recognition has become an active research area. A survey of the research on facial expression recognition in the last decade can be found in [2,6,8,9,15,23,31,33,34]. Despite good progress so far, facial expression recognition with high accuracy remains a challenging task due to the subtlety, complexity and variability of facial expressions [5,9,18,19,20,28,41]. Face recognition has been studied intensively in the past, and there is still room for improvement in this domain. Facial expression recognition is generally carried out using two approaches: global feature based algorithms and local feature based algorithms. In the first, the image is pre-processed and segmented to obtain the global features used to recognize the face image; the second requires the extraction of local features to recognize the face image under variations in pose, illumination, rotation, scaling and translation. Recently, facial expression recognition was enhanced using the Scale Invariant Feature Transform (SIFT) [10,12], which is considered one of the best feature matching methods. This method is invariant to image scaling and rotation, and partially invariant to illumination. The researchers in [15] combined SIFT with Singular Value Decomposition (SVD), which resulted in high recognition rates for happiness and surprise, and lower recognition rates for anger, sadness, fear, disgust and neutral.

ISBN : 978-1-4673-5144-7/12/$31.00 © 2012 IEEE

In the literature [14,30,16,36,40], researchers proposed extracting local and global features using a robust algorithm based on Class-Modular Image Principal Component Analysis (PCA) to recognize face images. This was proposed to reduce the effects caused by various facial expressions and by variations in pose and illumination. The performance of the algorithm was tested using the ORL, YALE and UMIST databases, and it was concluded that the system accuracy is high because of the increased number of elements in the training set. The novel approach to Facial Expression Recognition (FER) in the literature [15,25] embedded Stochastic Neighbor Embedding along with an SVM; testing the proposed algorithm on the JAFFE database resulted in only 58.7% FER. Facial classification based on different expressions and angles was proposed for the FERET database by extracting global features using a statistical method, tested with the Mahalanobis and Euclidean distances as similarity measures; the Mahalanobis distance was found to be optimal compared with the Euclidean distance [4]. Other researchers analyzed the performance of both PCA and Fisher Discriminant Analysis (FDA) for face recognition and found accuracy above 90% in both cases on the ORL database, without considering the effects of the face background and head orientation [7,35]. The False Acceptance Rate (FAR), False Rejection Rate (FRR), False Rejection (FR), False Rejection Experiments (FRE) and Equal Error Rate (EER) are analyzed in the literature to justify the proposed algorithms [7]. A new approach to face recognition was also proposed using the Fourier transform of Gabor filters and regularized linear discriminant analysis.

It uses the concept of geometric configuration and characterization of gray level change, giving rise to the extraction of local features that are resistant to brightness, facial expression and pose variations [3,22,38,39]. Current FER approaches have not considered all the geometric and appearance characteristics of facial expressions. The researchers in [21] proposed using salient distance features by extracting patch-based 3D Gabor features with SVM-based classification, measuring a Correction Recognition Rate (CRR) of 92.93% on the JAFFE database, which is high for this system; it reached 94.48% on the Cohn-Kanade (CK) database. In the literature, PCA along with SVD has been implemented to recognize facial expressions, classifying five different emotions, namely angry, happy, sad, disgust and surprise, along with neutral [24,43]. The researchers pre-processed the images, extracted the features, and classified and recognized the expressions; an average recognition rate of 80% was achieved on both real-time data and the JAFFE database. SVD was used to extract distinct features, but errors due to reflections in the image were not eliminated [24]. The common methods for facial feature extraction are geometry based and appearance based. Geometry-based methods rely on effective and accurate facial feature detection and tracking. Among the appearance-based methods, the potential of Gabor wavelets for recognizing expressions from still images has been established in [5,26,27]. Recently, excellent face recognition results were reported in [37] using a new multiresolution analysis method called the digital curvelet transform. Generally, face recognition and facial expression recognition are dual problems.
Face recognition is made difficult by the variety of expressions, and expression recognition gets tougher as faces vary in age, gender, and ethnicity. The method presented in [37] applied digital curvelet coefficients to form features representing the entire face. In order to classify facial expressions, local facial information [17] needs to be stored. To obtain a local description of the expressions, local binary patterns (LBPs) are computed using selected sub-bands of the image pre-processed by the curvelet transform. LBPs have been used extensively for expression recognition with a good rate of success in [29,32]. Principal component analysis (PCA) has been widely adopted as one of the most promising face recognition algorithms. Yet the traditional PCA approach has its limitations: poor discriminatory power and a large computational load. In view of these limitations, the proposed method uses a combination of the Color Co-occurrence Matrix and the PCA operator, which increases the overall recognition rate of the system.

II. PROPOSED METHOD

The proposed method uses a combination of the Color Co-occurrence Matrix (CCM) and the PCA operator, which increases the overall recognition rate of the system. CCM and PCA are integrated to increase the efficiency of the recognition system. The CCM energy is extracted for all the images in the database, and the images are sorted in ascending order of their energy difference from the input image, so that images with energy features similar to the input image come first; the first ten images are stored for performing PCA. PCA is then performed on the stored images and the query image.

A. Steps Involved in the Proposed Method
Step 1: The input image is browsed and selected.
Step 2: The CCM feature is extracted from the images in the database.
Step 3: After extracting the CCM feature values from the query and database images, the difference in energy values between the query and database images is calculated by subtraction. The database images are sorted in ascending order of this difference with reference to the input image, and the first ten images are stored in a separate database.
Step 4: PCA is performed on those ten images and the result is displayed.

Fig 1: Block diagram for the proposed method (Input Image → CCM Energy Extraction → Sorting Images → PCA → Output Images)
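The four steps above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: `ccm_energy` here uses a plain gray-level co-occurrence matrix over horizontal neighbors rather than the full color/HSV construction of Section II-B, and the function names, `n_keep` parameter, and array shapes are hypothetical.

```python
import numpy as np

def ccm_energy(img, levels=8):
    """Illustrative stand-in for the CCM energy feature of Step 2: a
    normalized gray-level co-occurrence matrix over horizontal neighbor
    pairs, squared and summed (the energy statistic)."""
    q = np.clip((img * levels).astype(int), 0, levels - 1)  # quantize gray levels
    m = np.zeros((levels, levels))
    np.add.at(m, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)  # count co-occurrences
    m /= m.sum()                                            # normalize
    return np.sum(m ** 2)

def recognize(query, database, n_keep=10):
    """Steps 2-4: sort the database by CCM-energy distance to the query,
    keep the ten closest images, run PCA on them, and match the query
    by Euclidean distance in the projected space."""
    e_q = ccm_energy(query)
    order = np.argsort([abs(ccm_energy(img) - e_q) for img in database])
    keep = order[:n_keep]                                   # Step 3: ten closest images
    X = np.stack([database[k].ravel() for k in keep]).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False) # Step 4: PCA basis
    proj = (X - mean) @ Vt.T                                # stored images in face space
    q = (query.ravel() - mean) @ Vt.T                       # query in face space
    return keep[np.argmin(np.linalg.norm(proj - q, axis=1))]
```

Because PCA runs only on the ten retained images, the eigen-decomposition stays small regardless of the database size, which is the source of the computational saving claimed above.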

The proposed method uses the color co-occurrence matrix to classify the images according to their energy feature values. PCA is then performed on the classified images for authentication and verification.

B. Feature Extraction based on CCM
Assume the color image is divided into N × N image sub-blocks. For any image sub-block T(i,j) (1 ≤ i ≤ N, 1 ≤ j ≤ N), the main color extraction algorithm is used to calculate its main color C(i,j). For any two 4-connected image sub-blocks T(i,j) and T(k,l) (|i − k| = 1 and j = l, or |j − l| = 1 and i = k), if their main colors meet either of the following conditions in HSV space:
(1) Cj and Ci belong to the same color of magnitude, that is, their HSV components satisfy hi = hj, si = sj, vi = vj;
(2) Cj and Ci do not belong to the same color of magnitude, but satisfy si·3 + vi = sj·3 + vj and |hi − hj| = 1, or satisfy hi = hj, si = sj and vi, vj ∈ {0,1};
then we say the image sub-blocks T(i,j) and T(k,l) are color connected. According to the concept of color-connected regions, each sub-block of the entire image can be placed into a unique set of color-connected regions S = {Ri} (1 ≤ i ≤ M) in accordance with the 4-connected sub-block criteria. For each color-connected region Ri (1 ≤ i ≤ M), the color components R and G in RGB color space and H in HSV color space are extracted, and the CCM is computed at distance δ = 1 and directions θ = 0°, 45°, 90°, 135°. The same operation is done with I (the intensity of the image). The statistical features extracted from the CCM, where m(i,j) is the normalized co-occurrence matrix of dimension D, are as follows:

Energy:   E = Σ_{i=1}^{D} Σ_{j=1}^{D} [m(i,j)]²   (2.1)

Contrast:   I = Σ_{i=1}^{D} Σ_{j=1}^{D} (i − j)² · m(i,j)   (2.2)

Entropy:   S = −Σ_{i=1}^{D} Σ_{j=1}^{D} m(i,j) · log[m(i,j)]   (2.3)
where, if m(i,j) = 0, the term m(i,j)·log[m(i,j)] is taken as 0.

Inverse difference:   H = Σ_{i=1}^{D} Σ_{j=1}^{D} m(i,j) / [1 + (i − j)²]   (2.4)

Through this method we obtain a 16-dimensional texture feature for the components R, G, H and I, each component contributing the four statistics E, I, S and H:

F = [FR, FG, FH, FI] = [fRE, fRI, fRS, fRH, ..., fIE, fII, fIS, fIH]

Simple features alone cannot give a comprehensive description of the image content. Combining color and texture features not only expresses more image information but also describes the image from different aspects in more detail, in order to obtain better search results.
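The four statistics (2.1)-(2.4) can be computed per component (R, G, H, I) to build the 16-dimensional vector F. A sketch for a single normalized co-occurrence matrix follows; the function name is hypothetical, and the natural logarithm is assumed for the entropy since the paper does not state a base.

```python
import numpy as np

def ccm_statistics(m):
    """The four statistics (2.1)-(2.4) of a normalized co-occurrence
    matrix m (entries non-negative, summing to 1). The term
    m(i,j)*log[m(i,j)] is taken as 0 where m(i,j) = 0."""
    i, j = np.indices(m.shape)
    energy = np.sum(m ** 2)                     # (2.1) Energy E
    contrast = np.sum((i - j) ** 2 * m)         # (2.2) Contrast I
    logm = np.zeros_like(m)
    mask = m > 0
    logm[mask] = np.log(m[mask])                # log m(i,j), 0 where m(i,j) = 0
    entropy = -np.sum(m * logm)                 # (2.3) Entropy S
    inv_diff = np.sum(m / (1 + (i - j) ** 2))   # (2.4) Inverse difference H
    return energy, contrast, entropy, inv_diff
```

For a uniform 4 × 4 matrix (every entry 1/16) this gives E = 1/16, I = 2.5, S = ln 16 and H = 0.5, which is a quick sanity check of an implementation.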

C. Sorting of Images
After extracting the CCM feature values from the query and database images, the difference in energy values between the query and database images is calculated by subtraction. The resulting values are sorted in ascending order, so that the first sorted image has an energy value closest to that of the query image.

D. Principal Component Analysis
Principal component analysis (PCA) is a technique that can be used to simplify a dataset; more formally, it is a transform that chooses a new coordinate system for the data set such that the greatest variance by any projection of the data set comes to lie on the first axis (then called the first


principal component), the second greatest variance on the second axis, and so on. PCA can be used to reduce dimensionality in a dataset while retaining those characteristics of the dataset that contribute most to its variance, by eliminating the later principal components. It is traditionally performed on a square symmetric covariance or correlation matrix obtained from the given m × n data matrix; a correlation matrix is obtained by normalizing the covariance matrix. The principal components are the eigenvectors of this square symmetric matrix. PCA is the simplest of the true eigenvector-based multivariate analyses. PCA is closely related to factor analysis; indeed, some statistical packages deliberately conflate the two techniques. True factor analysis makes different assumptions about the underlying structure and solves for the eigenvectors of a slightly different matrix. The PCA approach reduces the dimensions of the data by means of data compression and reveals the most effective low-dimensional structure of facial patterns. PCA is mathematically defined as an orthogonal transformation that transforms the data to a new coordinate system such that the greatest variance by any projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on. PCA is theoretically the optimal transform for given data in the least squares sense. Consider a data matrix Xᵀ with zero empirical mean (the empirical mean of the distribution has been subtracted from the data set), where each row represents a different repetition of the experiment and each column gives the results from a particular probe. Given a set of points in Euclidean space, the first principal component (the eigenvector with the largest eigenvalue) corresponds to a line that passes through the mean and minimizes the sum of squared errors with those points.
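The geometric claim above, that the first principal component is the direction of maximal projected variance (equivalently, the line through the mean minimizing squared residuals), can be checked numerically. This is an illustrative sketch on synthetic data, not part of the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4)) @ rng.normal(size=(4, 4))  # correlated point cloud
Xc = X - X.mean(axis=0)                                  # center on the empirical mean

# First principal component: leading right-singular vector of the centered data.
_, sv, Vt = np.linalg.svd(Xc, full_matrices=False)
v1 = Vt[0]

def projected_variance(v):
    """Variance of the data projected onto unit direction v."""
    return np.var(Xc @ v)

# No random unit direction beats v1's projected variance.
for _ in range(1000):
    u = rng.normal(size=4)
    u /= np.linalg.norm(u)
    assert projected_variance(u) <= projected_variance(v1) + 1e-9

# The squared singular values also sum to the total squared
# distance of the points from their mean (total scatter).
assert np.isclose((sv ** 2).sum(), (Xc ** 2).sum())
```

The same decomposition underlies the face space projection used later: keeping only the leading components retains most of the scatter while discarding the low-variance directions.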
The second principal component corresponds to the same concept after all correlation with the first principal component has been subtracted out from the points. Each eigenvalue indicates the portion of the variance that is associated with its eigenvector; thus, the sum of all the eigenvalues equals the sum of the squared distances of the points from their mean. PCA essentially rotates the set of points around their mean in order to align with the first few principal components. This moves as much of the variance as possible (using a linear transformation) into the first few dimensions. The values in the remaining dimensions therefore tend to be small and may be dropped with minimal loss of information, so PCA is often used in this manner for dimensionality reduction. PCA has the distinction of being the optimal linear transformation for keeping the subspace that has the largest variance. This advantage, however, comes at the price of greater computational requirements compared, for example, with the discrete cosine transform. Nonlinear

dimensionality reduction techniques tend to be more computationally demanding than principal component analysis (PCA). Using this technique, one is able to represent, recognize and also reconstruct incomplete input test images. The steps involved in the implementation of principal component analysis are:
• Normalization of individual images
• Conversion of 2D to 1D
• Formation of the covariance matrix
• Computation of eigenvectors and eigenvalues
• Sorting and selection of eigenvectors
• Projection of each image into the face space
• Repeating the same for the test image
• Discrimination
• Recognition
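The steps listed above can be sketched as follows. This is a hedged illustration of the standard eigenface recipe, not the paper's exact code: the function names are hypothetical, and it uses the common small-matrix trick (eigen-decomposing the M × M matrix A·Aᵀ instead of the huge pixel-space covariance) since the number of images M is far smaller than the pixel count.

```python
import numpy as np

def eigenfaces(images, n_components=3):
    """Normalize, flatten 2D->1D, form the (surrogate) covariance,
    eigen-decompose, sort/select, and return a projection basis."""
    A = np.stack([im.ravel().astype(float) for im in images])   # 2D -> 1D, rows = images
    mean = A.mean(axis=0)
    A = A - mean                                                # normalization step
    L = A @ A.T                                                 # small M x M matrix
    vals, vecs = np.linalg.eigh(L)                              # eigenvalues ascending
    order = np.argsort(vals)[::-1][:n_components]               # sort, keep the largest
    U = A.T @ vecs[:, order]                                    # lift back to pixel space
    U /= np.linalg.norm(U, axis=0)                              # unit-length eigenfaces
    return mean, U

def project(image, mean, U):
    """Project a (test) image into the face space."""
    return (image.ravel().astype(float) - mean) @ U
```

Recognition then amounts to projecting both the stored images and the test image with `project` and comparing the resulting coefficient vectors, e.g. by Euclidean distance as done later in this paper.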

III. RESULTS AND DISCUSSIONS

The face image databases used here are the JAFFE and GTAV databases. These images vary in pose and expression. Figures 2 and 3 show sample face images from the JAFFE and GTAV databases.

A. JAFFE Database
The database consists of a total of 213 images of 7 facial expressions (6 basic facial expressions + 1 neutral, frontal view) posed by 10 Japanese female models.

Fig 2: JAFFE Database

B. GTAV Database
This database includes a total of 50 images corresponding to different pose views (0º, 30º, 45º, 60º and 90º) under three different illuminations (environment or natural light, a strong light source from an angle of 45º, and an almost frontal mid-strong light source).

Fig 3: GTAV Database

In the proposed method, the input face image is given to the CCM feature extractor to extract the statistical properties of the image co-occurrence matrix. As the first step, the R and G factors of the color (RGB) image are taken. The RGB image is then converted into an HSI image to extract the H and I factors of the given image. With the R, G, H and I factors, the 16-dimensional texture feature can be extracted by taking the four statistical values E (Energy), I (Contrast), S (Entropy) and H (Inverse difference) for each factor. In total, 16-dimensional texture features are extracted for the query image and for each image in the database. The difference in energy values between the query and database images is calculated by subtraction; the images are then sorted in ascending order of this difference, and the first 10 images are stored in a separate database. PCA is then applied only to these classified images, and finally the face is recognized using the Euclidean distance. The recognition rate is higher than that of the Wavelet+PCA and curvelet-based PCA methods. In figure 4, the query image is browsed and selected.

Fig 4: Query Image

In figure 5, the CCM features are extracted and the first ten images are stored in another database. Finally, the Euclidean distance is calculated, i.e., PCA is performed for the ten images in the new database and the image is recognized.

Fig 5: CCM feature extraction and PCA performance for different facial expressions

The maximum orientation to the left side and to the right side, and tilt in the upward and downward directions, have also been analyzed, as shown in figures 6 and 7: the CCM features are extracted for the images with different tilt angles, the first ten images are stored in another database, and finally the Euclidean distance is calculated, i.e., PCA is performed for the ten images in the new database and the image is recognized.

Fig 6: CCM feature extraction and PCA performance for different face orientations (right side and left side)

Fig 7: CCM feature extraction and PCA performance for different face orientations (tilt in the downward direction)

The proposed method is also tested for the maximum caricature or occlusion that can be tolerated by the recognition algorithm, as shown in figure 8: the CCM features are extracted, the first ten images are stored in another database, and finally the Euclidean distance is calculated, i.e., PCA is performed for the ten images in the new database and the image is recognized.

Fig 8: CCM feature extraction and PCA performance for face occlusion

C. Performance Analysis
In this paper the proposed algorithm is analyzed for two parameters: (a) recognition time and (b) recognition rate.

D. Recognition Time Analysis
The proposed method is analyzed with different facial expressions to evaluate the recognition time. The analysis tables (I to IV) for different sets of database images with different facial expressions are given below; the JAFFE database is used for this analysis.

TABLE I. RECOGNITION TIME CALCULATED FOR THE PROPOSED AND THE EXISTING ALGORITHM FOR 26 FACIAL IMAGES FOR THE JAFFE DATABASE

                     Mean recognition time in seconds
Facial Expression    Proposed method    Existing method
Angry                4.11               9.07
Disgust              4.45               9.78
Fear                 4.05               9.01
Happy                4.09               8.89
Sad                  4.13               9.45
Surprise             4.43               9.94

The above table shows that the time taken by the proposed method is comparatively less than that of the existing method.

TABLE II. RECOGNITION TIME CALCULATED FOR THE PROPOSED AND THE EXISTING ALGORITHM FOR 50 FACIAL IMAGES FOR THE JAFFE DATABASE

                     Mean recognition time in seconds
Facial Expression    Proposed method    Existing method
Angry                3.36               3.09
Disgust              3.15               3.11
Fear                 3.12               3.07
Happy                3.2                3.33
Sad                  3.1                3.06
Surprise             3.12               3.17

The above table shows that the time taken by the proposed method is comparable with that of the existing method.

TABLE III. RECOGNITION TIME CALCULATED FOR THE PROPOSED AND THE EXISTING ALGORITHM FOR 72 FACIAL IMAGES FOR THE JAFFE DATABASE

                     Mean recognition time in seconds
Facial Expression    Proposed method    Existing method
Angry                4.76               13.18
Disgust              4.87               13.45
Fear                 3.87               12.98
Happy                4.98               13.76
Sad                  4.76               13.78
Surprise             5.02               13.95

The above table shows that the time taken by the proposed method is much less than that of the existing method.

TABLE IV. RECOGNITION TIME CALCULATED FOR THE PROPOSED AND THE EXISTING ALGORITHM FOR 100 FACIAL IMAGES FOR THE JAFFE DATABASE

                     Mean recognition time in seconds
Facial Expression    Proposed method    Existing method
Angry                3.58               5.53
Disgust              3.6                5.46
Fear                 3.42               5.12
Happy                3.56               6.02
Sad                  4.08               5.65
Surprise             3.03               5.06

The above table shows that the time taken by the proposed method is drastically reduced as the number of images in the database increases; it is also clear that the recognition time of the existing method grows with the number of images in the database. The proposed method is further analyzed with different occlusions and tilt angles to evaluate the recognition time.

TABLE V. RECOGNITION TIME CALCULATED FOR THE PROPOSED AND THE EXISTING ALGORITHM FOR 50 FACIAL IMAGES USING THE GTAV DATABASE

                     Mean recognition time in seconds
Image type           Proposed method    Existing method
Occluded images      3.08               6.79

The above table shows that the time taken by the proposed method is comparatively less for images with different levels of occlusion when compared with the existing method.

TABLE VI. RECOGNITION TIME CALCULATED FOR THE PROPOSED AND THE EXISTING ALGORITHM FOR 50 FACIAL IMAGES USING THE GTAV DATABASE

                     Mean recognition time in seconds
Image type           Proposed method    Existing method
Tilted images        3.56               7.8

The above table shows that the time taken by the proposed method is comparatively less even for various tilt angles when compared with the existing method.

E. Recognition Rate Analysis
The recognition rates of the three methods are shown in figure 9. The results of the proposed method are higher than those of Wavelet+PCA and Curvelet+PCA.

Fig.9: Recognition rate for the various methods

It can be seen from figure 9 that the recognition rate of the proposed method is comparatively high compared with the other existing methods.

IV. CONCLUSION
In this paper, a novel approach to face recognition under various facial expressions, occlusions and tilt angles has been introduced. Experiments show that the time taken for face classification and verification is much less than that of the existing method, and the recognition rate of the proposed method is higher than that of Wavelet+PCA and Curvelet+PCA. Face recognition is important for many applications, such as video surveillance, criminal investigations and forensic applications, secure electronic banking, mobile phones, credit cards, and secure access to buildings. For different expressions and poses, the proposed


method works well for the JAFFE and GTAV databases with high recognition rates.

REFERENCES

[1] A.A. Mohammed, R. Minhas, Q.M.J. Wu, and M.A. Sid-Ahmed, "A novel technique for human face recognition using nonlinear curvelet feature subspace," in International Conference on Image Analysis and Recognition, LNCS, vol. 5627, pp. 512-521, 2009.
[2] P.S. Aleksic and A.K. Katsaggelos, "Automatic Facial Expression Recognition Using Facial Animation Parameters and Multi-Stream HMMs," 6th European Workshop on Image Analysis for Multimedia Interactive Services, Montreux, Switzerland, 2005.
[3] Anissa Bouzalmat, Arsalane Zarghili, Jamal Kharroubi, "Facial Face Recognition Method using Fourier Transform Filters Gabor and R_LDA," in the International Conference on Intelligent Systems and Data Processing (ICISD), proceedings published by International Journal of Computer Applications (IJCA), pp. 18-24, 2011.
[4] Ashirbani Saha and Q.M. Jonathan Wu, "Facial expression recognition using curvelet based local binary patterns," IEEE, 2010.
[5] C. Shan, S. Gong, and P.W. McOwan, "Facial expression recognition based on local binary patterns: A comprehensive study," Image and Vision Computing, vol. 27(4), pp. 803-816, 2008.
[6] Daw-Tung Lin, "Facial Expression Classification Using PCA and Hierarchical Radial Basis Function Network," Journal of Information Science and Engineering, vol. 22, pp. 1033-1046, 2006.
[7] Erum Naz, Umar Farooq, Tabbasum Naz, "Analysis of Principal Component Analysis-Based and Fisher Discriminant Analysis-Based Face Recognition Algorithms," IEEE ICET 2006, 2nd International Conference on Emerging Technologies, pp. 121-127, 2006.
[8] B. Fasel and J. Luettin, "Automatic facial expression analysis: A survey," Pattern Recognition, vol. 36, pp. 259-275, 2003.
[9] G.R.S. Murthy, R.S. Jadon, "Effectiveness of Eigenspaces for facial expression recognition," International Journal of Computer Theory and Engineering, vol. 1, no. 5, pp. 638-642, December 2009.
[10] Hamit Soyel and Hasan Demirel, "Improved SIFT Matching for Pose Robust Facial Expression Recognition," in Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference, pp. 585-590, 2011.
[11] Hong-Bo Deng, Lian-Wen Jin, Li-Xin Zhen, Jian-Cheng Huang, "A New Facial Expression Recognition Method Based on Local Gabor Filter Bank and PCA plus LDA," International Journal of Information Technology, vol. 11, no. 11, 2005.
[12] H. Soyel and H. Demirel, "Facial expression recognition based on discriminative scale invariant feature transform," IET Electronics Letters, vol. 46, pp. 343-345, 2010.
[13] Irene Kotsia, Ioannis Pitas, "Facial Expression Recognition in Image Sequences Using Geometric Deformation Features and Support Vector Machines," IEEE Transactions on Image Processing, 16(1), pp. 172-187, 2007.
[14] Jose Francisco, Rafael M. Barreto, George D.C. Cavalcanti, Tsang Ing Ren, "A Robust Feature Extraction Algorithm based on Class-Modular Image Principal Component Analysis for Face Verification," IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, pp. 1469-1472, 2011.
[15] Jun Ou, Xiao-Bo Bai, Yun Pei, Liang Ma, Wei Liu, "Automatic facial expression recognition using Gabor filter and expression analysis," 2010 Second International Conference on Computer Modeling and Simulation, IEEE, pp. 215-218, 2010.
[16] J. Yang et al., "Two-dimensional PCA: a new approach to appearance-based face representation and recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 1, pp. 131-137, 2004.
[17] P. Kakumanu, Nikolaos G. Bourbakis, "A Local-Global Graph Approach for Facial Expression Recognition," ICTAI, pp. 685-692, 2006.
[18] A. Kapoor, R.W. Picard, Yuan Qi, "Fully Automatic Upper Facial Action Recognition," IEEE International Workshop on Analysis and Modeling of Faces and Gestures, p. 195, 2003.
[19] H.S. Lee, D. Kim, "Expression-invariant face recognition by facial expression transformations," Journal of Pattern Recognition, vol. 39, issue 13, pp. 1797-1805, 2008.
[20] M.D. Levine and Yingfeng Yu, "Face recognition subject to variations in facial expression, illumination and pose using correlation filters," Journal of Computer Vision and Image Understanding, vol. 104, pp. 1-15, 2006.
[21] Ligang Zhang, Dan Tjondronegoro, "Facial Expression Recognition using Facial Movement Features," IEEE Transactions on Affective Computing, vol. 2, no. 2, 2011.
[22] Liu Xiaomin, Zhang Yujin, "Facial Expression Recognition Based on Gabor Histogram Feature and MVBoost," Journal of Computer Research and Development, vol. 44, no. 7, pp. 1089-1096, 2007.
[23] Mahananda D. Malkauthekar, "Classification of Facial Images," IEEE International Conference on Emerging Trends in Electrical and Computer Technology (ICETECT), pp. 507-511, 2011.
[24] Mandeep Kaur, Rajeev Vashisht, Nirvair Neeru, "Recognition of Facial Expressions with Principal Component Analysis and Singular Value Decomposition," International Journal of Computer Applications, vol. 9, no. 12, pp. 36-40, 2010.
[25] Mingwei Huang, Zhen Wang, Zilu Ying, "Facial Expression Recognition Using Stochastic Neighbor Embedding and SVMs," Proceedings of the 2011 International Conference on System Science and Engineering (ICSSE 2011), Macau, China, pp. 671-674, June 2011.
[26] M.S. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, and J. Movellan, "Recognizing facial expression: Machine learning and application to spontaneous behavior," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, pp. 568-573, 2005.
[27] Nectarios Rose, "Facial Expression Classification using Gabor and Log-Gabor Filters," Proceedings of the 7th International Conference on Automatic Face and Gesture Recognition (FGR'06), IEEE Computer Society.
[28] M. Pantic and L. Rothkrantz, "Automatic analysis of facial expressions: The state of the art," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 12, pp. 1424-1445, 2000.
[29] Raymond S. Smith, Terry Windeatt, "Facial expression detection using filtered local binary pattern features with ECOC classifiers and Platt scaling," JMLR Workshop and Conference Proceedings 11, pp. 111-118, 2010.
[30] R. Gottumukkal and V.K. Asari, "An improved face recognition technique based on modular PCA approach," Pattern Recognition Letters, vol. 25, no. 4, pp. 429-436, 2004.
[31] Ruicong Zhi, Qiuqi Ruan, "A Comparative Study on Region-Based Moments for Facial Expression Recognition," Congress on Image and Signal Processing, vol. 2, pp. 600-604, 2008.
[32] S. Liao, W. Fan, C.S. Chung, and D.Y. Yeung, "Facial expression recognition using advanced local binary patterns, tsallis entropies and global appearance features," IEEE International Conference on Image Processing, pp. 665-668, 2006.
[33] Spiros Ioannou, George Caridakis, Kostas Karpouzis, and Stefanos Kollias, "Robust Feature Detection for Facial Expression Recognition," EURASIP Journal on Image and Video Processing, vol. 2007, Article ID 29081, 2007.
[34] S.P. Khandait, P.D. Khandait and R.C. Thool, "An Efficient Approach to Facial Feature Detection for Expression Recognition," International Journal of Recent Trends in Engineering, vol. 2, no. 1, pp. 179-182, November 2009.
[35] Supriya Kapoor, Shruti Khanna, Rahul Bhatia, "Facial Gesture Recognition using Correlation and Mahalanobis Distance," International Journal of Computer Science and Information Security, vol. 7, no. 2, 2010.
[36] T. Kim, H. Kim, W. Hwang, S. Kee, and J. Kittler, "Independent Component Analysis in a Facial Local Residue Space," IEEE Proc. CVPR, Madison, USA, July 2003.
[37] T. Mandal, A. Majumdar, and Q.M.J. Wu, "Face recognition by curvelet based feature extraction," in International Conference on Image Analysis and Recognition, LNCS, vol. 4633, pp. 806-817, 2007.
[38] W. Hwang, G. Park, J. Lee and S.C. Kee, "Multiple Face Model of Hybrid Fourier Feature for Large Face Image Set," International Conference on Computer Vision and Pattern Recognition, pp. 1574-1581, 2006.
[39] Xiaoyang Tan and Bill Triggs, "Enhanced Local Texture Feature Sets for Face Recognition under Difficult Lighting Conditions," IEEE Transactions on Image Processing, vol. 19, no. 6, June 2010.
[40] X. Wang and X. Tang, "Random Sampling LDA for Face Recognition," IEEE Proc. CVPR, Washington, D.C., USA, June 2004.
[41] Yu Su, Shiguang Shan, Xilin Chen, Wen Gao, "Hierarchical Ensemble of Global and Local Classifiers for Face Recognition," IEEE, 2007.
[38] W.Hwang, G.Park, J.lee and S.C.Kee, Multiple Face Model of Hybrid Fourier Feature for Large Face Image Set, International Conference on Computer Vision and Pattern Recognition, pp. 1574 – 1581, 2006. [39] Xiaoyang Tan and Bill Triggs, Enhanced Local Texture Feature Sets for Face Recognition under difficult lighting conditions, IEEE transactions on Image Processing, vol.19, No.6, June 2010. [40] X. Wang and X. Tang, “Random Sampling LDA for Face Recognition”,IEEE Proc., CVPR, Washington, D.C., USA, Jun. 2004. [41] Yu Su, Shinguang Shan, Xilin Chen, Wen Gao, Hierarchial Ensemble of Global and Local Classifiers for Face Recognition, IEEE 2007.