EMOTION-BASED FACIAL EXPRESSION RECOGNITION USING SUPPORT VECTOR MACHINE

- Automatic facial expression recognition (FER) is an interesting and challenging research problem with many important applications, such as human behavior analysis, sign language recognition, and human-computer interaction. Human-Computer Interaction (HCI) is an emerging field that develops interfaces enabling effective interaction between computers and users. Emotions can be recognized by several methods, but analysis based on facial expressions is among the most important research directions. The major challenge of facial-expression-based emotion recognition is capturing the human face, whose appearance varies with movement; facial image capture and the related processing are therefore built on image processing and machine learning techniques. Emotions are reflected in the face, hands, body gestures, and voice. In human communication, understanding the emotions behind facial expressions helps achieve mutual sympathy. As a form of non-verbal communication, facial-expression-based emotion recognition is one of the most practical approaches: it does not require costly equipment, expressions are easy to capture with a digital camera, the computational complexity is modest, and facial expressions are closely tied to brain activity and social behavior. Humans can express on the order of a hundred distinct facial expressions, all derived from a small set of basic expressions. In this work, the six basic expressions, Happy, Sad, Anger, Disgust, Surprise, and Fear, are recognized on the Cohn-Kanade (CK) dataset using a Support Vector Machine (SVM).


INTRODUCTION
Automatic facial expression analysis proceeds in three steps: face detection, facial feature extraction, and classification. In the face detection step, most existing algorithms effectively examine the frontal view of the face under predefined conditions, but the exact location of the face is difficult to determine in complex scenes and when participants move. The captured facial images may also be affected by illumination changes, head motion, occlusion, and variation of facial features, and automatic detection is further complicated by hair, jewelry, glasses, and so on. The face detection process therefore needs to be improved through optimization methods. After the face is detected, the facial expression features are extracted; these may include quasi-textural features, relative displacements of landmarks, hue changes of the skin, and other coarse information, and they depend on the subject's expression and mood. The extracted features are then fed into a classifier that recognizes the emotion behind the facial expression. The automatic facial expression emotion analysis and recognition process is explained in the following sections.

PROPOSED WORK
In this proposed work, the Cohn-Kanade database is used to recognize facial-expression-related emotions with an SVM. The dataset is a collection of images captured under various emotions, which is used to detect emotions automatically. The database images are used for both the training and testing processes, which apply different image processing techniques. A sample database image is shown in figure 1.

FACE DETECTION
Face detection is the process of extracting the faces from input images.

Face Detection using Haar-Like Feature
The first step is face detection, performed with Haar-like features, which examine the captured image by computing sums of pixel intensities; these intensity sums are largely unaffected by skin color and other appearance variations. Haar-like features detect faces reliably because the detector is trained under several illumination conditions, and it handles frontal, upright faces rotated up to about 20 degrees around any axis. The detected face image is shown in figure 3. It is fed into the next step, feature extraction, which retrieves the features that help identify the exact emotion.
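A Haar-like feature is a difference of rectangular intensity sums, which an integral image makes computable in constant time per rectangle. The sketch below is an illustrative NumPy implementation of a single two-rectangle feature (not the trained cascade detector used in this work; all function names are our own):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: entry (y, x) holds the sum of img[:y+1, :x+1]."""
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def rect_sum(ii, top, left, height, width):
    """Sum of pixel intensities inside a rectangle, in O(1) via the integral image."""
    # Pad with a zero row/column so rectangles touching the border work uniformly.
    padded = np.pad(ii, ((1, 0), (1, 0)))
    return (padded[top + height, left + width] - padded[top, left + width]
            - padded[top + height, left] + padded[top, left])

def haar_two_rect(img, top, left, height, width):
    """Two-rectangle Haar-like feature: left half-sum minus right half-sum."""
    half = width // 2
    ii = integral_image(img)
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, half))

# A bright-left / dark-right test image gives a strong positive response.
img = np.zeros((6, 6))
img[:, :3] = 10.0
print(haar_two_rect(img, 0, 0, 6, 6))  # → 180.0
```

A real detector, such as a Viola-Jones cascade, evaluates thousands of such features at many scales and positions, keeping only those selected during training.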

Facial Point Location using point distribution model
With the help of the facial point features, a point distribution model (PDM) is created, which improves the feature training process. The developed PDM uses only frontal face information, which helps recognize emotions effectively while meeting all the PDM requirements. After normalization, the facial points of a face form a vector of sixteen landmark coordinates:
z = (x1, y1, x2, y2, ……, x16, y16)ᵀ …………………………………………. (1)
According to the normalization process, the examined facial expressions are grouped into a single matrix as follows.
Z = (z1, z2, ……, zn) …………………………………………. (2)
In eqn (2), n is the total number of facial expression samples. The covariance matrix is then calculated as
S = (1/n) Σᵢ (zᵢ − z̄)(zᵢ − z̄)ᵀ, where z̄ = (1/n) Σᵢ zᵢ …………………………………………. (3)
The estimated covariance matrix is factorized using the singular value decomposition (SVD), represented as
S = U Λ Uᵀ …………………………………………. (4)
where the columns of U are the principal modes of shape variation and the diagonal of Λ holds the corresponding singular values.
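The shape-matrix, covariance, and SVD steps above can be sketched in NumPy. The landmark data here is synthetic (a base shape plus random deformation) and the 16-point shape size follows eqn (1):

```python
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_points = 50, 16          # assumed: 16 facial points per shape
base = rng.standard_normal(2 * n_points)

# Each row is one shape vector z_i = (x1, y1, ..., x16, y16).
Z = base + 0.1 * rng.standard_normal((n_shapes, 2 * n_points))

z_mean = Z.mean(axis=0)
# Covariance matrix S = (1/n) * sum_i (z_i - z_mean)(z_i - z_mean)^T
S = (Z - z_mean).T @ (Z - z_mean) / n_shapes

# SVD of the covariance matrix yields the principal deformation modes.
U, singular_values, _ = np.linalg.svd(S)

# A new shape is approximated as z ≈ z_mean + U[:, :k] @ b for small k.
k = 5
b = U[:, :k].T @ (Z[0] - z_mean)
reconstruction = z_mean + U[:, :k] @ b
print(np.abs(reconstruction - Z[0]).max())  # error from keeping only k modes
```

Keeping only the first few columns of U gives a compact, low-dimensional parameterization of plausible face shapes, which is the point of the PDM.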

FEATURE EXTRACTION
Feature extraction is performed using the local binary pattern (LBP) and a progression-invariant subspace learning method. First, the local binary pattern operator is applied to the image: it analyzes every pixel and assigns each one a binary code computed from its neighborhood. Finally, the features of the eyes, eyebrows, nose, and mouth are extracted for the purpose of classifying the human expression.
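The basic 3x3 LBP operator described above can be sketched as follows (an illustrative implementation; the helper names are our own, and real systems typically histogram the codes over facial regions):

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP code for a 3x3 patch: compare the 8 neighbours to the
    centre pixel, reading clockwise from the top-left, and pack the
    resulting bits into one byte."""
    center = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]   # clockwise neighbour order
    code = 0
    for bit, (r, c) in enumerate(order):
        if patch[r, c] >= center:              # neighbour >= centre sets the bit
            code |= 1 << bit
    return code

def lbp_image(img):
    """LBP codes for every interior pixel of a grayscale image."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r - 1, c - 1] = lbp_code(img[r - 1:r + 2, c - 1:c + 2])
    return out

# The histogram of codes over a region is the usual LBP texture descriptor.
img = np.arange(25, dtype=float).reshape(5, 5)
hist = np.bincount(lbp_image(img).ravel(), minlength=256)
```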

Feature Extraction using Geometric Features
Three facial components, the eyebrows, eyes, and mouth, are considered while deriving the features. Eight facial points are extracted from these components, listed as follows:
P = (p1, p2, p3, p4, p5, p6, p7, p8) …………………………………………. (6)
In eqn (6), p1 and p2 are the facial points derived from the eyebrows, p3 and p4 are extracted from the eye corners, and p5, p6, p7, and p8 are the mouth-related facial points. These facial points help extract the features of the six basic emotions from the CK+ database. The extracted feature information is shown in figure 4.
Along with the statistical measures, geometric features are extracted from the detected face image. During this process, scale-invariant features are evaluated and normalized based on a distance measure, where the distances between mirrored points on the right and left sides of the face are used. The extracted feature set forms a vector of length 22, which is normalized as follows.
f̂ᵢ = (fᵢ − μ)/σ, i = 1, ……, 22 …………………………………………. (7)
In eqn (7), fᵢ is a feature value and f̂ᵢ is the normalized feature; μ is the mean of the feature vector and σ is its standard deviation. The extracted facial point regions are shown in figure 4. After the facial points are derived, their locations are detected by the point distribution model described above.
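The z-score normalization of eqn (7) can be sketched as follows, with a short toy vector standing in for the 22 geometric distances:

```python
import numpy as np

def normalize_features(f):
    """Eqn (7): z-score the feature vector, f_hat_i = (f_i - mu) / sigma."""
    mu = f.mean()
    sigma = f.std()
    return (f - mu) / sigma

f = np.array([3.0, 7.0, 5.0, 9.0, 1.0])  # toy stand-in for the 22 distances
f_hat = normalize_features(f)
print(f_hat.mean(), f_hat.std())  # the result has zero mean and unit variance
```

Normalizing each feature this way keeps large-valued distances from dominating the SVM's decision function.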

EXPRESSION CLASSIFICATION
The computed training samples are used to recognize emotions with a support vector machine classifier.

Support Vector Machine
The support vector machine is a binary classifier that assigns labels y ∈ {+1, −1}. The decision is made as follows:
f(x) = sign(w · x + b) …………………………………………. (8)
In eqn (8), x is an input feature vector from the training set, and w · x + b = 0 is the hyperplane that separates the input features.
w is the weight vector and b is the bias. The margin between the two supporting hyperplanes w · x + b = ±1 equals 2/‖w‖, so training minimizes ‖w‖, which optimizes the classification process.
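As an illustration of the decision rule in eqn (8), the sketch below trains a linear SVM by sub-gradient descent on the regularized hinge loss. This is a simplified stand-in for a full SVM solver; the data and hyperparameters are toy values:

```python
import numpy as np

def train_linear_svm(X, y, lr=0.1, lam=0.01, epochs=200):
    """Linear SVM via sub-gradient descent on the hinge loss.
    Labels y must lie in {-1, +1}; returns (w, b) for f(x) = sign(w.x + b)."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            if y[i] * (X[i] @ w + b) < 1:      # margin violated: push the plane
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                               # only regularization shrinks w
                w -= lr * lam * w
    return w, b

# Linearly separable toy data in place of the extracted facial features.
X = np.array([[2.0, 2.0], [3.0, 1.0], [-2.0, -2.0], [-1.0, -3.0]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)      # eqn (8): matches y on this separable set
```

Multi-class emotion recognition is usually handled by combining such binary classifiers in a one-vs-one or one-vs-rest scheme.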

PERFORMANCE EVALUATION
The Proposed work utilize the Extended Cohn-Kanade Dataset (CK+) and Binghamton University 3D Facial Expression Database(BU-4DFE) images for recognizing the six basic emotions with one neutral emotion. According to the above discussions, the constructed confusion matrix of both classifier on CK+ database is shown in following table 1 . According to the above table,the dataset consists of 593 sequences in which it has been captured from 123 subjects. The data base includes, 18 contempt, 59 disgust, 28 sadness, 83 surprise image, 69 happiness images and 45 anger images. With the help of the images, 68 facial points are detected from the image and the confusion matrix has been generated according to the relation between the facial points. Then the classifier ensures high accuracy for different classifiers which is shown in figure 5.