Schoneveld, Liam, Othmani, Alice.
2021.
Towards a General Deep Feature Extractor for Facial Expression Recognition. 2021 IEEE International Conference on Image Processing (ICIP). :2339–2342.
The human face conveys a significant amount of information. Through facial expressions, the face can communicate numerous sentiments without the need for verbalisation. Visual emotion recognition has been extensively studied, and recently several end-to-end trained deep neural networks have been proposed for this task. However, such models often lack the ability to generalise across datasets. In this paper, we propose the Deep Facial Expression Vector ExtractoR (DeepFEVER), a new deep learning-based approach that learns a visual feature extractor general enough to be applied to any other facial emotion recognition task or dataset. DeepFEVER outperforms state-of-the-art results on the AffectNet and Google Facial Expression Comparison datasets. DeepFEVER's extracted features also generalise extremely well to datasets unseen during training, namely the Real-World Affective Faces (RAF) dataset.
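As a hedged illustration of the pattern DeepFEVER exemplifies, rather than the authors' actual architecture, the sketch below freezes a pretrained torchvision backbone as a general feature extractor and trains only a small per-dataset classifier on top; the choice of resnet18 and a 7-class head are assumptions for the example.

```python
# Sketch: a frozen pretrained backbone as a general FER feature extractor.
# The backbone is a torchvision ResNet stand-in, not DeepFEVER itself.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()            # expose the 512-d penultimate features
backbone.eval()
for p in backbone.parameters():        # freeze: the features are reused as-is
    p.requires_grad = False

classifier = nn.Linear(512, 7)         # assumed 7 expression classes

def extract_and_classify(images: torch.Tensor) -> torch.Tensor:
    """images: (N, 3, 224, 224) normalised batch -> (N, 7) logits."""
    with torch.no_grad():
        feats = backbone(images)       # general, dataset-agnostic features
    return classifier(feats)           # only this layer is dataset-specific
```

Only the classifier would be trained per downstream dataset, which is what makes the extracted features transferable in the way the abstract describes.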
Bae, Jin Hee, Kim, Minwoo, Lim, Joon S.
2021.
Emotion Detection and Analysis from Facial Image using Distance between Coordinates Feature. 2021 International Conference on Information and Communication Technology Convergence (ICTC). :494–497.
Facial expression recognition has long been established as a subject of continuous research in various fields. In this study, features were extracted by calculating the distances between facial landmarks in an image. These extracted features, which capture the relationships between landmarks, were analyzed and used to classify five facial expressions. We increased the reliability of the data and labels by having multiple observers perform the labeling. Additionally, faces were detected in the original data, and landmark coordinates were extracted and used as features. A genetic algorithm was used to select the features that were most helpful for classification. Classification and analysis using the proposed method demonstrated its validity and effectiveness.
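A minimal sketch of the distance-between-coordinates feature, assuming a 68-point landmark set (e.g., dlib's); the paper's exact landmark detector and its genetic-algorithm feature selection are not reproduced here.

```python
# Sketch: pairwise Euclidean distances between facial landmarks as features.
import numpy as np
from itertools import combinations

def landmark_distance_features(landmarks: np.ndarray) -> np.ndarray:
    """landmarks: (68, 2) array of (x, y) coordinates for one face.
    Returns all 68*67/2 = 2278 pairwise distances as a feature vector;
    a genetic algorithm would then select the most useful subset."""
    pairs = combinations(range(len(landmarks)), 2)
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in pairs])
```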
Liu, Weida, Fang, Jian.
2021.
Facial Expression Recognition Method Based on Cascade Convolution Neural Network. 2021 International Wireless Communications and Mobile Computing (IWCMC). :1012–1015.
Research on convolutional neural networks for facial expression recognition often ignores the internal relevance of key processing stages, which leads to low recognition accuracy and speed that cannot meet recognition requirements. To address this, a serial cascade convolutional neural network model for expression recognition is constructed for an educational robot, balancing the accuracy, speed, and stability of the algorithm and enabling the robot to recognize multiple students' facial expressions simultaneously, quickly, and accurately while in motion. Using the CK+ and Oulu-CASIA expression recognition databases, the algorithm is compared with the commonly used STM-ExpLet and FN2EN cascade network algorithms. The results show that the accuracy of the proposed expression recognition method exceeds 90%, a significant improvement over the other two commonly used cascade convolutional neural network methods.
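The sketch below illustrates the generic two-stage cascade idea (detect every face in a frame, then classify each crop), not the authors' specific model; detect_faces is an assumed first-stage detector and ExpressionNet is a placeholder classifier.

```python
# Sketch: cascaded recognition so several faces in one frame are handled.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ExpressionNet(nn.Module):
    """Placeholder second-stage classifier for 48x48 grayscale crops."""
    def __init__(self, n_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Linear(32 * 12 * 12, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def cascade_recognise(frame: torch.Tensor, detect_faces, expr_net) -> list:
    """frame: (H, W) grayscale tensor; detect_faces: assumed detector
    returning (x, y, w, h) boxes. Each crop is resized and classified."""
    labels = []
    for (x, y, w, h) in detect_faces(frame):
        crop = frame[y:y + h, x:x + w][None, None]      # -> (1, 1, h, w)
        crop = F.interpolate(crop, size=(48, 48), mode="bilinear")
        labels.append(expr_net(crop).argmax(dim=1).item())
    return labels
```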
Wang, Caixia, Wang, Zhihui, Cui, Dong.
2021.
Facial Expression Recognition with Attention Mechanism. 2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI). :1–6.
With the development of artificial intelligence, deep learning has greatly improved facial expression recognition (FER) performance, but there is still considerable room for improvement in using attention to focus the network on key parts of the face. For facial expression recognition, this paper designs a network model that first transforms the input image with a spatial transformer network and then adds channel attention and spatial attention to the convolutional network. In addition, the GELU activation function is used in the convolutional network, which improves the facial expression recognition rate to a certain extent.
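A compact sketch of the channel-plus-spatial attention pattern the paper adds to its convolutional network; the reduction ratio, kernel size, and the use of GELU inside the channel MLP are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: CBAM-style channel and spatial attention modules.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.GELU(),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):                         # x: (N, C, H, W)
        avg = self.mlp(x.mean(dim=(2, 3)))        # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))         # global max pooling
        w = torch.sigmoid(avg + mx)[..., None, None]
        return x * w                              # reweight channels

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                         # pool across channels
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(pooled))   # reweight locations
```

Applied in sequence after a convolutional block, these two modules steer the network toward the key facial regions the abstract mentions.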
Fallah, Zahra, Ebrahimpour-Komleh, Hossein, Mousavirad, Seyed Jalaleddin.
2021.
A Novel Hybrid Pyramid Texture-Based Facial Expression Recognition. 2021 5th International Conference on Pattern Recognition and Image Analysis (IPRIA). :1–6.
Automated analysis of facial expressions is one of the most interesting and challenging problems in many areas, such as human-computer interaction. Facial images are affected by many factors, such as intensity, pose, and facial expression, which make facial expression recognition a challenging problem. The aim of this paper is to propose a new method based on the pyramid local binary pattern (PLBP) and the pyramid local phase quantization (PLPQ), which extend the local binary pattern (LBP) and the local phase quantization (LPQ), two methods for extracting texture features. The LBP operator extracts LBP features in the spatial domain, and the LPQ operator extracts LPQ features in the frequency domain; combining features from the spatial and frequency domains can provide important information from both. In this paper, the PLBP and PLPQ operators are used separately to extract features, which are then combined to create a new feature vector. The advantage of the pyramid transform domain is that facial expressions can be recognized efficiently and with high accuracy, even for very low-resolution facial images. The proposed method is verified on the CK+ facial expression database, achieving a recognition rate of 99.85%.
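A minimal sketch of the PLBP half of the method, assuming scikit-image; PLPQ would follow the same pyramid-and-concatenate pattern with an LPQ descriptor instead, and all parameters here are illustrative.

```python
# Sketch: uniform-LBP histograms over a Gaussian pyramid, concatenated.
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.transform import pyramid_gaussian

def plbp_features(gray: np.ndarray, levels: int = 3,
                  P: int = 8, R: int = 1) -> np.ndarray:
    """gray: 2-D grayscale image in [0, 1]. Returns the concatenated
    uniform-LBP histograms from each pyramid level."""
    feats = []
    for img in pyramid_gaussian(gray, max_layer=levels - 1):
        lbp = local_binary_pattern(img, P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2),
                               density=True)
        feats.append(hist)
    return np.concatenate(feats)     # PLPQ vector would be appended to this
```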
Arabian, H., Wagner-Hartl, V., Geoffrey Chase, J., Möller, K.
2021.
Facial Emotion Recognition Focused on Descriptive Region Segmentation. 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). :3415–3418.
Facial emotion recognition (FER) is useful in many different applications and could offer significant benefit as part of feedback systems to train children with Autism Spectrum Disorder (ASD), who struggle to recognize facial expressions and emotions. This project explores the potential of real-time FER based on local regions of interest combined with a machine learning approach. Histogram of Oriented Gradients (HOG) was implemented for feature extraction, along with three different classifiers: two based on k-Nearest Neighbors and one using Support Vector Machine (SVM) classification. Model performance was compared using the accuracy on randomly selected validation sets after training on random training sets from the Oulu-CASIA database. Image classes were distributed evenly, and accuracies of up to 98.44% were observed, with small variation depending on the data distribution. The region selection methodology provided a compromise between accuracy and the number of extracted features, and validated the hypothesis that focusing on smaller informative regions performs just as well as using the entire image.
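As a hedged illustration of the HOG-plus-SVM variant, the sketch below computes HOG per facial region and concatenates the descriptors; the region list, HOG parameters, and SVM kernel are placeholders, not the paper's exact region-segmentation scheme.

```python
# Sketch: region-wise HOG features feeding an SVM classifier.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def region_hog_features(face_regions: list) -> np.ndarray:
    """face_regions: list of 2-D grayscale crops (e.g., eyes, mouth).
    HOG is computed per region and concatenated, so features come only
    from the informative regions rather than the whole image."""
    return np.concatenate([
        hog(region, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2))
        for region in face_regions])

# Training then reduces to, e.g.:
#   clf = SVC(kernel="rbf").fit(X_train, y_train)
# where each row of X_train is region_hog_features(...) for one face.
```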
Hu, Zhibin, Yan, Chunman.
2021.
Lightweight Multi-Scale Network with Attention for Facial Expression Recognition. 2021 4th International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE). :695–698.
To address the problems of traditional convolutional neural networks (CNNs), such as excessive parameters, single-scale features, and inefficiency caused by useless features, a lightweight multi-scale network with attention is proposed for facial expression recognition. The network uses the lightweight CNN model Xception combined with the convolutional block attention module (CBAM) to learn key facial features. In addition, depthwise separable convolution modules with convolution kernels of 3 × 3, 5 × 5 and 7 × 7 are used to extract features from the facial expression image, and these features are fused to expand the receptive field and obtain richer facial feature information. Experiments on the facial expression datasets Fer2013 and KDEF show that expression recognition accuracy improves by 2.14% and 2.18%, respectively, over the original Xception model, further verifying the effectiveness of our methods.
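The multi-scale depthwise separable branch described above might look like the sketch below, with parallel 3 × 3, 5 × 5 and 7 × 7 depthwise convolutions fused by a 1 × 1 convolution; channel counts and normalisation choices are illustrative.

```python
# Sketch: multi-scale depthwise separable convolutions with feature fusion.
import torch
import torch.nn as nn

class MultiScaleDWBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # depthwise: one filter per channel at this spatial scale
                nn.Conv2d(channels, channels, k, padding=k // 2,
                          groups=channels),
                # pointwise 1x1 mixes channels (the "separable" half)
                nn.Conv2d(channels, channels, 1),
                nn.BatchNorm2d(channels), nn.ReLU())
            for k in (3, 5, 7)])
        self.fuse = nn.Conv2d(3 * channels, channels, 1)   # fuse the scales

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```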
Sun, Lanxin, Dai, JunBo, Shen, Xunbing.
2021.
Facial emotion recognition based on LDA and Facial Landmark Detection. 2021 2nd International Conference on Artificial Intelligence and Education (ICAIE). :64–67.
In human-computer interaction, emotion recognition refers to giving the computer the perceptual ability to predict a person's emotional state by observing their expressions and behavior, so that computers can communicate with humans emotionally. The main work of this paper is to extract facial image features using Linear Discriminant Analysis (LDA) and Facial Landmark Detection after grayscale processing and cropping, and then to compare the classification accuracy of the two feature extraction methods to determine which is more effective. The test results show that emotion recognition accuracy on face images reaches 73.9% with the LDA method and 84.5% with the Facial Landmark Detection method. Therefore, facial landmarks can be used to identify emotion in face images more accurately.
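A minimal sketch of the LDA branch of the comparison, assuming scikit-learn: grayscale faces are flattened and projected onto the supervised discriminant axes. The landmark-detection branch and dataset handling are omitted.

```python
# Sketch: LDA features from flattened, preprocessed grayscale face images.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lda_emotion_model(images: np.ndarray, labels: np.ndarray):
    """images: (N, H, W) grayscale faces after cropping; labels: (N,).
    Returns the fitted model and (N, n_classes - 1) projections."""
    X = images.reshape(len(images), -1)       # flatten each face
    lda = LinearDiscriminantAnalysis()
    return lda, lda.fit_transform(X, labels)  # supervised projection

# lda.predict(new_faces.reshape(len(new_faces), -1)) then classifies
# unseen faces, giving the accuracy figure compared above.
```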
Cao, HongYuan, Qi, Chao.
2021.
Facial Expression Study Based on 3D Facial Emotion Recognition. 2021 20th International Conference on Ubiquitous Computing and Communications (IUCC/CIT/DSCI/SmartCNS). :375–381.
Teaching evaluation is an indispensable part of the modern education model. Its purpose is to promote learners' cognitive and non-cognitive development, especially emotional development. However, today's education increasingly neglects the emotional side of the learning process, so a method of using machines to analyze learners' emotional changes during learning has been proposed. At present, most existing emotion recognition algorithms extract two-dimensional facial features from images to perform emotion prediction. Since research shows that the recognition rate of 2D facial feature extraction is not optimal, this paper proposes an effective algorithm that takes a single two-dimensional image as input and constructs a three-dimensional face model as output, using the 3D facial information to estimate continuous emotion in a dimensional space; the method is applied to an online learning system. Experimental results show that the algorithm has strong robustness and recognition ability.
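Continuous emotion estimation in a dimensional space is commonly realised as valence-arousal regression; the schematic head below makes that final step concrete over assumed 3D-derived face features, and is not the paper's reconstruction pipeline, which is not reproduced here.

```python
# Sketch: regressing (valence, arousal) from 3D-model-derived features.
import torch
import torch.nn as nn

class DimensionalEmotionHead(nn.Module):
    """Maps a face-feature vector (e.g., pooled from a 3D face encoder,
    an assumption here) to (valence, arousal) in [-1, 1]."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2), nn.Tanh())

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.mlp(feats)    # column 0: valence, column 1: arousal
```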
Siyaka, Hassan Opotu, Owolabi, Olumide, Bisallah, I. Hashim.
2021.
A New Facial Image Deviation Estimation and Image Selection Algorithm (Fide-Isa) for Facial Image Recognition Systems: The Mathematical Models. 2021 1st International Conference on Multidisciplinary Engineering and Applied Science (ICMEAS). :1–7.
Deep learning models have been successful in facial recognition applications and shown to perform better in terms of accuracy and efficiency. However, they require huge amounts of well-annotated data samples to be successful. These data requirements lead to complications, including increased processing demands on the systems where such models are deployed. Reducing the training sample sizes of deep learning models is still an open problem. This paper proposes reducing the number of samples required by the convolutional neural network used in training a facial recognition system, using a new Facial Image Deviation Estimation and Image Selection Algorithm (FIDE-ISA). The algorithm selects appropriate facial image training samples incrementally based on their facial deviation, reducing the need for a huge dataset when training deep learning models. Preliminary results indicated 100% accuracy for models trained with 54 images (at least 3 images per individual) or more.
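The abstract does not spell out the FIDE-ISA mathematical models, so the following is a loudly hypothetical reading of deviation-based incremental selection: for each individual, keep the few images whose embeddings deviate most from that individual's mean, growing a small but varied training set.

```python
# Hypothetical sketch only: deviation-based image selection per individual.
import numpy as np

def select_by_deviation(embeddings: np.ndarray, k: int = 3) -> np.ndarray:
    """embeddings: (N, D) feature vectors for one individual's images.
    Returns indices of the k images deviating most from the mean face,
    i.e., the most varied samples to retain for training (k=3 mirrors the
    'at least 3 images per individual' figure above)."""
    deviation = np.linalg.norm(embeddings - embeddings.mean(axis=0), axis=1)
    return np.argsort(deviation)[-k:]
```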