Biblio

Filters: Keyword is emotion recognition
2023-07-21
R, Sowmiya, G, Sivakamasundari, V, Archana.  2022.  Facial Emotion Recognition using Deep Learning Approach. 2022 International Conference on Automation, Computing and Renewable Systems (ICACRS). :1064—1069.
Human facial emotion recognition has a variety of applications in society. The basic idea of facial emotion recognition is to map different facial expressions to a range of emotional states. Conventional facial emotion recognition consists of two processes: feature extraction and feature selection. Nowadays, Convolutional Neural Networks are the primary deep learning approach in facial emotion recognition because they extract hidden features from images. The standard Convolutional Neural Network uses a simple learning algorithm with a finite number of feature extraction layers. A drawback of earlier approaches is that they were validated only on frontal views, even though images may be captured from different angles. This research work uses a deep Convolutional Neural Network with DenseNet-169 as a backbone network for recognizing facial emotions. The Emotion Recognition dataset was used, and the emotions were recognized with an accuracy of 96%.
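The backbone arrangement this abstract describes can be illustrated in a few lines with torchvision. The sketch below is a minimal example under stated assumptions (a 7-class emotion label set, standard ImageNet pre-trained weights), not the authors' exact configuration.

```python
# Minimal sketch: DenseNet-169 as a backbone for facial emotion
# recognition (assumes 7 emotion classes; not the paper's exact setup).
import torch
import torch.nn as nn
from torchvision import models

class FERDenseNet(nn.Module):
    def __init__(self, num_emotions=7):
        super().__init__()
        # Pre-trained DenseNet-169 supplies the hidden feature-extraction
        # layers; only the classifier head is replaced.
        self.backbone = models.densenet169(weights="IMAGENET1K_V1")
        in_features = self.backbone.classifier.in_features
        self.backbone.classifier = nn.Linear(in_features, num_emotions)

    def forward(self, x):
        return self.backbone(x)

model = FERDenseNet()
logits = model(torch.randn(1, 3, 224, 224))  # one RGB face crop
print(logits.shape)  # torch.Size([1, 7])
```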
Giri, Sarwesh, Singh, Gurchetan, Kumar, Babul, Singh, Mehakpreet, Vashisht, Deepanker, Sharma, Sonu, Jain, Prince.  2022.  Emotion Detection with Facial Feature Recognition Using CNN & OpenCV. 2022 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE). :230—232.
Emotion detection through facial feature recognition is an active domain of research in the field of human-computer interaction (HCI). Humans share multiple emotions and feelings through their facial gestures and body language. In this project, in order to detect live emotions from human facial gestures, we use an algorithm that allows the computer to automatically recognize human emotions with the help of a Convolutional Neural Network (CNN) and OpenCV. Ultimately, emotion detection is an integration of information obtained from multiple patterns. If computers are able to understand more of human emotions, the gap between humans and computers will narrow. In this research paper, we demonstrate an effective way to detect emotions such as neutral, happy, sad, surprise, angry, fear, and disgust from the frontal facial expression of a human in front of a live webcam.
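A live CNN-plus-OpenCV pipeline like the one described here typically looks like the following sketch. It assumes a Haar cascade for face detection and a hypothetical pre-trained Keras model file "fer_model.h5" that maps 48x48 grayscale faces to seven class scores; the paper's actual model is not reproduced.

```python
# Sketch of a live webcam emotion-detection loop with OpenCV.
# "fer_model.h5" is a hypothetical placeholder for a trained CNN.
import cv2
import numpy as np
from tensorflow.keras.models import load_model

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("fer_model.h5")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```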
Udeh, Chinonso Paschal, Chen, Luefeng, Du, Sheng, Li, Min, Wu, Min.  2022.  A Co-regularization Facial Emotion Recognition Based on Multi-Task Facial Action Unit Recognition. 2022 41st Chinese Control Conference (CCC). :6806—6810.
Facial emotion recognition feeds the growth of future artificial intelligence through the development of emotion recognition, learning, and analysis of different angles of the human face and head pose. The recent pandemic gave rise to the rapid deployment of facial recognition in a few applications, while emotion recognition remains within experimental boundaries. A current challenge encountered in facial emotion recognition (FER) is distinguishing faces from background noise. Robotics will soon play a significant role in human perception, attention, memory, decision-making, and human-robot interaction (HRI). This work merges head pose with FER to boost the robustness of emotion understanding using convolutional neural networks (CNNs). Stochastic gradient descent is adopted with a comprehensive model by applying multi-task learning, which is capable of implicit parallelism and serves as an inherently better global optimizer for finding network weights. After training the multi-task learning model on two independent datasets, the FER and head-pose learning experiments were merged in a multi-view co-regularization framework and evaluated on validation accuracy.
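The shared-trunk multi-task arrangement implied here (emotion classification plus head pose, trained with SGD) can be sketched as below. The layer sizes, loss weighting, and pose parameterization (yaw/pitch/roll) are illustrative assumptions, not the paper's architecture.

```python
# Sketch of a multi-task CNN: one shared trunk, an emotion head and a
# head-pose regression head, jointly trained with SGD.
import torch
import torch.nn as nn

class MultiTaskFER(nn.Module):
    def __init__(self, num_emotions=7):
        super().__init__()
        self.trunk = nn.Sequential(          # shared feature extractor
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(64 * 12 * 12, 256), nn.ReLU())
        self.emotion_head = nn.Linear(256, num_emotions)
        self.pose_head = nn.Linear(256, 3)   # yaw, pitch, roll

    def forward(self, x):
        h = self.trunk(x)
        return self.emotion_head(h), self.pose_head(h)

model = MultiTaskFER()
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
x = torch.randn(8, 1, 48, 48)
emo_y, pose_y = torch.randint(0, 7, (8,)), torch.randn(8, 3)
emo_logits, pose_pred = model(x)
# A joint loss couples the two tasks, so the shared trunk is co-regularized.
loss = nn.CrossEntropyLoss()(emo_logits, emo_y) + \
       0.5 * nn.MSELoss()(pose_pred, pose_y)
loss.backward(); opt.step()
```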
Shiomi, Takanori, Nomiya, Hiroki, Hochin, Teruhisa.  2022.  Facial Expression Intensity Estimation Considering Change Characteristic of Facial Feature Values for Each Facial Expression. 2022 23rd ACIS International Summer Virtual Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD-Summer). :15—21.
Facial expression intensity, which quantifies the degree of a facial expression, has been proposed. It is calculated based on how much facial feature values change compared to an expressionless face. The estimation has two aspects: classifying the facial expression and estimating its intensity. However, it is difficult to do both at the same time. Therefore, in this work, intensity estimation and expression classification are separated. We suggest an explicit method and an implicit method. In the explicit method, a classifier determines which type of expression the input is, and a per-expression regressor determines its intensity. In the implicit method, the ground-truth intensity given to each expression-specific regressor is zero or non-zero, depending on whether the input image shows that facial expression. We evaluated the two methods and found that they are effective for facial expression recognition.
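The explicit two-stage scheme can be made concrete with off-the-shelf scikit-learn estimators. In this sketch the feature vectors stand in for facial-feature changes relative to a neutral face, and the classifier and regressor choices are assumptions for illustration.

```python
# Sketch of the "explicit" scheme: a classifier picks the expression
# type, then an expression-specific regressor estimates its intensity.
# All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))            # per-frame feature changes
expr = rng.integers(0, 3, size=300)       # 3 expression types
intensity = rng.uniform(0, 1, size=300)   # ground-truth intensity

clf = RandomForestClassifier().fit(X, expr)
regressors = {e: SVR().fit(X[expr == e], intensity[expr == e])
              for e in np.unique(expr)}

x_new = X[:1]
e_pred = int(clf.predict(x_new)[0])            # stage 1: which expression
i_pred = regressors[e_pred].predict(x_new)[0]  # stage 2: its intensity
print(e_pred, round(i_pred, 3))
```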
Lee, Gwo-Chuan, Li, Zi-Yang, Li, Tsai-Wei.  2022.  Ensemble Algorithm of Convolution Neural Networks for Enhancing Facial Expression Recognition. 2022 IEEE 5th International Conference on Knowledge Innovation and Invention (ICKII). :111—115.
Artificial intelligence (AI) cooperates with multiple industries to improve the overall industry framework. In particular, human emotion recognition plays an indispensable role in supporting medical care, psychological counseling, crime prevention and detection, and crime investigation. Research on emotion recognition includes emotion-specific intonation patterns, literal expressions of emotions, and facial expressions. Recently, deep learning models for facial emotion recognition have aimed to capture tiny changes in facial muscles to provide greater recognition accuracy. Hybrid models for facial expression recognition have been constantly proposed in recent years to improve the performance of deep learning models. In this study, we propose an ensemble learning algorithm to improve the accuracy of a facial emotion recognition model built on three deep learning models: VGG16, InceptionResNetV2, and EfficientNetB0. To enhance the performance of these benchmark models, we applied transfer learning, fine-tuning, and data augmentation for training and validation on the Facial Expression Recognition 2013 (FER-2013) dataset. The developed algorithm finds the best predicted value by prioritizing InceptionResNetV2. The experimental results show that the proposed priority-based ensemble learning algorithm improves model identification accuracy by 2.81%. Future extensions of this study venture into the Internet of Things (IoT), medical care, and crime detection and prevention.
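Since the abstract only says the ensemble "prioritizes" InceptionResNetV2, the decision rule below is one plausible reading, shown purely for illustration: majority vote when two models agree, with the prioritized network breaking three-way disagreements.

```python
# Sketch of a priority ensemble over three per-model probability
# outputs. The tie-breaking rule is an illustrative assumption, not
# the paper's exact algorithm.
import numpy as np

def priority_ensemble(p_vgg16, p_irnv2, p_effb0):
    votes = [int(np.argmax(p)) for p in (p_vgg16, p_irnv2, p_effb0)]
    # If any two models agree, take the majority class...
    for c in set(votes):
        if votes.count(c) >= 2:
            return c
    # ...otherwise fall back to the prioritized InceptionResNetV2.
    return int(np.argmax(p_irnv2))

p1 = np.array([0.6, 0.3, 0.1])   # VGG16
p2 = np.array([0.2, 0.7, 0.1])   # InceptionResNetV2 (prioritized)
p3 = np.array([0.1, 0.2, 0.7])   # EfficientNetB0
print(priority_ensemble(p1, p2, p3))  # no majority -> class 1
```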
Churaev, Egor, Savchenko, Andrey V..  2022.  Multi-user facial emotion recognition in video based on user-dependent neural network adaptation. 2022 VIII International Conference on Information Technology and Nanotechnology (ITNT). :1—5.
In this paper, multi-user video-based facial emotion recognition is examined in the presence of a small data set of end-user emotions. Borrowing the idea of speaker-dependent speech recognition, we propose a novel approach to this task when labeled video data from end users is available. During the training stage, a deep convolutional neural network is trained for user-independent emotion classification. Next, this classifier is adapted (fine-tuned) on the emotional video of a specific person. During the recognition stage, the user is identified based on face recognition techniques, and the emotional model of the recognized user is applied. It is experimentally shown that this approach improves the accuracy of emotion recognition by more than 20% on the RAVDESS dataset.
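The adaptation step, a user-independent model fine-tuned on a small per-user set, is sketched below. The tiny model, frozen-layer choice, and data shapes are assumptions for illustration; the paper's network is a deep CNN.

```python
# Sketch of user-dependent adaptation: freeze the shared layers of a
# user-independent classifier and fine-tune only the final layer on a
# small labeled set from one person (limits overfitting).
import torch
import torch.nn as nn

base = nn.Sequential(nn.Flatten(), nn.Linear(48 * 48, 128), nn.ReLU(),
                     nn.Linear(128, 7))          # user-independent model

for p in base[:3].parameters():                  # freeze shared layers
    p.requires_grad = False

opt = torch.optim.Adam(base[3].parameters(), lr=1e-3)
user_x = torch.randn(16, 1, 48, 48)              # labeled user frames
user_y = torch.randint(0, 7, (16,))
for _ in range(20):                               # brief fine-tuning
    opt.zero_grad()
    loss = nn.CrossEntropyLoss()(base(user_x), user_y)
    loss.backward(); opt.step()
```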
Avula, Himaja, R, Ranjith, S Pillai, Anju.  2022.  CNN based Recognition of Emotion and Speech from Gestures and Facial Expressions. 2022 6th International Conference on Electronics, Communication and Aerospace Technology. :1360—1365.
The major mode of communication between hearing-impaired or mute people and others is sign language. Previously, most sign language recognition systems were designed simply to recognize hand signs and convey them as text. The proposed model, however, tries to provide speech to the mute. First, hand gestures for sign language recognition and facial emotions are trained using a CNN (Convolutional Neural Network), followed by an emotion-to-speech model. Finally, hand gestures and facial emotions are combined to realize both emotion and speech.
Abbasi, Nida Itrat, Song, Siyang, Gunes, Hatice.  2022.  Statistical, Spectral and Graph Representations for Video-Based Facial Expression Recognition in Children. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :1725—1729.
Child facial expression recognition is a relatively under-investigated area within affective computing. Children's facial expressions differ significantly from those of adults; thus, it is necessary to develop emotion recognition frameworks that are more objective, descriptive, and specific to this target user group. In this paper we propose the first approach that (i) constructs a video-level heterogeneous graph representation for facial expression recognition in children, and (ii) predicts children's facial expressions using automatically detected Action Units (AUs). To this aim, we construct three separate length-independent representations at the video level, namely statistical, spectral, and graph, for detailed multi-level facial behaviour decoding (AU activation status, AU temporal dynamics, and spatio-temporal AU activation patterns, respectively). Our experimental results on the LIRIS Children Spontaneous Facial Expression Video Database demonstrate that combining these three feature representations provides the highest accuracy for expression recognition in children.
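A length-independent video representation of the "statistical" kind can be illustrated as fixed-size summary statistics over per-frame AU activations. The AU matrix below is synthetic and the specific statistics are assumptions; the paper's exact feature definitions are not reproduced.

```python
# Sketch of a length-independent "statistical" video representation:
# per-frame AU activations (frames x AUs) are summarized into a fixed
# vector regardless of video length. Values and statistics are illustrative.
import numpy as np

def statistical_representation(au_frames):
    # au_frames: (num_frames, num_aus) array of AU intensities
    stats = [au_frames.mean(axis=0),          # average activation
             au_frames.std(axis=0),           # variability over time
             au_frames.max(axis=0),           # peak activation
             (au_frames > 0.5).mean(axis=0)]  # fraction of active frames
    return np.concatenate(stats)

video_a = np.random.rand(120, 17)  # 120 frames, 17 AUs
video_b = np.random.rand(300, 17)  # different length, same output size
print(statistical_representation(video_a).shape,
      statistical_representation(video_b).shape)  # (68,) (68,)
```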
2023-02-17
Maehigashi, Akihiro.  2022.  The Nature of Trust in Communication Robots: Through Comparison with Trusts in Other People and AI systems. 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :900–903.
In this study, the nature of human trust in communication robots was experimentally investigated in comparison with trust in other people and artificial intelligence (AI) systems. The results of the experiment showed that trust in robots is basically similar to trust in AI systems in a calculation task, where a single solution can be obtained, and is partly similar to trust in other people in an emotion recognition task, where multiple interpretations can be acceptable. This study will contribute to designing smooth interaction between people and communication robots.
2023-01-05
Omman, Bini, Eldho, Shallet Mary T.  2022.  Speech Emotion Recognition Using Bagged Support Vector Machines. 2022 International Conference on Computing, Communication, Security and Intelligent Systems (IC3SIS). :1—4.
Speech emotion recognition is one of the most promising and exciting problems in the area of human-computer interaction, and it has been studied and analysed over several decades. It is the process of classifying or identifying emotions embedded in the speech signal. Current challenges in speech emotion recognition when a single estimator is used include the difficulty of building and training HMMs and neural networks, low detection accuracy, and high computational power and time. In this work we performed emotion classification on two corpora: the Berlin EmoDB and the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). A mixture of spectral features was extracted from them, which was further processed and reduced to the required feature set. Compared to single estimators, ensemble learning has been shown to provide superior overall performance. We propose a bagged ensemble of support vector machines with a Gaussian kernel as a suitable algorithm for the problem at hand. In this paper, ensemble learning algorithms constitute a dominant and state-of-the-art approach for obtaining maximum performance.
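Bagged SVMs with a Gaussian kernel map directly onto scikit-learn's BaggingClassifier wrapping an RBF-kernel SVC. The sketch below uses synthetic features standing in for the reduced spectral feature set, not EmoDB or RAVDESS audio.

```python
# Sketch of bagged Gaussian-kernel SVMs for emotion classification.
# Synthetic features replace the paper's spectral feature set.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=40, n_classes=4,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each of the 10 SVMs sees a bootstrap sample; votes are aggregated.
bag = BaggingClassifier(estimator=SVC(kernel="rbf", gamma="scale"),
                        n_estimators=10, random_state=0).fit(X_tr, y_tr)
print("bagged SVM accuracy:", round(bag.score(X_te, y_te), 3))
```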
2022-07-05
Schoneveld, Liam, Othmani, Alice.  2021.  Towards a General Deep Feature Extractor for Facial Expression Recognition. 2021 IEEE International Conference on Image Processing (ICIP). :2339—2342.
The human face conveys a significant amount of information. Through facial expressions, the face is able to communicate numerous sentiments without the need for verbalisation. Visual emotion recognition has been extensively studied. Recently several end-to-end trained deep neural networks have been proposed for this task. However, such models often lack generalisation ability across datasets. In this paper, we propose the Deep Facial Expression Vector ExtractoR (DeepFEVER), a new deep learning-based approach that learns a visual feature extractor general enough to be applied to any other facial emotion recognition task or dataset. DeepFEVER outperforms state-of-the-art results on the AffectNet and Google Facial Expression Comparison datasets. DeepFEVER’s extracted features also generalise extremely well to other datasets – even those unseen during training – namely, the Real-World Affective Faces (RAF) dataset.
Bae, Jin Hee, Kim, Minwoo, Lim, Joon S..  2021.  Emotion Detection and Analysis from Facial Image using Distance between Coordinates Feature. 2021 International Conference on Information and Communication Technology Convergence (ICTC). :494—497.
Facial expression recognition has long been established as a subject of continuous research in various fields. In this study, features were extracted by calculating the distances between facial landmarks in an image. The extracted features describing the relationships between landmarks were analysed and used to classify five facial expressions. We increased data and label reliability through labeling work with multiple observers. Additionally, faces were recognized in the original data, and landmark coordinates were extracted and used as features. A genetic algorithm was used to select the features that were most helpful for classification. We performed facial expression classification and analysis using the proposed method, which showed its validity and effectiveness.
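The distance feature itself is simple: all pairwise Euclidean distances between detected landmark coordinates. In this sketch the landmarks are random stand-ins for detector output (a 68-point model is assumed); a genetic algorithm would then select a subset of the resulting vector.

```python
# Sketch of the distance-between-landmarks feature: every pairwise
# Euclidean distance between landmark coordinates becomes one feature.
import numpy as np
from itertools import combinations

landmarks = np.random.rand(68, 2)          # (x, y) per landmark (stand-in)

def pairwise_distances(points):
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

features = pairwise_distances(landmarks)
print(features.shape)  # (2278,) = 68 choose 2; feature selection follows
```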
Arabian, H., Wagner-Hartl, V., Geoffrey Chase, J., Möller, K..  2021.  Facial Emotion Recognition Focused on Descriptive Region Segmentation. 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). :3415—3418.
Facial emotion recognition (FER) is useful in many different applications and could offer significant benefit as part of feedback systems to train children with Autism Spectrum Disorder (ASD), who struggle to recognize facial expressions and emotions. This project explores the potential of real-time FER based on the use of local regions of interest combined with a machine learning approach. Histogram of Oriented Gradients (HOG) was implemented for feature extraction, along with 3 different classifiers: 2 based on k-Nearest Neighbor and 1 using Support Vector Machine (SVM) classification. Model performance was compared using the accuracy of randomly selected validation sets after training on random training sets from the Oulu-CASIA database. Image classes were distributed evenly, and accuracies of up to 98.44% were observed, with small variation depending on data distributions. The region selection methodology provided a compromise between accuracy and the number of extracted features, and validated the hypothesis that a focus on smaller informative regions performs just as well as the entire image.
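The HOG-plus-classifier pipeline maps onto scikit-image and scikit-learn directly. The sketch below uses synthetic images standing in for cropped regions of interest, and the HOG parameters are common defaults rather than the paper's settings.

```python
# Sketch of HOG feature extraction on face regions followed by SVM and
# k-NN classification. Images and labels are synthetic stand-ins.
import numpy as np
from skimage.feature import hog
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = rng.random((60, 64, 64))          # 60 grayscale region crops
labels = rng.integers(0, 6, size=60)       # 6 expression classes

X = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2)) for img in images])

svm = SVC(kernel="linear").fit(X, labels)
knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(svm.predict(X[:1]), knn.predict(X[:1]))
```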
Sun, Lanxin, Dai, JunBo, Shen, Xunbing.  2021.  Facial emotion recognition based on LDA and Facial Landmark Detection. 2021 2nd International Conference on Artificial Intelligence and Education (ICAIE). :64—67.
Emotion recognition in the field of human-computer interaction refers to giving the computer the perceptual ability to predict the emotional state of human beings by observing their expressions and behaviors, so that computers can communicate emotionally with humans. The main research work of this paper is to extract facial image features using Linear Discriminant Analysis (LDA) and Facial Landmark Detection after grayscale processing and cropping, and then compare the accuracy of emotion recognition and classification to determine which feature extraction method is more effective. The test results show that the accuracy of emotion recognition in face images reaches 73.9% using the LDA method and 84.5% using the Facial Landmark Detection method. Therefore, facial landmarks can be used to identify emotion in face images more accurately.
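LDA as a supervised feature extractor projects pixel vectors onto at most (number of classes - 1) discriminant axes before classification. The sketch below uses synthetic data and an assumed 7-class label set; the downstream k-NN classifier is also an illustrative choice.

```python
# Sketch of LDA-based feature extraction for grayscale face images.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 48 * 48))        # flattened, cropped grayscale faces
y = rng.integers(0, 7, size=200)      # 7 emotion labels (assumed)

lda = LinearDiscriminantAnalysis(n_components=6)  # 7 classes -> 6 axes max
X_lda = lda.fit_transform(X, y)       # low-dimensional discriminant features
clf = KNeighborsClassifier().fit(X_lda, y)
print(X_lda.shape, clf.predict(X_lda[:1]))
```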
Cao, HongYuan, Qi, Chao.  2021.  Facial Expression Study Based on 3D Facial Emotion Recognition. 2021 20th International Conference on Ubiquitous Computing and Communications (IUCC/CIT/DSCI/SmartCNS). :375—381.
Teaching evaluation is an indispensable key link in the modern education model. Its purpose is to promote learners' cognitive and non-cognitive development, especially emotional development. However, today's education increasingly neglects the emotional process of learning. Therefore, a method of using machines to analyze the emotional changes of learners during learning has been proposed. At present, most existing emotion recognition algorithms extract two-dimensional facial features from images to perform emotion prediction. Research has found that the recognition rate of 2D facial feature extraction is not optimal, so this paper proposes an effective algorithm that takes a single two-dimensional image as input and constructs a three-dimensional face model as output, using the 3D facial information to estimate continuous emotion in a dimensional space; the method is applied to an online learning system. Experimental results show that the algorithm has strong robustness and recognition ability.
2021-03-29
Begaj, S., Topal, A. O., Ali, M..  2020.  Emotion Recognition Based on Facial Expressions Using Convolutional Neural Network (CNN). 2020 International Conference on Computing, Networking, Telecommunications Engineering Sciences Applications (CoNTESA). :58—63.

Over the last few years, there has been an increasing number of studies on facial emotion recognition because of its importance and impact in human-computer interaction. With the growing number of challenging datasets, the application of deep learning techniques has become necessary. In this paper, we study the challenges of emotion recognition datasets and try different parameters and architectures of Convolutional Neural Networks (CNNs) in order to detect seven emotions in human faces: anger, fear, disgust, contempt, happiness, sadness, and surprise. We have chosen iCV MEFED (Multi-Emotion Facial Expression Dataset) as the main dataset for our study, which is relatively new, interesting, and very challenging.

Singh, S., Nasoz, F..  2020.  Facial Expression Recognition with Convolutional Neural Networks. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC). :0324—0328.

Emotions are a powerful tool in communication, and one way that humans show their emotions is through facial expressions. Facial expression recognition is a challenging and powerful task in social communication, as facial expressions are key to non-verbal communication. In the field of Artificial Intelligence, Facial Expression Recognition (FER) is an active research area, with several recent studies using Convolutional Neural Networks (CNNs). In this paper, we demonstrate FER classification based on static images, using CNNs, without requiring any pre-processing or feature extraction tasks. The paper also illustrates techniques to improve accuracy in this area by using pre-processing, which includes face detection and illumination correction, and feature extraction to capture the most prominent parts of the face, including the jaw, mouth, eyes, nose, and eyebrows. Furthermore, we discuss the literature and present our CNN architecture, in which max-pooling and dropout eventually aided better performance. We obtained a test accuracy of 61.7% on FER2013 in a seven-class classification task, compared to 75.2% for state-of-the-art classification.
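A small CNN with the max-pooling and dropout layers this abstract discusses, sized for 48x48 grayscale FER2013 faces, could look like the sketch below. The layer counts and hyperparameters are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of a small FER2013-style CNN with max-pooling and dropout.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),                 # downsample, keep strong activations
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                   # regularize the dense layer
    layers.Dense(7, activation="softmax"), # seven expression classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```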

Pranav, E., Kamal, S., Chandran, C. Satheesh, Supriya, M. H..  2020.  Facial Emotion Recognition Using Deep Convolutional Neural Network. 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS). :317—320.

The rapid growth of artificial intelligence has contributed a lot to the technology world. As traditional algorithms failed to meet human needs in real time, machine learning and deep learning algorithms have gained great success in different applications such as classification systems, recommendation systems, and pattern recognition. Emotion plays a vital role in determining the thoughts, behaviour, and feelings of a human. An emotion recognition system can be built by utilizing the benefits of deep learning, and different applications such as feedback analysis and face unlocking can be implemented with good accuracy. The main focus of this work is to create a Deep Convolutional Neural Network (DCNN) model that classifies 5 different human facial emotions. The model is trained, tested, and validated using a manually collected image dataset.

John, A., MC, A., Ajayan, A. S., Sanoop, S., Kumar, V. R..  2020.  Real-Time Facial Emotion Recognition System With Improved Preprocessing and Feature Extraction. 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT). :1328—1333.

Human emotion recognition plays a vital role in interpersonal communication and in the human-machine interaction domain. Emotions are expressed through speech, hand gestures, movements of other body parts, and facial expressions. Facial emotions are one of the most important factors in human communication, helping us understand what the other person is trying to communicate. People understand only one-third of a message verbally; two-thirds of it is conveyed through non-verbal means. There are many facial emotion recognition (FER) systems available, but in real-life scenarios they do not perform efficiently, even though many claim to be near-perfect systems that achieve their results only under favourable and optimal conditions. The wide variety of expressions shown by people and the diversity of facial features across individuals make it difficult to come up with a definitive system. Hence, developing a reliable system without the flaws shown by existing systems is a challenging task. This paper aims to build an enhanced system that can analyse the exact facial expression of a user at a particular time and generate the corresponding emotion. Datasets like JAFFE and FER2013 were used for performance analysis. Pre-processing methods like facial landmarks and HOG were incorporated into a convolutional neural network (CNN), and this achieved good accuracy when compared with existing models.

Oğuz, K., Korkmaz, İ, Korkmaz, B., Akkaya, G., Alıcı, C., Kılıç, E..  2020.  Effect of Age and Gender on Facial Emotion Recognition. 2020 Innovations in Intelligent Systems and Applications Conference (ASYU). :1—6.

New research fields and applications in human-computer interaction will emerge based on the recognition of emotions on faces. With this aim, our study evaluates the features extracted from faces to recognize emotions. To increase the success rate of these features, we ran several tests to demonstrate how age and gender affect the results. Artificial neural networks were trained on the apparent regions of the face, such as the eyes, eyebrows, nose, mouth, and jawline, and the networks were then tested with different age and gender groups. According to the results, the faces of older people yield a lower emotion recognition rate. When age- and gender-based groups are created manually, we show that facial emotion recognition rates increase for networks trained on these particular groups.

Ozdemir, M. A., Elagoz, B., Soy, A. Alaybeyoglu, Akan, A..  2020.  Deep Learning Based Facial Emotion Recognition System. 2020 Medical Technologies Congress (TIPTEKNO). :1—4.

In this study, we aimed to recognize emotional states from facial images using a deep learning method. In the study, which was approved by the ethics committee, a custom data set was created using videos taken from 20 male and 20 female participants while simulating 7 different facial expressions (happy, sad, surprised, angry, disgusted, scared, and neutral). First, the videos were divided into image frames, and then face images were segmented from the frames using the Haar cascade library. The custom data set obtained after image preprocessing contains more than 25 thousand images. The proposed convolutional neural network (CNN) architecture, which mimics the LeNet architecture, was trained with this custom dataset. According to the experimental results, the training loss was 0.0115, the training accuracy was 99.62%, the validation loss was 0.0109, and the validation accuracy was 99.71%.
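The dataset-building step described here, extracting frames from video and segmenting faces with a Haar cascade, is sketched below. The file and directory names are hypothetical placeholders.

```python
# Sketch of frame extraction and Haar-cascade face segmentation for
# building a custom FER dataset. Paths are hypothetical.
import cv2
import os

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_faces(video_path, out_dir, size=(48, 48)):
    os.makedirs(out_dir, exist_ok=True)
    cap, idx = cv2.VideoCapture(video_path), 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
            face = cv2.resize(gray[y:y + h, x:x + w], size)
            cv2.imwrite(os.path.join(out_dir, f"face_{idx:05d}.png"), face)
            idx += 1
    cap.release()

extract_faces("participant01_happy.mp4", "dataset/happy")  # hypothetical file
```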

Jia, C., Li, C. L., Ying, Z..  2020.  Facial expression recognition based on the ensemble learning of CNNs. 2020 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC). :1—5.

As a part of body language, facial expression reflects the current emotional state of a person. Recognition of facial expressions can help us understand others and enhance communication. We propose a facial expression recognition method based on ensemble learning of convolutional neural networks in this paper. Our model is composed of three sub-networks and uses an SVM classifier to integrate the outputs of the three networks to get the final result. The model's expression recognition accuracy on the FER2013 dataset reached 71.27%. The results show that the method has high test accuracy and short prediction time, and can realize real-time, high-performance facial expression recognition.
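Using an SVM to integrate the sub-networks' outputs is a stacking arrangement: the three softmax vectors are concatenated and fed to an SVM meta-classifier. The probabilities below are synthetic stand-ins for the CNNs' outputs.

```python
# Sketch of SVM integration over three sub-networks' class probabilities.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n, classes = 500, 7
p1, p2, p3 = (rng.dirichlet(np.ones(classes), n) for _ in range(3))
y = rng.integers(0, classes, size=n)

X_meta = np.hstack([p1, p2, p3])        # (n, 21): stacked softmax outputs
svm = SVC(kernel="rbf").fit(X_meta, y)  # learns how to weigh the networks
print(svm.predict(X_meta[:3]))
```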

Xu, X., Ruan, Z., Yang, L..  2020.  Facial Expression Recognition Based on Graph Neural Network. 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC). :211—214.

Facial expressions are one of the most powerful, natural, and immediate means for human beings to present their emotions and intentions. In this paper, we present a novel method for fully automatic facial expression recognition. Facial landmarks are detected to characterize facial expressions, and a graph convolutional neural network is proposed for feature extraction and facial expression classification. Experiments were performed on three facial expression databases. The results show that the proposed FER method can achieve good recognition accuracy, up to 95.85%.
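One graph-convolution step over a landmark graph can be written out in plain PyTorch: node features (landmark coordinates) are mixed along edges via a normalized adjacency matrix, H' = ReLU(A_hat H W). The chain-shaped landmark graph below is purely illustrative; the paper's graph construction is not reproduced.

```python
# Sketch of a single GCN layer over 68 facial landmarks.
import torch
import torch.nn as nn

num_nodes, in_dim, out_dim = 68, 2, 16
H = torch.rand(num_nodes, in_dim)                 # landmark (x, y) features

A = torch.eye(num_nodes)                          # self-loops
for i in range(num_nodes - 1):                    # chain edges (illustrative)
    A[i, i + 1] = A[i + 1, i] = 1.0
deg_inv_sqrt = A.sum(1).pow(-0.5)
A_hat = deg_inv_sqrt[:, None] * A * deg_inv_sqrt[None, :]  # sym. normalization

W = nn.Linear(in_dim, out_dim, bias=False)
H_next = torch.relu(A_hat @ W(H))                 # one graph-convolution step
graph_embedding = H_next.mean(0)                  # pooled for classification
print(graph_embedding.shape)                      # torch.Size([16])
```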

2021-03-01
Hynes, E., Flynn, R., Lee, B., Murray, N..  2020.  An Evaluation of Lower Facial Micro Expressions as an Implicit QoE Metric for an Augmented Reality Procedure Assistance Application. 2020 31st Irish Signals and Systems Conference (ISSC). :1–6.
Augmented reality (AR) has been identified as a key technology to enhance worker utility in the context of increasing automation of repeatable procedures. AR can achieve this by assisting the user in performing complex and frequently changing procedures. Crucial to the success of procedure assistance AR applications is user acceptability, which can be measured by user quality of experience (QoE). An active research topic in QoE is the identification of implicit metrics that can be used to continuously infer user QoE during a multimedia experience. A user's QoE is linked to their affective state. Affective state is reflected in facial expressions. Emotions shown in micro facial expressions resemble those expressed in normal expressions but are distinguished from them by their brief duration. The novelty of this work lies in the evaluation of micro facial expressions as a continuous QoE metric by means of correlation analysis to the more traditional and accepted post-experience self-reporting. In this work, an optimal Rubik's Cube solver AR application was used as a proof of concept for complex procedure assistance. This was compared with a paper-based procedure assistance control. QoE expressed by affect in normal and micro facial expressions was evaluated through correlation analysis with post-experience reports. The results show that the AR application yielded higher task success rates and shorter task durations. Micro facial expressions reflecting disgust correlated moderately to the questionnaire responses for instruction disinterest in the AR application.
2020-11-02
Zhong, J., Yang, C..  2019.  A Compositionality Assembled Model for Learning and Recognizing Emotion from Bodily Expression. 2019 IEEE 4th International Conference on Advanced Robotics and Mechatronics (ICARM). :821–826.
When we express our internal states, such as emotions, the body expressions we use follow the compositionality principle: a theory in linguistics which proposes that the individual components of a bodily presentation, as well as the rules used to combine them, are the major parts of this process. In this paper, this principle is applied to the process of expressing and recognizing emotional states through body expression, in which certain key features can be learned to represent certain primitives of the internal emotional state in the form of basic variables. This is done with a hierarchical recurrent neural network (RNN) learning framework because of its nonlinear dynamic bifurcation, so that variables can be learned to represent different hierarchies. In addition, we applied adaptive learning techniques from machine learning to meet the requirement of real-time emotion recognition, in which a stable representation can be maintained compared to previous work. The model is examined by comparing the PB values between the training and recognition phases. This hierarchical model shows the rationality of the compositionality hypothesis through RNN learning and explains how key features can be used and combined in bodily expression to show the emotional state.