Biblio

Filters: Keyword is emotion recognition
2020-06-19
Ly, Son Thai, Do, Nhu-Tai, Lee, Guee-Sang, Kim, Soo-Hyung, Yang, Hyung-Jeong.  2019.  A 3D Face Modeling Approach for In-the-Wild Facial Expression Recognition on Image Datasets. 2019 IEEE International Conference on Image Processing (ICIP). :3492–3496.

This paper explores the benefits of 3D face modeling for in-the-wild facial expression recognition (FER). Since in-the-wild 3D FER datasets are scarce, we first construct 3D facial data from an available 2D dataset using recent advances in 3D face reconstruction. A 3D facial geometry representation is then extracted with a deep learning technique. In addition, we take advantage of manipulating the 3D face, for example using 2D projected images of the 3D face as additional input for FER. These features are then fused with those of a typical 2D FER network. By doing so, despite using common approaches, we achieve competitive recognition accuracy on the Real-World Affective Faces (RAF) database and Static Facial Expressions in the Wild (SFEW 2.0) compared with state-of-the-art reports. To the best of our knowledge, this is the first time such a deep learning combination of 3D and 2D facial modalities has been presented in the context of in-the-wild FER.
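
As a rough illustration of the 2D/3D fusion described in this abstract, the sketch below concatenates features from two placeholder encoders (one for the 2D image, one for a representation of the reconstructed 3D face) before a shared classifier. The module names, layer sizes and input shapes are assumptions for illustration, not the paper's architecture.

```python
# Minimal PyTorch sketch of late fusion of 2D-image features with features
# from a reconstructed 3D face; all sizes and encoders are illustrative.
import torch
import torch.nn as nn

class FusedFER(nn.Module):
    def __init__(self, feat_2d=512, feat_3d=256, n_classes=7):
        super().__init__()
        # Placeholder encoders; in practice these would be CNNs over the 2D
        # image and over projections/geometry of the reconstructed 3D face.
        self.enc_2d = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_2d), nn.ReLU())
        self.enc_3d = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_3d), nn.ReLU())
        self.classifier = nn.Linear(feat_2d + feat_3d, n_classes)

    def forward(self, img_2d, face_3d):
        fused = torch.cat([self.enc_2d(img_2d), self.enc_3d(face_3d)], dim=1)
        return self.classifier(fused)

model = FusedFER()
logits = model(torch.randn(4, 3, 100, 100), torch.randn(4, 3, 100, 100))
```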

Saboor Khan, Abdul, Shafi, Imran, Anas, Muhammad, Yousuf, Bilal M, Abbas, Muhammad Jamshed, Noor, Aqib.  2019.  Facial Expression Recognition using Discrete Cosine Transform Artificial Neural Network. 2019 22nd International Multitopic Conference (INMIC). :1–5.

Humans frequently use non-verbal cues such as facial expressions to convey information or emotion, and countless facial gestures are produced throughout the day. These expressions and emotions can be channelled through activities, postures, behaviours and facial expressions, and extensive research has revealed a strong relationship between these channels and emotions that warrants further investigation. An Automatic Facial Expression Recognition (AFER) framework is proposed in this work that can predict the seven universal expressions. To evaluate the proposed approach, the frontal-face Japanese Female Facial Expression (JAFFE) database is used as input. The images are processed with a frequency-domain technique, the Discrete Cosine Transform (DCT), and then classified using Artificial Neural Networks (ANN). To check the robustness of this strategy, random trials of k-fold cross validation, leave-one-out and person-independent evaluation are repeated many times to provide an overview of recognition rates. The experimental results demonstrate promising performance for this application.
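
The pipeline the abstract outlines (2D DCT features fed to a neural classifier) can be sketched as below; the number of retained coefficients, the MLP shape and the synthetic stand-in data are assumptions, not the paper's settings.

```python
# Hedged sketch of a DCT + ANN expression classifier with k-fold evaluation.
import numpy as np
from scipy.fft import dctn
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def dct_features(gray_face, k=16):
    """Keep the k x k low-frequency block of the 2D DCT as the feature vector."""
    coeffs = dctn(gray_face.astype(float), norm="ortho")
    return coeffs[:k, :k].ravel()

rng = np.random.default_rng(0)
faces = rng.random((70, 64, 64))          # stand-in for grayscale face crops (e.g. JAFFE)
labels = np.repeat(np.arange(7), 10)      # stand-in for the seven expression labels

X = np.stack([dct_features(f) for f in faces])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000)
print(cross_val_score(clf, X, labels, cv=10).mean())   # k-fold cross validation
```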

Liu, Keng-Cheng, Hsu, Chen-Chien, Wang, Wei-Yen, Chiang, Hsin-Han.  2019.  Facial Expression Recognition Using Merged Convolution Neural Network. 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE). :296–298.

In this paper, a merged convolution neural network (MCNN) is proposed to improve the accuracy and robustness of real-time facial expression recognition (FER). Although there are many ways to improve the performance of facial expression recognition, a revamp of the training framework and of image preprocessing renders better results in applications. When the camera is capturing images at high speed, however, changes in image characteristics may occur at certain moments due to the influence of light and other factors. Such changes can result in incorrect recognition of the facial expression. To solve this problem, we propose a statistical method that aggregates the recognition results obtained from previous images instead of relying only on the current recognition output. Experimental results show that the proposed method can satisfactorily recognize seven basic facial expressions in real time.
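
A minimal sketch of the smoothing idea mentioned above: report the most frequent label over a short history of per-frame predictions rather than trusting the current output alone. The window length is an assumed value for illustration.

```python
# Majority vote over recent frame-level predictions to damp momentary errors.
from collections import Counter, deque

class SmoothedFER:
    def __init__(self, window=10):
        self.history = deque(maxlen=window)

    def update(self, frame_label):
        self.history.append(frame_label)
        return Counter(self.history).most_common(1)[0][0]

smoother = SmoothedFER()
for label in ["happy", "happy", "surprise", "happy"]:   # per-frame network outputs
    print(smoother.update(label))
```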

Keshari, Tanya, Palaniswamy, Suja.  2019.  Emotion Recognition Using Feature-level Fusion of Facial Expressions and Body Gestures. 2019 International Conference on Communication and Electronics Systems (ICCES). :1184–1189.

Automatic emotion recognition using computer vision is significant for many real-world applications such as photojournalism, virtual reality, sign language recognition and Human-Robot Interaction (HRI). Psychological research findings indicate that humans depend on the combined visual cues of face and body to comprehend emotional behaviour. A plethora of studies has analysed human emotions using facial expressions, EEG signals, speech and other modalities, but most of this work is based on a single modality. Our objective is to efficiently integrate the emotions recognized from facial expressions and from the upper-body pose of humans in images. Our work on bimodal emotion recognition provides the combined benefit of the accuracy of both modalities.
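
Feature-level fusion as described here amounts to concatenating the two modality-specific feature vectors before a single classifier; the sketch below uses synthetic stand-in features and an SVM, which are assumptions rather than the paper's exact setup.

```python
# Early (feature-level) fusion of facial-expression and upper-body-pose features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
face_feats = rng.random((200, 128))    # stand-in per-image facial features
body_feats = rng.random((200, 64))     # stand-in upper-body pose features
labels = rng.integers(0, 7, size=200)  # stand-in emotion labels

fused = np.hstack([face_feats, body_feats])   # feature-level fusion
clf = SVC().fit(fused, labels)
print(clf.score(fused, labels))
```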

Chen, Yuedong, Wang, Jianfeng, Chen, Shikai, Shi, Zhongchao, Cai, Jianfei.  2019.  Facial Motion Prior Networks for Facial Expression Recognition. 2019 IEEE Visual Communications and Image Processing (VCIP). :1—4.

Deep learning based facial expression recognition (FER) has received a lot of attention in the past few years. Most of the existing deep learning based FER methods do not consider domain knowledge well and thereby fail to extract representative features. In this work, we propose a novel FER framework, named Facial Motion Prior Networks (FMPN). In particular, we introduce an additional branch to generate a facial mask so as to focus on facial muscle moving regions. To guide the facial mask learning, we incorporate prior domain knowledge by using the average differences between neutral faces and the corresponding expressive faces as the training guidance. Extensive experiments on three facial expression benchmark datasets demonstrate the effectiveness of the proposed method compared with state-of-the-art approaches.
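
The prior that guides the mask branch can be pictured as the average difference between aligned neutral and expressive faces; the following sketch computes such a map on synthetic data, with shapes and normalisation chosen for illustration only.

```python
# Average neutral-vs-expressive difference as a facial-motion prior mask.
import numpy as np

def expression_prior(neutral_faces, expressive_faces):
    """Mean absolute pixel difference, scaled to [0, 1], used as training guidance."""
    diff = np.abs(expressive_faces.astype(float) - neutral_faces.astype(float)).mean(axis=0)
    return diff / (diff.max() + 1e-8)

rng = np.random.default_rng(0)
neutral = rng.random((50, 96, 96))           # stand-in aligned neutral faces
expressive = rng.random((50, 96, 96))        # stand-in for the same subjects, one expression
mask = expression_prior(neutral, expressive) # guides the mask-generating branch
```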

Yang, Jiannan, Zhang, Fan, Chen, Bike, Khan, Samee U..  2019.  Facial Expression Recognition Based on Facial Action Unit. 2019 Tenth International Green and Sustainable Computing Conference (IGSC). :1–6.

In the past few years there has been increasing interest in the machine perception of human expressions and mental states, and Facial Expression Recognition (FER) has attracted growing attention. The Facial Action Unit (AU) is an early method proposed to describe facial muscle movements, which can effectively reflect changes in facial expression. In this paper we propose a high-performance facial expression recognition method based on facial action units that can run on a low-configuration computer and perform FER on video and a real-time camera feed. Our method is divided into two parts. In the first part, 68 facial landmarks and image Histograms of Oriented Gradients (HOG) are obtained, and the feature values of the action units are calculated from them. The second part uses three classification methods to realize the mapping from AUs to expressions. We have conducted extensive experiments on popular FER benchmark datasets (CK+ and Oulu-CASIA) to demonstrate the effectiveness of our method.
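
To make the two-stage description concrete, the sketch below derives a few AU-style geometric ratios from 68-point landmarks and appends HOG features; the specific landmark indices and ratios are illustrative assumptions following the common 68-point convention, not the paper's exact AU formulas.

```python
# AU-style geometric features from 68 landmarks, combined with HOG descriptors.
import numpy as np
from skimage.feature import hog

def au_style_features(landmarks):
    """landmarks: (68, 2) array; return a few distance ratios as AU proxies."""
    face_width = np.linalg.norm(landmarks[16] - landmarks[0]) + 1e-8
    mouth_open = np.linalg.norm(landmarks[66] - landmarks[62]) / face_width  # jaw drop
    mouth_wide = np.linalg.norm(landmarks[54] - landmarks[48]) / face_width  # lip corner pull
    brow_raise = np.linalg.norm(landmarks[19] - landmarks[37]) / face_width  # brow raise
    return np.array([mouth_open, mouth_wide, brow_raise])

def frame_features(gray_face, landmarks):
    return np.concatenate([au_style_features(landmarks),
                           hog(gray_face, pixels_per_cell=(16, 16))])

rng = np.random.default_rng(0)
feats = frame_features(rng.random((96, 96)), rng.random((68, 2)) * 96)
# A light classifier (e.g. SVM or decision tree) would then map feats to an expression.
```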

2019-12-30
Toliupa, Serhiy, Tereikovskiy, Ihor, Dychka, Ivan, Tereikovska, Liudmyla, Trush, Alexander.  2019.  The Method of Using Production Rules in Neural Network Recognition of Emotions by Facial Geometry. 2019 3rd International Conference on Advanced Information and Communications Technologies (AICT). :323–327.

The article is devoted to improving neural network means of recognizing emotions from facial geometry for use in general-purpose information systems. It is shown that current emotion recognition tools based on conventional neural networks have a critical disadvantage: recognition accuracy drops under the distortions characteristic of general-purpose information systems, in particular rotation of the face and changes in image size. The typical approach of overcoming this disadvantage through additional training is unacceptable in many deployment scenarios because of the time required and the difficulty of compiling the necessary training sample. It is proposed to increase recognition accuracy by supplying an expert data model to the neural network, and an appropriate method for representing expert knowledge is developed. A feature of the method is the use of production rules together with a PNN neural network. Experimental verification of the developed solutions has been carried out. The obtained results make it possible to improve recognition of facial images whose characteristics are not represented in the available statistical training data.

Liu, Keng-Cheng, Hsu, Chen-Chien, Wang, Wei-Yen, Chiang, Hsin-Han.  2019.  Real-Time Facial Expression Recognition Based on CNN. 2019 International Conference on System Science and Engineering (ICSSE). :120–123.

In this paper, we propose a method for improving the robustness of real-time facial expression recognition. Although there are many ways to improve the accuracy of facial expression recognition, revamping the training framework and image preprocessing yields better results in applications. One existing problem is that when the camera captures images at high speed, changes in image characteristics may occur at certain moments due to the influence of light and other factors. Such changes can result in incorrect recognition of the facial expression. To solve this problem while keeping the system running smoothly and maintaining recognition speed, we take these changes in image characteristics during high-speed capture into account. The proposed method does not use the immediate output directly, but averages the current frame with the previous image to facilitate recognition. In this way, we are able to reduce interference from image characteristics. The experimental results show that, after adopting this method, the overall robustness and accuracy of facial expression recognition are greatly improved compared to those obtained with the convolutional neural network (CNN) alone.
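
The averaging step described here can be sketched as a running average of incoming frames, so that a momentary lighting change does not dominate the input to the CNN; the blend weight below is an assumed value.

```python
# Running average of frames before expression recognition.
import numpy as np

class FrameAverager:
    def __init__(self, alpha=0.6):
        self.alpha = alpha      # weight given to the accumulated history
        self.avg = None

    def update(self, frame):
        frame = frame.astype(float)
        if self.avg is None:
            self.avg = frame
        else:
            self.avg = self.alpha * self.avg + (1 - self.alpha) * frame
        return self.avg         # this smoothed frame is what the CNN sees

averager = FrameAverager()
smoothed = averager.update(np.random.rand(96, 96))
```
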
Taha, Bilal, Hatzinakos, Dimitrios.  2019.  Emotion Recognition from 2D Facial Expressions. 2019 IEEE Canadian Conference of Electrical and Computer Engineering (CCECE). :1–4.

This work proposes an approach to find and learn informative representations from two-dimensional grey-level images for facial expression recognition. The learned features are obtained from a purpose-designed convolutional neural network (CNN). The developed CNN learns features from the images in a highly efficient manner by cascading different layers together. The model is computationally efficient since it does not consist of a huge number of layers, and at the same time it takes the overfitting problem into consideration. The outcomes of the developed CNN are compared to handcrafted features spanning texture and shape. Experiments conducted on the Bosphorus database show that the developed CNN model outperforms the handcrafted features coupled with a Support Vector Machine (SVM) classifier.

Lian, Zheng, Li, Ya, Tao, Jianhua, Huang, Jian, Niu, Mingyue.  2018.  Region Based Robust Facial Expression Analysis. 2018 First Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia). :1–5.

Facial emotion recognition is an essential aspect of human-machine interaction. In real-world conditions it faces many challenges, i.e., illumination changes, large pose variations and partial or full occlusions, which leave different facial areas with different sharpness and completeness. Inspired by this fact, we focus on facial expression recognition based on partial faces in this paper. We compare the contributions of seven facial areas of low-resolution images: nose areas, mouth areas, eye areas, nose-to-mouth areas, nose-to-eye areas, mouth-to-eye areas and the whole face. Through analysis of the confusion matrix and the class activation map, we find that mouth regions contain much more emotional information than nose and eye areas. At the same time, considering larger facial areas helps to judge the expression more precisely. To sum up, the contributions of this paper are two-fold: (1) we reveal which facial areas matter most for emotion recognition; (2) we quantify the contribution of the different facial parts.
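
The seven compared areas can be obtained by cropping around landmark groups; the sketch below assumes the standard 68-point layout and an arbitrary padding, purely for illustration.

```python
# Cropping facial regions (eyes / nose / mouth) from 68-point landmarks.
import numpy as np

REGIONS = {                 # landmark index ranges in the 68-point convention
    "eyes":  range(36, 48),
    "nose":  range(27, 36),
    "mouth": range(48, 68),
}

def crop_region(image, landmarks, name, pad=8):
    pts = landmarks[list(REGIONS[name])]
    x0, y0 = np.floor(pts.min(axis=0)).astype(int) - pad
    x1, y1 = np.ceil(pts.max(axis=0)).astype(int) + pad
    return image[max(y0, 0):y1, max(x0, 0):x1]

img = np.random.rand(128, 128)
lm = np.random.rand(68, 2) * 128
mouth_crop = crop_region(img, lm, "mouth")   # one of the seven compared areas
```
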
Kim, Sunbin, Kim, Hyeoncheol.  2019.  Deep Explanation Model for Facial Expression Recognition Through Facial Action Coding Unit. 2019 IEEE International Conference on Big Data and Smart Computing (BigComp). :1–4.

Facial expression is the most powerful and natural non-verbal channel of emotional communication, and Facial Expression Recognition (FER) is a significant machine learning task. Deep learning models perform well on FER tasks, but they do not provide any justification for their decisions. Based on the hypothesis that a facial expression is a combination of facial muscle movements, we find that Facial Action Coding Units (AUs) and emotion labels are related in the CK+ dataset. In this paper, we propose a model which uses AUs to explain a Convolutional Neural Network (CNN) model's classification results. The CNN model is trained on the CK+ dataset and classifies emotion from the extracted features. The explanation model then classifies multiple AUs from the extracted features and the emotion classes produced by the CNN model. Our experiments show that, using only the features and emotion classes obtained from the CNN model, the explanation model predicts AUs very well.
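
A simple reading of the explanation model is a multi-label classifier that maps the CNN's features (plus its predicted emotion) to active Action Units; the sketch below uses stand-in data and dimensions, which are assumptions rather than the paper's configuration.

```python
# Multi-label AU prediction from CNN features as a post-hoc explanation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

rng = np.random.default_rng(0)
cnn_features = rng.random((300, 128))              # features taken from the CNN
emotion_pred = rng.integers(0, 7, size=(300, 1))   # CNN's emotion class as extra input
au_labels = rng.integers(0, 2, size=(300, 12))     # presence/absence of 12 AUs

X = np.hstack([cnn_features, emotion_pred])
explainer = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, au_labels)
print(explainer.predict(X[:1]))                    # AUs offered as the explanation
```
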
2019-12-16
DiPaola, Steve, Yalçin, Özge Nilay.  2019.  A multi-layer artificial intelligence and sensing based affective conversational embodied agent. 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). :91–92.

Building natural and conversational virtual humans is a task of formidable complexity. We believe that, especially when building agents that affectively interact with biological humans in real time, a cognitive-science-based, multilayered sensing and artificial intelligence (AI) systems approach is needed. For this demo, we show a working version (through live human interaction) of our modular system for a natural, conversational 3D virtual human built from AI and sensing layers. These include sensing the human user via facial emotion recognition, voice stress, the semantic meaning of the words, eye gaze, heart rate and galvanic skin response. These inputs are combined with AI sensing and recognition of the environment using deep learning natural language captioning or dense captioning. All of this is processed by our AI avatar system, allowing for an affective and empathetic conversation driven by NLP topic-based dialogue and expressed through facial expressions, gestures, breath, eye gaze and two-way, back-and-forth voice exchanges with the sensed human. Our lab has been building these systems in stages over the years.

2018-12-03
Faria, Diego Resende, Vieira, Mario, Faria, Fernanda C.C..  2017.  Towards the Development of Affective Facial Expression Recognition for Human-Robot Interaction. Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments. :300–304.

Affective facial expression is a key feature of non-verbal behavior and is considered a symptom of an internal emotional state. Emotion recognition plays an important role in social communication: human-human and also human-robot interaction. This work aims at the development of a framework able to recognise human emotions through facial expression for human-robot interaction. Simple features based on facial landmark distances and angles are extracted to feed a dynamic probabilistic classification framework. The public online dataset Karolinska Directed Emotional Faces (KDEF) [12] is used to learn seven different emotions (angry, fearful, disgusted, happy, sad, surprised and neutral) performed by seventy subjects. Offline and on-the-fly tests were carried out: leave-one-out cross-validation tests using the dataset and on-the-fly tests during human-robot interactions. Preliminary results show that the proposed framework can correctly recognise human facial expressions, with potential to be used in human-robot interaction scenarios.
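
The simple features based on facial landmark distances and angles can be sketched as below; the particular landmark pairs are illustrative assumptions under the common 68-point convention, and the resulting vector would feed the dynamic probabilistic classifier described in the paper.

```python
# Distance and angle features between facial landmarks.
import numpy as np

def distance(p, q):
    return np.linalg.norm(p - q)

def angle(p, q, r):
    """Angle at q (radians) formed by the points p-q-r."""
    v1, v2 = p - q, r - q
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))

lm = np.random.rand(68, 2)               # stand-in 68-point landmarks
features = np.array([
    distance(lm[48], lm[54]),            # mouth width
    distance(lm[62], lm[66]),            # mouth opening
    angle(lm[48], lm[51], lm[54]),       # curvature around the upper lip
])
```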

2018-01-23
Khan, S., Ullah, K..  2017.  Smart elevator system for hazard notification. 2017 International Conference on Innovations in Electrical Engineering and Computational Technologies (ICIEECT). :1–4.

In the proposed method, traditional elevators are upgraded so that any alarming situation in the elevator can be detected and then reported to a main centre where further action can be taken accordingly. Different emergency situations can be handled by the system. The smart elevator system works by installing different modules inside the elevator, such as speed sensors that detect speed variations above or below a certain threshold. The system then sends a message to the emergency response centre as well as an automated call. The smart system also includes an emotion detection algorithm that infers the emotions of individuals in the elevator from their facial expressions, and a whisper detection system to determine whether someone stuck inside the elevator is alive during a hazardous situation. A broadcast signal is used as a health check to verify that every part of the system is in a stable state. The proposed system can completely replace current elevator systems and become part of smart homes.
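
As a small illustration of the speed-monitoring module, the sketch below flags readings that leave a configured speed band and hands them to a notification hook; the thresholds and the hook are assumptions, not values from the paper.

```python
# Threshold check on elevator speed readings with a pluggable alert hook.
def check_speed(speed_mps, low=0.1, high=2.5, notify=print):
    """Return True and notify if the cabin speed leaves the [low, high] band."""
    if speed_mps < low or speed_mps > high:
        notify(f"ALERT: abnormal elevator speed {speed_mps:.2f} m/s")
        return True
    return False

for reading in [1.0, 1.2, 3.4]:   # stand-in sensor samples; 3.4 m/s triggers the alert
    check_speed(reading)
```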

2015-05-04
Luque, J., Anguera, X..  2014.  On the modeling of natural vocal emotion expressions through binary key. 2014 22nd European Signal Processing Conference (EUSIPCO). :1562–1566.

This work presents a novel method to estimate naturally expressed emotions in speech through binary acoustic modeling. Standard acoustic features are mapped to a binary-value representation and a support vector regression model is used to correlate them with the three continuous emotional dimensions. Three different sets of speech features, two based on spectral parameters and one on prosody, are compared on the VAM corpus, a set of spontaneous dialogues from a German TV talk show. The regression analysis, in terms of correlation coefficient and mean absolute error, shows that binary key modeling is able to successfully capture speaker emotion characteristics. The proposed algorithm obtains results comparable to those reported in the literature while relying on a much smaller set of acoustic descriptors. Furthermore, we also report preliminary results based on the combination of the binary models, which brings further performance improvements.
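
A rough sketch of the binary acoustic modelling idea follows: binarise each acoustic feature (here simply by thresholding at the training median, a simplification of the paper's binary-key scheme) and regress the three continuous emotion dimensions with support vector regression on synthetic stand-in data.

```python
# Binarised acoustic features + support vector regression for continuous emotion.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
acoustic = rng.random((500, 40))          # stand-in spectral / prosodic features
vad = rng.random((500, 3)) * 2 - 1        # valence, activation, dominance in [-1, 1]

thresholds = np.median(acoustic, axis=0)
binary_keys = (acoustic > thresholds).astype(float)   # binary-value representation

reg = MultiOutputRegressor(SVR()).fit(binary_keys, vad)
print(reg.predict(binary_keys[:1]))       # predicted (V, A, D) for one utterance
```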