Biblio

Filters: Keyword is facial expressions
2023-07-21
Sadikoğlu, Fahreddin M., Idle Mohamed, Mohamed.  2022.  Facial Expression Recognition Using CNN. 2022 International Conference on Artificial Intelligence in Everything (AIE). :95—99.

The face is the most dynamic part of the human body that conveys information about emotions. The diversity in facial geometry and facial appearance makes it possible to detect various human expressions. To differentiate among numerous facial expressions of emotion, it is crucial to identify the classes of facial expressions. The methodology used in this article is based on convolutional neural networks (CNN). In this paper, a deep-learning CNN is used to examine the AlexNet architecture. Improvements were achieved by applying the transfer-learning approach and replacing the fully connected layer with a Support Vector Machine (SVM) classifier. The system achieved satisfactory results on the iCV-MEFED dataset, with the improved models reaching recognition rates of around 64.29% for the classification of the selected expressions. The results obtained are acceptable, comparable to relevant systems in the literature, and provide a background for further improvements.
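The abstract gives no implementation details, but the approach it describes (a pretrained AlexNet whose final fully connected layer is replaced by an SVM) can be sketched roughly as below. The library choices (PyTorch/torchvision, scikit-learn) and the placeholder data are assumptions, not the authors' setup.

```python
import torch
from torchvision import models
from sklearn.svm import SVC

# Frozen AlexNet backbone pretrained on ImageNet (transfer learning).
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
for p in alexnet.parameters():
    p.requires_grad = False

def extract_features(images):
    """4096-dim activations that feed AlexNet's last FC layer."""
    x = torch.flatten(alexnet.avgpool(alexnet.features(images)), 1)
    for layer in alexnet.classifier[:-1]:  # drop the final Linear layer
        x = layer(x)
    return x

# Placeholder batch; in the paper this would be iCV-MEFED face crops.
X_train = torch.randn(8, 3, 224, 224)
y_train = [0, 1, 2, 3, 4, 5, 6, 0]

with torch.no_grad():
    feats = extract_features(X_train).numpy()
svm = SVC(kernel="linear").fit(feats, y_train)  # SVM in place of the FC/softmax head
```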
2021-03-29
Makovetskii, A., Kober, V., Voronin, A., Zhernov, D..  2020.  Facial recognition and 3D non-rigid registration. 2020 International Conference on Information Technology and Nanotechnology (ITNT). :1—4.

Neural networks are among the most efficient tools for human face recognition. However, the recognition result can be spoiled by facial expressions and other deviations from the canonical face representation. In this paper, we propose a resampling method for human faces represented by 3D point clouds. The method is based on a non-rigid Iterative Closest Point (ICP) algorithm. To improve facial recognition performance, we use a combination of the proposed method and a convolutional neural network (CNN). Computer simulation results are provided to illustrate the performance of the proposed method.
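For orientation, here is a minimal sketch of the rigid ICP core on which such methods build; the paper's non-rigid variant additionally allows per-point deformation, which is not reproduced here. NumPy/SciPy and the random placeholder clouds are our assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """One rigid ICP iteration: closest-point matching, then the best-fit
    rotation R and translation t via the Kabsch/SVD solution."""
    matches = dst[cKDTree(dst).query(src)[1]]    # nearest neighbors in dst
    src_c, dst_c = src.mean(0), matches.mean(0)  # centroids
    H = (src - src_c).T @ (matches - dst_c)      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return src @ R.T + t                         # aligned source cloud

# Align a scanned face (src) to a canonical template (dst) by iterating;
# the point clouds below are random stand-ins for real 3D face scans.
src = np.random.rand(500, 3)
dst = np.random.rand(500, 3)
for _ in range(20):
    src = icp_step(src, dst)
```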

John, A., MC, A., Ajayan, A. S., Sanoop, S., Kumar, V. R..  2020.  Real-Time Facial Emotion Recognition System With Improved Preprocessing and Feature Extraction. 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT). :1328—1333.

Human emotion recognition plays a vital role in interpersonal communication and human-machine interaction. Emotions are expressed through speech, hand gestures, the movements of other body parts, and facial expressions. Facial emotions are among the most important factors in human communication that help us understand what the other person is trying to communicate; people understand only one-third of a message verbally, and two-thirds of it through non-verbal means. Many facial emotion recognition (FER) systems exist today, but they do not perform efficiently in real-life scenarios, even though many claim to be near-perfect systems that achieve their results under favourable, optimal conditions. The wide variety of expressions shown by people and the diversity in facial features across individuals make it difficult to devise a system that is definitive in nature. Hence, developing a reliable system free of the flaws shown by existing systems is a challenging task. This paper aims to build an enhanced system that can analyse the exact facial expression of a user at a particular time and output the corresponding emotion. Datasets such as JAFFE and FER2013 were used for performance analysis. Pre-processing methods such as facial landmark detection and histogram of oriented gradients (HOG) feature extraction were incorporated into a convolutional neural network (CNN), which achieved good accuracy when compared with existing models.
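The preprocessing the abstract names (face detection followed by HOG feature extraction) might look roughly like the following sketch using OpenCV and scikit-image; the file name, crop size, and HOG parameters are illustrative assumptions, not the paper's values.

```python
import cv2
from skimage.feature import hog

# Haar-cascade face detection, then HOG features on the cropped face.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)

x, y, w, h = faces[0]                                # first detected face
crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # FER2013-style crop size
features = hog(crop, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))               # vector fed to the CNN stage
```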

Ozdemir, M. A., Elagoz, B., Soy, A. Alaybeyoglu, Akan, A..  2020.  Deep Learning Based Facial Emotion Recognition System. 2020 Medical Technologies Congress (TIPTEKNO). :1—4.

In this study, the aim was to recognize emotional state from facial images using deep learning. In the study, which was approved by the ethics committee, a custom dataset was created from videos of 20 male and 20 female participants simulating 7 different facial expressions (happy, sad, surprised, angry, disgusted, scared, and neutral). First, the videos were divided into image frames, and then face images were segmented from the frames using Haar cascades. After this image preprocessing, the custom dataset contained more than 25 thousand images. The proposed convolutional neural network (CNN) architecture, which mimics the LeNet architecture, was trained with this custom dataset. In the experiments with the proposed CNN architecture, the training loss was 0.0115, the training accuracy 99.62%, the validation loss 0.0109, and the validation accuracy 99.71%.
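As a point of reference, a LeNet-style CNN for 7 expression classes can be defined as below; the layer sizes and the 48x48 grayscale input are our assumptions, since the abstract does not give the exact configuration.

```python
import torch.nn as nn

class LeNetStyleFER(nn.Module):
    """Minimal LeNet-like CNN: two conv/pool stages, three FC layers."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 9 * 9, 120), nn.ReLU(),  # 9x9 maps for 48x48 input
            nn.Linear(120, 84), nn.ReLU(),
            nn.Linear(84, num_classes),             # 7 emotion classes
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```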

2020-07-16
McNeely-White, David G., Ortega, Francisco R., Beveridge, J. Ross, Draper, Bruce A., Bangar, Rahul, Patil, Dhruva, Pustejovsky, James, Krishnaswamy, Nikhil, Rim, Kyeongmin, Ruiz, Jaime et al..  2019.  User-Aware Shared Perception for Embodied Agents. 2019 IEEE International Conference on Humanized Computing and Communication (HCC). :46—51.

We present Diana, an embodied agent who is aware of her own virtual space and the physical space around her. Using video and depth sensors, Diana attends to the user's gestures, body language, gaze, and (soon) facial expressions, as well as their words. Diana also gestures and emotes in addition to speaking, and exists in a 3D virtual world that the user can see. This produces symmetric and shared perception, in the sense that Diana can see the user, the user can see Diana, and both can see the virtual world. The result is an embodied agent that begins to develop the conceit that the user is interacting with a peer rather than a program.

2020-06-19
Keshari, Tanya, Palaniswamy, Suja.  2019.  Emotion Recognition Using Feature-level Fusion of Facial Expressions and Body Gestures. 2019 International Conference on Communication and Electronics Systems (ICCES). :1184—1189.

Automatic emotion recognition using computer vision is significant for many real-world applications such as photojournalism, virtual reality, sign language recognition, and Human-Robot Interaction (HRI). Psychological research findings suggest that humans depend on the combined visual conduits of face and body to comprehend human emotional behaviour. A plethora of studies have analysed human emotions using facial expressions, EEG signals, speech, and so on, but most of this work was based on a single modality. Our objective is to efficiently integrate the emotions recognized from facial expressions and from the upper body pose of humans in images. Our work on bimodal emotion recognition provides the accuracy benefits of both modalities.
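Feature-level (early) fusion, in contrast to decision-level fusion, concatenates the per-sample feature vectors of the two modalities before training a single classifier. Below is a minimal sketch; the feature dimensions, random data, and stand-in SVM classifier are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
face_feats = rng.normal(size=(200, 128))  # per-image facial-expression features
body_feats = rng.normal(size=(200, 64))   # per-image upper-body-pose features
labels = rng.integers(0, 7, size=200)     # 7 emotion classes

# Early fusion: concatenate the two feature vectors per sample, then
# train one classifier on the joint representation.
fused = np.concatenate([face_feats, body_feats], axis=1)  # shape (200, 192)
clf = SVC(kernel="rbf").fit(fused, labels)
```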