Biblio

Filters: Keyword is facial expression
2023-07-21
Udeh, Chinonso Paschal, Chen, Luefeng, Du, Sheng, Li, Min, Wu, Min.  2022.  A Co-regularization Facial Emotion Recognition Based on Multi-Task Facial Action Unit Recognition. 2022 41st Chinese Control Conference (CCC). :6806–6810.
Facial emotion recognition (FER) supports the growth of future artificial intelligence through the recognition, learning, and analysis of emotions across different facial angles and head poses. The recent pandemic accelerated the deployment of facial recognition in everyday applications, while emotion recognition still remains largely experimental. A persistent challenge for FER is separating facial cues from background noise. Because robots will soon be expected to take on roles involving human perception, attention, memory, decision-making, and human-robot interaction (HRI), head pose is merged with FER to boost robustness in understanding emotions using convolutional neural networks (CNNs). Stochastic gradient descent with a comprehensive model is adopted by applying multi-task learning, which is capable of implicit parallelism and acts as an inherently better global optimizer for finding network weights. After training the multi-task model on two independent datasets, the FER and head-pose multi-view co-regularization frameworks were merged and evaluated on validation accuracy.
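The joint training idea described here (one shared CNN optimized with SGD on both emotion and head-pose labels) can be illustrated with a minimal multi-task sketch; the layer sizes, label counts, and loss weighting below are assumptions for illustration, not the authors' architecture.

```python
# Illustrative multi-task CNN: one shared backbone, two heads (emotion + head pose),
# trained jointly with SGD as in the abstract. The 7/5-way label spaces and the 0.5
# loss weight are assumptions, not values from the paper.
import torch
import torch.nn as nn

class MultiTaskFER(nn.Module):
    def __init__(self, n_emotions=7, n_poses=5):
        super().__init__()
        self.backbone = nn.Sequential(            # shared feature extractor
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
        )
        self.emotion_head = nn.Linear(128, n_emotions)   # FER task
        self.pose_head = nn.Linear(128, n_poses)         # head-pose task

    def forward(self, x):
        feats = self.backbone(x)
        return self.emotion_head(feats), self.pose_head(feats)

model = MultiTaskFER()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One illustrative training step on a random 48x48 grayscale batch.
x = torch.randn(8, 1, 48, 48)
y_emotion = torch.randint(0, 7, (8,))
y_pose = torch.randint(0, 5, (8,))
emo_logits, pose_logits = model(x)
loss = criterion(emo_logits, y_emotion) + 0.5 * criterion(pose_logits, y_pose)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```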
2021-03-29
Ozdemir, M. A., Elagoz, B., Soy, A. Alaybeyoglu, Akan, A..  2020.  Deep Learning Based Facial Emotion Recognition System. 2020 Medical Technologies Congress (TIPTEKNO). :1–4.

In this study, the aim was to recognize emotional states from facial images using deep learning. In the study, which was approved by the ethics committee, a custom dataset was created from videos of 20 male and 20 female participants simulating 7 facial expressions (happy, sad, surprised, angry, disgusted, scared, and neutral). First, the videos were split into image frames, and face regions were then segmented from the frames using the Haar cascade library. After preprocessing, the custom dataset contained more than 25 thousand images. The proposed convolutional neural network (CNN) architecture, which mimics the LeNet architecture, was trained on this custom dataset. In the experiments, the proposed CNN achieved a training loss of 0.0115, a training accuracy of 99.62%, a validation loss of 0.0109, and a validation accuracy of 99.71%.
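The pipeline as described (Haar-cascade face segmentation followed by a LeNet-style CNN over seven expression classes) might look roughly like the sketch below; the 48x48 input size and filter counts are assumptions rather than the paper's exact configuration.

```python
# Rough sketch: Haar-cascade face cropping, then a small LeNet-style CNN over 7 classes.
import cv2
import numpy as np
from tensorflow.keras import layers, models

def extract_face(frame_bgr, size=48):
    """Detect the largest face in a frame and return it as a grayscale crop."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # keep the largest detection
    return cv2.resize(gray[y:y + h, x:x + w], (size, size))

def build_lenet_like(n_classes=7, size=48):
    """LeNet-style CNN: two conv/pool stages followed by dense layers."""
    return models.Sequential([
        layers.Input((size, size, 1)),
        layers.Conv2D(6, 5, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(16, 5, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(120, activation="relu"),
        layers.Dense(84, activation="relu"),
        layers.Dense(n_classes, activation="softmax"),
    ])

model = build_lenet_like()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```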

2019-12-30
Wang, XuMing, Huang, Jin, Zhu, Jia, Yang, Min, Yang, Fen.  2018.  Facial Expression Recognition with Deep Learning. Proceedings of the 10th International Conference on Internet Multimedia Computing and Service. :10:1–10:4.
Automatic recognition of facial expression images is challenging for computers due to variation in expression, background, position, and label noise. This paper proposes a new method for static facial expression recognition. Experiments are performed on the FER-2013 dataset, where the primary task is to use a CNN model to classify static images into 7 basic emotions automatically and effectively. Two preprocessing steps on the face images improve recognition performance: first, the FER images are preprocessed with standard histogram equalization; then ImageDataGenerator is used to shift and rotate the facial images to improve model robustness. Finally, the output of the softmax activation function (also known as multinomial logistic regression) is stacked with an SVM. The softmax + SVM combination performs better than the softmax activation function alone, and facial expression recognition accuracy reaches 68.79% on the test set.
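A minimal sketch of the two preprocessing steps and the softmax + SVM stacking, assuming a 48x48 grayscale FER-2013 setup and an already-trained 7-class Keras model named `cnn` (both assumptions, not details from the paper):

```python
# Histogram-equalize FER-2013 images, augment with shifts/rotations, then stack an
# SVM on the CNN's softmax outputs. X_train is assumed to be an (N, 48, 48, 1)
# uint8-valued array and y_train the integer emotion labels.
import numpy as np
import cv2
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.svm import SVC

def equalize(batch):
    """Apply standard histogram equalization to each grayscale image."""
    out = np.stack([cv2.equalizeHist(img.squeeze().astype(np.uint8))
                    for img in batch])
    return out[..., np.newaxis].astype("float32") / 255.0

augmenter = ImageDataGenerator(rotation_range=10,       # small rotations
                               width_shift_range=0.1,   # horizontal shifts
                               height_shift_range=0.1)  # vertical shifts

# The CNN itself would be trained on the augmented, equalized images, e.g.:
#   cnn.fit(augmenter.flow(equalize(X_train), y_train, batch_size=64), epochs=30)

def train_stacked_svm(cnn, X_train, y_train):
    """Stack an SVM on top of the CNN's softmax probabilities."""
    probs = cnn.predict(equalize(X_train))
    svm = SVC(kernel="rbf")
    svm.fit(probs, y_train)
    return svm
```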
2018-12-03
Liliana, Dewi Yanti, Basaruddin, Chan, Widyanto, M. Rahmat.  2017.  Mix Emotion Recognition from Facial Expression Using SVM-CRF Sequence Classifier. Proceedings of the International Conference on Algorithms, Computing and Systems. :27–31.

Recently, emotion recognition has gained increasing attention in various applications related to Social Signal Processing (SSP) and human affect. Existing research mainly focuses on the six basic emotions (happy, sad, fear, disgust, angry, and surprise). However, humans express many kinds of emotions, including mixed emotions, which have not been explored due to their complexity. We model recognition of 12 types of mixed emotions from facial expressions in image sequences using two-stage learning that combines Support Vector Machines (SVM) and Conditional Random Fields (CRF) as sequence classifiers. The SVM classifies each image frame and produces an emotion label, which then becomes the input to the CRF, which in turn yields the mixed-emotion label of the corresponding observation sequence. We evaluate the proposed model on modified image frames of the Cohn-Kanade+ dataset and on our own mixed-emotion dataset. We also compare our model with the original CRF model, and our model shows superior performance.
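The two-stage SVM-to-CRF idea can be sketched as below, using sklearn-crfsuite as one possible CRF implementation; the per-frame features and label encoding are placeholders, not the paper's actual descriptors.

```python
# Stage 1: an SVM labels each frame. Stage 2: a linear-chain CRF re-labels the whole
# sequence from those per-frame outputs.
from sklearn.svm import SVC
import sklearn_crfsuite  # pip install sklearn-crfsuite

def train_svm(frame_features, frame_labels):
    """Per-frame SVM over facial features (feature extraction not shown)."""
    svm = SVC(kernel="linear")
    svm.fit(frame_features, frame_labels)
    return svm

def to_crf_input(svm, sequences):
    """Turn each frame's SVM prediction into a CRF feature dict."""
    return [[{"svm_label": str(lbl)} for lbl in svm.predict(seq)] for seq in sequences]

def train_crf(svm, sequences, sequence_labels):
    """Linear-chain CRF over the SVM outputs of each image sequence."""
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
    crf.fit(to_crf_input(svm, sequences), sequence_labels)
    return crf
```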

2018-07-18
Yin, Delina Beh Mei, Omar, Shariman, Talip, Bazilah A., Muklas, Amalia, Norain, Nur Afiqah Mohd, Othman, Abu Talib.  2017.  Fusion of Face Recognition and Facial Expression Detection for Authentication: A Proposed Model. Proceedings of the 11th International Conference on Ubiquitous Information Management and Communication. :21:1–21:8.

The paper presents a novel model of hybrid biometric-based authentication. Currently, the recognition accuracy of a single biometric verification system is often reduced by factors such as the environment, user mode, and physiological defects of an individual, and enrolment of a static biometric is highly vulnerable to impersonation attacks. Because single biometric authentication offers only one factor of verification, we propose to hybridize two biometric attributes consisting of a physiological and a behavioural trait. In this study, we utilise the static and dynamic features of the human face. To extract the important features from a face, the primary steps are image pre-processing and face detection. To distinguish a genuine user from an imposter, the first authentication step verifies the user's identity through face recognition. Relying solely on a single biometric modality can lead to false acceptance when two or more similar face features produce a relatively high match score; in our experiments, the False Acceptance Rate is 0.55% whereas the False Rejection Rate is 7%. To address this security gap, we propose a fusion method whereby a genuine user selects a facial expression from the seven universal expressions (i.e. happy, sad, anger, disgust, surprise, fear, and neutral) enrolled earlier in the database. As a proof of concept, our results show that even when two or more users coincidentally share the same face features, the selected facial expression acts as a password that distinguishes a genuine user from an impostor.
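The fusion decision itself reduces to two checks, which the following sketch illustrates; the 0.6 match-score threshold and the recognizer/classifier interfaces are assumptions, not values from the paper.

```python
# Two-factor decision: a face-recognition match score (physiological factor) plus the
# enrolled facial expression acting as a password (behavioural factor).
EXPRESSIONS = {"happy", "sad", "anger", "disgust", "surprise", "fear", "neutral"}

def authenticate(face_match_score, presented_expression, enrolled_expression,
                 score_threshold=0.6):
    """Accept only if the face matches AND the presented expression equals the
    expression enrolled as the user's 'password'."""
    if presented_expression not in EXPRESSIONS:
        return False
    face_ok = face_match_score >= score_threshold               # factor 1
    expression_ok = presented_expression == enrolled_expression  # factor 2
    return face_ok and expression_ok

# A look-alike with a high match score is still rejected because the
# expression password does not match.
print(authenticate(0.92, "surprise", "happy"))  # False
print(authenticate(0.92, "happy", "happy"))     # True
```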

2018-02-27
Soleymani, Mohammad, Riegler, Michael, Halvorsen, Pål.  2017.  Multimodal Analysis of Image Search Intent: Intent Recognition in Image Search from User Behavior and Visual Content. Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval. :251–259.

Users search for multimedia content with different underlying motivations or intentions. The study of user search intentions is an emerging topic in information retrieval, since understanding why a user is searching for content is crucial for satisfying the user's need. In this paper, we aim to automatically recognize a user's intent for image search in the early stage of a search session. We designed seven different search scenarios under the intent conditions of finding items, re-finding items, and entertainment. We collected facial expressions, physiological responses, eye gaze, and implicit user interactions from 51 participants who performed seven different search tasks on a custom-built image retrieval platform. We analyzed the users' spontaneous and explicit reactions under different intent conditions. Finally, we trained machine learning models to predict users' search intentions from the visual content of the visited images, the user interactions, and the spontaneous responses. After fusing the visual and user interaction features, our system achieved an F1 score of 0.722 for classifying three classes in user-independent cross-validation. We found that eye gaze and implicit user interactions, including mouse movements and keystrokes, are the most informative features. Given that the most promising results are obtained from modalities that can be captured unobtrusively and online, the results demonstrate the feasibility of deploying such methods to improve multimedia retrieval platforms.
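A rough sketch of feature-level fusion with user-independent evaluation in the spirit of this setup, using grouped cross-validation and a macro F1 score; the classifier choice, feature dimensions, and synthetic data are placeholders.

```python
# Concatenate visual and interaction features, fold by participant so no user appears
# in both train and test, and score with macro F1 over the three intent classes.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_users, n_tasks = 51, 7                     # 51 participants x 7 tasks (illustrative)
n_samples = n_users * n_tasks
visual_feats = rng.normal(size=(n_samples, 32))
interaction_feats = rng.normal(size=(n_samples, 16))
intent = rng.integers(0, 3, size=n_samples)          # find / re-find / entertainment
user_ids = np.repeat(np.arange(n_users), n_tasks)

X = np.hstack([visual_feats, interaction_feats])     # early (feature-level) fusion
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, intent, groups=user_ids,
                         cv=GroupKFold(n_splits=5), scoring="f1_macro")
print("user-independent macro F1:", scores.mean())
```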

2017-03-08
Nakashima, Y., Koyama, T., Yokoya, N., Babaguchi, N..  2015.  Facial expression preserving privacy protection using image melding. 2015 IEEE International Conference on Multimedia and Expo (ICME). :1–6.

An enormous number of images are currently shared through social networking services such as Facebook. These images usually contain the appearance of people and may violate their privacy if published without permission from each person. To remedy this privacy concern, visual privacy protection, such as blurring, is applied to the facial regions of people who have not given permission. However, in addition to degrading image quality, this may spoil the context of the image: if some faces are filtered while others are not, the missing facial expressions make the image difficult to comprehend. This paper proposes an image melding-based method that modifies facial regions in a visually unintrusive way while preserving facial expression. Our experimental results demonstrate that the proposed method can retain facial expression while protecting privacy.
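Image melding itself is a specific patch-based synthesis technique; as a loose stand-in only, the sketch below uses OpenCV's Poisson-based seamless cloning to blend a donor face over a detected face region so the edit stays visually unobtrusive. This is not the authors' method, and the inputs are hypothetical.

```python
# Replace a face region with a donor face using gradient-domain blending.
import cv2
import numpy as np

def replace_face(photo_bgr, donor_face_bgr, face_box):
    """Blend a donor face over the detected face region (x, y, w, h)."""
    x, y, w, h = face_box
    donor = cv2.resize(donor_face_bgr, (w, h))
    mask = 255 * np.ones(donor.shape[:2], dtype=np.uint8)   # blend the whole patch
    center = (x + w // 2, y + h // 2)
    return cv2.seamlessClone(donor, photo_bgr, mask, center, cv2.NORMAL_CLONE)
```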

Xu, W., Cheung, S.-c. S., Soares, N..  2015.  Affect-preserving privacy protection of video. 2015 IEEE International Conference on Image Processing (ICIP). :158–162.

The prevalence of wireless networks and the convenience of mobile cameras enable many new video applications beyond security and entertainment. From behavioral diagnosis to wellness monitoring, cameras are increasingly used for observation in various educational and medical settings. Videos collected for such applications are considered protected health information under privacy laws in many countries. At the same time, there is an increasing need to share such video data across a wide spectrum of stakeholders, including professionals, therapists, and families facing similar challenges. Visual privacy protection techniques, such as blurring or object removal, can be used to mitigate privacy concerns, but they also obliterate important visual cues of affect and social behavior that are crucial for the target applications. In this paper, we propose a method of manipulating facial expression and body shape to conceal the identity of individuals while preserving the underlying affect states. The experimental results demonstrate the effectiveness of our method.