Biblio

Filters: Keyword is facial recognition
2023-07-21
R, Sowmiya, G, Sivakamasundari, V, Archana.  2022.  Facial Emotion Recognition using Deep Learning Approach. 2022 International Conference on Automation, Computing and Renewable Systems (ICACRS). :1064—1069.
Human facial emotion recognition has a wide variety of applications in society. The basic idea of Facial Emotion Recognition is to map different facial expressions to a variety of emotional states. Conventional Facial Emotion Recognition consists of two processes: feature extraction and feature selection. Nowadays, among deep learning algorithms, Convolutional Neural Networks are primarily used in Facial Emotion Recognition because of the feature extraction performed by their hidden layers. A standard Convolutional Neural Network uses simple learning algorithms with a finite number of feature extraction layers. The drawback of earlier approaches was that they were validated only on frontal views of faces, even though images may be captured from different angles. This research work uses a deep Convolutional Neural Network with DenseNet-169 as a backbone network for recognizing facial emotions. On the emotion recognition dataset used, the model recognizes emotions with an accuracy of 96%.
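As a rough illustration of the kind of architecture this abstract describes (a DenseNet-169 backbone with a small emotion-classification head), a minimal Keras sketch might look like the following; the input size, seven-class output, and training setup are assumptions, not the paper's exact configuration.

```python
# Hedged sketch: DenseNet-169 backbone with an emotion-classification head.
import tensorflow as tf

NUM_EMOTIONS = 7  # assumption: seven basic emotion classes

backbone = tf.keras.applications.DenseNet169(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
backbone.trainable = False  # start by training only the head

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```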
Giri, Sarwesh, Singh, Gurchetan, Kumar, Babul, Singh, Mehakpreet, Vashisht, Deepanker, Sharma, Sonu, Jain, Prince.  2022.  Emotion Detection with Facial Feature Recognition Using CNN & OpenCV. 2022 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE). :230—232.
Emotion detection through facial feature recognition is an active domain of research in the field of human-computer interaction (HCI). Humans share multiple emotions and feelings through their facial gestures and body language. In this project, in order to detect live emotions from human facial gestures, we use an algorithm that allows the computer to automatically recognize human emotions with the help of a Convolutional Neural Network (CNN) and OpenCV. Ultimately, emotion detection is an integration of information obtained from multiple patterns. If computers become able to understand more of human emotions, the gap between humans and computers will narrow. In this research paper, we demonstrate an effective way to detect emotions such as neutral, happy, sad, surprise, angry, fear, and disgust from the frontal facial expression of a human in front of a live webcam.
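A minimal sketch of such a live webcam loop, assuming a trained Keras CNN saved under the placeholder name "emotion_cnn.h5" and a 48x48 grayscale input (both assumptions), could look like this with OpenCV's Haar cascade face detector:

```python
# Hedged sketch: live facial-emotion loop with OpenCV and an assumed CNN.
import cv2
import numpy as np
import tensorflow as tf

EMOTIONS = ["neutral", "happy", "sad", "surprise", "angry", "fear", "disgust"]
model = tf.keras.models.load_model("emotion_cnn.h5")  # hypothetical file
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y+h, x:x+w], (48, 48)) / 255.0  # assumed input size
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        label = EMOTIONS[int(np.argmax(probs))]
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```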
Udeh, Chinonso Paschal, Chen, Luefeng, Du, Sheng, Li, Min, Wu, Min.  2022.  A Co-regularization Facial Emotion Recognition Based on Multi-Task Facial Action Unit Recognition. 2022 41st Chinese Control Conference (CCC). :6806—6810.
Facial emotion recognition helps feed the growth of future artificial intelligence through the development of emotion recognition, learning, and analysis of different angles of the human face and head pose. The recent pandemic gave rise to the rapid deployment of facial recognition in a few applications, while emotion recognition still remains within experimental boundaries. A current challenge encountered in facial emotion recognition (FER) is distinguishing expressions from background noise. Robots will soon play a significant role in human perception, attention, memory, decision-making, and human-robot interaction (HRI). This work merges head pose estimation with FER to boost the robustness of emotion understanding using convolutional neural networks (CNN). Stochastic gradient descent with a comprehensive model is adopted by applying multi-task learning, which is capable of implicit parallelism and acts as an inherently better global optimizer in finding network weights. After training the multi-task learning model on two independent datasets, the FER and head-pose multi-view co-regularization frameworks were merged and evaluated on validation accuracy.
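The abstract gives few architectural details; one plausible reading of "multi-task learning with SGD" is a shared CNN trunk with an emotion head and a head-pose head trained jointly, sketched below. The layer sizes, loss weighting, and pose parameterization are all assumptions.

```python
# Hedged sketch: shared CNN trunk with two task heads (emotion + head pose),
# trained jointly with SGD as a simple stand-in for multi-task co-regularization.
import tensorflow as tf

inputs = tf.keras.Input(shape=(64, 64, 1))          # assumed input size
x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(64, 3, activation="relu")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)

emotion = tf.keras.layers.Dense(7, activation="softmax", name="emotion")(x)
pose = tf.keras.layers.Dense(3, name="pose")(x)     # assumed: yaw, pitch, roll

model = tf.keras.Model(inputs, [emotion, pose])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
              loss={"emotion": "sparse_categorical_crossentropy", "pose": "mse"},
              loss_weights={"emotion": 1.0, "pose": 0.5})  # assumed weighting
```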
Sivasangari, A., Gomathi, R. M., Anandhi, T., Roobini, Roobini, Ajitha, P..  2022.  Facial Recognition System using Decision Tree Algorithm. 2022 3rd International Conference on Electronics and Sustainable Communication Systems (ICESC). :1542—1546.
Face recognition technology is widely employed in a variety of applications, including public security, criminal identification, multimedia data management, and so on. Because of its importance for practical applications and theoretical issues, the facial recognition system has received a lot of attention. Furthermore, numerous strategies have been offered, each of which has proven to be a significant benefit in the field of facial and pattern recognition systems. Despite these advancements, face recognition still faces substantial hurdles in unrestricted situations. Deep learning techniques for facial recognition are presented in this paper for accurate detection and identification of facial images. The primary goal of facial recognition is to recognize and validate facial features. The database consists of 500 color images of people that have been pre-processed, with features extracted using Linear Discriminant Analysis. These features are split 70/30 into training and testing sets for a decision tree classifier used to measure the performance of the face recognition system.
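The LDA-plus-decision-tree pipeline with a 70/30 split maps directly onto scikit-learn; a minimal sketch follows, with the data files as hypothetical placeholders for the 500-image database.

```python
# Hedged sketch: LDA feature extraction followed by a 70/30 decision-tree evaluation.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X = np.load("faces.npy")    # hypothetical: (n_samples, n_pixels) flattened images
y = np.load("labels.npy")   # hypothetical: identity labels

features = LinearDiscriminantAnalysis().fit_transform(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(
    features, y, test_size=0.3, random_state=0, stratify=y)

clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```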
Sadikoğlu, Fahreddin M., Idle Mohamed, Mohamed.  2022.  Facial Expression Recognition Using CNN. 2022 International Conference on Artificial Intelligence in Everything (AIE). :95—99.
The face is the most dynamic part of the human body and conveys information about emotions. The diversity in facial geometry and appearance makes it possible to detect various human expressions. To differentiate among numerous facial expressions of emotion, it is crucial to identify the classes of facial expressions. The methodology used in this article is based on convolutional neural networks (CNN). In this paper, a deep learning CNN is used to examine the AlexNet architecture. Improvements were achieved by applying the transfer learning approach and modifying the fully connected layer with a Support Vector Machine (SVM) classifier. The system achieved satisfactory results on the iCV-MEFED dataset, with improved models reaching a recognition rate of around 64.29% for the classification of the selected expressions. The results obtained are acceptable, comparable to related systems in the literature, and provide a background for further improvements.
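In the spirit of the setup described (pre-trained AlexNet features feeding an SVM in place of the final fully connected layer), a minimal sketch might look like this; the placeholder tensors stand in for real preprocessed iCV-MEFED images.

```python
# Hedged sketch: AlexNet features with an SVM replacing the last FC layer.
import torch
import torchvision
from sklearn.svm import SVC

alexnet = torchvision.models.alexnet(weights="IMAGENET1K_V1")
alexnet.classifier = alexnet.classifier[:-1]  # drop the final FC layer -> 4096-d features
alexnet.eval()

def extract_features(batch):  # batch: (N, 3, 224, 224) normalized image tensors
    with torch.no_grad():
        return alexnet(batch).numpy()

# Hypothetical data standing in for the real dataset (7 emotion classes):
train_x = torch.randn(32, 3, 224, 224)
train_y = [i % 7 for i in range(32)]
svm = SVC(kernel="linear").fit(extract_features(train_x), train_y)
```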
Shiomi, Takanori, Nomiya, Hiroki, Hochin, Teruhisa.  2022.  Facial Expression Intensity Estimation Considering Change Characteristic of Facial Feature Values for Each Facial Expression. 2022 23rd ACIS International Summer Virtual Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD-Summer). :15—21.
Facial expression intensity, which quantifies the degree of a facial expression, has been proposed. It is calculated from how much facial feature values change compared to an expressionless face. The estimation has two aspects: one is to classify facial expressions, and the other is to estimate their intensity. However, it is difficult to do both at the same time. Therefore, in this work, the estimation of intensity and the classification of expression are separated. We suggest an explicit method and an implicit method. In the explicit one, a classifier determines which type of expression the input is, and a per-expression regressor determines its intensity. In the implicit one, we give zero or non-zero values to the regressors for each type of facial expression as ground truth, depending on whether or not an input image shows that facial expression. We evaluated the two methods and found that both are effective for facial expression recognition.
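A minimal sketch of the "explicit" scheme, with random forests as stand-ins for the unspecified classifier and regressors and with synthetic placeholder data, could look like this:

```python
# Hedged sketch: classify the expression type, then estimate intensity with
# a regressor trained per expression class.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Hypothetical training data: feature vectors, expression labels, intensities in [0, 1].
X = np.random.rand(300, 20)
expr = np.random.randint(0, 3, 300)   # e.g. 0=happy, 1=sad, 2=surprise
intensity = np.random.rand(300)

clf = RandomForestClassifier(random_state=0).fit(X, expr)
regressors = {e: RandomForestRegressor(random_state=0)
                  .fit(X[expr == e], intensity[expr == e])
              for e in np.unique(expr)}

def estimate(x):
    e = int(clf.predict(x.reshape(1, -1))[0])                     # which expression?
    return e, float(regressors[e].predict(x.reshape(1, -1))[0])   # how intense?
```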
Lee, Gwo-Chuan, Li, Zi-Yang, Li, Tsai-Wei.  2022.  Ensemble Algorithm of Convolution Neural Networks for Enhancing Facial Expression Recognition. 2022 IEEE 5th International Conference on Knowledge Innovation and Invention (ICKII ). :111—115.
Artificial intelligence (AI) cooperates with multiple industries to improve the overall industry framework. In particular, human emotion recognition plays an indispensable role in supporting medical care, psychological counseling, crime prevention and detection, and crime investigation. Research on emotion recognition includes emotion-specific intonation patterns, literal expressions of emotions, and facial expressions. Recently, deep learning models for facial emotion recognition have aimed to capture tiny changes in facial muscles to provide greater recognition accuracy, and hybrid models for facial expression recognition have been constantly proposed to improve the performance of deep learning models. In this study, we propose an ensemble learning algorithm to improve the accuracy of a facial emotion recognition model built from three deep learning models: VGG16, InceptionResNetV2, and EfficientNetB0. To enhance the performance of these benchmark models, we applied transfer learning, fine-tuning, and data augmentation for training and validation on the Facial Expression Recognition 2013 (FER-2013) dataset. The developed algorithm finds the best predicted value by prioritizing InceptionResNetV2. The experimental results show that the proposed priority-based ensemble learning algorithm improves model identification accuracy by 2.81%. Future extensions of this study venture into the Internet of Things (IoT), medical care, and crime detection and prevention.
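The paper's exact priority rule is not reproduced here; one plausible reading, sketched below, is to trust the prioritized InceptionResNetV2 prediction when it is confident and otherwise fall back to soft voting across the three models. The confidence threshold is an assumption.

```python
# Hedged sketch: one plausible priority-based ensemble over three softmax outputs.
import numpy as np

def ensemble_predict(p_vgg16, p_irv2, p_effb0, threshold=0.6):
    """Each argument is a softmax vector over the 7 FER-2013 classes."""
    if np.max(p_irv2) >= threshold:          # prioritized model is confident
        return int(np.argmax(p_irv2))
    mean = (p_vgg16 + p_irv2 + p_effb0) / 3  # fall back to soft voting
    return int(np.argmax(mean))
```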
Churaev, Egor, Savchenko, Andrey V..  2022.  Multi-user facial emotion recognition in video based on user-dependent neural network adaptation. 2022 VIII International Conference on Information Technology and Nanotechnology (ITNT). :1—5.
In this paper, multi-user video-based facial emotion recognition is examined in the presence of a small dataset containing the emotions of end users. Borrowing the idea of speaker-dependent speech recognition, we propose a novel approach to this task when labeled video data from end users is available. During the training stage, a deep convolutional neural network is trained for user-independent emotion classification. Next, this classifier is adapted (fine-tuned) on emotional videos of a concrete person. During the recognition stage, the user is identified with face recognition techniques, and the emotional model of the recognized user is applied. It is experimentally shown that this approach improves the accuracy of emotion recognition by more than 20% on the RAVDESS dataset.
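A minimal sketch of the adaptation step, assuming a saved user-independent Keras model (the file name and the choice to freeze all but the top layers are assumptions), might look like:

```python
# Hedged sketch: adapt a user-independent emotion CNN to one user by
# fine-tuning only the top layers at a low learning rate.
import tensorflow as tf

model = tf.keras.models.load_model("user_independent_fer.h5")  # hypothetical file
for layer in model.layers[:-2]:
    layer.trainable = False          # keep the generic feature extractor fixed

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),  # small LR for adaptation
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# user_frames / user_labels: a small labeled set from the recognized user
# model.fit(user_frames, user_labels, epochs=5, batch_size=16)
```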
Avula, Himaja, R, Ranjith, S Pillai, Anju.  2022.  CNN based Recognition of Emotion and Speech from Gestures and Facial Expressions. 2022 6th International Conference on Electronics, Communication and Aerospace Technology. :1360—1365.
The major mode of communication between hearing-impaired or mute people and others is sign language. Previously, most sign language recognition systems were built simply to recognize hand signs and convey them as text. The proposed model, however, tries to provide speech for the mute. First, hand gestures for sign language recognition and facial emotions are trained using a CNN (Convolutional Neural Network), and then an emotion-to-speech model is trained. Finally, hand gestures and facial emotions are combined to realize both the emotion and the speech.
Abbasi, Nida Itrat, Song, Siyang, Gunes, Hatice.  2022.  Statistical, Spectral and Graph Representations for Video-Based Facial Expression Recognition in Children. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :1725—1729.
Child facial expression recognition is a relatively less investigated area within affective computing. Children's facial expressions differ significantly from those of adults; thus, it is necessary to develop emotion recognition frameworks that are more objective, descriptive and specific to this target user group. In this paper we propose the first approach that (i) constructs a video-level heterogeneous graph representation for facial expression recognition in children, and (ii) predicts children's facial expressions using automatically detected Action Units (AUs). To this aim, we construct three separate length-independent representations at video level, namely statistical, spectral and graph, for detailed multi-level facial behaviour decoding (AU activation status, AU temporal dynamics and spatio-temporal AU activation patterns, respectively). Our experimental results on the LIRIS Children Spontaneous Facial Expression Video Database demonstrate that combining these three feature representations provides the highest accuracy for expression recognition in children.
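As a small illustration of what a length-independent "statistical" video-level representation can mean, the sketch below summarizes each AU's per-frame activation series with a few fixed statistics; the particular statistics chosen here are illustrative, not the paper's exact set.

```python
# Hedged sketch: length-independent statistical summary of AU time series.
import numpy as np

def statistical_representation(au_series):
    """au_series: (n_frames, n_aus) array of per-frame AU activations."""
    stats = [au_series.mean(axis=0),
             au_series.std(axis=0),
             au_series.max(axis=0),
             np.abs(np.diff(au_series, axis=0)).mean(axis=0)]  # mean temporal change
    return np.concatenate(stats)  # fixed-length vector regardless of video length
```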
2023-04-14
Raavi, Rupendra, Alqarni, Mansour, Hung, Patrick C.K.  2022.  Implementation of Machine Learning for CAPTCHAs Authentication Using Facial Recognition. 2022 IEEE International Conference on Data Science and Information System (ICDSIS). :1–5.
Web-based technologies are evolving day by day, becoming more interactive and secure. The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is one of the security features that help detect automated bots on the Web. Early CAPTCHAs were complex text-based designs, but optical character recognition algorithms can be used to crack them, which is why CAPTCHA systems became image-based. With the arrival of strong image recognition algorithms, however, image-based CAPTCHAs can now be cracked as well. In this paper, we propose a new CAPTCHA system that can be used to differentiate real humans from bots on the Web. We use deep layers with pre-trained machine learning models for CAPTCHA authentication using a facial recognition system.
2022-07-05
Schoneveld, Liam, Othmani, Alice.  2021.  Towards a General Deep Feature Extractor for Facial Expression Recognition. 2021 IEEE International Conference on Image Processing (ICIP). :2339—2342.
The human face conveys a significant amount of information. Through facial expressions, the face is able to communicate numerous sentiments without the need for verbalisation. Visual emotion recognition has been extensively studied. Recently several end-to-end trained deep neural networks have been proposed for this task. However, such models often lack generalisation ability across datasets. In this paper, we propose the Deep Facial Expression Vector ExtractoR (DeepFEVER), a new deep learning-based approach that learns a visual feature extractor general enough to be applied to any other facial emotion recognition task or dataset. DeepFEVER outperforms state-of-the-art results on the AffectNet and Google Facial Expression Comparison datasets. DeepFEVER’s extracted features also generalise extremely well to other datasets – even those unseen during training – namely, the Real-World Affective Faces (RAF) dataset.
Bae, Jin Hee, Kim, Minwoo, Lim, Joon S..  2021.  Emotion Detection and Analysis from Facial Image using Distance between Coordinates Feature. 2021 International Conference on Information and Communication Technology Convergence (ICTC). :494—497.
Facial expression recognition has long been established as a subject of continuous research in various fields. In this study, feature extraction was conducted by calculating the distances between facial landmarks in an image. The extracted features, capturing the relationships between landmarks, were analyzed and used to classify five facial expressions. We increased data and label reliability by having multiple observers perform the labeling work. Additionally, faces were recognized in the original data, and landmark coordinates were extracted and used as features. A genetic algorithm was used to select the features that were most helpful for classification. We performed facial expression classification and analysis using the proposed method, which demonstrated its validity and effectiveness.
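The distance-based feature extraction lends itself to a short sketch: pairwise Euclidean distances between detected landmarks form the feature vector, and landmark detection itself is assumed to be done elsewhere (e.g. with dlib or MediaPipe).

```python
# Hedged sketch: pairwise landmark distances as expression features.
import numpy as np
from itertools import combinations

def landmark_distance_features(landmarks):
    """landmarks: (n_points, 2) array of (x, y) landmark coordinates."""
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

# e.g. 68 landmarks -> 68*67/2 = 2278 distances, which a genetic algorithm
# could then prune to the most discriminative subset.
```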
Liu, Weida, Fang, Jian.  2021.  Facial Expression Recognition Method Based on Cascade Convolution Neural Network. 2021 International Wireless Communications and Mobile Computing (IWCMC). :1012—1015.
Research on convolutional neural networks for facial expression recognition has ignored the internal relevance of key links, which leads to low accuracy and speed that cannot meet recognition requirements. To address this problem, a serial cascade algorithm model for expression recognition in an educational robot is constructed, based on a cascade convolutional neural network model that balances accuracy, speed and stability, enabling the educational robot to recognize multiple students' facial expressions simultaneously, quickly and accurately while in motion. Using the CK+ and Oulu-CASIA expression recognition databases, the expression recognition experiments of this algorithm are compared with the commonly used STM-ExpLet and FN2EN cascade network algorithms. The results show that the accuracy of the expression recognition method exceeds 90%; compared with the other two commonly used cascade convolutional neural network methods, the accuracy of expression recognition is significantly improved.
Wang, Caixia, Wang, Zhihui, Cui, Dong.  2021.  Facial Expression Recognition with Attention Mechanism. 2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI). :1—6.
With the development of artificial intelligence, facial expression recognition (FER) has greatly improved its performance through deep learning, but there is still much room for improvement in combining attention mechanisms to focus the network on key parts of the face. For facial expression recognition, this paper designs a network model that first uses a spatial transformer network to transform the input image and then adds channel attention and spatial attention to the convolutional network. In addition, the GELU activation function is used in the convolutional network, which improves the recognition rate of facial expressions to a certain extent.
Fallah, Zahra, Ebrahimpour-Komleh, Hossein, Mousavirad, Seyed Jalaleddin.  2021.  A Novel Hybrid Pyramid Texture-Based Facial Expression Recognition. 2021 5th International Conference on Pattern Recognition and Image Analysis (IPRIA). :1—6.
Automated analysis of facial expressions is one of the most interesting and challenging problems in many areas, such as human-computer interaction. Facial images are affected by many factors, such as intensity, pose and facial expression, and these factors make facial expression recognition a challenge. The aim of this paper is to propose a new method based on the pyramid local binary pattern (PLBP) and the pyramid local phase quantization (PLPQ), which are extensions of the local binary pattern (LBP) and the local phase quantization (LPQ), two methods for extracting texture features. The LBP operator extracts LBP features in the spatial domain, and the LPQ operator extracts LPQ features in the frequency domain; combining features from the spatial and frequency domains can provide important information from both. In this paper, the PLBP and PLPQ operators are used separately to extract features, which are then combined to create a new feature vector. The advantage of the pyramid transform domain is that it can recognize facial expressions efficiently and with high accuracy, even for very low-resolution facial images. The proposed method is verified on the CK+ facial expression database, where it achieves a recognition rate of 99.85%.
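A minimal sketch of a pyramid LBP descriptor, built by computing LBP histograms at several image scales and concatenating them, is shown below; PLPQ would be analogous in the frequency domain. The number of levels and LBP parameters are assumptions.

```python
# Hedged sketch: pyramid LBP (PLBP) descriptor over a grayscale face image.
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.transform import rescale

def plbp_features(gray, levels=3, P=8, R=1):
    feats = []
    img = gray.astype(float)
    for _ in range(levels):
        lbp = local_binary_pattern(img, P, R, method="uniform")
        # "uniform" LBP yields values in [0, P+1], hence P+2 histogram bins
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
        img = rescale(img, 0.5, anti_aliasing=True)  # next pyramid level
    return np.concatenate(feats)
```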
Arabian, H., Wagner-Hartl, V., Geoffrey Chase, J., Möller, K..  2021.  Facial Emotion Recognition Focused on Descriptive Region Segmentation. 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). :3415—3418.
Facial emotion recognition (FER) is useful in many different applications and could offer significant benefit as part of feedback systems to train children with Autism Spectrum Disorder (ASD) who struggle to recognize facial expressions and emotions. This project explores the potential of real-time FER based on the use of local regions of interest combined with a machine learning approach. Histogram of Oriented Gradients (HOG) was implemented for feature extraction, along with three different classifiers: two based on k-Nearest Neighbors and one using Support Vector Machine (SVM) classification. Model performance was compared using the accuracy of randomly selected validation sets after training on random training sets of the Oulu-CASIA database. Image classes were distributed evenly, and accuracies of up to 98.44% were observed, with small variation depending on data distribution. The region selection methodology provided a compromise between accuracy and the number of extracted features, and validated the hypothesis that a focus on smaller informative regions performs just as well as using the entire image.
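A minimal sketch of this HOG-plus-classifier pipeline follows; the HOG parameters and the classifier settings are illustrative rather than the paper's exact configuration.

```python
# Hedged sketch: HOG features from face crops, classified with SVM and k-NN.
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def hog_features(gray_face):  # gray_face: 2-D grayscale face (or region) crop
    return hog(gray_face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# Hypothetical usage with precomputed crops and emotion labels:
# X_train = [hog_features(f) for f in face_crops]
# svm = SVC(kernel="linear").fit(X_train, y_train)
# knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
```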
Hu, Zhibin, Yan, Chunman.  2021.  Lightweight Multi-Scale Network with Attention for Facial Expression Recognition. 2021 4th International Conference on Advanced Electronic Materials, Computers and Software Engineering (AEMCSE). :695—698.
Aiming at the problems of the traditional convolutional neural network (CNN), such as too many parameters, single-scale features and inefficiency caused by useless features, a lightweight multi-scale network with attention is proposed for facial expression recognition. The network uses the lightweight convolutional neural network model Xception and combines it with the convolutional block attention module (CBAM) to learn key facial features. In addition, depthwise separable convolution modules with convolution kernels of 3 × 3, 5 × 5 and 7 × 7 are used to extract features from the facial expression image, and the features are fused to expand the receptive field and obtain richer facial feature information. Experiments on the facial expression datasets Fer2013 and KDEF show that expression recognition accuracy is improved by 2.14% and 2.18%, respectively, over the original Xception model, further verifying the effectiveness of our methods.
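The multi-scale branch described translates into a short Keras sketch: depthwise separable convolutions at three kernel sizes whose outputs are fused (here by concatenation). The channel counts, input size, and fusion choice are assumptions.

```python
# Hedged sketch: multi-scale depthwise separable convolutions (3x3, 5x5, 7x7) fused.
import tensorflow as tf

def multi_scale_block(x, filters=32):
    branches = [tf.keras.layers.SeparableConv2D(
                    filters, k, padding="same", activation="relu")(x)
                for k in (3, 5, 7)]
    return tf.keras.layers.Concatenate()(branches)  # fuse to widen receptive field

inputs = tf.keras.Input(shape=(48, 48, 1))   # Fer2013-style input, assumed
x = multi_scale_block(inputs)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(7, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)
```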
Sun, Lanxin, Dai, JunBo, Shen, Xunbing.  2021.  Facial emotion recognition based on LDA and Facial Landmark Detection. 2021 2nd International Conference on Artificial Intelligence and Education (ICAIE). :64—67.
Emotion recognition in the field of human-computer interaction means that the computer has the perceptual ability to predict the emotional state of human beings by observing their expressions, behaviors and emotions, so that computers can communicate emotionally with humans. The main research work of this paper is to extract facial image features using Linear Discriminant Analysis (LDA) and Facial Landmark Detection after grayscale processing and cropping, and then to compare the accuracy of emotion recognition and classification to determine which feature extraction method is more effective. The test results show that the accuracy of emotion recognition in face images reaches 73.9% with the LDA method and 84.5% with the Facial Landmark Detection method. Therefore, facial landmarks can be used to identify emotion in face images more accurately.
Cao, HongYuan, Qi, Chao.  2021.  Facial Expression Study Based on 3D Facial Emotion Recognition. 2021 20th International Conference on Ubiquitous Computing and Communications (IUCC/CIT/DSCI/SmartCNS). :375—381.
Teaching evaluation is an indispensable key link in the modern education model. Its purpose is to promote learners' cognitive and non-cognitive development, especially emotional development. However, today's education increasingly neglects the emotional side of learning. Therefore, a method of using machines to analyze the emotional changes of learners during learning has been proposed. At present, most existing emotion recognition algorithms extract two-dimensional facial features from images to perform emotion prediction. Research shows that the recognition rate of 2D facial feature extraction is not optimal, so this paper proposes an effective algorithm that obtains a single two-dimensional image at the input and constructs a three-dimensional face model at the output, thereby using 3D facial information to estimate continuous emotion in a dimensional space; the method is applied to an online learning system. Experimental results show that the algorithm has strong robustness and recognition ability.
Siyaka, Hassan Opotu, Owolabi, Olumide, Bisallah, I. Hashim.  2021.  A New Facial Image Deviation Estimation and Image Selection Algorithm (Fide-Isa) for Facial Image Recognition Systems: The Mathematical Models. 2021 1st International Conference on Multidisciplinary Engineering and Applied Science (ICMEAS). :1—7.
Deep learning models have been successful and shown to perform better in terms of accuracy and efficiency for facial recognition applications. However, they require a huge number of well-annotated data samples to succeed, and these data requirements lead to complications, including increased processing demands on the systems where such models are deployed. Reducing the training sample sizes of deep learning models is still an open problem. This paper proposes reducing the number of samples required by the convolutional neural network used in training a facial recognition system through a new Facial Image Deviation Estimation and Image Selection Algorithm (FIDE-ISA). The algorithm selects appropriate facial image training samples incrementally based on their facial deviation, reducing the need for huge datasets when training deep learning models. Preliminary results indicated 100% accuracy for models trained with 54 images (at least 3 images per individual) and above.
2021-07-02
Haque, Shaheryar Ehsan I, Saleem, Shahzad.  2020.  Augmented reality based criminal investigation system (ARCRIME). 2020 8th International Symposium on Digital Forensics and Security (ISDFS). :1—6.
Crime scene investigation and preservation are fundamentally the pillars of forensics. Numerous cases are discussed in this paper where mishandling of evidence or improper investigation led to lengthy trials or, even worse, incorrect verdicts. Whether the problem is a lack of training of first responders or some other scenario, it is essential for police officers to properly preserve evidence. A second problem is criminal profiling, where each district department has its own method of storing information about criminals. ARCRIME intends to digitally transform the way police combat crime. It will allow police officers to create a copy of the scene of a crime so that it can be presented in courts or in forensics labs. It will take the form of wearable glasses for officers on site, whereas officers in training will wear a headset. Trainee officers will be provided with simulations of cases that have already been resolved. Officers on scene will be provided with intelligence about the crime and the suspect they are interviewing, and will be able to create a case file with audio recordings and images that can be sent digitally to a prosecution lawyer. This paper also explores the risks involved with ARCRIME, weighing their impact and likelihood; contingency plans for responding to emergency situations are highlighted in the same section.
2021-03-29
Begaj, S., Topal, A. O., Ali, M..  2020.  Emotion Recognition Based on Facial Expressions Using Convolutional Neural Network (CNN). 2020 International Conference on Computing, Networking, Telecommunications Engineering Sciences Applications (CoNTESA). :58—63.

Over the last few years, there has been an increasing number of studies on facial emotion recognition because of its importance and impact on the interaction of humans with computers. With the growing number of challenging datasets, the application of deep learning techniques has become necessary. In this paper, we study the challenges of emotion recognition datasets and we also try different parameters and architectures of Convolutional Neural Networks (CNNs) in order to detect the seven emotions in human faces: anger, fear, disgust, contempt, happiness, sadness and surprise. We have chosen iCV MEFED (Multi-Emotion Facial Expression Dataset) as the main dataset for our study; it is relatively new, interesting and very challenging.

Singh, S., Nasoz, F..  2020.  Facial Expression Recognition with Convolutional Neural Networks. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC). :0324—0328.

Emotions are a powerful tool in communication, and one way humans show their emotions is through their facial expressions. Facial expression recognition is a challenging and powerful task in social communication, as facial expressions are key to non-verbal communication. In the field of Artificial Intelligence, Facial Expression Recognition (FER) is an active research area, with several recent studies using Convolutional Neural Networks (CNNs). In this paper, we demonstrate the classification of FER based on static images using CNNs, without requiring any pre-processing or feature extraction tasks. The paper also illustrates techniques to improve accuracy in this area by using pre-processing, which includes face detection and illumination correction, and feature extraction, which captures the most prominent parts of the face, including the jaw, mouth, eyes, nose, and eyebrows. Furthermore, we review the literature and present our CNN architecture, together with the use of max-pooling and dropout, which eventually aided in better performance. We obtained a test accuracy of 61.7% on FER2013 in a seven-class classification task, compared to 75.2% for state-of-the-art classification.
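A minimal sketch of a small CNN with the max-pooling and dropout the paper discusses, sized for 48x48 grayscale FER2013 images, is shown below; the exact layer sizes and dropout rates are assumptions.

```python
# Hedged sketch: small FER2013-style CNN with max-pooling and dropout.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(48, 48, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Dropout(0.25),          # regularization, as discussed in the paper
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(7, activation="softmax"),  # seven expression classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```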

Zhou, J., Zhang, X., Liu, Y., Lan, X..  2020.  Facial Expression Recognition Using Spatial-Temporal Semantic Graph Network. 2020 IEEE International Conference on Image Processing (ICIP). :1961—1965.

Motions of facial components convey significant information about facial expressions. Although remarkable advancement has been made, the dynamics of facial topology have not been fully exploited. In this paper, a novel facial expression recognition (FER) algorithm called Spatial-Temporal Semantic Graph Network (STSGN) is proposed to automatically learn spatial and temporal patterns through end-to-end feature learning from the facial topology structure. The proposed algorithm not only has greater discriminative power to capture the dynamic patterns of facial expression and stronger generalization capability to handle different variations, but also higher interpretability. Experimental evaluation on two popular datasets, CK+ and Oulu-CASIA, shows that our algorithm achieves more competitive results than other state-of-the-art methods.