Facial Expression Recognition with Convolutional Neural Networks

Title: Facial Expression Recognition with Convolutional Neural Networks
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Singh, S., Nasoz, F.
Conference Name: 2020 10th Annual Computing and Communication Workshop and Conference (CCWC)
Date Published: Jan. 2020
Publisher: IEEE
ISBN Number: 978-1-7281-3783-4
Keywords: artificial intelligence, Artificial Intelligence (AI), classification, CNN architecture, Computer architecture, convolutional neural nets, convolutional neural networks, convolutional neural networks (cnns), emotion recognition, Face, Face detection, face recognition, Facial Action Coding System (FACS), facial expression recognition, Facial expression recognition (FER), facial recognition, feature extraction, Human Behavior, illumination correction, image classification, lighting, Metrics, nonverbal communication, pre-processing, pubcrawl, resilience, Resiliency, social communications, Task Analysis
Abstract

Emotions are a powerful tool in communication, and one way that humans show their emotions is through their facial expressions. Facial expression recognition is a challenging but important task in social communication, since facial expressions are key to non-verbal communication. In the field of Artificial Intelligence, Facial Expression Recognition (FER) is an active research area, with several recent studies using Convolutional Neural Networks (CNNs). In this paper, we demonstrate FER classification on static images using CNNs, without requiring any pre-processing or feature extraction. The paper also illustrates techniques to improve accuracy in this area: pre-processing, which includes face detection and illumination correction, and feature extraction, which extracts the most prominent parts of the face, including the jaw, mouth, eyes, nose, and eyebrows. Furthermore, we review the related literature, present our CNN architecture, and discuss the challenges of using max-pooling and dropout, which ultimately aided in better performance. We obtained a test accuracy of 61.7% on FER2013 for seven-class classification, compared to the state-of-the-art accuracy of 75.2%.
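
Below is a minimal sketch of the kind of CNN the abstract describes: a few convolutional blocks with max-pooling, dropout for regularization, and a seven-class softmax output over 48x48 grayscale FER2013 images. The layer counts, filter sizes, dropout rates, and optimizer are illustrative assumptions, not the architecture reported in the paper.

```python
# Hypothetical FER CNN sketch (not the authors' exact architecture).
# Assumes FER2013-style input: 48x48 grayscale images, 7 emotion classes.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fer_cnn(input_shape=(48, 48, 1), num_classes=7):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # Convolutional blocks with max-pooling, as mentioned in the abstract
        layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        # Dropout for regularization, also mentioned in the abstract
        layers.Dropout(0.5),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        # Seven emotion classes (FER2013: angry, disgust, fear, happy,
        # sad, surprise, neutral)
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_fer_cnn()
    model.summary()
```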

URL: https://ieeexplore.ieee.org/document/9031283
DOI: 10.1109/CCWC47524.2020.9031283
Citation Key: singh_facial_2020