Biblio

Filters: Keyword is Face
2021-05-13
Fernandes, Steven, Raj, Sunny, Ewetz, Rickard, Pannu, Jodh Singh, Kumar Jha, Sumit, Ortiz, Eddy, Vintila, Iustina, Salter, Margaret.  2020.  Detecting Deepfake Videos using Attribution-Based Confidence Metric. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :1250–1259.
Recent advances in generative adversarial networks have made detecting fake videos a challenging task. In this paper, we propose the application of the state-of-the-art attribution-based confidence (ABC) metric for detecting deepfake videos. The ABC metric requires neither access to the training data nor training a calibration model on validation data; it can be used to draw inferences even when only the trained model is available. Here, we utilize the ABC metric to characterize whether a video is original or fake. The deep learning model is trained only on original videos, and the ABC metric uses the trained model to generate confidence values. For original videos, the confidence values are greater than 0.94.
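
As an illustration of the confidence-thresholding idea in this abstract, here is a minimal Python sketch. The resampling scheme is only in the spirit of the ABC metric, and the `predict` API, masking strategy, and per-video median aggregation are our assumptions, not the paper's.

```python
import numpy as np

def abc_confidence(predict, x, attributions, n_samples=100):
    """Rough sketch of an attribution-based confidence score: resample the
    input, preferentially keeping high-attribution pixels, and count how
    often the model's original prediction survives. `predict` maps an
    image batch to class probabilities (a hypothetical model API)."""
    base = predict(x[None]).argmax()
    sal = np.abs(attributions)
    keep_prob = 0.5 + 0.5 * sal / (sal.max() + 1e-12)  # salient pixels kept more often
    agree = 0
    for _ in range(n_samples):
        mask = np.random.rand(*x.shape) < keep_prob
        agree += int(predict((x * mask)[None]).argmax() == base)
    return agree / n_samples

def classify_video(frame_confidences, threshold=0.94):
    # Per the abstract, original videos score above 0.94; the median
    # aggregation over frames is our assumption.
    return "original" if np.median(frame_confidences) >= threshold else "deepfake"
```
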
2021-04-08
Sarma, M. S., Srinivas, Y., Abhiram, M., Ullala, L., Prasanthi, M. S., Rao, J. R..  2017.  Insider Threat Detection with Face Recognition and KNN User Classification. 2017 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM). :39—44.
Information security in cloud storage is a key concern with regard to degree of trust and cloud penetration. The cloud user community needs to ascertain performance and security via QoS. Numerous models have been proposed [2] [3] [6] [7] to deal with security concerns, but detection and prevention of insider threats also need to be tackled. Since the attacker is aware of sensitive information, threats from cloud insiders are a grave concern. In this paper, we propose an authentication mechanism that verifies the facial features of the cloud user in addition to username and password, thereby acting as two-factor authentication. A new QoS is proposed that is capable of monitoring and detecting insider threats using machine learning techniques. The KNN classification algorithm is used to classify users into legitimate, possibly legitimate, possibly not legitimate, and not legitimate groups, verifying image authenticity to conclude whether there is a possible insider threat. A threat detection model utilizing facial recognition and monitoring models is also proposed for insider threats. The security method put forth in [6] [7] is refined to include threat detection QoS so as to earn a higher degree of trust from the cloud user community. As a recommendation, the threat detection module should be harnessed in private cloud deployments such as defense and pharma applications. Experiments were conducted using open-source machine learning libraries, and the results are included in this paper.
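
A minimal sketch of the four-class KNN step using scikit-learn; the feature set, class boundaries, and synthetic data are our assumptions for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

CLASSES = ["legitimate", "possibly legitimate",
           "possibly not legitimate", "not legitimate"]

# Synthetic stand-in for per-session features, e.g. face-match distance
# and login-behaviour statistics (the real feature set is not specified).
rng = np.random.default_rng(0)
X_train = rng.random((200, 4))
y_train = rng.integers(0, 4, 200)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

def assess_session(features):
    """Map a session to one of the four trust classes; treat the two
    'not legitimate' classes as a potential insider threat."""
    label = int(knn.predict(np.asarray(features).reshape(1, -1))[0])
    return CLASSES[label], label >= 2

print(assess_session(rng.random(4)))
```
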
2021-03-29
Singh, S., Nasoz, F..  2020.  Facial Expression Recognition with Convolutional Neural Networks. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC). :0324—0328.

Emotions are a powerful tool in communication, and one way humans show their emotions is through facial expressions. Facial expression recognition is a challenging and powerful task in social communication, as facial expressions are key to non-verbal communication. In the field of Artificial Intelligence, Facial Expression Recognition (FER) is an active research area, with several recent studies using Convolutional Neural Networks (CNNs). In this paper, we demonstrate FER classification on static images using CNNs, without requiring any pre-processing or feature extraction tasks. The paper also illustrates techniques for improving accuracy through pre-processing, which includes face detection and illumination correction, and through feature extraction of the most prominent parts of the face, including the jaw, mouth, eyes, nose, and eyebrows. Furthermore, we review the literature, present our CNN architecture, and describe how max-pooling and dropout eventually aided in better performance. We obtained a test accuracy of 61.7% on FER2013 in a seven-class classification task, compared to 75.2% for the state of the art.
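
A compact PyTorch sketch of a FER2013-style CNN with the max-pooling and dropout the abstract mentions; the layer sizes are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class FerCNN(nn.Module):
    """Small CNN for 48x48 grayscale FER2013 images, 7 emotion classes."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)  # 12 -> 6
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                    # dropout, as discussed in the paper
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(),
            nn.Linear(256, n_classes)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = FerCNN()
logits = model(torch.randn(8, 1, 48, 48))   # batch of 8 dummy frames
assert logits.shape == (8, 7)
```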

Xu, X., Ruan, Z., Yang, L..  2020.  Facial Expression Recognition Based on Graph Neural Network. 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC). :211—214.

Facial expressions are among the most powerful, natural, and immediate means for human beings to convey their emotions and intentions. In this paper, we present a novel method for fully automatic facial expression recognition. Facial landmarks are detected to characterize facial expressions, and a graph convolutional neural network is proposed for feature extraction and expression classification. Experiments were performed on three facial expression databases; the results show that the proposed FER method achieves recognition accuracy of up to 95.85%.
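
A small PyTorch sketch of graph convolution over detected facial landmarks; the chain connectivity, layer widths, and mean-pool readout are our assumptions, not the paper's.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph-convolution layer: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, h, a_hat):
        return torch.relu(a_hat @ self.lin(h))

def normalize_adjacency(a):
    """Symmetric normalization: D^-1/2 (A + I) D^-1/2."""
    a = a + torch.eye(a.size(0))
    d_inv_sqrt = torch.diag(a.sum(1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt

# 68 facial landmarks with (x, y) node features; a simple chain connectivity
# stands in for the landmark graph, which the abstract does not specify.
n = 68
adj = torch.zeros(n, n)
adj[torch.arange(n - 1), torch.arange(1, n)] = 1.0
a_hat = normalize_adjacency(((adj + adj.T) > 0).float())

layer1, layer2 = GraphConv(2, 32), GraphConv(32, 64)
readout = nn.Linear(64, 7)                       # 7 expression classes, assumed

landmarks = torch.randn(n, 2)                    # detected landmark coordinates
h = layer2(layer1(landmarks, a_hat), a_hat)
logits = readout(h.mean(dim=0))                  # mean-pool nodes, then classify
```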

2021-03-09
Ishak, Z., Rajendran, N., Al-Sanjary, O. I., Razali, N. A. Mat.  2020.  Secure Biometric Lock System for Files and Applications: A Review. 2020 16th IEEE International Colloquium on Signal Processing Its Applications (CSPA). :23–28.

A biometric system is a developing technology used in various fields such as forensics and security systems. Fingerprint recognition verifies the identity of an individual, relying on the fact that everyone has unique fingerprints. Fingerprint biometric systems are small, simple to use, and consume little power. This study focuses on fingerprint biometric systems and how such a system would be implemented. If implemented, the system would offer multi-factor authentication strategies and improved features based on encryption algorithms. The scanner used is a biometric fingerprint sensor connected to a system that determines authorization and access-control rights. All user access information is gathered by the system, where administrators can retrieve and analyse it. The system also keeps up to date with data changes, for example displaying the name of the individual, for controlling the security of the system.

2021-02-22
Yan, Z., Park, Y., Leau, Y., Ren-Ting, L., Hassan, R..  2020.  Hybrid Network Mobility Support in Named Data Networking. 2020 International Conference on Information Networking (ICOIN). :16–19.
Named Data Networking (NDN) is a promising Internet architecture expected to solve some problems (e.g., security, mobility) of the current TCP/IP architecture. The basic concept of NDN is to route on named data instead of location addresses such as IP addresses. NDN natively supports consumer mobility, but producer mobility is still a challenge, and there has been considerable research on it. Given Internet connectivity scenarios such as public transport vehicles, network mobility support in NDN is important but also remains a challenge. This paper therefore proposes an efficient network mobility support scheme in NDN in terms of signaling protocols and data retrieval.
2021-01-15
Brockschmidt, J., Shang, J., Wu, J..  2019.  On the Generality of Facial Forgery Detection. 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW). :43—47.
A variety of architectures have been designed or repurposed for the task of facial forgery detection. While many of these designs have seen great success, they largely fail to address challenges these models may face in practice. A major challenge is posed by generality, wherein models must be prepared to perform in a variety of domains. In this paper, we investigate the ability of state-of-the-art facial forgery detection architectures to generalize. We first propose two criteria for generality: reliably detecting multiple spoofing techniques and reliably detecting unseen spoofing techniques. We then devise experiments which measure how a given architecture performs against these criteria. Our analysis focuses on two state-of-the-art facial forgery detection architectures, MesoNet and XceptionNet, both being convolutional neural networks (CNNs). Our experiments use samples from six state-of-the-art facial forgery techniques: Deepfakes, Face2Face, FaceSwap, GANnotation, ICface, and X2Face. We find MesoNet and XceptionNet show potential to generalize to multiple spoofing techniques but with a slight trade-off in accuracy, and largely fail against unseen techniques. We loosely extrapolate these results to similar CNN architectures and emphasize the need for better architectures to meet the challenges of generality.
Yang, X., Li, Y., Lyu, S..  2019.  Exposing Deep Fakes Using Inconsistent Head Poses. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :8261—8265.
In this paper, we propose a new method to expose AI-generated fake face images or videos (commonly known as Deep Fakes). Our method is based on the observation that Deep Fakes are created by splicing a synthesized face region into the original image, and in doing so introduce errors that can be revealed when 3D head poses are estimated from the face images. We perform experiments to demonstrate this phenomenon and further develop a classification method based on this cue. Using features based on this cue, an SVM classifier is evaluated on a set of real face images and Deep Fakes.
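
A hedged sketch of the pose-inconsistency cue with OpenCV and scikit-learn; the choice of central landmarks, the 3D face model, and the SVM configuration are our assumptions.

```python
import numpy as np
import cv2
from sklearn.svm import SVC

def head_pose(lm2d, lm3d, cam):
    """Rotation vector from 2D landmarks and a generic 3D face model
    (both float32 arrays; the 3D model itself is an external asset)."""
    ok, rvec, tvec = cv2.solvePnP(lm3d, lm2d, cam, None)
    return rvec.ravel()

def pose_inconsistency(lm2d, lm3d, central_idx, cam):
    # Pose estimated from the whole face vs. from the central region only;
    # for spliced Deep Fakes the two estimates tend to disagree.
    return head_pose(lm2d, lm3d, cam) - head_pose(lm2d[central_idx],
                                                  lm3d[central_idx], cam)

# features = np.stack([pose_inconsistency(...) for each face image])
# clf = SVC(kernel="rbf").fit(features, labels)   # labels: 0 = real, 1 = fake
```
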
Matern, F., Riess, C., Stamminger, M..  2019.  Exploiting Visual Artifacts to Expose Deepfakes and Face Manipulations. 2019 IEEE Winter Applications of Computer Vision Workshops (WACVW). :83—92.
High-quality face editing in videos is a growing concern and spreads distrust in video content. However, upon closer examination, many face editing algorithms exhibit artifacts that resemble classical computer vision issues stemming from face tracking and editing. As a consequence, we ask how difficult it is to expose artificial faces from current generators. To this end, we review current facial editing methods and several characteristic artifacts of their processing pipelines. We also show that relatively simple visual artifacts can already be quite effective in exposing such manipulations, including Deepfakes and Face2Face. Since the methods are based on visual features, they can easily be explained to non-technical experts as well. The methods are easy to implement and offer the capability for rapid adjustment to new manipulation types with little data available. Despite their simplicity, the methods achieve AUC values of up to 0.866.
Akhtar, Z., Dasgupta, D..  2019.  A Comparative Evaluation of Local Feature Descriptors for DeepFakes Detection. 2019 IEEE International Symposium on Technologies for Homeland Security (HST). :1—5.
The global proliferation of affordable photographing devices and readily available face image and video editing software has caused a remarkable rise in face manipulations, e.g., altering face skin color using FaceApp. Such synthetic manipulations are becoming a very perilous problem, as altered faces not only can fool human experts but also have detrimental consequences for automated face identification systems (AFIS). Thus, it is vital to formulate techniques to improve the robustness of AFIS against digital face manipulations. The most prominent countermeasure is face manipulation detection, which aims at discriminating genuine samples from manipulated ones. Over the years, analysis of microtextural features using local image descriptors has been successfully used in various applications owing to their flexibility, computational simplicity, and performance. Therefore, in this paper, we study the possibility of identifying manipulated faces via local feature descriptors. We report a comparative experimental investigation of ten local feature descriptors on the new and publicly available DeepfakeTIMIT database.
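
As one example of the descriptor family compared in this paper, a uniform-LBP histogram pipeline in Python; the parameters and linear SVM are our choices, not the paper's reported configuration.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import LinearSVC

def lbp_histogram(gray_face, P=8, R=1):
    """Uniform LBP histogram of a grayscale face crop, one of the classic
    local descriptors (P neighbours on a circle of radius R)."""
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# X = np.stack([lbp_histogram(f) for f in face_crops])  # genuine + manipulated
# clf = LinearSVC().fit(X, y)                           # y: 0 = real, 1 = fake
```
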
Yadav, D., Salmani, S..  2019.  Deepfake: A Survey on Facial Forgery Technique Using Generative Adversarial Network. 2019 International Conference on Intelligent Computing and Control Systems (ICCS). :852—857.
"Deepfake" it is an incipiently emerging face video forgery technique predicated on AI technology which is used for creating the fake video. It takes images and video as source and it coalesces these to make a new video using the generative adversarial network and the output is very convincing. This technique is utilized for generating the unauthentic spurious video and it is capable of making it possible to generate an unauthentic spurious video of authentic people verbally expressing and doing things that they never did by swapping the face of the person in the video. Deepfake can create disputes in countries by influencing their election process by defaming the character of the politician. This technique is now being used for character defamation of celebrities and high-profile politician just by swapping the face with someone else. If it is utilized in unethical ways, this could lead to a serious problem. Someone can use this technique for taking revenge from the person by swapping face in video and then posting it to a social media platform. In this paper, working of Deepfake technique along with how it can swap faces with maximum precision in the video has been presented. Further explained are the different ways through which we can identify if the video is generated by Deepfake and its advantages and drawback have been listed.
Khalid, H., Woo, S. S..  2020.  OC-FakeDect: Classifying Deepfakes Using One-class Variational Autoencoder. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :2794—2803.
An image forgery method called Deepfakes can cause security and privacy issues by changing the identity of a person in a photo through the replacement of his or her face with a computer-generated image or another person's face. Detecting Deepfakes is therefore a new challenge in protecting individuals from potential misuse. Many researchers have proposed binary-classification based detection approaches, but these generally require a large amount of both real and fake face images for training, and it is challenging to collect sufficient fake image data in advance. Besides, when new deepfake generation methods are introduced, little deepfake data is available, and detection performance may be mediocre. To overcome these data scarcity limitations, we formulate deepfake detection as a one-class anomaly detection problem. We propose OC-FakeDect, which uses a one-class Variational Autoencoder (VAE) trained only on real face images and detects non-real images such as deepfakes by treating them as anomalies. Our preliminary results show that this one-class approach can be promising for detecting Deepfakes, achieving 97.5% accuracy on the NeuralTextures data of the well-known FaceForensics++ benchmark dataset without using any fake images for training.
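
A minimal PyTorch sketch of the one-class idea: train a small VAE on real faces only and flag high reconstruction error as anomalous. The architecture and thresholding rule are illustrative, not the OC-FakeDect design.

```python
import torch
import torch.nn as nn

class OneClassVAE(nn.Module):
    """Tiny VAE over flattened 64x64 grayscale faces with pixels in [0, 1]."""
    def __init__(self, d_in=64 * 64, d_lat=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(d_in, 256), nn.ReLU())
        self.mu, self.logvar = nn.Linear(256, d_lat), nn.Linear(256, d_lat)
        self.dec = nn.Sequential(nn.Linear(d_lat, 256), nn.ReLU(),
                                 nn.Linear(256, d_in), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    rec = nn.functional.binary_cross_entropy(recon, x.flatten(1), reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

def is_fake(model, x, threshold):
    recon, _, _ = model(x)
    err = (recon - x.flatten(1)).pow(2).mean(1)   # per-image reconstruction error
    return err > threshold                        # threshold tuned on real-only data
```
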
Younus, M. A., Hasan, T. M..  2020.  Effective and Fast DeepFake Detection Method Based on Haar Wavelet Transform. 2020 International Conference on Computer Science and Software Engineering (CSASE). :186—190.
DeepFake videos tampered with Generative Adversarial Networks (GANs) present a new challenge in today's world. With the inception of GANs, generating high-quality fake videos has become much easier and far more realistic, so the development of efficient tools that automatically detect these fake videos is of paramount importance. The proposed DeepFake detection method exploits the fact that current DeepFake generation algorithms cannot generate face images at arbitrary resolutions: they produce new faces only at a limited size and resolution, and further distortion and blur are needed to match and fit the fake face to the background and surrounding context of the source video. This transformation causes a distinctive blur inconsistency between the generated face and its background in the resulting DeepFake videos; these artifacts can be spotted by examining the edge pixels of the faces in the wavelet domain in each frame, compared with the rest of the frame. This paper presents a blur-inconsistency detection scheme that relies on the type of edge and an analysis of its sharpness using the Haar wavelet transform. Using this feature, it can determine whether, and to what extent, the face region in a video has been blurred, leading to the detection of DeepFake videos. The effectiveness of the proposed scheme is demonstrated experimentally on the "UADFV" dataset, where a very successful detection rate of more than 90.5% was achieved.
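
A simplified Python take on the wavelet-domain blur cue, comparing high-frequency energy of the face crop against the full frame; the energy measure and decision ratio are our assumptions, not the paper's exact scheme.

```python
import numpy as np
import pywt

def edge_energy(gray, levels=3):
    """Mean Haar-wavelet edge-map magnitude per decomposition level;
    blurred regions lose this high-frequency energy first."""
    energies, approx = [], gray.astype(float)
    for _ in range(levels):
        approx, (ch, cv, cd) = pywt.dwt2(approx, "haar")
        energies.append(np.sqrt(ch**2 + cv**2 + cd**2).mean())
    return np.array(energies)

def blur_inconsistency(frame_gray, face_box):
    """Ratio of face-region edge energy to whole-frame edge energy; a face
    much blurrier than its surroundings is a DeepFake indicator."""
    x, y, w, h = face_box
    face = frame_gray[y:y + h, x:x + w]
    return float((edge_energy(face) / (edge_energy(frame_gray) + 1e-9)).min())
```
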
2020-12-11
Fan, M., Luo, X., Liu, J., Wang, M., Nong, C., Zheng, Q., Liu, T..  2019.  Graph Embedding Based Familial Analysis of Android Malware using Unsupervised Learning. 2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE). :771—782.

The rapid growth of Android malware has posed severe security threats to smartphone users. On the basis of the familial trait of Android malware observed by previous work, familial analysis is a promising way to help analysts better focus on the commonalities of malware samples within the same families, thus reducing the analytical workload and accelerating malware analysis. The majority of existing approaches rely on supervised learning and face three main challenges: low accuracy, low efficiency, and the lack of labeled datasets. To address these challenges, we first construct a fine-grained behavior model by abstracting program semantics into a set of subgraphs. Then, we propose SRA, a novel feature that depicts the similarity relationships between the structural roles of sensitive API call nodes in subgraphs. An SRA is obtained via graph embedding techniques and represented as a vector, so we can effectively reduce the high complexity of graph matching. After that, instead of training a classifier with labeled samples, we construct a malware link network based on SRAs and apply community detection algorithms to group the unlabeled samples into families. We implement these ideas in a system called GefDroid that performs Graph embedding based familial analysis of AnDroid malware using unsupervised learning. Moreover, we conduct extensive experiments to evaluate GefDroid on three datasets with ground truth. The results show that GefDroid achieves high agreement (0.707-0.883 in terms of NMI) between the clustering results and the ground truth. Furthermore, GefDroid requires only linear run-time overhead and takes around 8.6s to analyze a sample on average, considerably faster than previous work.
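
A hedged sketch of the unsupervised grouping step: build a similarity graph over SRA-style embedding vectors and run community detection with NetworkX. The cosine similarity, threshold, and choice of Louvain communities are our assumptions.

```python
import numpy as np
import networkx as nx

def familial_clusters(sra_vectors, names, sim_threshold=0.9):
    """Build a malware link network from per-sample embedding vectors and
    group unlabeled samples into families via community detection."""
    norms = np.linalg.norm(sra_vectors, axis=1)
    sims = (sra_vectors @ sra_vectors.T) / np.outer(norms, norms)  # cosine

    g = nx.Graph()
    g.add_nodes_from(names)
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if sims[i, j] >= sim_threshold:
                g.add_edge(names[i], names[j], weight=float(sims[i, j]))

    # Louvain community detection groups samples into candidate families.
    return nx.community.louvain_communities(g, weight="weight")
```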

2020-11-04
Howard, J. J., Blanchard, A. J., Sirotin, Y. B., Hasselgren, J. A., Vemury, A. R..  2018.  An Investigation of High-Throughput Biometric Systems: Results of the 2018 Department of Homeland Security Biometric Technology Rally. 2018 IEEE 9th International Conference on Biometrics Theory, Applications and Systems (BTAS). :1—7.

The 2018 Biometric Technology Rally was an evaluation, sponsored by the U.S. Department of Homeland Security, Science and Technology Directorate (DHS S&T), that challenged industry to provide face or face/iris systems capable of unmanned traveler identification in a high-throughput security environment. Selected systems were installed at the Maryland Test Facility (MdTF), a DHS S&T-affiliated biometrics testing laboratory, and evaluated using a population of 363 naive human subjects recruited from the general public. The performance of each system was examined based on measured throughput, capture capability, matching capability, and user satisfaction metrics. This research documents the performance of unmanned face and face/iris systems required to maintain an average total subject interaction time of less than 10 seconds. The results highlight discrepancies between the performance of biometric systems as anticipated by the system designers and the measured performance, indicating an incomplete understanding of the main determinants of system performance. Our research shows that failure-to-acquire errors, unpredicted by system designers, were the main driver of non-identification rates instead of failure-to-match errors, which were better predicted. This outcome indicates the need for a renewed focus on reducing the failure-to-acquire rate in high-throughput, unmanned biometric systems.
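
The failure-to-acquire finding follows from a standard decomposition of the non-identification rate, sketched here with made-up numbers for illustration:

```python
def non_identification_rate(fta, ftm):
    """A subject is not identified if the system fails to acquire an image
    (FTA) or acquires one but fails to match it (FTM). The decomposition is
    standard; the numbers below are invented for illustration."""
    return fta + (1 - fta) * ftm

# A designer who budgets only for matching errors underestimates the outcome:
print(non_identification_rate(fta=0.00, ftm=0.05))  # 0.05  (designer's model)
print(non_identification_rate(fta=0.08, ftm=0.05))  # 0.126 (acquisition dominates)
```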

2020-09-14
Sivaram, M., Ahamed A, Mohamed Uvaze, Yuvaraj, D., Megala, G., Porkodi, V., Kandasamy, Manivel.  2019.  Biometric Security and Performance Metrics: FAR, FER, CER, FRR. 2019 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE). :770–772.
Biometrics deals with the automated recognition of individuals based on biological and behavioural traits. A pattern recognition system recognizes a person by determining the authenticity of a specific behavioural or physiological characteristic. The main principles of a biometric system are identification and verification. A biometric verification system uses fingerprints, face, hand geometry, iris, voice, signature, and keystroke dynamics of a person to identify an individual or to verify a claimed identity. Biometric authentication is a form of identification and access control that can identify individuals in groups under surveillance. Biometric security systems increase overall security, and individuals no longer have to deal with lost ID cards or forgotten passwords. They also help organizations determine who was present at a given time when an incident occurred that needs review. The current issues that individuals and organizations face with biometric systems are personal privacy, expense, and the risk that stored data may be stolen.
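
The metrics named in the title can be computed directly from genuine and impostor match scores; a short sketch, where the higher-score-is-better convention and the toy data are our assumptions.

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """FAR: fraction of impostor attempts accepted; FRR: fraction of genuine
    attempts rejected, at a given decision threshold."""
    far = float(np.mean(np.asarray(impostor) >= threshold))
    frr = float(np.mean(np.asarray(genuine) < threshold))
    return far, frr

def crossover_error_rate(genuine, impostor):
    """Sweep candidate thresholds and return FAR/FRR where they are closest,
    i.e. the CER/EER operating point."""
    candidates = np.unique(np.concatenate([genuine, impostor]))
    best = min(candidates,
               key=lambda t: abs(np.subtract(*far_frr(genuine, impostor, t))))
    return best, far_frr(genuine, impostor, best)

genuine = np.random.normal(0.8, 0.1, 1000)    # toy genuine match scores
impostor = np.random.normal(0.4, 0.1, 1000)   # toy impostor match scores
print(crossover_error_rate(genuine, impostor))
```
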
2020-08-28
Aanjanadevi, S., Palanisamy, V., Aanjankumar, S..  2019.  An Improved Method for Generating Biometric-Cryptographic System from Face Feature. 2019 3rd International Conference on Trends in Electronics and Informatics (ICOEI). :1076—1079.
One of the most difficult tasks in networking is providing security for data during transmission; the main issue in using a network is the lack of security. Various techniques and methods have been introduced to enhance the integrity of data transmitted over the Internet, but for several reasons, and because of intruders, providing security remains a tedious task. Initially, conventional passwords were used to secure data during storage and transmission, but remembering passwords is confusing and difficult for users. Cryptography was then introduced to protect data from intruders by converting readable data into unreadable form through encryption; the receiver recovers the original data by the reverse process, decryption. However, encryption schemes have been broken by intruders trying various key combinations. In the proposed work, a strong encryption key is generated by combining biometric and cryptographic methods to enhance data integrity. A biometric face image is pre-processed, and facial features are extracted to generate a biometric-cryptographic key. Data can then be encrypted with the newly produced key, a combination of 0 and 1 bits, and stored in the database. By generating a bio-crypto key and using it to transmit or store data, the privacy and integrity of the data are enhanced, and by using one's own biometrics as the key, the chances of hacking and of intruders accessing the data are minimized.
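
A toy Python sketch of the bio-crypto key idea: binarize face features into 0/1 bits, hash them into a 256-bit key, and encrypt with it. Real biometric key generation needs error correction (e.g., fuzzy extractors) so that noisy re-captures reproduce the same key; that part is omitted here, and the feature thresholds are invented.

```python
import base64
import hashlib
import numpy as np
from cryptography.fernet import Fernet

def biocrypto_key(face_features, thresholds):
    """Threshold facial features into a bit string, then hash the bits into
    a 32-byte key in the url-safe base64 format Fernet expects."""
    bits = (np.asarray(face_features) > np.asarray(thresholds)).astype(np.uint8)
    digest = hashlib.sha256(bits.tobytes()).digest()    # 256-bit key material
    return base64.urlsafe_b64encode(digest)

features = np.random.rand(128)               # stand-in for extracted face features
key = biocrypto_key(features, thresholds=np.full(128, 0.5))
token = Fernet(key).encrypt(b"sensitive payload")       # encrypt with the bio-key
assert Fernet(key).decrypt(token) == b"sensitive payload"
```
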
Parafita, Álvaro, Vitrià, Jordi.  2019.  Explaining Visual Models by Causal Attribution. 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW). :4167—4175.

Model explanations based on pure observational data cannot compute the effects of features reliably, due to their inability to estimate how each factor alteration could affect the rest. We argue that explanations should be based on the causal model of the data and the derived intervened causal models, that represent the data distribution subject to interventions. With these models, we can compute counterfactuals, new samples that will inform us how the model reacts to feature changes on our input. We propose a novel explanation methodology based on Causal Counterfactuals and identify the limitations of current Image Generative Models in their application to counterfactual creation.

2020-08-14
Jin, Zhe, Chee, Kong Yik, Xia, Xin.  2019.  What Do Developers Discuss about Biometric APIs? 2019 IEEE International Conference on Software Maintenance and Evolution (ICSME). :348—352.
With the emergence of biometric technology in various applications, such as access control (e.g., mobile lock/unlock), financial transactions (e.g., Alibaba smile-to-pay), and time attendance, the development of biometric systems attracts increasing interest from developers. Although a sound biometric system provides security assurance and great usability, developing an effective biometric system is a rather challenging task. For instance, many publicly available biometric APIs do not provide sufficient instructions or precise documentation on their usage, and many developers struggle to implement these APIs in various tasks. Moreover, quick updates to biometric algorithms (e.g., feature extraction and matching) may propagate to APIs, leading to potential confusion for system developers. Hence, we conduct an empirical study of the problems developers currently encounter while implementing biometric APIs, as well as the issues that need to be addressed when developing biometric systems using these APIs. We manually analyzed a total of 500 biometric API-related posts from online media such as Stack Overflow and Neurotechnology. We reveal that 1) most of the problems encountered relate to the lack of precise documentation on the biometric APIs, and 2) biometric APIs are often incompatible across multiple implementation environments.
2020-08-10
Zhang, Xinman, He, Tingting, Xu, Xuebin.  2019.  Android-Based Smartphone Authentication System Using Biometric Techniques: A Review. 2019 4th International Conference on Control, Robotics and Cybernetics (CRC). :104–108.
With the technological progress of the mobile Internet, smartphones based on Android OS account for the vast majority of market share. Traditional encryption technology cannot resolve the dilemma of smartphone information leakage, and Android-based authentication systems built on biometric recognition have emerged to offer more reliable information assurance. In this paper, we summarize several biometrics and their attributes. Furthermore, we review the algorithmic frameworks and performance indices used in authentication techniques. Typical identity authentication systems, including their experimental results, are then summarized and analyzed in the survey. The article is written with the intention of providing an in-depth overview of Android-based biometric verification systems to the reader.
2020-07-16
Biancardi, Beatrice, Wang, Chen, Mancini, Maurizio, Cafaro, Angelo, Chanel, Guillaume, Pelachaud, Catherine.  2019.  A Computational Model for Managing Impressions of an Embodied Conversational Agent in Real-Time. 2019 8th International Conference on Affective Computing and Intelligent Interaction (ACII). :1—7.

This paper presents a computational model for managing an Embodied Conversational Agent's first impressions of warmth and competence towards the user. These impressions are important to manage because they can impact users' perception of the agent and their willingness to continue the interaction with it. The model aims at detecting the user's impression of the agent and producing appropriate verbal and nonverbal agent behaviours in order to maintain a positive impression of warmth and competence. Users' impressions are recognized using a machine learning approach with facial expressions (action units), which are important indicators of users' affective states and intentions. The agent adapts its verbal and nonverbal behaviour in real time, with a reinforcement learning algorithm that takes the user's impressions as reward to select the most appropriate combination of verbal and nonverbal behaviours to perform. A user study testing the model in a contextualized interaction is also presented. Our hypothesis is that users' ratings differ when the agent adapts its behaviour according to our reinforcement learning algorithm, compared to when it does not adapt its behaviour to users' reactions (i.e., when it selects its behaviours randomly). The study shows a general tendency for the agent to perform better when using our model than in the random condition. Significant results show that users' ratings of the agent's warmth are influenced by their a priori beliefs about virtual characters, and that users judged the agent as more competent when it adapted its behaviour compared to the random condition.
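
A simplified stand-in for the paper's reinforcement-learning component, framed here as an epsilon-greedy bandit over behaviour combinations with the detected impression as reward; the behaviour set, reward scale, and update rule are our assumptions.

```python
import random

class BehaviourBandit:
    """Epsilon-greedy selection over verbal/nonverbal behaviour combinations,
    rewarded by the estimated change in the user's impression."""
    def __init__(self, behaviours, epsilon=0.1):
        self.q = {b: 0.0 for b in behaviours}   # running value estimates
        self.n = {b: 0 for b in behaviours}     # selection counts
        self.epsilon = epsilon

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(list(self.q))  # explore
        return max(self.q, key=self.q.get)      # exploit best-known combo

    def update(self, behaviour, reward):
        # reward: estimated warmth/competence impression from facial action units
        self.n[behaviour] += 1
        self.q[behaviour] += (reward - self.q[behaviour]) / self.n[behaviour]

combos = [("praise", "smile"), ("inform", "nod"), ("joke", "lean-forward")]
agent = BehaviourBandit(combos)
choice = agent.select()
agent.update(choice, reward=0.7)
```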

2020-06-26
Karthika, P., Babu, R. Ganesh, Nedumaran, A..  2019.  Machine Learning Security Allocation in IoT. 2019 International Conference on Intelligent Computing and Control Systems (ICCS). :474—478.

The advanced computational capabilities of many resource-constrained devices such as mobile phones have enabled various research areas, including image retrieval from big data repositories for various IoT applications. The main challenges for image retrieval using mobile devices in an IoT environment are computational complexity and storage. To manage big data in an IoT environment for image retrieval, we propose a lightweight deep-learning-based framework for energy-constrained devices. The framework first detects and crops face regions from an image using the Viola-Jones algorithm, with an additional face classifier to eliminate false detections. It then uses the convolutional layers of a cost-effective pre-trained CNN model with refined features to represent faces. Next, features of the big data repository are indexed to achieve a faster matching process for real-time retrieval. Finally, Euclidean distance is used to measure similarity between query and repository images. For experimental evaluation, we created a local facial image dataset that includes both single and group face images; the dataset can be used by other researchers as a baseline for comparison with other real-time facial image retrieval systems. The experimental results show that the designed framework outperforms other state-of-the-art feature extraction methods in terms of efficiency and retrieval for IoT-assisted energy-constrained platforms.
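
A minimal OpenCV/NumPy sketch of the pipeline's first and last stages, Viola-Jones face cropping and Euclidean-distance retrieval; the embedding model in between is assumed, not specified by the abstract.

```python
import cv2
import numpy as np

# Viola-Jones face detector shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_faces(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [bgr_image[y:y + h, x:x + w] for (x, y, w, h) in boxes]

def retrieve(query_vec, index_vecs, index_ids, top_k=5):
    """Euclidean-distance ranking of a query face embedding against the
    indexed repository, as in the final matching step."""
    dists = np.linalg.norm(index_vecs - query_vec, axis=1)
    order = np.argsort(dists)[:top_k]
    return [(index_ids[i], float(dists[i])) for i in order]

# Embeddings would come from the convolutional layers of a pre-trained CNN;
# any lightweight model works for this sketch.
```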

2020-06-19
Ly, Son Thai, Do, Nhu-Tai, Lee, Guee-Sang, Kim, Soo-Hyung, Yang, Hyung-Jeong.  2019.  A 3d Face Modeling Approach for in-The-Wild Facial Expression Recognition on Image Datasets. 2019 IEEE International Conference on Image Processing (ICIP). :3492—3496.

This paper explores the benefits of 3D face modeling for in-the-wild facial expression recognition (FER). Since in-the-wild 3D FER datasets are limited, we first construct 3D facial data from an available 2D dataset using recent advances in 3D face reconstruction. A 3D facial geometry representation is then extracted by deep learning techniques. In addition, we take advantage of manipulating the 3D face, such as using 2D projected images of the 3D face as additional input for FER. These features are then fused with those of a typical 2D FER network. By doing so, despite using common approaches, we achieve competitive recognition accuracy on the Real-World Affective Faces (RAF) database and Static Facial Expressions in the Wild (SFEW 2.0) compared with state-of-the-art reports. To the best of our knowledge, this is the first time such a deep learning combination of 3D and 2D facial modalities has been presented in the context of in-the-wild FER.

Saboor khan, Abdul, Shafi, Imran, Anas, Muhammad, Yousuf, Bilal M, Abbas, Muhammad Jamshed, Noor, Aqib.  2019.  Facial Expression Recognition using Discrete Cosine Transform Artificial Neural Network. 2019 22nd International Multitopic Conference (INMIC). :1—5.

Humans often use non-verbal gestures (e.g., facial expressions) to express information or emotions, and countless facial gestures are expressed throughout the day. The channels for these expressions and emotions can be activities, postures, behaviours, and facial expressions, and extensive research has revealed a strong relationship between these channels and emotions that warrants further investigation. An Automatic Facial Expression Recognition (AFER) framework is proposed in this work that can predict the seven universal expressions. To evaluate the proposed approach, the Japanese Female Facial Expression (JAFFE) frontal-face image database is used as input. The images are processed with a frequency-domain technique, the Discrete Cosine Transform (DCT), and then classified using Artificial Neural Networks (ANNs). To check the robustness of this strategy, randomized trials of K-fold cross-validation, leave-one-out, and person-independent methods are repeated many times to provide an overview of recognition rates. The experimental results demonstrate promising performance.
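
A short sketch of the DCT-plus-neural-network pipeline with SciPy and scikit-learn; the coefficient block size and MLP configuration are our choices, not the paper's.

```python
import numpy as np
from scipy.fftpack import dct
from sklearn.neural_network import MLPClassifier

def dct_features(gray_face, k=32):
    """2D DCT of a face image; keep the top-left k x k low-frequency block
    as the feature vector (the block size is an assumption)."""
    d = dct(dct(gray_face.astype(float), axis=0, norm="ortho"),
            axis=1, norm="ortho")
    return d[:k, :k].ravel()

# X = np.stack([dct_features(img) for img in jaffe_images])  # 7 expression labels
# clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500).fit(X, y)
```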

Keshari, Tanya, Palaniswamy, Suja.  2019.  Emotion Recognition Using Feature-level Fusion of Facial Expressions and Body Gestures. 2019 International Conference on Communication and Electronics Systems (ICCES). :1184—1189.

Automatic emotion recognition using computer vision is significant for many real-world applications such as photojournalism, virtual reality, sign language recognition, and Human Robot Interaction (HRI). Psychological research findings indicate that humans rely on the collective visual cues of face and body to comprehend human emotional behaviour. A plethora of studies has analysed human emotions using facial expressions, EEG signals, speech, and so on, but most of this work is based on a single modality. Our objective is to efficiently integrate emotions recognized from facial expressions and the upper-body pose of humans in images. Our work on bimodal emotion recognition combines the accuracy benefits of both modalities.
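
A minimal sketch of feature-level fusion: per-modality normalization followed by concatenation before a single classifier. The normalization and classifier choice are our assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fuse(face_feats, body_feats):
    """Feature-level fusion: L2-normalize each modality vector, then
    concatenate them into one joint feature vector."""
    f = face_feats / (np.linalg.norm(face_feats) + 1e-9)
    b = body_feats / (np.linalg.norm(body_feats) + 1e-9)
    return np.concatenate([f, b])

# X = np.stack([fuse(f, b) for f, b in zip(face_vecs, body_vecs)])
# clf = RandomForestClassifier().fit(X, emotion_labels)
```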