Biblio

Filters: Keyword is Nose
2021-03-01
Sarathy, N., Alsawwaf, M., Chaczko, Z.  2020.  Investigation of an Innovative Approach for Identifying Human Face-Profile Using Explainable Artificial Intelligence. 2020 IEEE 18th International Symposium on Intelligent Systems and Informatics (SISY). :155–160.
Human identification is a well-researched topic that keeps evolving. Advances in technology have made it easy to train models, or reuse existing ones, to detect several features of the human face. When it comes to identifying a human face from the side, there are many opportunities to advance biometric identification research further. This paper investigates human face identification based on the side profile by extracting facial features and characterizing the feature sets with geometric ratio expressions. These geometric ratio expressions are computed into feature vectors, and the final stage uses weighted means to measure similarity. The research approaches this identification problem with eXplainable Artificial Intelligence (XAI). Findings, based on a small dataset, indicate that the approach offers encouraging results, and further investigation could have a significant impact on how face profiles are identified. Performance of the proposed system is validated using metrics such as Precision, False Acceptance Rate, False Rejection Rate and True Positive Rate. Multiple simulations indicate an Equal Error Rate of 0.89.
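The pipeline the abstract describes reduces to landmark-derived ratio features compared by a weighted mean of per-feature similarities. Below is a minimal sketch of that idea; the landmark names, the specific ratios, and the weights are illustrative assumptions, not values from the paper.

```python
# Sketch: geometric ratio feature vectors + weighted-mean similarity.
# Landmark names and ratio definitions below are hypothetical examples.
import numpy as np

def ratio_features(landmarks: dict) -> np.ndarray:
    """Compute geometric ratio expressions from side-profile landmarks.

    `landmarks` maps names (e.g. 'nose_tip', 'chin') to (x, y) points.
    """
    def dist(a, b):
        return np.linalg.norm(np.asarray(landmarks[a]) - np.asarray(landmarks[b]))

    return np.array([
        dist("nose_tip", "chin") / dist("forehead", "chin"),         # lower-profile ratio
        dist("nose_tip", "nose_bridge") / dist("forehead", "chin"),  # nose-length ratio
        dist("lips", "chin") / dist("nose_tip", "chin"),             # mouth-chin ratio
    ])

def weighted_similarity(probe: np.ndarray, enrolled: np.ndarray,
                        weights: np.ndarray) -> float:
    """Weighted mean of per-feature similarities (1.0 = identical ratios)."""
    per_feature = 1.0 - np.abs(probe - enrolled) / np.maximum(probe, enrolled)
    return float(np.average(per_feature, weights=weights))
```

Thresholding the similarity score yields an accept/reject decision; sweeping that threshold traces the FAR/FRR curves whose crossing point is the Equal Error Rate the abstract reports.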
2019-12-30
Lian, Zheng, Li, Ya, Tao, Jianhua, Huang, Jian, Niu, Mingyue.  2018.  Region Based Robust Facial Expression Analysis. 2018 First Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia). :1–5.
Facial emotion recognition is an essential aspect of human-machine interaction. Under real-world conditions it faces many challenges, e.g., illumination changes, large pose variations, and partial or full occlusions, which leave different facial areas with different sharpness and completeness. Motivated by this, we focus on facial expression recognition based on partial faces. We compare the contribution of seven facial areas in low-resolution images: nose areas, mouth areas, eyes areas, nose-to-mouth areas, nose-to-eyes areas, mouth-to-eyes areas, and whole-face areas. Through analysis of the confusion matrix and the class activation map, we find that mouth regions contain much more emotional information than nose areas and eyes areas; at the same time, considering larger facial areas helps judge the expression more precisely. To sum up, the contributions of this paper are two-fold: (1) we reveal which facial areas matter for emotion recognition, and (2) we quantify the contribution of different facial parts.
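The comparison protocol implied by the abstract, training the same classifier on each facial crop and contrasting the resulting accuracies and confusion matrices, can be sketched as follows. The crop boxes and the logistic-regression stand-in classifier are assumptions for illustration, not the authors' architecture.

```python
# Sketch: per-region expression recognition on aligned grayscale faces
# of shape (N, 64, 64). Crop boxes below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix

REGIONS = {  # (top, bottom, left, right) in pixels
    "eyes":          (10, 28, 6, 58),
    "nose":          (24, 44, 20, 44),
    "mouth":         (42, 60, 14, 50),
    "nose_to_mouth": (24, 60, 14, 50),
    "nose_to_eyes":  (10, 44, 6, 58),
    "mouth_to_eyes": (10, 60, 6, 58),
    "whole_face":    (0, 64, 0, 64),
}

def crop(faces: np.ndarray, box) -> np.ndarray:
    """Cut one region out of every face and flatten it for the classifier."""
    t, b, l, r = box
    return faces[:, t:b, l:r].reshape(len(faces), -1)

def region_scores(train_x, train_y, test_x, test_y):
    """Fit the same classifier per region; return accuracy and confusion matrix."""
    results = {}
    for name, box in REGIONS.items():
        clf = LogisticRegression(max_iter=1000).fit(crop(train_x, box), train_y)
        pred = clf.predict(crop(test_x, box))
        results[name] = (accuracy_score(test_y, pred),
                         confusion_matrix(test_y, pred))
    return results
```

Ranking the regions by the resulting accuracies reproduces the kind of comparison the abstract draws, e.g., mouth crops outperforming nose or eyes crops.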
2018-02-28
Su, J. C., Wu, C., Jiang, H., Maji, S.  2017.  Reasoning About Fine-Grained Attribute Phrases Using Reference Games. 2017 IEEE International Conference on Computer Vision (ICCV). :418–427.

We present a framework for learning to describe fine-grained visual differences between instances using attribute phrases. Attribute phrases capture distinguishing aspects of an object (e.g., “propeller on the nose” or “door near the wing” for airplanes) in a compositional manner. Instances within a category can be described by a set of these phrases, and collectively they span the space of semantic attributes for a category. We collect a large dataset of such phrases by asking annotators to describe several visual differences between a pair of instances within a category. We then learn to describe and ground these phrases to images in the context of a reference game between a speaker and a listener. The goal of the speaker is to describe attributes of an image that allow the listener to correctly identify it within a pair. Data collected in this pairwise manner improves the ability of the speaker to generate, and of the listener to interpret, visual descriptions. Moreover, due to the compositionality of attribute phrases, the trained listeners can interpret descriptions not seen during training for image retrieval, and the speakers can generate attribute-based explanations for differences between previously unseen categories. We also show that embedding an image into the semantic space of attribute phrases derived from listeners offers a 20% improvement in accuracy over existing attribute-based representations on the FGVC-aircraft dataset.
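The reference game described above can be sketched as a simple scoring rule: the listener takes a softmax over phrase-image similarities to pick one image of a pair, and a pragmatic speaker prefers the candidate phrase its listener resolves to the target most reliably. The embedding vectors here are assumed stand-ins for the paper's trained speaker and listener networks.

```python
# Sketch: listener choice and pragmatic speaker selection in a reference
# game. All vectors are assumed to come from some shared phrase/image
# embedding space; the networks producing them are not modeled here.
import numpy as np

def listener_choice(phrase_vec, img_vec_a, img_vec_b) -> float:
    """Softmax probability that the phrase describes image A rather than B."""
    scores = np.array([phrase_vec @ img_vec_a, phrase_vec @ img_vec_b])
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return float(probs[0])

def speaker_select(candidate_phrase_vecs, target_vec, distractor_vec):
    """Pragmatic speaker: pick the phrase the listener most likely grounds
    to the target image instead of the distractor."""
    return max(candidate_phrase_vecs,
               key=lambda p: listener_choice(p, target_vec, distractor_vec))
```

Training the speaker against the listener's choices in this pairwise setup is what lets the collected data sharpen both generation and interpretation, as the abstract notes.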