Biblio

Found 151 results

Filters: Keyword is face recognition
2022-07-05
Bae, Jin Hee, Kim, Minwoo, Lim, Joon S..  2021.  Emotion Detection and Analysis from Facial Image using Distance between Coordinates Feature. 2021 International Conference on Information and Communication Technology Convergence (ICTC). :494–497.
Facial expression recognition has long been established as a subject of continuous research in various fields. In this study, feature extraction was conducted by calculating the distances between facial landmarks in an image. The extracted features describing the relationships between the landmarks were analyzed and used to classify five facial expressions. We increased data and label reliability through labeling work performed by multiple observers. Additionally, faces were recognized in the original data, and landmark coordinates were extracted and used as features. A genetic algorithm was used to select the features that were most helpful for classification. Facial expression classification and analysis using the proposed method demonstrated its validity and effectiveness.
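
As a rough illustration of distance-between-landmark features (not the authors' exact pipeline, and omitting the genetic-algorithm selection step), the sketch below builds a feature vector from pairwise Euclidean distances between 2D landmark coordinates; the synthetic landmark array stands in for a real detector's output.

```python
import numpy as np

def landmark_distance_features(landmarks: np.ndarray) -> np.ndarray:
    """Build a feature vector of pairwise Euclidean distances.

    landmarks: array of shape (N, 2) with (x, y) coordinates of N facial
    landmarks, e.g. produced by any landmark detector (assumed input).
    Returns a vector of length N * (N - 1) / 2.
    """
    n = landmarks.shape[0]
    feats = []
    for i in range(n):
        for j in range(i + 1, n):
            feats.append(np.linalg.norm(landmarks[i] - landmarks[j]))
    return np.asarray(feats)

# Example with 68 synthetic landmarks (placeholder for a real detector's output).
rng = np.random.default_rng(0)
fake_landmarks = rng.uniform(0, 224, size=(68, 2))
print(landmark_distance_features(fake_landmarks).shape)  # (2278,)
```
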
2022-06-09
Tamiya, Hiroto, Isshiki, Toshiyuki, Mori, Kengo, Obana, Satoshi, Ohki, Tetsushi.  2021.  Improved Post-quantum-secure Face Template Protection System Based on Packed Homomorphic Encryption. 2021 International Conference of the Biometrics Special Interest Group (BIOSIG). :1–5.
This paper proposes an efficient face template protection system based on homomorphic encryption. By developing a message packing method suited to the calculation of the squared Euclidean distance, the proposed system computes the squared Euclidean distance between facial features with a single homomorphic multiplication. Our experimental results show that the transaction time of the proposed system is about 14 times shorter than that of the existing face template protection system based on homomorphic encryption presented at BIOSIG 2020.
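
The packing idea can be previewed in plaintext: arrange the enrolled template and the probe so that a single inner product yields the squared Euclidean distance. The sketch below shows only this algebraic identity; the actual system evaluates it homomorphically on ciphertexts with a packed encryption scheme.

```python
import numpy as np

def pack_enrolled(x: np.ndarray) -> np.ndarray:
    # Packed "template" message: features, their squared norm, and a constant 1.
    return np.concatenate([x, [np.dot(x, x), 1.0]])

def pack_probe(y: np.ndarray) -> np.ndarray:
    # Packed probe message arranged so one inner product yields ||x - y||^2.
    return np.concatenate([-2.0 * y, [1.0, np.dot(y, y)]])

x = np.array([0.1, 0.4, 0.3])
y = np.array([0.2, 0.1, 0.5])

packed_distance = np.dot(pack_enrolled(x), pack_probe(y))
print(np.isclose(packed_distance, np.sum((x - y) ** 2)))  # True
```
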
2021-05-13
Jaafar, Fehmi, Avellaneda, Florent, Alikacem, El-Hackemi.  2020.  Demystifying the Cyber Attribution: An Exploratory Study. 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :35–40.
Current cyber attribution approaches propose to use a variety of datasets and analytical techniques to distill information that can help identify cyber attackers. In practice, however, practitioners and researchers in cyber attribution face several technical and regulatory challenges. In this paper, we describe the main challenges of cyber attribution and present the state of the art of the approaches used to address these challenges. We then present an exploratory study on performing cyber attack attribution based on pattern recognition from real data. In our study, we use attack pattern discovery and identification based on the collection and analysis of real data.
2021-01-15
Younus, M. A., Hasan, T. M..  2020.  Effective and Fast DeepFake Detection Method Based on Haar Wavelet Transform. 2020 International Conference on Computer Science and Software Engineering (CSASE). :186–190.
DeepFake videos, tampered with using Generative Adversarial Networks (GANs), present a new challenge in today's life. With the inception of GANs, generating high-quality fake videos has become much easier and far more realistic. Therefore, the development of efficient tools that can automatically detect these fake videos is of paramount importance. The proposed DeepFake detection method takes advantage of the fact that current DeepFake generation algorithms cannot generate face images at arbitrary resolutions: they can only generate new faces of limited size and resolution, so further distortion and blur are needed to match and fit the fake face to the background and surrounding context in the source video. This transformation causes a distinctive blur inconsistency between the generated face and its background in the resulting DeepFake videos; in turn, these artifacts can be effectively spotted by examining the edge pixels in the wavelet domain of the face in each frame compared to the rest of the frame. A blur inconsistency detection scheme based on the type of edge and the analysis of its sharpness using the Haar wavelet transform is presented in this paper; using this feature, the method can determine whether the face region in a video has been blurred, and to what extent, which in turn leads to the detection of DeepFake videos. The effectiveness of the proposed scheme is demonstrated in the experimental results, where the "UADFV" dataset was used for evaluation and a detection rate of more than 90.5% was achieved.
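
A minimal sketch of the kind of Haar-wavelet sharpness comparison described above, assuming grayscale frames and a face bounding box from some detector; the ratio-based score is a simplified proxy, not the paper's exact scheme.

```python
import numpy as np
import pywt  # PyWavelets

def high_freq_energy(patch: np.ndarray) -> float:
    """Mean energy of the Haar detail sub-bands, used here as a sharpness proxy."""
    _, (cH, cV, cD) = pywt.dwt2(patch.astype(float), "haar")
    return float(np.mean(cH ** 2 + cV ** 2 + cD ** 2))

def blur_inconsistency(frame_gray: np.ndarray, face_box) -> float:
    """Ratio of whole-frame sharpness to face-region sharpness.

    face_box = (x, y, w, h) is assumed to come from any face detector.
    A strongly blurred, pasted-in face tends to push this ratio well above 1.
    """
    x, y, w, h = face_box
    face = frame_gray[y:y + h, x:x + w]
    return high_freq_energy(frame_gray) / (high_freq_energy(face) + 1e-8)
```
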
2021-03-01
Hynes, E., Flynn, R., Lee, B., Murray, N..  2020.  An Evaluation of Lower Facial Micro Expressions as an Implicit QoE Metric for an Augmented Reality Procedure Assistance Application. 2020 31st Irish Signals and Systems Conference (ISSC). :1–6.
Augmented reality (AR) has been identified as a key technology to enhance worker utility in the context of increasing automation of repeatable procedures. AR can achieve this by assisting the user in performing complex and frequently changing procedures. Crucial to the success of procedure assistance AR applications is user acceptability, which can be measured by user quality of experience (QoE). An active research topic in QoE is the identification of implicit metrics that can be used to continuously infer user QoE during a multimedia experience. A user's QoE is linked to their affective state. Affective state is reflected in facial expressions. Emotions shown in micro facial expressions resemble those expressed in normal expressions but are distinguished from them by their brief duration. The novelty of this work lies in the evaluation of micro facial expressions as a continuous QoE metric by means of correlation analysis with the more traditional and accepted post-experience self-reporting. In this work, an optimal Rubik's Cube solver AR application was used as a proof of concept for complex procedure assistance. This was compared with a paper-based procedure assistance control. QoE expressed by affect in normal and micro facial expressions was evaluated through correlation analysis with post-experience reports. The results show that the AR application yielded higher task success rates and shorter task durations. Micro facial expressions reflecting disgust correlated moderately with the questionnaire responses for instruction disinterest in the AR application.
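
The correlation analysis described can be run with a rank correlation as sketched below; the per-participant arrays are hypothetical placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-participant values (placeholders, not the study's data):
disgust_micro_expression_score = np.array([0.12, 0.31, 0.05, 0.44, 0.27, 0.18])
reported_instruction_disinterest = np.array([2, 4, 1, 5, 3, 2])

rho, p_value = spearmanr(disgust_micro_expression_score,
                         reported_instruction_disinterest)
print(f"Spearman rho={rho:.2f}, p={p_value:.3f}")
```
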
2021-01-15
Nguyen, H. M., Derakhshani, R..  2020.  Eyebrow Recognition for Identifying Deepfake Videos. 2020 International Conference of the Biometrics Special Interest Group (BIOSIG). :1–5.
Deepfake imagery that contains altered faces has become a threat to online content. Current anti-deepfake approaches usually detect such content by looking for image anomalies, such as visible artifacts or other inconsistencies. However, as deepfakes advance, these visual artifacts are becoming harder to detect. In this paper, we show that biometric eyebrow matching can be used as a tool to detect manipulated faces. Our method could provide an AUC of 0.88 and an EER of 20.7% for deepfake detection when applied to the highest-quality deepfake dataset, Celeb-DF.
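
The matching step can be pictured as comparing an eyebrow-region embedding from the probe frame against reference imagery of the claimed identity; the sketch assumes the embeddings come from some feature extractor and uses a placeholder decision threshold, neither of which is specified by the abstract.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def eyebrow_match_score(probe_embedding: np.ndarray,
                        reference_embedding: np.ndarray) -> float:
    """Higher score = eyebrow region more consistent with the claimed identity."""
    return cosine_similarity(probe_embedding, reference_embedding)

def is_likely_deepfake(score: float, threshold: float = 0.5) -> bool:
    # Threshold is a placeholder; in practice it would be tuned on a labeled
    # development set (e.g., toward a target EER).
    return score < threshold
```
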
2021-01-25
Rizki, R. P., Hamidi, E. A. Z., Kamelia, L., Sururie, R. W..  2020.  Image Processing Technique for Smart Home Security Based On the Principal Component Analysis (PCA) Methods. 2020 6th International Conference on Wireless and Telematics (ICWT). :1–4.
Smart home is one application of the pervasive computing branch of science. Smart homes fall into three categories, namely comfort, healthcare, and security. The security system is a very important part of smart home technology because the intensity of crime is increasing, especially in residential areas. The system detects the face with a webcam once the user enters the correct password. Face recognition is processed by a Raspberry Pi 3 with the Principal Component Analysis method using OpenCV and Python software, whose outputs are actuators in the form of a solenoid door lock and a buzzer. The test results show that the webcam can perform face detection when the password input is successful; the buzzer actuator turns on when the database does not match the test data taken by the webcam, and the solenoid door lock actuator runs if the database matches the test data taken by the webcam. The mean response time of face detection is 1.35 seconds.
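
A minimal sketch of PCA-based (Eigenfaces) recognition with OpenCV, in the spirit of the pipeline described; it assumes the opencv-contrib package and equally sized grayscale face crops, and it omits the camera, password, and actuator wiring.

```python
import cv2
import numpy as np

# Stand-in training data: equally sized grayscale face crops with identity labels.
# In the described system these would come from the enrollment database.
train_faces = [np.random.randint(0, 256, (100, 100), dtype=np.uint8) for _ in range(10)]
train_labels = np.array([0] * 5 + [1] * 5, dtype=np.int32)

recognizer = cv2.face.EigenFaceRecognizer_create()  # PCA-based; needs opencv-contrib
recognizer.train(train_faces, train_labels)

probe = train_faces[0]                      # would be the live webcam crop
label, confidence = recognizer.predict(probe)
print(label, confidence)

# In the described system, a matching label would drive the solenoid door lock,
# while a mismatch would trigger the buzzer.
```
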
2021-03-01
Sarathy, N., Alsawwaf, M., Chaczko, Z..  2020.  Investigation of an Innovative Approach for Identifying Human Face-Profile Using Explainable Artificial Intelligence. 2020 IEEE 18th International Symposium on Intelligent Systems and Informatics (SISY). :155–160.
Human identification is a well-researched topic that keeps evolving. Advances in technology have made it easy to train models, or to use existing ones, to detect several features of the human face. When it comes to identifying a human face from the side, there are many opportunities to advance biometric identification research further. This paper investigates human face identification based on the side profile by extracting facial features and describing the feature sets with geometric ratio expressions. These geometric ratio expressions are computed into feature vectors. The last stage involves the use of weighted means to measure similarity. The research approaches the problem with an eXplainable Artificial Intelligence (XAI) method. Findings from this research, based on a small data set, indicate that the approach offers encouraging results. Further investigation could have a significant impact on how face profiles can be identified. Performance of the proposed system is validated using metrics such as Precision, False Acceptance Rate, False Rejection Rate and True Positive Rate. Multiple simulations indicate an Equal Error Rate of 0.89.
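
To make the ratio-and-weighted-mean idea concrete, here is a rough sketch; the specific landmark names, ratios, and weights are illustrative assumptions, not those of the paper.

```python
import numpy as np

def profile_ratio_features(landmarks: dict) -> np.ndarray:
    """Build geometric ratio features from side-profile landmark points.

    `landmarks` maps names to (x, y) coordinates; the names used here are
    illustrative placeholders.
    """
    def dist(a, b):
        return np.linalg.norm(np.asarray(landmarks[a]) - np.asarray(landmarks[b]))

    return np.array([
        dist("nose_tip", "chin") / dist("forehead", "chin"),
        dist("nose_tip", "nose_bridge") / dist("forehead", "chin"),
        dist("lips", "chin") / dist("nose_tip", "chin"),
    ])

def weighted_similarity(f1: np.ndarray, f2: np.ndarray, weights: np.ndarray) -> float:
    """Weighted mean of per-ratio similarities (1 = identical ratios)."""
    per_ratio = 1.0 - np.abs(f1 - f2) / np.maximum(np.abs(f1) + np.abs(f2), 1e-8)
    return float(np.average(per_ratio, weights=weights))
```
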
2021-09-30
Jain, Pranut, Pötter, Henrique, Lee, Adam J., Mósse, Daniel.  2020.  MAFIA: Multi-Layered Architecture For IoT-Based Authentication. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :199–208.
Multi-factor authentication (MFA) systems are being deployed for user authentication in online and personal device systems, whereas physical spaces mostly rely on single-factor authentication; examples are entering offices and homes, airport security, and classroom attendance. The growth of the Internet of Things (IoT) and market interest in it have created a diverse set of low-cost and flexible sensors and actuators that can be used for MFA. However, combining multiple authentication factors in a physical space adds several challenges, such as complex deployment, reduced usability, and increased energy consumption. We introduce MAFIA (Multi-layered Architecture For IoT-based Authentication), a novel architecture for co-located user authentication composed of multiple IoT devices. In MAFIA, we improve the security of physical spaces while considering usability, privacy, energy consumption, and deployment complexity. MAFIA is composed of three layers that define specific purposes for devices, guiding developers in the authentication design while providing a clear understanding of the trade-offs for different configurations. We describe a case study of an Automated Classroom Attendance System, in which we evaluated three distinct types of authentication setups and showed that the most secure setup had a greater usability penalty, while the other two setups had similar attributes in terms of security, privacy, complexity, and usability but varied widely in their energy consumption.
2021-01-15
Maksutov, A. A., Morozov, V. O., Lavrenov, A. A., Smirnov, A. S..  2020.  Methods of Deepfake Detection Based on Machine Learning. 2020 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :408–411.
Nowadays, people face the emerging problem of AI-synthesized face swapping videos, widely known as DeepFakes. Such videos can be created to threaten privacy, enable fraud, and so on. Good-quality DeepFake videos can be hard to distinguish with the naked eye. That is why researchers need to develop algorithms to detect them. In this work, we present an overview of indicators that can reveal that face swapping algorithms were applied to a photo. The main purpose of this paper is to find an algorithm or technology that can decide with good accuracy whether a photo was altered with DeepFake technology or not.
Khalid, H., Woo, S. S..  2020.  OC-FakeDect: Classifying Deepfakes Using One-class Variational Autoencoder. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :2794–2803.
An image forgery method called Deepfakes can cause security and privacy issues by changing the identity of a person in a photo through the replacement of his or her face with a computer-generated image or another person's face. Therefore, a new challenge of detecting Deepfakes arises to protect individuals from potential misuse. Many researchers have proposed various binary-classification based detection approaches to detect deepfakes. However, binary-classification based methods generally require a large amount of both real and fake face images for training, and it is challenging to collect sufficient fake image data in advance. Besides, when new deepfake generation methods are introduced, little deepfake data will be available, and the detection performance may be mediocre. To overcome these data scarcity limitations, we formulate deepfake detection as a one-class anomaly detection problem. We propose OC-FakeDect, which uses a one-class Variational Autoencoder (VAE) to train only on real face images and detects non-real images such as deepfakes by treating them as anomalies. Our preliminary result shows that our one-class-based approach can be promising for detecting Deepfakes, achieving 97.5% accuracy on the NeuralTextures data of the well-known FaceForensics++ benchmark dataset without using any fake images for the training process.
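
The one-class idea can be pictured as follows: train a VAE only on real face crops and flag images whose reconstruction error is unusually high. The minimal fully connected model and MSE-based score below are illustrative assumptions, not OC-FakeDect's actual architecture or scoring rule.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    """Toy VAE over flattened face crops (e.g. 64x64 grayscale -> 4096 dims)."""
    def __init__(self, in_dim=4096, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU())
        self.to_mu = nn.Linear(512, latent_dim)
        self.to_logvar = nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, in_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    """Training objective on real faces only: reconstruction plus KL terms."""
    recon_err = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl

def anomaly_score(model, x):
    """Per-image reconstruction error; high scores suggest a non-real face."""
    with torch.no_grad():
        recon, _, _ = model(x)
        return F.mse_loss(recon, x, reduction="none").mean(dim=1)

model = TinyVAE()
print(anomaly_score(model, torch.rand(4, 4096)))  # scores for 4 toy "images"
```
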
2021-03-29
Ozdemir, M. A., Elagoz, B., Soy, A. Alaybeyoglu, Akan, A..  2020.  Deep Learning Based Facial Emotion Recognition System. 2020 Medical Technologies Congress (TIPTEKNO). :1–4.

In this study, the aim was to recognize the emotional state from facial images using a deep learning method. In the study, which was approved by the ethics committee, a custom data set was created using videos taken from 20 male and 20 female participants while they simulated 7 different facial expressions (happy, sad, surprised, angry, disgusted, scared, and neutral). First, the obtained videos were divided into image frames, and then face images were segmented from the frames using the Haar library. The size of the custom data set obtained after this image preprocessing is more than 25 thousand images. The proposed convolutional neural network (CNN) architecture, which mimics the LeNet architecture, was trained with this custom dataset. In the experiments with the proposed CNN architecture, the training loss was found to be 0.0115, the training accuracy 99.62%, the validation loss 0.0109, and the validation accuracy 99.71%.
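
For orientation, a LeNet-style CNN for a 7-class facial expression task can be sketched in Keras as below; the 48x48 input size and layer widths are guesses for illustration, not the exact configuration reported in the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

# LeNet-style CNN for 7 facial-expression classes (illustrative configuration).
model = keras.Sequential([
    layers.Input(shape=(48, 48, 1)),
    layers.Conv2D(6, 5, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(16, 5, activation="relu"),
    layers.MaxPooling2D(2),
    layers.Flatten(),
    layers.Dense(120, activation="relu"),
    layers.Dense(84, activation="relu"),
    layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```
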

2021-02-08
Arunpandian, S., Dhenakaran, S. S..  2020.  DNA based Computing Encryption Scheme Blending Color and Gray Images. 2020 International Conference on Communication and Signal Processing (ICCSP). :0966–0970.

In this paper, a novel DNA-based computing method is proposed for the encryption of biometric color (face) and gray fingerprint images. In many present-day applications, gray and color images play a major role in authenticating the identity of an individual. The values of the aforementioned images are treated as two separate matrices. In the key generation process, two levels of mathematical operations are applied to the fingerprint image to generate the encryption key. To enhance the security of the biometric image, DNA computing is performed on the above matrices, generating DNA sequences. Further, the DNA sequences are scrambled to add complexity to the biometric image. Results of blending the images and of the DNA computing are shown in the experimental section. It is observed that the proposed substitution DNA computing algorithm shows good resistance against statistical and differential attacks.
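
To illustrate the DNA-encoding step in general terms: image bytes can be mapped to nucleotide sequences by encoding each 2-bit pair as a base. The rule below is one common convention and is not claimed to be the paper's exact coding or scrambling scheme.

```python
import numpy as np

# A common 2-bit-to-nucleotide convention (one of several valid DNA coding
# rules; the paper's exact rule and scrambling steps are not reproduced here).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def pixel_to_dna(value: int) -> str:
    bits = format(value, "08b")
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, 8, 2))

def dna_to_pixel(seq: str) -> int:
    return int("".join(BASE_TO_BITS[b] for b in seq), 2)

image = np.array([[52, 200], [17, 255]], dtype=np.uint8)  # toy 2x2 "image"
dna = [[pixel_to_dna(int(p)) for p in row] for row in image]
print(dna)                      # e.g. [['ATCA', 'TAGA'], ...]
print(dna_to_pixel(dna[0][0]))  # 52 (round trip)
```
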

2021-03-29
Oğuz, K., Korkmaz, İ., Korkmaz, B., Akkaya, G., Alıcı, C., Kılıç, E..  2020.  Effect of Age and Gender on Facial Emotion Recognition. 2020 Innovations in Intelligent Systems and Applications Conference (ASYU). :1–6.

New research fields and applications in human-computer interaction will emerge based on the recognition of emotions on faces. With this aim, our study evaluates the features extracted from faces to recognize emotions. To increase the success rate of these features, we ran several tests to demonstrate how age and gender affect the results. Artificial neural networks were trained on the salient regions of the face, such as the eyes, eyebrows, nose, mouth, and jawline, and the networks were then tested on different age and gender groups. According to the results, the faces of older people yield a lower emotion recognition performance. Age- and gender-based groups were then created manually, and we show that facial emotion recognition performance increases for networks trained on these particular groups.

Pranav, E., Kamal, S., Chandran, C. Satheesh, Supriya, M. H..  2020.  Facial Emotion Recognition Using Deep Convolutional Neural Network. 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS). :317–320.

The rapid growth of artificial intelligence has contributed a lot to the technology world. As traditional algorithms fail to meet human needs in real time, machine learning and deep learning algorithms have achieved great success in different applications such as classification systems, recommendation systems, and pattern recognition. Emotion plays a vital role in determining the thoughts, behaviour and feelings of a human. An emotion recognition system can be built by utilizing the benefits of deep learning, and different applications such as feedback analysis and face unlocking can be implemented with good accuracy. The main focus of this work is to create a Deep Convolutional Neural Network (DCNN) model that classifies 5 different human facial emotions. The model is trained, tested and validated using a manually collected image dataset.

Xu, X., Ruan, Z., Yang, L..  2020.  Facial Expression Recognition Based on Graph Neural Network. 2020 IEEE 5th International Conference on Image, Vision and Computing (ICIVC). :211–214.

Facial expressions are one of the most powerful, natural and immediate means for human beings to convey their emotions and intentions. In this paper, we present a novel method for fully automatic facial expression recognition. Facial landmarks are detected to characterize facial expressions, and a graph convolutional neural network is proposed for feature extraction and facial expression classification. The experiments were performed on three facial expression databases. The results show that the proposed FER method can achieve good recognition accuracy, up to 95.85%.
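
One simple way to picture a graph convolution over a facial-landmark graph is normalized-adjacency propagation, sketched below; the landmark count, feature dimensions, and single layer are illustrative and not the architecture proposed in the paper.

```python
import torch
import torch.nn as nn

class SimpleGraphConv(nn.Module):
    """One propagation step X' = ReLU(A_hat X W) over a landmark graph, where
    A_hat is the symmetrically normalized adjacency with self-loops."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0))             # add self-loops
        d_inv_sqrt = torch.diag(a.sum(1).pow(-0.5))  # D^{-1/2}
        a_hat = d_inv_sqrt @ a @ d_inv_sqrt
        return torch.relu(self.linear(a_hat @ x))

num_landmarks = 68
x = torch.rand(num_landmarks, 2)                 # (x, y) per detected landmark
adj = torch.zeros(num_landmarks, num_landmarks)  # edges would connect related landmarks
layer = SimpleGraphConv(2, 16)
print(layer(x, adj).shape)                       # torch.Size([68, 16])
```
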

Jia, C., Li, C. L., Ying, Z..  2020.  Facial expression recognition based on the ensemble learning of CNNs. 2020 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC). :1–5.

As a part of body language, facial expression reflects the current emotional and psychological state of a person. Recognizing facial expressions can help us understand others and enhance communication with them. In this paper, we propose a facial expression recognition method based on the ensemble learning of convolutional neural networks. Our model is composed of three sub-networks and uses an SVM classifier to integrate the outputs of the three networks to obtain the final result. The recognition accuracy of the model on the FER2013 dataset reached 71.27%. The results show that the method has high test accuracy and short prediction time, and can realize real-time, high-performance facial expression recognition.
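
The ensemble step can be sketched as stacking the three sub-networks' softmax outputs and letting an SVM produce the final decision; the random arrays below stand in for real network outputs, and the SVM kernel is an assumption.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-ins for the (num_samples, 7) softmax outputs of the three sub-networks
# on a held-out set, plus the true expression labels.
rng = np.random.default_rng(0)
probs_a, probs_b, probs_c = (rng.random((200, 7)) for _ in range(3))
y = rng.integers(0, 7, size=200)

# Concatenate the per-network probabilities into one feature vector per sample
# and let an SVM learn how to combine the three opinions.
stacked = np.hstack([probs_a, probs_b, probs_c])
combiner = SVC(kernel="rbf")
combiner.fit(stacked, y)
print(combiner.predict(stacked[:5]))
```
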

Alamri, M., Mahmoodi, S..  2020.  Facial Profiles Recognition Using Comparative Facial Soft Biometrics. 2020 International Conference of the Biometrics Special Interest Group (BIOSIG). :1–4.

This study extends previous advances in soft biometrics and describes to what extent soft biometrics can be used for facial profile recognition. The purpose of this research is to explore human recognition based on facial profiles in a comparative setting based on soft biometrics. Moreover, in this work, we describe and use a ranking system to determine the recognition rate. The Elo rating system is employed to rank subjects by their face profiles in a comparative setting. The crucial features responsible for providing useful information describing facial profiles have been identified by using relative methods. Experiments based on a subset of the XM2VTSDB database demonstrate a 96% recognition rate using 33 features over 50 subjects.
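
For reference, the Elo update used for such pairwise comparative ranking follows the standard formulation; the K-factor below is an assumed value.

```python
def elo_update(rating_a: float, rating_b: float, a_wins: bool, k: float = 32.0):
    """Standard Elo update after one pairwise comparison between subjects A and B."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    score_a = 1.0 if a_wins else 0.0
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return new_a, new_b

print(elo_update(1500, 1500, a_wins=True))  # A gains what B loses
```
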

John, A., MC, A., Ajayan, A. S., Sanoop, S., Kumar, V. R..  2020.  Real-Time Facial Emotion Recognition System With Improved Preprocessing and Feature Extraction. 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT). :1328–1333.

Human emotion recognition plays a vital role in interpersonal communication and in the human-machine interaction domain. Emotions are expressed through speech, hand gestures, the movements of other body parts, and facial expressions. Facial emotions are one of the most important factors in human communication that help us understand what the other person is trying to communicate. People understand only one-third of a message verbally; two-thirds of it is conveyed through non-verbal means. There are many face emotion recognition (FER) systems available right now, but in real-life scenarios they do not perform efficiently, even though many claim to be near-perfect systems that achieve their results under favourable and optimal conditions. The wide variety of expressions shown by people and the diversity in the facial features of different people do not make it easy to come up with a definitive system. Hence, developing a reliable system without the flaws shown by existing systems is a challenging task. This paper aims to build an enhanced system that can analyse the exact facial expression of a user at a particular time and generate the corresponding emotion. Datasets like JAFFE and FER2013 were used for performance analysis. Pre-processing methods such as facial landmarks and HOG were incorporated into a convolutional neural network (CNN), and this achieved good accuracy when compared with already existing models.
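
As a small illustration of the HOG pre-processing mentioned above (parameter values are common defaults, not necessarily the paper's settings):

```python
import numpy as np
from skimage.feature import hog

# Toy grayscale face crop standing in for a detected and aligned face image.
face = np.random.rand(48, 48)

# HOG descriptor of the face region.
features = hog(face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")
print(features.shape)  # feature vector that can be fed to a classifier or CNN
```
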

Begaj, S., Topal, A. O., Ali, M..  2020.  Emotion Recognition Based on Facial Expressions Using Convolutional Neural Network (CNN). 2020 International Conference on Computing, Networking, Telecommunications Engineering Sciences Applications (CoNTESA). :58–63.

Over the last few years, there has been an increasing number of studies on facial emotion recognition because of its importance and impact on the interaction of humans with computers. With the growing number of challenging datasets, the application of deep learning techniques has become necessary. In this paper, we study the challenges of emotion recognition datasets and we also try different parameters and architectures of Convolutional Neural Networks (CNNs) in order to detect the seven emotions in human faces: anger, fear, disgust, contempt, happiness, sadness and surprise. We have chosen iCV MEFED (Multi-Emotion Facial Expression Dataset) as the main dataset for our study, which is relatively new, interesting and very challenging.

2021-01-11
Kanna, J. S. Vignesh, Raj, S. M. Ebenezer, Meena, M., Meghana, S., Roomi, S. Mansoor.  2020.  Deep Learning Based Video Analytics For Person Tracking. 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE). :1–6.

As people's assets grow, security and surveillance have become a matter of great concern today. When a criminal activity takes place, the witness plays a major role in identifying the criminal. The witness usually states the gender of the criminal, the pattern of the criminal's dress, the facial features of the criminal, etc. Based on the identification marks provided by the witness, the criminal is searched for in the surveillance footage. Surveillance cameras are ubiquitous, and finding criminals in a huge volume of surveillance video frames is a tedious process. In order to automate the search process, we propose a novel smart methodology using deep learning. This method takes gender, shirt pattern, and spectacle status as input to find the person of interest in the video log. The method achieves an accuracy of 87% in identifying the person in the video frame.

2021-03-29
Singh, S., Nasoz, F..  2020.  Facial Expression Recognition with Convolutional Neural Networks. 2020 10th Annual Computing and Communication Workshop and Conference (CCWC). :0324–0328.

Emotions are a powerful tool in communication, and one way that humans show their emotions is through their facial expressions. Facial expression recognition is one of the challenging and powerful tasks in social communication, as facial expressions are key in non-verbal communication. In the field of Artificial Intelligence, Facial Expression Recognition (FER) is an active research area, with several recent studies using Convolutional Neural Networks (CNNs). In this paper, we demonstrate the classification of FER based on static images, using CNNs, without requiring any pre-processing or feature extraction tasks. The paper also illustrates techniques to improve accuracy in this area by using pre-processing, which includes face detection and illumination correction. Feature extraction is used to extract the most prominent parts of the face, including the jaw, mouth, eyes, nose, and eyebrows. Furthermore, we discuss the literature and present our CNN architecture together with the use of max-pooling and dropout, which eventually aided in better performance. We obtained a test accuracy of 61.7% on FER2013 in a seven-class classification task, compared to 75.2% for state-of-the-art classification.

2021-08-02
Jeste, Manasi, Gokhale, Paresh, Tare, Shrawani, Chougule, Yutika, Chaudhari, Archana.  2020.  Two-point security system for doors/lockers using Machine learning and Internet Of Things. 2020 Fourth International Conference on Inventive Systems and Control (ICISC). :740–744.
The objective of the proposed research is to develop an IoT-based security system with two-point authentication. Human face recognition and fingerprints are known methods for access authentication. A combination of both technologies and the integration of the system with IoT will make the security system more efficient and reliable. The online platform Google Firebase is used to save the database and retrieve it in real time. In this system, access to the fingerprint (touch sensor) from a mobile phone is provided through an Android app developed in Android Studio, and authentication for the same is also proposed. When both the face and the fingerprint are identified together, access to the door or locker is granted.
2021-01-15
Zhu, K., Wu, B., Wang, B..  2020.  Deepfake Detection with Clustering-based Embedding Regularization. 2020 IEEE Fifth International Conference on Data Science in Cyberspace (DSC). :257–264.

In recent months, AI-synthesized face swapping videos, referred to as deepfakes, have become an emerging problem. Fake video is becoming more and more difficult to distinguish, which brings a series of challenges to societal security. Some scholars are devoted to studying how to improve the detection accuracy of deepfake video. At the same time, in order to enable better research, datasets for deepfake detection have been created. Companies such as Google and Facebook have also spent huge sums of money to produce datasets for deepfake video detection, as well as holding deepfake detection competitions. The continuous advancement of video tampering technology and the improvement of video quality have also brought great challenges to deepfake detection. Some scholars have achieved certain results on existing datasets, while the results on some high-quality datasets are not as good as expected. In this paper, we propose a new method with clustering-based embedding regularization for deepfake detection. We use open-source algorithms to generate videos that can simulate distinctive artifacts in deepfake videos. To improve the local smoothness of the representation space, we integrate a clustering-based embedding regularization term into the classification objective, so that the obtained model learns to resist adversarial examples. We evaluate our method on three of the latest deepfake datasets. Experimental results demonstrate the effectiveness of our method.
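
The combined objective can be pictured as a classification loss plus a term that pulls embeddings toward cluster centers; the center-loss-style regularizer and weighting below are stand-ins, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def clustering_regularizer(embeddings, labels, centers):
    """Pull each embedding toward the center of its (real/fake) cluster.
    A center-loss-style stand-in for the paper's clustering-based term."""
    return ((embeddings - centers[labels]) ** 2).sum(dim=1).mean()

def total_loss(logits, embeddings, labels, centers, lam=0.1):
    # Classification objective plus the embedding regularization term.
    return F.cross_entropy(logits, labels) + lam * clustering_regularizer(
        embeddings, labels, centers)

# Toy shapes: 8 samples, 2 classes (real/fake), 128-d embeddings.
logits = torch.randn(8, 2)
embeddings = torch.randn(8, 128)
labels = torch.randint(0, 2, (8,))
centers = torch.randn(2, 128)  # learnable cluster centers in practice
print(total_loss(logits, embeddings, labels, centers))
```
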

2021-09-30
Jagadamba, G, Sheeba, R, Brinda, K N, Rohini, K C, Pratik, S K.  2020.  Adaptive E-Learning Authentication and Monitoring. 2020 2nd International Conference on Innovative Mechanisms for Industry Applications (ICIMIA). :277–283.
E-learning enables the transfer of skills, knowledge, and education to a large number of recipients. The e-learning platform tends to provide face-to-face learning through a learning management system (LMS) and has facilitated improvements in traditional educational methods. The LMS saves the organization time and money and eases administration. The LMS also saves users the time of moving across learning places by providing a web-based environment. However, a few students could be willing to exploit such a system's weaknesses in a bid to cheat if conventional authentication methods are employed. In this scenario, user authentication and surveillance of the end user are more challenging. A system with simultaneous authentication is put forth through multifactor adaptive authentication methods. The proposed system provides an efficient, low-cost, and adaptive authentication and monitoring system for the e-learning environment that requires little human intervention.