Biblio

Found 151 results

Filters: Keyword is face recognition
2020-08-28
Pradhan, Chittaranjan, Banerjee, Debanjan, Nandy, Nabarun, Biswas, Udita.  2019.  Generating Digital Signature using Facial Landmark Detection. 2019 International Conference on Communication and Signal Processing (ICCSP). :0180—0184.
Information security has developed rapidly in recent years, with the emergence of social media as a key driver. It is estimated that in 2019 there will be over 2.5 billion social media users around the globe, and anonymous identity has become a major concern for security advisors: technological advances have made it possible for phishers to access confidential information. Numerous solutions have been proposed to resolve these issues, such as biometric identification and facial or audio recognition prior to access to any highly secure forum on the web. Generating digital signatures is a recent trend in digital security. We have designed an algorithm that, after generating a 68-point facial landmark map, converts the image to a highly compressed and secure digital signature. The proposed algorithm generates a unique signature for an individual which, when stored in the user account information database, will limit the creation of fake or multiple accounts. At the same time, the algorithm reduces database storage overhead: it stores the facial identity of an individual as a compressed textual signature rather than the traditional image file, occupying less space and making searching, fetching, and manipulation more efficient. A new analysis of the features produced at intermediate stages has been applied, using the angular measures of the triangles formed by the landmarks as the invariant. This acts as a real-time, optimized encryption procedure to achieve the security goals explained in detail in the later sections.
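The pipeline the abstract describes (landmarks, then angle invariants, then a compressed textual signature) can be illustrated in a few lines. This is a hedged sketch, not the authors' algorithm: `triangle_angles` and `landmark_signature` are hypothetical names, and a real system would start from the 68 points produced by a facial landmark detector rather than the toy points below.

```python
import hashlib
import math

def triangle_angles(p1, p2, p3):
    """Interior angles (degrees) of triangle p1-p2-p3 via the law of cosines.
    Angles are invariant under translation, rotation, and uniform scaling,
    which is what makes them usable as a pose-tolerant facial invariant.
    Assumes the three points are not collinear."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    a, b, c = dist(p2, p3), dist(p1, p3), dist(p1, p2)
    ang1 = math.degrees(math.acos((b * b + c * c - a * a) / (2 * b * c)))
    ang2 = math.degrees(math.acos((a * a + c * c - b * b) / (2 * a * c)))
    return ang1, ang2, 180.0 - ang1 - ang2

def landmark_signature(landmarks, precision=1):
    """Quantize triangle angles over consecutive landmark triples and hash
    them into a short, fixed-length textual signature."""
    parts = []
    for i in range(0, len(landmarks) - 2, 3):
        for ang in triangle_angles(*landmarks[i:i + 3]):
            parts.append(f"{round(ang, precision):.{precision}f}")
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

# Toy stand-in for the 68 detected landmarks:
pts = [(10, 10), (40, 12), (25, 40), (12, 50), (44, 52), (28, 80)]
signature = landmark_signature(pts)
```

Storing the 64-character signature instead of the face image itself is what gives the storage saving the abstract describes.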
2020-06-19
Mundra, Saloni, Sujata, Mitra, Suman K..  2019.  Modular Facial Expression Recognition on Noisy Data Using Robust PCA. 2019 IEEE 16th India Council International Conference (INDICON). :1—4.
2020-02-10
Mowla, Nishat I, Doh, Inshil, Chae, Kijoon.  2019.  Binarized Multi-Factor Cognitive Detection of Bio-Modality Spoofing in Fog Based Medical Cyber-Physical System. 2019 International Conference on Information Networking (ICOIN). :43–48.
Bio-modalities are ideal for user authentication in Medical Cyber-Physical Systems. Various forms of bio-modalities, such as the face, iris, fingerprint, are commonly used for secure user authentication. Concurrently, various spoofing approaches have also been developed over time which can fail traditional bio-modality detection systems. Image synthesis with play-doh, gelatin, ecoflex etc. are some of the ways used in spoofing bio-identifiable property. Since the bio-modality detection sensors are small and resource constrained, heavy-weight detection mechanisms are not suitable for these sensors. Recently, Fog based architectures are proposed to support sensor management in the Medical Cyber-Physical Systems (MCPS). A thin software client running in these resource-constrained sensors can enable communication with fog nodes for better management and analysis. Therefore, we propose a fog-based security application to detect bio-modality spoofing in a Fog based MCPS. In this regard, we propose a machine learning based security algorithm run as an application at the fog node using a binarized multi-factor boosted ensemble learner algorithm coupled with feature selection. Our proposal is verified on real datasets provided by the Replay Attack, Warsaw and LiveDet 2015 Crossmatch benchmark for face, iris and fingerprint modality spoofing detection used for authentication in an MCPS. The experimental analysis shows that our approach achieves significant performance gain over the state-of-the-art approaches.
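The paper's binarized multi-factor boosted ensemble learner is not spelled out in the abstract, so the following is a generic boosted-stump ensemble (plain AdaBoost over hypothetical spoof/live feature vectors) that illustrates only the general idea of a light-weight boosted detector; the features, labels, and data are made up.

```python
import math

def best_stump(X, y, w):
    """Weighted-error-minimizing one-feature threshold stump; y in {-1, +1}."""
    best = None  # (weighted_error, feature, threshold, polarity)
    for f in range(len(X[0])):
        values = sorted(set(row[f] for row in X))
        thresholds = [(a + b) / 2 for a, b in zip(values, values[1:])] or values
        for t in thresholds:
            for pol in (1, -1):
                err = sum(wi for row, yi, wi in zip(X, y, w)
                          if (pol if row[f] > t else -pol) != yi)
                if best is None or err < best[0]:
                    best = (err, f, t, pol)
    return best

def adaboost(X, y, rounds=5):
    w = [1.0 / len(X)] * len(X)
    ensemble = []
    for _ in range(rounds):
        err, f, t, pol = best_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, f, t, pol))
        # Up-weight the samples this stump misclassified.
        w = [wi * math.exp(-alpha * yi * (pol if row[f] > t else -pol))
             for row, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return ensemble

def predict(ensemble, row):
    score = sum(alpha * (pol if row[f] > t else -pol)
                for alpha, f, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Hypothetical two-feature samples: label +1 = live capture, -1 = spoofed.
X = [[0.1, 5.0], [0.2, 3.0], [0.3, 8.0], [0.8, 1.0], [0.9, 7.0], [0.7, 2.0]]
y = [-1, -1, -1, 1, 1, 1]
model = adaboost(X, y)
```

A stump-based ensemble like this keeps per-query cost low, which is in the spirit of the fog-node deployment the paper targets.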
2019-12-30
Toliupa, Serhiy, Tereikovskiy, Ihor, Dychka, Ivan, Tereikovska, Liudmyla, Trush, Alexander.  2019.  The Method of Using Production Rules in Neural Network Recognition of Emotions by Facial Geometry. 2019 3rd International Conference on Advanced Information and Communications Technologies (AICT). :323–327.
The article is devoted to improving neural network recognition of emotions from facial geometry for use in general-purpose information systems. It is shown that modern emotion recognition methods, based on conventional neural networks, have a critical disadvantage: recognition accuracy drops under the interference characteristic of general-purpose information systems, in particular rotation of the face and variation in image size. The typical approach of overcoming this disadvantage through additional training is unacceptable here, for reasons of duration and the difficulty of compiling the required training sample. It is proposed instead to increase recognition accuracy by supplying an expert data model to the neural network, and an appropriate method for representing expert knowledge is developed. A feature of the method is the use of production rules together with a PNN neural network. Experimental verification of the developed solutions has been carried out. The obtained results make it possible to recognize emotions whose characteristics are not represented in the collected statistical data.
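For reference, a PNN (Probabilistic Neural Network) is essentially a Parzen-window classifier: one Gaussian kernel per stored training pattern, summed per class. The sketch below covers only that PNN part (the production-rule layer is specific to the paper); the emotion labels, feature vectors, and `sigma` are illustrative.

```python
import math

def pnn_classify(train, x, sigma=0.5):
    """Probabilistic Neural Network: the pattern layer puts a Gaussian on
    every stored training vector, the summation layer averages them per
    class, and the output layer picks the largest class activation."""
    best_label, best_score = None, -1.0
    for label, patterns in train.items():
        score = sum(
            math.exp(-sum((a - b) ** 2 for a, b in zip(p, x))
                     / (2.0 * sigma * sigma))
            for p in patterns) / len(patterns)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy facial-geometry feature vectors per emotion class:
train = {
    "neutral": [(0.0, 0.0), (0.2, 0.1)],
    "joy": [(3.0, 3.0), (3.1, 2.8)],
}
```

Because a PNN simply memorizes its patterns, injecting expert-derived patterns (as the production rules here do) extends coverage without retraining.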
2020-06-19
Keshari, Tanya, Palaniswamy, Suja.  2019.  Emotion Recognition Using Feature-level Fusion of Facial Expressions and Body Gestures. 2019 International Conference on Communication and Electronics Systems (ICCES). :1184—1189.

Automatic emotion recognition using computer vision is significant for many real-world applications such as photojournalism, virtual reality, sign language recognition, and Human Robot Interaction (HRI). Psychological research findings suggest that humans depend on the combined visual channels of face and body to comprehend emotional behaviour. A plethora of studies have analysed human emotions using facial expressions, EEG signals, speech, and other signals, but most of this work is based on a single modality. Our objective is to efficiently integrate emotions recognized from facial expressions with those recognized from the upper body pose of humans in images. Our work on bimodal emotion recognition combines the accuracy benefits of both modalities.

2020-09-04
Song, Chengru, Xu, Changqiao, Yang, Shujie, Zhou, Zan, Gong, Changhui.  2019.  A Black-Box Approach to Generate Adversarial Examples Against Deep Neural Networks for High Dimensional Input. 2019 IEEE Fourth International Conference on Data Science in Cyberspace (DSC). :473—479.
Generating adversarial samples is gathering much attention as an intuitive approach to evaluating the robustness of learning models. Extensive recent work has demonstrated that numerous advanced image classifiers are defenseless against adversarial perturbations in the white-box setting. However, the white-box setting assumes that attackers have prior knowledge of model parameters, which is generally unavailable in real-world cases. In this paper, we concentrate on the hard-label black-box setting, where attackers can only pose queries to the model and observe its classification of different images. The issue is therefore converted into minimizing a non-continuous function. A black-box approach is proposed to address both the massive number of queries and the non-continuous step function problem by combining a linear fine-grained search, Fibonacci search, and a zeroth-order optimization algorithm. However, the input dimension of an image is so high that the gradient estimate is noisy. Hence, we adopt a zeroth-order optimization method suited to high dimensions: the approach converts the gradient calculation into a linear regression model and extracts the dimensions that are more significant. Experimental results illustrate that our approach reduces the number of queries and effectively accelerates convergence of the optimization method.
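The core zeroth-order idea can be sketched with a textbook random-direction finite-difference estimator; this is not the paper's regression-based variant, and the objective `f`, step sizes, and sample counts below are illustrative.

```python
import random

def zo_gradient(f, x, mu=1e-3, samples=20, rng=random):
    """Gradient estimate from function queries alone: average directional
    finite differences along random unit directions (proportional to the
    true gradient; the learning rate absorbs the constant factor)."""
    fx = f(x)
    grad = [0.0] * len(x)
    for _ in range(samples):
        u = [rng.gauss(0.0, 1.0) for _ in range(len(x))]
        norm = sum(c * c for c in u) ** 0.5
        u = [c / norm for c in u]
        slope = (f([xi + mu * ui for xi, ui in zip(x, u)]) - fx) / mu
        for i, ui in enumerate(u):
            grad[i] += slope * ui / samples
    return grad

def zo_minimize(f, x0, lr=0.1, steps=200, seed=0):
    """Plain descent on the zeroth-order estimate: a query-only attack loop."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(steps):
        g = zo_gradient(f, x, rng=rng)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

In an actual hard-label attack, `f` would be a distance-to-decision-boundary surrogate built from query responses; here we simply minimize a quadratic to show the loop working.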
2020-06-03
Amato, Giuseppe, Falchi, Fabrizio, Gennaro, Claudio, Massoli, Fabio Valerio, Passalis, Nikolaos, Tefas, Anastasios, Trivilini, Alessandro, Vairo, Claudio.  2019.  Face Verification and Recognition for Digital Forensics and Information Security. 2019 7th International Symposium on Digital Forensics and Security (ISDFS). :1—6.

In this paper, we present an extensive evaluation of face recognition and verification approaches performed by the European COST Action MULTI-modal Imaging of FOREnsic SciEnce Evidence (MULTI-FORESEE). The aim of the study is to evaluate various face recognition and verification methods, ranging from methods based on facial landmarks to state-of-the-art off-the-shelf pre-trained Convolutional Neural Networks (CNN), as well as CNN models directly trained for the task at hand. To fulfill this objective, we carefully designed and implemented a realistic data acquisition process, that corresponds to a typical face verification setup, and collected a challenging dataset to evaluate the real world performance of the aforementioned methods. Apart from verifying the effectiveness of deep learning approaches in a specific scenario, several important limitations are identified and discussed through the paper, providing valuable insight for future research directions in the field.

2020-12-01
Goel, A., Agarwal, A., Vatsa, M., Singh, R., Ratha, N..  2019.  DeepRing: Protecting Deep Neural Network With Blockchain. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :2821—2828.

Several computer vision applications such as object detection and face recognition have started to completely rely on deep learning based architectures. These architectures, when paired with appropriate loss functions and optimizers, produce state-of-the-art results in a myriad of problems. On the other hand, with the advent of "blockchain", the cybersecurity industry has developed a new sense of trust which was earlier missing from both the technical and commercial perspectives. Employment of cryptographic hash as well as symmetric/asymmetric encryption and decryption algorithms ensure security without any human intervention (i.e., centralized authority). In this research, we present the synergy between the best of both these worlds. We first propose a model which uses the learned parameters of a typical deep neural network and is secured from external adversaries by cryptography and blockchain technology. As the second contribution of the proposed research, a new parameter tampering attack is proposed to properly justify the role of blockchain in machine learning.
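A toy version of the tamper-evident idea (hash-chaining the network's layer parameters, blockchain-style) can be shown with nothing but a hash function. This is a conceptual sketch, not the DeepRing construction; the layer dictionaries and field names are made up.

```python
import hashlib
import json

GENESIS = "0" * 64

def block_hash(layer_params, prev_hash):
    """One block per layer: hash of (serialized parameters + previous hash),
    so altering any layer invalidates every block after it."""
    payload = json.dumps(layer_params, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(layers):
    chain, prev = [], GENESIS
    for params in layers:
        prev = block_hash(params, prev)
        chain.append(prev)
    return chain

def verify_chain(layers, chain):
    prev = GENESIS
    for params, recorded in zip(layers, chain):
        prev = block_hash(params, prev)
        if prev != recorded:
            return False
    return True

# Stand-in "model": per-layer weight lists.
layers = [{"w": [0.1, 0.2]}, {"w": [0.3, -0.4]}, {"w": [0.5]}]
chain = build_chain(layers)
```

This is exactly the property a parameter-tampering attack has to defeat: changing one weight changes the layer's block hash and breaks every subsequent link.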

2020-06-26
Shengquan, Wang, Xianglong, Li, Ang, Li, Shenlong, Jiang.  2019.  Research on Iris Edge Detection Technology based on Daugman Algorithm. 2019 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA). :308—311.

In today's society, people pay increasing attention to identity security; in highly confidential or privacy-sensitive settings, one-to-one identification is particularly important. Iris recognition offers high efficiency and is not easy to counterfeit, which has promoted it as an identity technology. This paper presents research on the Daugman algorithm and iris edge detection.

2019-12-30
Taha, Bilal, Hatzinakos, Dimitrios.  2019.  Emotion Recognition from 2D Facial Expressions. 2019 IEEE Canadian Conference of Electrical and Computer Engineering (CCECE). :1–4.
This work proposes an approach to find and learn informative representations from two-dimensional gray-level images for facial expression recognition. The learned features are obtained from a designed convolutional neural network (CNN), which learns features from the images efficiently by cascading different layers together. The developed model is computationally efficient, since it does not consist of a huge number of layers, and at the same time it takes the overfitting problem into consideration. The outcomes from the developed CNN are compared to handcrafted features that span texture and shape. Experiments conducted on the Bosphorus database show that the developed CNN model outperforms the handcrafted features when coupled with a Support Vector Machine (SVM) classifier.
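For readers unfamiliar with the "cascading layers" point, one conv-ReLU-maxpool stage of such a CNN can be written out directly. This is a from-scratch illustration, not the authors' model; the image size and kernel are arbitrary.

```python
def conv2d(img, kernel):
    """'Valid' 2-D convolution (cross-correlation, as CNN libraries do it)."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(img[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def relu(fmap):
    return [[max(0.0, v) for v in row] for row in fmap]

def maxpool2x2(fmap):
    return [[max(fmap[i][j], fmap[i][j + 1], fmap[i + 1][j], fmap[i + 1][j + 1])
             for j in range(0, len(fmap[0]) - 1, 2)]
            for i in range(0, len(fmap) - 1, 2)]

# One stage of the cascade on a 6x6 gray-level "image":
image = [[1] * 6 for _ in range(6)]
kernel = [[1, 0, -1], [1, 0, -1], [1, 0, -1]]  # a vertical-edge detector
stage = maxpool2x2(relu(conv2d(image, kernel)))
```

Stacking a handful of such stages, then a small classifier head, is what "cascading different layers" amounts to; keeping the stack shallow is how the model stays cheap and resists overfitting.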
2021-01-15
Yang, X., Li, Y., Lyu, S..  2019.  Exposing Deep Fakes Using Inconsistent Head Poses. ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :8261—8265.
In this paper, we propose a new method to expose AI-generated fake face images or videos (commonly known as the Deep Fakes). Our method is based on the observations that Deep Fakes are created by splicing synthesized face region into the original image, and in doing so, introducing errors that can be revealed when 3D head poses are estimated from the face images. We perform experiments to demonstrate this phenomenon and further develop a classification method based on this cue. Using features based on this cue, an SVM classifier is evaluated using a set of real face images and Deep Fakes.
2020-06-26
Karthika, P., Babu, R. Ganesh, Nedumaran, A..  2019.  Machine Learning Security Allocation in IoT. 2019 International Conference on Intelligent Computing and Control Systems (ICCS). :474—478.

The advanced computational capabilities of many resource-constrained devices such as mobile phones have enabled various research areas, including image retrieval from big data repositories for IoT applications. The main challenges for image retrieval using mobile devices in an IoT environment are computational complexity and storage. To manage big data in an IoT environment, we propose a lightweight deep-learning-based image retrieval framework for energy-constrained devices. The framework first detects and crops face regions from an image using the Viola-Jones algorithm, with an additional face classifier to eliminate false detections. It then uses the convolutional layers of a cost-effective pre-trained CNN model with refined features to represent faces. Next, the features of the big data repository are indexed to achieve a faster matching process for real-time retrieval. Finally, Euclidean distance is used to measure similarity between query and repository images. For experimental evaluation, we created a local facial image dataset including both single and group face images; the dataset can be used by other researchers as a benchmark for comparison with other facial image retrieval systems. The experimental results demonstrate that the designed framework outperforms other state-of-the-art feature extraction strategies in terms of efficiency and retrieval on IoT-assisted energy-constrained platforms.
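The final matching step, Euclidean distance between a query descriptor and indexed repository descriptors, is simple enough to show directly. The image ids and three-dimensional feature vectors below are invented stand-ins for real CNN features.

```python
import math

def top_k(index, query, k=2):
    """Rank repository images by Euclidean distance to the query feature
    vector and return the k nearest image ids."""
    return sorted(index, key=lambda img: math.dist(index[img], query))[:k]

# Invented "CNN features" for three indexed face images:
index = {
    "face_a": [0.10, 0.90, 0.20],
    "face_b": [0.80, 0.10, 0.40],
    "face_c": [0.15, 0.85, 0.25],
}
```

On a constrained device, the work saved by the paper's indexing step is precisely avoiding this full linear scan over the repository.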

2021-01-15
Akhtar, Z., Dasgupta, D..  2019.  A Comparative Evaluation of Local Feature Descriptors for DeepFakes Detection. 2019 IEEE International Symposium on Technologies for Homeland Security (HST). :1—5.
The global proliferation of affordable photographing devices and readily available face image and video editing software has caused a remarkable rise in face manipulations, e.g., altering face skin color using FaceApp. Such synthetic manipulations are becoming a very perilous problem, as altered faces not only can fool human experts but also have detrimental consequences on automated face identification systems (AFIS). Thus, it is vital to formulate techniques to improve the robustness of AFIS against digital face manipulations. The most prominent countermeasure is face manipulation detection, which aims at discriminating genuine samples from manipulated ones. Over the years, analysis of microtextural features using local image descriptors has been successfully used in various applications owing to their flexibility, computational simplicity, and performance. Therefore, in this paper, we study the possibility of identifying manipulated faces via local feature descriptors. A comparative experimental investigation of ten local feature descriptors on the new and publicly available DeepfakeTIMIT database is reported.
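As a concrete reference for what a "local image descriptor" computes, here is the basic 8-neighbour Local Binary Pattern, a classic descriptor in this family (whether it is among the ten the paper compares is not stated in the abstract); the toy image is made up.

```python
def lbp_codes(img):
    """Map each interior pixel to an 8-bit code: one bit per neighbour,
    set when that neighbour is >= the centre pixel."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for i in range(1, len(img) - 1):
        for j in range(1, len(img[0]) - 1):
            centre, code = img[i][j], 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di][j + dj] >= centre:
                    code |= 1 << bit
            codes.append(code)
    return codes

def lbp_histogram(img):
    """The 256-bin code histogram is the microtexture feature a detector sees."""
    hist = [0] * 256
    for code in lbp_codes(img):
        hist[code] += 1
    return hist
```

Manipulated regions tend to disturb these microtexture statistics, which is why such cheap histograms can separate genuine from altered faces.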
Brockschmidt, J., Shang, J., Wu, J..  2019.  On the Generality of Facial Forgery Detection. 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems Workshops (MASSW). :43—47.
A variety of architectures have been designed or repurposed for the task of facial forgery detection. While many of these designs have seen great success, they largely fail to address challenges these models may face in practice. A major challenge is posed by generality, wherein models must be prepared to perform in a variety of domains. In this paper, we investigate the ability of state-of-the-art facial forgery detection architectures to generalize. We first propose two criteria for generality: reliably detecting multiple spoofing techniques and reliably detecting unseen spoofing techniques. We then devise experiments which measure how a given architecture performs against these criteria. Our analysis focuses on two state-of-the-art facial forgery detection architectures, MesoNet and XceptionNet, both being convolutional neural networks (CNNs). Our experiments use samples from six state-of-the-art facial forgery techniques: Deepfakes, Face2Face, FaceSwap, GANnotation, ICface, and X2Face. We find MesoNet and XceptionNet show potential to generalize to multiple spoofing techniques but with a slight trade-off in accuracy, and largely fail against unseen techniques. We loosely extrapolate these results to similar CNN architectures and emphasize the need for better architectures to meet the challenges of generality.
2020-06-19
Saboor khan, Abdul, Shafi, Imran, Anas, Muhammad, Yousuf, Bilal M, Abbas, Muhammad Jamshed, Noor, Aqib.  2019.  Facial Expression Recognition using Discrete Cosine Transform Artificial Neural Network. 2019 22nd International Multitopic Conference (INMIC). :1—5.

Humans frequently use non-verbal gestures such as facial expressions to convey information or emotions, and countless facial gestures are expressed throughout the day. The channels of these expressions and emotions include activities, postures, behaviours, and facial expressions, and extensive research has revealed a strong relationship between these channels and emotions that merits further investigation. An Automatic Facial Expression Recognition (AFER) framework is proposed in this work that can predict the seven universal expressions. To evaluate the proposed approach, the frontal face image database known as the Japanese Female Facial Expression (JAFFE) database is used as input. The images are processed with a frequency-domain technique, the Discrete Cosine Transform (DCT), and then classified using Artificial Neural Networks (ANN). To check the robustness of this strategy, random trials of K-fold cross validation, leave-one-out, and person-independent methods are repeated many times to provide an overview of recognition rates. The experimental results demonstrate promising performance for this application.
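The DCT feature step can be written out naively for a small block. This sketch uses the standard orthonormal 2-D DCT-II with a simplified low-frequency coefficient selection in place of whatever truncation the paper applies.

```python
import math

def dct2(block):
    """Naive orthonormal 2-D DCT-II of an n x n block (O(n^4): fine for
    illustration, real systems use a fast transform)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
            cv = math.sqrt(1.0 / n) if v == 0 else math.sqrt(2.0 / n)
            out[u][v] = cu * cv * s
    return out

def low_freq_features(coeffs, k=6):
    """Keep the k lowest-frequency coefficients (diagonal order, a
    simplification of the usual zigzag scan) as the ANN input vector."""
    n = len(coeffs)
    order = sorted(((u, v) for u in range(n) for v in range(n)),
                   key=lambda uv: (uv[0] + uv[1], uv[0]))
    return [coeffs[u][v] for u, v in order[:k]]
```

Because facial energy concentrates in the low-frequency coefficients, a short vector like this compresses each face image into a compact, classifier-friendly representation.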

Liu, Keng-Cheng, Hsu, Chen-Chien, Wang, Wei-Yen, Chiang, Hsin-Han.  2019.  Facial Expression Recognition Using Merged Convolution Neural Network. 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE). :296—298.

In this paper, a merged convolution neural network (MCNN) is proposed to improve the accuracy and robustness of real-time facial expression recognition (FER). Although there are many ways to improve the performance of facial expression recognition, a revamp of the training framework and image preprocessing renders better results in applications. When the camera is capturing images at high speed, however, changes in image characteristics may occur at certain moments due to the influence of light and other factors. Such changes can result in incorrect recognition of human facial expression. To solve this problem, we propose a statistical method for recognition results obtained from previous images, instead of using the current recognition output. Experimental results show that the proposed method can satisfactorily recognize seven basic facial expressions in real time.
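The statistical correction over previous frames can be as simple as a sliding-window majority vote. The sketch below is our reading of that idea, not the authors' exact statistic; the class name and window size are invented.

```python
from collections import Counter, deque

class SmoothedRecognizer:
    """Report the majority expression label over the last `window` frames,
    suppressing single-frame flickers caused by lighting changes."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)

    def update(self, frame_label):
        self.history.append(frame_label)
        return Counter(self.history).most_common(1)[0][0]

recognizer = SmoothedRecognizer(window=5)
stream = ["happy", "happy", "sad", "happy", "happy"]  # one flickered frame
smoothed = [recognizer.update(label) for label in stream]
```

The isolated misrecognition in the middle of the stream never reaches the output, which is the stability property the abstract claims for high-speed capture.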

2020-08-28
Ahmed, Asraa, Hasan, Taha, Abdullatif, Firas A., T., Mustafa S., Rahim, Mohd Shafry Mohd.  2019.  A Digital Signature System Based on Real Time Face Recognition. 2019 IEEE 9th International Conference on System Engineering and Technology (ICSET). :298—302.

This study proposes a biometric-based digital signature scheme built on facial recognition. The scheme is designed to verify a person's identity during a registration process and retrieve their public and private keys stored in the database. The RSA algorithm is used as the asymmetric encryption method to encrypt hashes generated for digital documents, and the SHA-256 hash function is used to generate digital signatures. Local binary patterns histograms (LBPH) are used for facial recognition. The facial recognition method was evaluated on the ORL face database from Cambridge University; the LBPH algorithm achieved 97.5% accuracy, and real-time testing on thirty subjects achieved 94% recognition accuracy. A crypto-tool software was used to perform randomness tests on the proposed RSA and SHA-256 outputs.
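The hash-then-sign flow (SHA-256 digest, RSA on the digest) can be demonstrated end to end with a textbook toy keypair. The 12-bit modulus below (p = 61, q = 53) is purely illustrative and offers no security; a real deployment uses 2048-bit keys and proper signature padding.

```python
import hashlib

# Textbook RSA toy keypair: n = 61 * 53, e * d = 1 (mod phi(n)).
N, E, D = 3233, 17, 2753

def digest(document: bytes) -> int:
    """SHA-256 digest reduced mod N so it fits the toy modulus."""
    return int.from_bytes(hashlib.sha256(document).digest(), "big") % N

def sign(document: bytes) -> int:
    return pow(digest(document), D, N)   # private-key operation

def verify(document: bytes, signature: int) -> bool:
    return pow(signature, E, N) == digest(document)   # public-key operation

contract = b"transfer 100 units to account 42"
signature = sign(contract)
```

In the proposed scheme, the face-recognition step gates access to `D`; the signing and verification math itself is exactly this hash-then-exponentiate pattern.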

2020-11-09
Zhang, T., Wang, R., Ding, J., Li, X., Li, B..  2018.  Face Recognition Based on Densely Connected Convolutional Networks. 2018 IEEE Fourth International Conference on Multimedia Big Data (BigMM). :1–6.
Face recognition methods based on convolutional neural networks have achieved great success. Existing models usually use a residual network as the core architecture; the residual network is good at reusing features, but it is difficult for it to explore new features, whereas a densely connected network can be used to explore new features. We propose a face recognition model named Dense Face to explore the performance of densely connected networks in face recognition. The model is based on a densely connected convolutional neural network and is composed of Dense Block layers, transition layers, and a classification layer. The model was trained under the joint supervision of center loss and softmax loss with feature normalization, enabling the convolutional neural network to learn more discriminative features. The Dense Face model was trained on the publicly available CASIA-WebFace dataset and tested on the LFW and CAS-PEAL-R1 datasets. Experimental results show that the densely connected convolutional neural network achieves higher face verification accuracy and better robustness than other models such as VGG Face and ResNet.
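The center-loss term used for joint supervision is easy to state concretely. This pure-Python sketch of the loss and a center-update step is illustrative only, with made-up features and labels; the real model applies it to CNN embeddings during training.

```python
def center_loss(features, labels, centers):
    """L_C = 1/2 * sum_i ||x_i - c_{y_i}||^2 : pulls each feature toward
    its class center, making embeddings more discriminative."""
    total = 0.0
    for x, y in zip(features, labels):
        total += 0.5 * sum((xi - ci) ** 2 for xi, ci in zip(x, centers[y]))
    return total

def update_centers(features, labels, centers, alpha=0.5):
    """Move every class center a fraction alpha toward the mean of the
    current batch features of that class."""
    updated = {}
    for y, c in centers.items():
        batch = [x for x, yi in zip(features, labels) if yi == y]
        if not batch:
            updated[y] = list(c)
            continue
        mean = [sum(col) / len(batch) for col in zip(*batch)]
        updated[y] = [ci + alpha * (mi - ci) for ci, mi in zip(c, mean)]
    return updated

features = [[1.0, 1.0], [1.2, 0.8], [5.0, 5.0]]
labels = [0, 0, 1]
centers = {0: [0.0, 0.0], 1: [4.0, 4.0]}
```

Added to the usual softmax loss, this term tightens intra-class clusters while softmax keeps classes apart, which is the "more discriminative features" effect the abstract describes.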
2019-12-30
Lian, Zheng, Li, Ya, Tao, Jianhua, Huang, Jian, Niu, Mingyue.  2018.  Region Based Robust Facial Expression Analysis. 2018 First Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia). :1–5.
Facial emotion recognition is an essential aspect of human-machine interaction. In real-world conditions it faces many challenges, e.g., illumination changes, large pose variations, and partial or full occlusions, which leave different facial areas with different sharpness and completeness. Inspired by this fact, we focus on facial expression recognition based on partial faces in this paper. We compare the contributions of seven facial areas in low-resolution images: nose areas, mouth areas, eyes areas, nose-to-mouth areas, nose-to-eyes areas, mouth-to-eyes areas, and the whole face. Through analysis of the confusion matrix and the class activation map, we find that mouth regions contain much more emotional information than nose areas and eyes areas. At the same time, considering larger facial areas helps to judge the expression more precisely. To sum up, the contributions of this paper are two-fold: (1) we reveal which facial areas matter most in emotion recognition; (2) we quantify the contribution of different facial parts.
2019-02-08
Ivanova, M., Durcheva, M., Baneres, D., Rodríguez, M. E..  2018.  eAssessment by Using a Trustworthy System in Blended and Online Institutions. 2018 17th International Conference on Information Technology Based Higher Education and Training (ITHET). :1-7.

eAssessment uses technology to support online evaluation of students' knowledge and skills. However, challenging problems must be addressed such as trustworthiness among students and teachers in blended and online settings. The TeSLA system proposes an innovative solution to guarantee correct authentication of students and to prove the authorship of their assessment tasks. Technologically, the system is based on the integration of five instruments: face recognition, voice recognition, keystroke dynamics, forensic analysis, and plagiarism. The paper aims to analyze and compare the results achieved after the second pilot performed in an online and a blended university revealing the realization of trust-driven solutions for eAssessment.

2020-12-07
Handa, A., Garg, P., Khare, V..  2018.  Masked Neural Style Transfer using Convolutional Neural Networks. 2018 International Conference on Recent Innovations in Electrical, Electronics Communication Engineering (ICRIEECE). :2099–2104.

In painting, humans can draw an interrelation between the style and the content of a given image in order to enhance the visual experience. Deep neural networks such as convolutional neural networks are being used to address this problem of neural style transfer, owing to their exceptional results in key areas of visual perception such as object detection and face recognition. In this study, along with style transfer on the whole image, we also outline how style can be transferred onto only specific parts of the content image, accomplished by using masks. The style is transferred in a way that causes the least loss to the content image, i.e., the semantics of the image are preserved.

2019-01-31
Grambow, Martin, Hasenburg, Jonathan, Bermbach, David.  2018.  Public Video Surveillance: Using the Fog to Increase Privacy. Proceedings of the 5th Workshop on Middleware and Applications for the Internet of Things. :11–14.

In public video surveillance, there is an inherent conflict between public safety goals and privacy needs of citizens. Generally, societies tend to decide on middleground solutions that sacrifice neither safety nor privacy goals completely. In this paper, we propose an alternative to existing approaches that rely on cloud-based video analysis. Our approach leverages the inherent geo-distribution of fog computing to preserve privacy of citizens while still supporting camera-based digital manhunts of law enforcement agencies.

2020-05-22
Despotovski, Filip, Gusev, Marjan, Zdraveski, Vladimir.  2018.  Parallel Implementation of K-Nearest-Neighbors for Face Recognition. 2018 26th Telecommunications Forum (TELFOR). :1—4.
Face recognition is a fast-expanding field of research. Countless classification algorithms have found use in face recognition, with more still being developed, searching for better performance and accuracy. For high-dimensional data such as images, the K-Nearest-Neighbours classifier is a tempting choice. However, it is very computationally-intensive, as it has to perform calculations on all items in the stored dataset for each classification it makes. Fortunately, there is a way to speed up the process by performing some of the calculations in parallel. We propose a parallel CUDA implementation of the KNN classifier and then compare it to a serial implementation to demonstrate its performance superiority.
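The data-parallel structure described above can be previewed in plain Python by splitting the distance pass across a thread pool. The paper's implementation is CUDA on a GPU; this is only a structural sketch, with invented toy data and a chunk-per-worker layout standing in for thread-per-item GPU kernels.

```python
import math
from concurrent.futures import ThreadPoolExecutor

def chunk_distances(chunk, query):
    # Each worker scores one slice of the stored dataset.
    return [(math.dist(vec, query), label) for vec, label in chunk]

def knn_parallel(train, query, k=3, workers=3):
    """KNN with the distance computations split across parallel workers;
    only the final sort and vote stay serial."""
    size = max(1, len(train) // workers)
    chunks = [train[i:i + size] for i in range(0, len(train), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scored = [pair
                  for part in pool.map(chunk_distances, chunks,
                                       [query] * len(chunks))
                  for pair in part]
    scored.sort()
    votes = [label for _, label in scored[:k]]
    return max(set(votes), key=votes.count)   # majority vote

train = [([0.0, 0.0], "A"), ([0.1, 0.2], "A"), ([0.2, 0.0], "A"),
         ([5.0, 5.0], "B"), ([5.1, 4.9], "B"), ([4.8, 5.0], "B")]
```

Since every distance is independent, the distance pass scales with the number of workers, which is exactly what makes KNN a good fit for a CUDA port.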
2021-04-08
Sarma, M. S., Srinivas, Y., Abhiram, M., Ullala, L., Prasanthi, M. S., Rao, J. R..  2017.  Insider Threat Detection with Face Recognition and KNN User Classification. 2017 IEEE International Conference on Cloud Computing in Emerging Markets (CCEM). :39—44.
Information security in cloud storage is a key concern with regard to degree of trust and cloud penetration. The cloud user community needs to ascertain performance and security via QoS. Numerous models have been proposed [2][3][6][7] to deal with security concerns, but detection and prevention of insider threats also need to be tackled. Since an insider is aware of sensitive information, threats from cloud insiders are a grave concern. In this paper, we propose an authentication mechanism that verifies the facial features of the cloud user in addition to username and password, thereby acting as two-factor authentication. A new QoS is proposed that is capable of monitoring and detecting insider threats using machine learning techniques. The KNN classification algorithm is used to classify users into legitimate, possibly legitimate, possibly not legitimate, and not legitimate groups to verify image authenticity and conclude whether there is a possible insider threat. A threat detection model is also proposed for insider threats, utilizing facial recognition and monitoring models. The security method put forth in [6][7] is honed to include threat detection QoS to earn a higher degree of trust from the cloud user community. As a recommendation, the threat detection module should be harnessed in private cloud deployments such as defense and pharma applications. Experimentation was conducted using open-source machine learning libraries, and results are included in this paper.
2018-09-12
Sachdeva, A., Kapoor, R., Sharma, A., Mishra, A..  2017.  Categorical Classification and Deletion of Spam Images on Smartphones Using Image Processing and Machine Learning. 2017 International Conference on Machine Learning and Data Science (MLDS). :23–30.

We regularly use communication apps like Facebook and WhatsApp on our smartphones, and the exchange of media, particularly images, has grown at an exponential rate; over 3 billion images are shared every day on WhatsApp alone. In such a scenario, the management of images on a mobile device has become highly inefficient, leading to problems like low storage, manual deletion of images, and disorganization. In this paper, we present a solution that tackles these issues by automatically classifying every image on a smartphone into a set of predefined categories, thereby segregating spam images and allowing the user to delete them seamlessly.