Biblio

Found 150 results

Filters: Keyword is face recognition
2020-06-26
Karthika, P., Babu, R. Ganesh, Nedumaran, A..  2019.  Machine Learning Security Allocation in IoT. 2019 International Conference on Intelligent Computing and Control Systems (ICCS). :474—478.

The advanced computational capabilities of numerous resource-constrained devices such as mobile phones have enabled various research areas, including image retrieval from large data repositories for different IoT applications. The main challenges for image retrieval using mobile phones in an IoT environment are computational complexity and storage. To manage big data in an IoT environment for image retrieval, a lightweight deep-learning-based framework for energy-constrained devices is presented. The framework first detects and crops face regions from an image using the Viola-Jones algorithm with an additional face classifier to eliminate misdetections. It then uses the convolutional layers of a cost-effective pre-trained CNN model with defined features to represent faces. Next, features of the big data repository are indexed to achieve a faster matching process for real-time retrieval. Finally, Euclidean distance is used to measure similarity between query and repository images. For experimental evaluation, we created a local facial image dataset that includes both single and group face images. The dataset can be used by other researchers as a baseline for comparison with other recent facial image retrieval systems. The experimental results show that our proposed framework outperforms other state-of-the-art feature extraction methods in terms of efficiency and retrieval performance for IoT-assisted energy-constrained platforms.
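
The pipeline described above (face detection and cropping, CNN-style feature representation, and Euclidean-distance matching) can be sketched roughly as follows. This is not the authors' implementation: the Haar cascade file, image sizes and the placeholder embed() function (standing in for the pre-trained CNN features) are all assumptions.

```python
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(image_bgr):
    """Return the largest detected face region as a 96x96 grayscale patch, or None."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return cv2.resize(gray[y:y + h, x:x + w], (96, 96))

def embed(face):
    """Placeholder for the CNN features: flattened, L2-normalised pixels."""
    v = face.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def retrieve(query_img, repository):
    """Rank (name, image) pairs in the repository by Euclidean distance to the query face."""
    query_face = crop_face(query_img)
    if query_face is None:
        raise ValueError("no face found in query image")
    q = embed(query_face)
    scored = []
    for name, img in repository:
        face = crop_face(img)
        if face is not None:
            scored.append((np.linalg.norm(q - embed(face)), name))
    return sorted(scored)
```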

Shengquan, Wang, Xianglong, Li, Ang, Li, Shenlong, Jiang.  2019.  Research on Iris Edge Detection Technology based on Daugman Algorithm. 2019 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA). :308—311.

In today's society, people pay increasing attention to identity security; especially where highly confidential information or personal privacy is involved, one-to-one identification is particularly important. Iris recognition offers high efficiency and is difficult to counterfeit, which has promoted it as an identification technology. This paper carries out research on the Daugman algorithm and iris edge detection.
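
For reference, the core of the Daugman approach the paper builds on is the integro-differential operator, which searches for the circle whose smoothed radial derivative of the average boundary intensity is maximal. A minimal sketch is given below; the search grid, radius range and smoothing sigma are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def circle_mean(image, x0, y0, r, n=64):
    """Average intensity along a circle of radius r centred at (x0, y0)."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip((x0 + r * np.cos(theta)).astype(int), 0, image.shape[1] - 1)
    ys = np.clip((y0 + r * np.sin(theta)).astype(int), 0, image.shape[0] - 1)
    return image[ys, xs].mean()

def daugman_boundary(image, centers, radii, sigma=2.0):
    """Return the (x0, y0, r) maximising the smoothed radial derivative of the
    circular integral -- the core of Daugman's operator."""
    best, best_circle = -np.inf, None
    for (x0, y0) in centers:
        means = np.array([circle_mean(image, x0, y0, r) for r in radii])
        response = np.abs(gaussian_filter1d(np.gradient(means), sigma))
        i = int(np.argmax(response))
        if response[i] > best:
            best, best_circle = response[i], (x0, y0, radii[i])
    return best_circle
```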

2020-06-19
Ly, Son Thai, Do, Nhu-Tai, Lee, Guee-Sang, Kim, Soo-Hyung, Yang, Hyung-Jeong.  2019.  A 3D Face Modeling Approach for In-the-Wild Facial Expression Recognition on Image Datasets. 2019 IEEE International Conference on Image Processing (ICIP). :3492—3496.

This paper explores the benefits of 3D face modeling for in-the-wild facial expression recognition (FER). Since in-the-wild 3D FER datasets are limited, we first construct 3D facial data from available 2D datasets using recent advances in 3D face reconstruction. The 3D facial geometry representation is then extracted by a deep learning technique. In addition, we take advantage of manipulating the 3D face, such as using 2D projected images of the 3D face as additional input for FER. These features are then fused with those of a typical 2D FER network. By doing so, despite using common approaches, we achieve competitive recognition accuracy on the Real-World Affective Faces (RAF) database and Static Facial Expressions in the Wild (SFEW 2.0) compared with state-of-the-art reports. To the best of our knowledge, this is the first time such a deep learning combination of 3D and 2D facial modalities has been presented in the context of in-the-wild FER.

Saboor Khan, Abdul, Shafi, Imran, Anas, Muhammad, Yousuf, Bilal M, Abbas, Muhammad Jamshed, Noor, Aqib.  2019.  Facial Expression Recognition using Discrete Cosine Transform Artificial Neural Network. 2019 22nd International Multitopic Conference (INMIC). :1—5.

Humans often utilize non-verbal gestures (e.g. facial expressions) to express certain information or emotions, and countless facial gestures are expressed throughout the day. The channels of these expressions/emotions can be activities, postures, behaviors and facial expressions. Extensive research has revealed that a strong relationship exists between these channels and emotions, which has to be further investigated. An Automatic Facial Expression Recognition (AFER) framework is proposed in this work that can predict the seven universal expressions. To evaluate the proposed approach, the frontal-face Japanese Female Facial Expression (JAFFE) database is used as input. This database is processed with a frequency-domain technique known as the Discrete Cosine Transform (DCT) and then classified using Artificial Neural Networks (ANN). To check the robustness of this strategy, random trials of K-fold cross-validation, leave-one-out and person-independent methods are repeated many times to provide an overview of recognition rates. The experimental results demonstrate a promising performance of this application.
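
A hedged sketch of this kind of DCT-plus-ANN pipeline is shown below: keep the low-frequency DCT coefficients of each face image as features and classify them with a small neural network under K-fold cross-validation. The 16x16 coefficient block, the network size and the data loading are assumptions for illustration, not the paper's exact configuration.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def dct_features(image, k=16):
    """2-D DCT of a grayscale face; keep the top-left k x k (low-frequency) block."""
    coeffs = dctn(image.astype(np.float64), norm="ortho")
    return coeffs[:k, :k].ravel()

def evaluate(images, labels):
    """K-fold cross-validation of an MLP on DCT features (JAFFE-style setup)."""
    X = np.stack([dct_features(img) for img in images])
    clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=1000, random_state=0)
    return cross_val_score(clf, X, labels, cv=10)
```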

Liu, Keng-Cheng, Hsu, Chen-Chien, Wang, Wei-Yen, Chiang, Hsin-Han.  2019.  Facial Expression Recognition Using Merged Convolution Neural Network. 2019 IEEE 8th Global Conference on Consumer Electronics (GCCE). :296—298.

In this paper, a merged convolution neural network (MCNN) is proposed to improve the accuracy and robustness of real-time facial expression recognition (FER). Although there are many ways to improve the performance of facial expression recognition, a revamp of the training framework and image preprocessing renders better results in applications. When the camera is capturing images at high speed, however, changes in image characteristics may occur at certain moments due to the influence of light and other factors. Such changes can result in incorrect recognition of human facial expression. To solve this problem, we propose a statistical method for recognition results obtained from previous images, instead of using the current recognition output. Experimental results show that the proposed method can satisfactorily recognize seven basic facial expressions in real time.
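
One simple way to realize the statistical post-processing idea described above is to keep a short window of recent frame-level predictions and report the majority label; the sketch below assumes a window length of 10 frames, which is purely illustrative.

```python
from collections import Counter, deque

class SmoothedFER:
    """Smooth per-frame expression predictions instead of trusting each frame alone."""

    def __init__(self, window=10):
        self.history = deque(maxlen=window)

    def update(self, frame_prediction):
        """Add the current frame's predicted expression and return the
        majority vote over the recent window."""
        self.history.append(frame_prediction)
        return Counter(self.history).most_common(1)[0][0]
```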

Keshari, Tanya, Palaniswamy, Suja.  2019.  Emotion Recognition Using Feature-level Fusion of Facial Expressions and Body Gestures. 2019 International Conference on Communication and Electronics Systems (ICCES). :1184—1189.

Automatic emotion recognition using computer vision is significant for many real-world applications such as photojournalism, virtual reality, sign language recognition, and Human Robot Interaction (HRI). Psychological research findings advocate that humans depend on the collective visual conduits of face and body to comprehend human emotional behaviour. A plethora of studies have analysed human emotions using facial expressions, EEG signals, speech, etc., but most of this work was based on a single modality. Our objective is to efficiently integrate emotions recognized from facial expressions and the upper body pose of humans using images. Our work on bimodal emotion recognition provides the benefits of the accuracy of both modalities.

Mundra, Saloni, Sujata, Mitra, Suman K..  2019.  Modular Facial Expression Recognition on Noisy Data Using Robust PCA. 2019 IEEE 16th India Council International Conference (INDICON). :1—4.
Chen, Yuedong, Wang, Jianfeng, Chen, Shikai, Shi, Zhongchao, Cai, Jianfei.  2019.  Facial Motion Prior Networks for Facial Expression Recognition. 2019 IEEE Visual Communications and Image Processing (VCIP). :1—4.

Deep learning based facial expression recognition (FER) has received a lot of attention in the past few years. Most of the existing deep learning based FER methods do not consider domain knowledge well, and thereby fail to extract representative features. In this work, we propose a novel FER framework, named Facial Motion Prior Networks (FMPN). Particularly, we introduce an additional branch to generate a facial mask so as to focus on facial muscle moving regions. To guide the facial mask learning, we propose to incorporate prior domain knowledge by using the average differences between neutral faces and the corresponding expressive faces as the training guidance. Extensive experiments on three facial expression benchmark datasets demonstrate the effectiveness of the proposed method compared with the state-of-the-art approaches.
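
The prior used to guide the facial-mask learning can be illustrated with a small sketch: for each aligned neutral/expressive pair, average the absolute pixel differences to obtain a map of facial-muscle moving regions. Array shapes and the pairing of images are assumptions, not the paper's exact procedure.

```python
import numpy as np

def motion_prior(neutral_faces, expressive_faces):
    """neutral_faces, expressive_faces: arrays of shape (N, H, W), aligned pairs.
    Returns an (H, W) map highlighting facial-muscle moving regions."""
    diff = np.abs(expressive_faces.astype(np.float32) -
                  neutral_faces.astype(np.float32))
    prior = diff.mean(axis=0)
    return prior / (prior.max() + 1e-8)   # normalise to [0, 1]
```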

Yang, Jiannan, Zhang, Fan, Chen, Bike, Khan, Samee U..  2019.  Facial Expression Recognition Based on Facial Action Unit. 2019 Tenth International Green and Sustainable Computing Conference (IGSC). :1—6.

In the past few years, there has been increasing interest in the perception of human expressions and mental states by machines, and Facial Expression Recognition (FER) has attracted increasing attention. The Facial Action Unit (AU) is an early proposed method to describe facial muscle movements, which can effectively reflect changes in people's facial expressions. In this paper, we propose a high-performance facial expression recognition method based on facial action units, which can run on a low-configuration computer and realize FER on video and real-time camera streams. Our method is divided into two parts. In the first part, 68 facial landmarks and image Histograms of Oriented Gradients (HOG) are obtained, and the feature values of the action units are calculated accordingly. The second part uses three classification methods to realize the mapping from AUs to FER. We have conducted many experiments on popular FER benchmark datasets (CK+ and Oulu-CASIA) to demonstrate the effectiveness of our method.
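
A simplified sketch of such a two-part pipeline is shown below: a few AU-style geometric features from the 68 landmarks plus HOG appearance features, fed to one possible classifier. Landmark extraction itself is assumed to have been done already, and the specific distances and classifier are illustrative choices rather than the paper's.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def au_style_features(landmarks):
    """landmarks: (68, 2) array. A few normalised distances that track common AUs
    (brow raise, mouth opening, mouth stretch)."""
    d = lambda a, b: np.linalg.norm(landmarks[a] - landmarks[b])
    face_width = d(0, 16) + 1e-8          # jaw corner to jaw corner
    return np.array([
        d(19, 37) / face_width,   # brow-to-eye distance (one side)
        d(24, 44) / face_width,   # brow-to-eye distance (other side)
        d(62, 66) / face_width,   # inner-lip opening
        d(48, 54) / face_width,   # mouth width
    ])

def frame_features(gray_face, landmarks):
    """Concatenate geometric AU-style features with HOG appearance features."""
    appearance = hog(gray_face, pixels_per_cell=(16, 16), cells_per_block=(2, 2))
    return np.concatenate([au_style_features(landmarks), appearance])

# One of several possible classifiers for the AU-to-expression mapping:
classifier = SVC(kernel="rbf")
```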

2020-06-03
Amato, Giuseppe, Falchi, Fabrizio, Gennaro, Claudio, Massoli, Fabio Valerio, Passalis, Nikolaos, Tefas, Anastasios, Trivilini, Alessandro, Vairo, Claudio.  2019.  Face Verification and Recognition for Digital Forensics and Information Security. 2019 7th International Symposium on Digital Forensics and Security (ISDFS). :1—6.

In this paper, we present an extensive evaluation of face recognition and verification approaches performed by the European COST Action MULTI-modal Imaging of FOREnsic SciEnce Evidence (MULTI-FORESEE). The aim of the study is to evaluate various face recognition and verification methods, ranging from methods based on facial landmarks to state-of-the-art off-the-shelf pre-trained Convolutional Neural Networks (CNN), as well as CNN models directly trained for the task at hand. To fulfill this objective, we carefully designed and implemented a realistic data acquisition process that corresponds to a typical face verification setup, and collected a challenging dataset to evaluate the real-world performance of the aforementioned methods. Apart from verifying the effectiveness of deep learning approaches in a specific scenario, several important limitations are identified and discussed throughout the paper, providing valuable insight for future research directions in the field.
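
A typical face verification evaluation of this kind scores each face pair by the similarity of its embeddings and measures how well genuine pairs separate from impostor pairs. A minimal sketch under these assumptions (cosine similarity, ROC analysis) follows, with the embeddings taken as given from any of the compared methods.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def evaluate_verification(pairs, labels):
    """pairs: list of (embedding_a, embedding_b); labels: 1 = same person, 0 = different.
    Returns the ROC AUC and a working decision threshold (Youden's J)."""
    scores = [cosine_similarity(a, b) for a, b in pairs]
    fpr, tpr, thresholds = roc_curve(labels, scores)
    return auc(fpr, tpr), thresholds[np.argmax(tpr - fpr)]
```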

2020-05-22
Despotovski, Filip, Gusev, Marjan, Zdraveski, Vladimir.  2018.  Parallel Implementation of K-Nearest-Neighbors for Face Recognition. 2018 26th Telecommunications Forum (TELFOR). :1—4.
Face recognition is a fast-expanding field of research. Countless classification algorithms have found use in face recognition, with more still being developed, searching for better performance and accuracy. For high-dimensional data such as images, the K-Nearest-Neighbours classifier is a tempting choice. However, it is very computationally-intensive, as it has to perform calculations on all items in the stored dataset for each classification it makes. Fortunately, there is a way to speed up the process by performing some of the calculations in parallel. We propose a parallel CUDA implementation of the KNN classifier and then compare it to a serial implementation to demonstrate its performance superiority.
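
The speed-up exploited by the CUDA version comes from the fact that every query-to-dataset distance can be computed independently. A hedged NumPy sketch of the same brute-force KNN computation, vectorised rather than GPU-parallel, is shown below; shapes and integer class labels are assumptions.

```python
import numpy as np

def knn_predict(train_X, train_y, queries, k=5):
    """Brute-force KNN: every query/train distance is independent, so the whole
    (Q, N) distance matrix can be computed in one shot (or one thread per entry
    in a CUDA kernel). train_y is assumed to hold non-negative integer labels."""
    # (Q, N) squared Euclidean distances via broadcasting
    d2 = ((queries[:, None, :] - train_X[None, :, :]) ** 2).sum(axis=2)
    nearest = np.argsort(d2, axis=1)[:, :k]   # indices of the k nearest neighbours
    votes = train_y[nearest]                  # (Q, k) neighbour labels
    return np.array([np.bincount(v).argmax() for v in votes])  # majority vote
```
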
2020-05-11
Singh, Kunal, Mathai, K. James.  2019.  Performance Comparison of Intrusion Detection System Between Deep Belief Network (DBN) Algorithm and State Preserving Extreme Learning Machine (SPELM) Algorithm. 2019 IEEE International Conference on Electrical, Computer and Communication Technologies (ICECCT). :1–7.

This paper focuses on a performance comparison of intrusion detection systems based on the DBN algorithm and the SPELM algorithm. Researchers have used the new SPELM algorithm to perform experiments in the areas of face recognition, pedestrian detection, and network intrusion detection in cyber security. The authors used the proposed State Preserving Extreme Learning Machine (SPELM) algorithm as a machine learning classifier and compared its performance with the Deep Belief Network (DBN) algorithm using the NSL-KDD dataset. The NSL-KDD dataset has about four hundred thousand records, of which 40% were used for training and 60% for testing while calculating the performance of both algorithms. The experiments compared the accuracy, precision, recall and computational time of the existing DBN algorithm with the proposed SPELM algorithm. The findings show better performance of SPELM: an accuracy of 93.20% against 52.8% for DBN, a precision of 69.492 against 66.836 for DBN, and a computational time of 90.8 seconds against 102 seconds for DBN.

2020-04-17
Daniel Albu, Răzvan, Gordan, Cornelia Emilia.  2019.  Authentication and Recognition, Guarantor for on-Line Security. 2019 15th International Conference on Engineering of Modern Electric Systems (EMES). :9—12.

ARGOS is a web service we implemented to offer face recognition Authentication as a Service (AaaS) to mobile and desktop (via the web browser) end users. The authentication services may be used by third-party service organizations to enhance their service offering to their customers. ARGOS implements a secure face-recognition-based authentication service aiming to provide simple and intuitive tools for third-party service providers (such as PayPal, banks, e-commerce, etc.) to replace passwords with face biometrics. It supports authentication from any device with a 2D or 3D front-facing camera (mobile phones, laptops, tablets, etc.) and almost any operating system (iOS, Android, Windows and Linux Ubuntu).
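
A face-authentication web service of this kind can be sketched, under stated assumptions, as a single HTTP endpoint that accepts a user id and a face image and returns a verification result; the route name and the placeholder verify_face() below are illustrative and not part of ARGOS.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def verify_face(user_id: str, image_bytes: bytes) -> bool:
    """Placeholder: decode the image, embed the face, compare with the
    enrolled template for user_id, and return True on a match."""
    raise NotImplementedError

@app.route("/authenticate", methods=["POST"])
def authenticate():
    user_id = request.form.get("user_id", "")
    image = request.files.get("face")
    if not user_id or image is None:
        return jsonify({"authenticated": False, "error": "missing input"}), 400
    ok = verify_face(user_id, image.read())
    return jsonify({"authenticated": bool(ok)})
```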

2020-04-13
Sanchez, Cristian, Martinez-Mosquera, Diana, Navarrete, Rosa.  2019.  Matlab Simulation of Algorithms for Face Detection in Video Surveillance. 2019 International Conference on Information Systems and Software Technologies (ICI2ST). :40–47.
Face detection is an application widely used in video surveillance systems and is the first step for subsequent applications such as monitoring and recognition. For facial detection there is a series of algorithms that allow the face to be extracted from a video image, among which are the Viola & Jones cascade method and the geometric-model method using the Hausdorff distance. In this article, both algorithms are theoretically analyzed and the better one is determined in terms of efficiency and resource optimization. Considering the most common problems in face detection in a video surveillance system, such as lighting conditions and the rotation angle of the face, tests have been carried out in 13 different scenarios with the best theoretically analyzed algorithm and with its combination with another algorithm. The images obtained with a digital camera in the 13 scenarios have been analyzed using Matlab code for the Viola & Jones algorithm and for the Viola & Jones algorithm combined with the Kanade-Lucas-Tomasi algorithm, which adds the ability to track a single object. This paper presents the detection percentages, false positives and false negatives for each image and each simulation code, identifying the scenarios with the most detection problems and the most accurate algorithm for face detection.
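
The best-performing combination reported above (Viola & Jones detection followed by Kanade-Lucas-Tomasi tracking) can be sketched in OpenCV as below; this is a Python analogue of the Matlab simulation, with the cascade file and parameter values as assumptions.

```python
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(gray):
    """Viola & Jones detection; returns the first face box (x, y, w, h) or None."""
    faces = detector.detectMultiScale(gray, 1.1, 5)
    return faces[0] if len(faces) else None

def init_points(gray, box):
    """Pick corner points inside the detected face box for KLT tracking."""
    x, y, w, h = box
    mask = np.zeros_like(gray)
    mask[y:y + h, x:x + w] = 255
    return cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01,
                                   minDistance=5, mask=mask)

def track(prev_gray, gray, points):
    """One Kanade-Lucas-Tomasi step: follow points into the new frame."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    return new_pts[status.ravel() == 1]

# Typical loop: box = detect_face(first_gray); pts = init_points(first_gray, box);
# then pts = track(prev_gray, gray, pts) for each subsequent frame.
```
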
2020-04-06
Ahmed, Syed Umaid, Sabir, Arbaz, Ashraf, Talha, Ashraf, Usama, Sabir, Shahbaz, Qureshi, Usama.  2019.  Security Lock with Effective Verification Traits. 2019 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE). :164–169.
To manage and handle the issues of physical security in the modern world, there is a dire need for a multilevel security system to ensure the safety of precious belongings, which could be money, military equipment or medical life-saving drugs. A security locker solution is proposed, which is a multiple-layer security system consisting of various levels of authentication. In most cases, only relevant persons should have access to their precious belongings. Unlocking the box is only possible when all of the security levels are successfully cleared. The five levels of security include entering a password on an interactive GUI, thumbprint, facial recognition, speech pattern recognition, and vein pattern recognition. This project is unique and effective in the sense that it incorporates five levels of security in a single prototype with the use of cost-effective equipment. Assessing our security system, it is seen that security is increased manifold, as it is near to impossible to breach all five levels of security. The Raspberry Pi microcomputers, handling all the traits efficiently and smartly, make it easy to perform all the verification tasks. The traits used involve checking, training and verifying processes with the application of machine learning operations.
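
The control flow of such a multilevel lock reduces to a sequential gate in which every verification stage must succeed before the locker opens; a minimal sketch follows, with each check_* callable standing in for one of the five traits (the names are placeholders).

```python
def unlock(checks):
    """checks: ordered list of zero-argument callables returning True/False.
    The locker opens only if every level passes, in order."""
    for level, check in enumerate(checks, start=1):
        if not check():
            print(f"Level {level} failed - access denied")
            return False
    print("All five levels cleared - unlocking")
    return True

# unlock([check_password, check_thumbprint, check_face,
#         check_speech, check_vein])   # placeholder callables, not real implementations
```
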
2020-03-02
Ibrokhimov, Sanjar, Hui, Kueh Lee, Abdulhakim Al-Absi, Ahmed, Lee, Hoon Jae, Sain, Mangal.  2019.  Multi-Factor Authentication in Cyber Physical System: A State of Art Survey. 2019 21st International Conference on Advanced Communication Technology (ICACT). :279–284.
Digital multifactor authentication is one of the best ways to make authentication secure. It covers many different areas of a cyber-connected world, including online payments, communications, access right management, etc. Most of the time, multifactor authentication is a little more complex as it requires an extra step from users. With two-factor authentication, along with the user ID and password, the user also needs to enter a special code, which they normally receive by short message service or have obtained in advance. This paper discusses the evolution from single authentication to Multi-Factor Authentication (MFA), starting from Single-Factor Authentication (SFA) and moving through Two-Factor Authentication (2FA). In addition, this paper presents five high-level categories of features of user authentication in the gadget-free world, including security, privacy, and usability aspects. These are adapted and extended from earlier research on web authentication methods. In conclusion, this paper gives future research directions and open problems that stem from our observations.
2020-02-10
Mowla, Nishat I, Doh, Inshil, Chae, Kijoon.  2019.  Binarized Multi-Factor Cognitive Detection of Bio-Modality Spoofing in Fog Based Medical Cyber-Physical System. 2019 International Conference on Information Networking (ICOIN). :43–48.
Bio-modalities are ideal for user authentication in Medical Cyber-Physical Systems. Various forms of bio-modalities, such as the face, iris, and fingerprint, are commonly used for secure user authentication. Concurrently, various spoofing approaches have also been developed over time which can defeat traditional bio-modality detection systems. Image synthesis with play-doh, gelatin, ecoflex, etc. is one of the ways used to spoof bio-identifiable properties. Since the bio-modality detection sensors are small and resource-constrained, heavyweight detection mechanisms are not suitable for these sensors. Recently, fog-based architectures have been proposed to support sensor management in Medical Cyber-Physical Systems (MCPS). A thin software client running in these resource-constrained sensors can enable communication with fog nodes for better management and analysis. Therefore, we propose a fog-based security application to detect bio-modality spoofing in a fog-based MCPS. In this regard, we propose a machine learning based security algorithm run as an application at the fog node, using a binarized multi-factor boosted ensemble learner algorithm coupled with feature selection. Our proposal is verified on real datasets provided by the Replay Attack, Warsaw and LivDet 2015 Crossmatch benchmarks for face, iris and fingerprint modality spoofing detection used for authentication in an MCPS. The experimental analysis shows that our approach achieves significant performance gain over the state-of-the-art approaches.
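
A lightweight detector in this spirit can be sketched with off-the-shelf components: feature selection followed by a boosted ensemble that labels samples as genuine or spoofed. The estimators and parameters below are illustrative assumptions, not the paper's binarized multi-factor algorithm.

```python
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import Pipeline

spoof_detector = Pipeline([
    ("select", SelectKBest(f_classif, k=50)),        # keep the 50 most informative features
    ("boost", AdaBoostClassifier(n_estimators=100)),  # boosted ensemble learner
])

# spoof_detector.fit(X_train, y_train)       # y: 0 = genuine bio-modality, 1 = spoof
# predictions = spoof_detector.predict(X_test)
```
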
2019-12-30
Toliupa, Serhiy, Tereikovskiy, Ihor, Dychka, Ivan, Tereikovska, Liudmyla, Trush, Alexander.  2019.  The Method of Using Production Rules in Neural Network Recognition of Emotions by Facial Geometry. 2019 3rd International Conference on Advanced Information and Communications Technologies (AICT). :323–327.
The article is devoted to improving neural network means of recognizing emotions from facial geometry, intended for use in general-purpose information systems. It is shown that modern means of emotion recognition based on conventional neural networks have a critical disadvantage: a lack of recognition accuracy under the distortions characteristic of general-purpose information systems, in particular the rotation of the face and the size of the image. The typical approach of overcoming this disadvantage through additional training is unacceptable for reasons of duration and of compiling the required training sample. It is proposed to increase the recognition accuracy by supplying an expert data model to the neural network. An appropriate method for representing expert knowledge is developed; a feature of the method is the use of production rules and a PNN neural network. Experimental verification of the developed solutions has been carried out. The obtained results make it possible to increase the effectiveness of recognition in cases whose characteristics are not represented in the registered statistical data.
Liu, Keng-Cheng, Hsu, Chen-Chien, Wang, Wei-Yen, Chiang, Hsin-Han.  2019.  Real-Time Facial Expression Recognition Based on CNN. 2019 International Conference on System Science and Engineering (ICSSE). :120–123.
In this paper, we propose a method for improving the robustness of real-time facial expression recognition. Although there are many ways to improve the accuracy of facial expression recognition, a revamp of the training framework and image preprocessing allow better results in applications. One existing problem is that when the camera is capturing images in high speed, changes in image characteristics may occur at certain moments due to the influence of light and other factors. Such changes can result in incorrect recognition of the human facial expression. To solve this problem for smooth system operation and maintenance of recognition speed, we take changes in image characteristics at high speed capturing into account. The proposed method does not use the immediate output for reference, but refers to the previous image for averaging to facilitate recognition. In this way, we are able to reduce interference by the characteristics of the images. The experimental results show that after adopting this method, overall robustness and accuracy of facial expression recognition have been greatly improved compared to those obtained by only the convolution neural network (CNN).
Taha, Bilal, Hatzinakos, Dimitrios.  2019.  Emotion Recognition from 2D Facial Expressions. 2019 IEEE Canadian Conference of Electrical and Computer Engineering (CCECE). :1–4.
This work proposes an approach to find and learn informative representations from 2-dimensional gray-level images for the facial expression recognition application. The learned features are obtained from a designed convolutional neural network (CNN). The developed CNN enables us to learn features from the images in a highly efficient manner by cascading different layers together. The developed model is computationally efficient since it does not consist of a huge number of layers, and at the same time it takes the overfitting problem into consideration. The outcomes from the developed CNN are compared to handcrafted features that span texture and shape features. The experiments conducted on the Bosphorus database show that the developed CNN model outperforms the handcrafted features when coupled with a Support Vector Machine (SVM) classifier.
Lian, Zheng, Li, Ya, Tao, Jianhua, Huang, Jian, Niu, Mingyue.  2018.  Region Based Robust Facial Expression Analysis. 2018 First Asian Conference on Affective Computing and Intelligent Interaction (ACII Asia). :1–5.
Facial emotion recognition is an essential aspect of human-machine interaction. In real-world conditions, it faces many challenges, e.g., illumination changes, large pose variations and partial or full occlusions, which cause different facial areas to have different sharpness and completeness. Inspired by this fact, we focus on facial expression recognition based on partial faces in this paper. We compare the contribution of seven facial areas of low-resolution images, including nose areas, mouth areas, eye areas, nose-to-mouth areas, nose-to-eyes areas, mouth-to-eyes areas and whole-face areas. Through analysis of the confusion matrix and the class activation map, we find that mouth regions contain much emotional information compared with nose areas and eye areas. At the same time, considering larger facial areas is helpful to judge the expression more precisely. To sum up, the contributions of this paper are two-fold: (1) We reveal which facial areas matter most in emotion recognition. (2) We quantify the contribution of different facial parts.
Kim, Sunbin, Kim, Hyeoncheol.  2019.  Deep Explanation Model for Facial Expression Recognition Through Facial Action Coding Unit. 2019 IEEE International Conference on Big Data and Smart Computing (BigComp). :1–4.
Facial expression is the most powerful and natural non-verbal emotional communication method. Facial Expression Recognition (FER) has significance in machine learning tasks. Deep learning models perform well in FER tasks, but they do not provide any justification for their decisions. Based on the hypothesis that a facial expression is a combination of facial muscle movements, we find that Facial Action Coding Units (AUs) and emotion labels have a relationship in the CK+ dataset. In this paper, we propose a model which utilises AUs to explain a Convolutional Neural Network (CNN) model's classification results. The CNN model is trained with the CK+ dataset and classifies emotion based on extracted features. The explanation model classifies the multiple AUs with the extracted features and emotion classes from the CNN model. Our experiment shows that with only the features and emotion classes obtained from the CNN model, the explanation model generates AUs very well.
2019-02-08
Ivanova, M., Durcheva, M., Baneres, D., Rodríguez, M. E..  2018.  eAssessment by Using a Trustworthy System in Blended and Online Institutions. 2018 17th International Conference on Information Technology Based Higher Education and Training (ITHET). :1-7.

eAssessment uses technology to support online evaluation of students' knowledge and skills. However, challenging problems must be addressed such as trustworthiness among students and teachers in blended and online settings. The TeSLA system proposes an innovative solution to guarantee correct authentication of students and to prove the authorship of their assessment tasks. Technologically, the system is based on the integration of five instruments: face recognition, voice recognition, keystroke dynamics, forensic analysis, and plagiarism. The paper aims to analyze and compare the results achieved after the second pilot performed in an online and a blended university revealing the realization of trust-driven solutions for eAssessment.

2019-01-31
Grambow, Martin, Hasenburg, Jonathan, Bermbach, David.  2018.  Public Video Surveillance: Using the Fog to Increase Privacy. Proceedings of the 5th Workshop on Middleware and Applications for the Internet of Things. :11–14.

In public video surveillance, there is an inherent conflict between public safety goals and privacy needs of citizens. Generally, societies tend to decide on middleground solutions that sacrifice neither safety nor privacy goals completely. In this paper, we propose an alternative to existing approaches that rely on cloud-based video analysis. Our approach leverages the inherent geo-distribution of fog computing to preserve privacy of citizens while still supporting camera-based digital manhunts of law enforcement agencies.

2018-09-12
Sachdeva, A., Kapoor, R., Sharma, A., Mishra, A..  2017.  Categorical Classification and Deletion of Spam Images on Smartphones Using Image Processing and Machine Learning. 2017 International Conference on Machine Learning and Data Science (MLDS). :23–30.

We regularly use communication apps like Facebook and WhatsApp on our smartphones, and the exchange of media, particularly images, has grown at an exponential rate. There are over 3 billion images shared every day on WhatsApp alone. In such a scenario, the management of images on a mobile device has become highly inefficient, and this leads to problems like low storage, manual deletion of images, disorganization, etc. In this paper, we present a solution to tackle these issues by automatically classifying every image on a smartphone into a set of predefined categories, thereby segregating spam images from them and allowing the user to delete them seamlessly.