
SoS Newsletter

Facial Recognition


Facial recognition tools have long been a staple of action-adventure films. In the real world, they present both opportunities and complex problems that researchers continue to examine. The works cited here, presented or published in the first three quarters of 2014, address techniques and issues such as transform domain modular (TDM) approaches, PCA, and Markov models; the application of keystroke dynamics to facial thermography; multiresolution alignment; and sparse representation. Brief illustrative code sketches for several of the cited techniques follow the citation list.

  • Henderson, G.; Ellefsen, I., "Applying Keystroke Dynamics Techniques to Facial Thermography for Verification," IST-Africa Conference Proceedings, 2014, pp. 1-10, 7-9 May 2014. doi: 10.1109/ISTAFRICA.2014.6880626 The problem of verifying that the person accessing a system is the same person that was authorized to do so has existed for many years. Some of the solutions developed to address this problem include continuous Facial Recognition and Keystroke Dynamics, each of which has its own inherent flaws. We propose an approach that applies Facial Recognition and Keystroke Dynamics techniques to Facial Thermography. The mechanisms required to implement this new technique are discussed, as well as the trade-offs between the proposed approach and existing techniques. This is followed by a discussion of the strengths and weaknesses an organization should weigh before adopting the system. Keywords: authorisation; face recognition; infrared imaging; continuous facial recognition; facial thermography; keystroke dynamic techniques; person authorization; person verification; Accuracy; Cameras; Face; Face recognition; Fingerprint recognition; Security; Standards; Facial Recognition; Facial Thermography; Keystroke Dynamics; Temperature Digraphs; Verification (ID#:14-2872) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6880626&isnumber=6880588
  • Meher, S.S.; Maben, P., "Face Recognition and Facial Expression Identification Using PCA," Advance Computing Conference (IACC), 2014 IEEE International, pp. 1093-1098, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779478 The face, being the primary focus of attention in social interaction, plays a major role in conveying identity and emotion. A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame. The main aim of this paper is to analyse Principal Component Analysis (PCA) and its performance when applied to face recognition. The algorithm creates a subspace (face space) in which the faces in a database are represented using a reduced number of features called feature vectors. The PCA technique is also used to identify facial expressions such as happy, sad, neutral, anger, disgust, and fear. Experimental results show that PCA-based methods provide good face recognition with reasonably low error rates. We conclude that PCA is a good technique for face recognition, as it identifies faces fairly well under varying illumination and facial expressions. Keywords: emotion recognition; face recognition; principal component analysis; vectors; video signal processing; PCA; database; digital image; error rates; face recognition; facial expression identification; facial recognition system; feature vectors; person identification; person verification; principal component analysis; social interaction; video frame; Conferences; Erbium; Eigen faces; Face recognition; Principal Component Analysis (PCA) (ID#:14-2873) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779478&isnumber=6779283
  • Vijayalakshmi, M.; Senthil, T., "Automatic Human Facial Expression Recognition Using Hidden Markov Model," Electronics and Communication Systems (ICECS), 2014 International Conference on, pp. 1-5, 13-14 Feb. 2014. doi: 10.1109/ECS.2014.6892800 Facial recognition is a type of biometric software application that can identify a specific individual in a digital image by analyzing and comparing patterns. These systems are commonly used for security purposes but are increasingly applied in other areas such as residential security, voter verification, and ATM banking. Changes in facial expression make recognizing faces difficult. In this paper, continuous naturalistic affective expressions are recognized using a Hidden Markov Model (HMM) framework. Active Appearance Model (AAM) landmarks are considered for each frame of the videos; the AAMs are used to track the face and extract its visual features. Six facial expressions are considered: happiness, sadness, anger, fear, surprise, and disgust. The expression recognition problem is solved through a multistage automatic pattern recognition system in which temporal relationships are modeled through the HMM framework. Dimension levels (i.e., labels) are defined as the hidden state sequences in the HMM framework, and the probabilities of these hidden states and their state transitions are computed from the labels of the training set. Through a three-stage classification approach, the output of a first-stage classification is used as observation sequences for a second-stage classification, modeled as an HMM-based framework; k-NN is used for the first-stage classification. A third classification stage, a decision fusion tool, is then used to boost overall performance. Keywords: biometrics (access control); face recognition; hidden Markov models; AAM landmarks; ATM; HMM framework; Hidden Markov Model; active appearance model; automatic human facial expression recognition; banking; biometric software application; digital image; hidden states; residential security; state transitions; voter verification; Active appearance model; Computational modeling; Face recognition; Hidden Markov models; Speech; Speech recognition; Support vector machine classification; Active Appearance Model (AAM); Dimension levels; Hidden Markov model (HMM); K Nearest Neighbor (k-NN) (ID#:14-2874) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6892800&isnumber=6892507
  • Chehata, Ramy C.G.; Mikhael, Wasfy B.; Atia, George, "A Transform Domain Modular Approach for Facial Recognition Using Different Representations and Windowing Techniques," Circuits and Systems (MWSCAS), 2014 IEEE 57th International Midwest Symposium on, pp. 817-820, 3-6 Aug. 2014. doi: 10.1109/MWSCAS.2014.6908540 A face recognition algorithm based on a newly developed Transform Domain Modular (TDM) approach is proposed. In this approach, the spatial faces are divided into smaller sub-images, which are processed using non-overlapping and overlapping windows. Each sub-image is subsequently transformed using a compressing transform such as the two-dimensional discrete cosine transform. This produces the TDM-2D and the TDM-Dia, based on two-dimensional and diagonal representations of the data, respectively. The performance of this approach for facial image recognition is compared with state-of-the-art techniques. Test results for both noise-free and noisy images yield recognition accuracy above 97.5%. The improved recognition accuracy is achieved while retaining comparable or better computational complexity and storage savings. Keywords: Face; Face recognition; Principal component analysis; Testing; Time division multiplexing; Training; Transforms (ID#:14-2875) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6908540&isnumber=6908326
  • Aldhahab, Ahmed; Atia, George; Mikhael, Wasfy B., "Supervised Facial Recognition Based on Multi-Resolution Analysis and Feature Alignment," Circuits and Systems (MWSCAS), 2014 IEEE 57th International Midwest Symposium on, pp. 137-140, 3-6 Aug. 2014. doi: 10.1109/MWSCAS.2014.6908371 A new supervised algorithm for face recognition based on the integration of the Two-Dimensional Discrete Multiwavelet Transform (2-D DMWT), the 2-D Radon Transform, and the 2-D Discrete Wavelet Transform (2-D DWT) is proposed. In the feature extraction step, multiwavelet filter banks are used to extract useful information from the face images. The extracted information is then aligned using the Radon Transform and localized into a single band using the 2-D DWT for efficient sparse data representation. This information is fed into a neural network based classifier for training and testing. The proposed method is tested on three different databases, namely ORL, YALE, and subset fc of FERET, which comprise different poses and lighting conditions. It is shown that this approach can significantly improve the classification performance and the storage requirements of the overall recognition system. Keywords: Classification algorithms; Databases; Discrete wavelet transforms; Feature extraction; Multiresolution analysis; Training (ID#:14-2876) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6908371&isnumber=6908326
  • Zhen Gao; Shangfei Wang; Chongliang Wu; Jun Wang; Qiang Ji, "Facial Action Unit Recognition by Relation Modeling from Both Qualitative Knowledge and Quantitative Data," Multimedia and Expo Workshops (ICMEW), 2014 IEEE International Conference on, pp. 1-6, 14-18 July 2014. doi: 10.1109/ICMEW.2014.6890672 In this paper, we propose to capture Action Unit (AU) relations existing in both qualitative knowledge and quantitative data through Credal Networks (CN). Each node of the CN represents an AU label, and the links and probability intervals capture the probabilistic dependencies among multiple AUs. The structure of the CN is designed based on prior knowledge; its parameters are learned from both knowledge and ground-truth AU labels. Preliminary AU estimations are obtained by an existing image-driven recognition method. With the learned credal network, we infer the true AU labels by combining the relationships among labels with the previously obtained estimations. Experimental results on the CK+ and MMI databases demonstrate that with complete AU labels, our CN model is slightly better than the Bayesian Network (BN) model, indicating that credal sets learned from data capture uncertainty more reliably; with incomplete and error-prone AU annotations, our CN model outperforms the BN model, indicating that credal sets successfully capture qualitative knowledge. Keywords: face recognition; image sequences; probability; uncertainty handling; visual databases; AU label; AU preliminary estimation; BN model; CK+ database; CN model; MMI database; credal network; error-prone AU annotation; facial action unit recognition; image-driven recognition method; incomplete AU annotation; probabilistic dependency; probability interval; relation modeling; uncertainty handling; Data models; Databases; Gold; Hidden Markov models; Image recognition; Mathematical model; Support vector machines; AU recognition; credal network; prior knowledge (ID#:14-2877) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890672&isnumber=6890528
  • Leventic, H.; Livada, C.; Gaclic, I., "Towards Fixed Facial Features Face Recognition," Systems, Signals and Image Processing (IWSSIP), 2014 International Conference on, pp. 267-270, 12-15 May 2014. In this paper we propose a framework for recognition of faces under controlled conditions. The framework consists of two parts: face detection and face recognition. For face detection we use the Viola-Jones face detector. The proposed face recognition part is based on the calculation of certain ratios on the face, where facial features are located using the Hough transform for circles. Experiments show that this framework presents a possible solution to the face recognition problem. Keywords: Hough transforms; face recognition; Hough transform; Viola-Jones face detector; face detection; face recognition; fixed facial feature; Equations; Face; Face recognition; Nose; Transforms; Hough transform; Viola-Jones; face detection; face recognition (ID#:14-2878) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6837682&isnumber=6837609
  • Wilber, M.J.; Rudd, E.; Heflin, B.; Yui-Man Lui; Boult, T.E., "Exemplar Codes for Facial Attributes and Tattoo Recognition," Applications of Computer Vision (WACV), 2014 IEEE Winter Conference on, pp. 205-212, 24-26 March 2014. doi: 10.1109/WACV.2014.6836099 When implementing real-world computer vision systems, researchers can use mid-level representations as a tool to adjust the trade-off between accuracy and efficiency. Unfortunately, existing mid-level representations that improve accuracy tend to decrease efficiency, or are specifically tailored to work well within one pipeline or vision problem at the exclusion of others. We introduce a novel, efficient mid-level representation that improves classification efficiency without sacrificing accuracy. Our Exemplar Codes are based on linear classifiers and probability normalization from extreme value theory. We apply Exemplar Codes to two problems: facial attribute extraction and tattoo classification. In these settings, our Exemplar Codes are competitive with the state of the art and offer efficiency benefits, making it possible to achieve high accuracy even on commodity hardware with a low computational budget. Keywords: computer vision; face recognition; feature extraction; image classification; image representation; probability; classification efficiency; exemplar codes; extreme value theory; facial attribute extraction; linear classifiers; mid-level representations; probability normalization; real-world computer vision systems; tattoo classification; tattoo recognition; Accuracy; Face; Feature extraction; Libraries; Pipelines; Support vector machines; Training (ID#:14-2879) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6836099&isnumber=6835728
  • Hehua Chi; Yu Hen Hu, "Facial Image De-Identification Using Identity Subspace Decomposition," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 524-528, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6853651 How can the identity of a human face be concealed without covering the facial image? This is the question investigated in this work. Leveraging the high-dimensional feature representation of a human face in an Active Appearance Model (AAM), a novel method called the identity subspace decomposition (ISD) method is proposed. Using ISD, the AAM feature space is decomposed into an identity-sensitive subspace and an identity-insensitive subspace. By replacing the feature values in the identity-sensitive subspace with the averaged values of k individuals, one may realize a k-anonymity de-identification process on facial images. We developed a heuristic approach to empirically select the AAM features corresponding to the identity-sensitive subspace. We showed that after applying k-anonymity de-identification to AAM features in the identity-sensitive subspace, the resulting facial images can no longer be distinguished by either human eyes or facial recognition algorithms. Keywords: face recognition; AAM feature space; ISD; active appearance model; facial image de-identification; facial recognition algorithms; high dimensional feature representation; identity subspace decomposition method; k-anonymity de-identification process; sensitive subspace; Active appearance model; Databases; Face; Face recognition; Facial features; Privacy; Vectors; active appearance model; data privacy; face recognition; identification of persons (ID#:14-2880) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853651&isnumber=6853544
  • Ptucha, R.; Savakis, A.E., "LGE-KSVD: Robust Sparse Representation Classification," Image Processing, IEEE Transactions on, vol. 23, no. 4, pp. 1737-1750, April 2014. doi: 10.1109/TIP.2014.2303648 The parsimonious nature of sparse representations has been successfully exploited for the development of highly accurate classifiers for various scientific applications. Despite the successes of sparse representation techniques, a large number of dictionary atoms as well as the high dimensionality of the data can make these classifiers computationally demanding. Furthermore, sparse classifiers are subject to the adverse effects of a phenomenon known as coefficient contamination, where, for example, variations in pose may affect identity and expression recognition. We analyze the interaction between dimensionality reduction and sparse representations, and propose a technique, called Linear extension of Graph Embedding K-means-based Singular Value Decomposition (LGE-KSVD), to address both issues of computational intensity and coefficient contamination. In particular, LGE-KSVD utilizes variants of the LGE to optimize the K-SVD, an iterative technique for small yet overcomplete dictionary learning. The dimensionality reduction matrix, sparse representation dictionary, sparse coefficients, and sparsity-based classifier are jointly learned through the LGE-KSVD. The atom optimization process is redefined to allow variable support using graph embedding techniques and to produce a more flexible and elegant dictionary learning algorithm. Results are presented on a wide variety of facial and activity recognition problems that demonstrate the robustness of the proposed method. Keywords: dictionaries; image representation; iterative methods; optimisation; singular value decomposition; LGE-KSVD; activity recognition problems; atom optimization process; coefficient contamination; computational intensity; dictionary learning algorithm; dimensionality reduction matrix; expression recognition; facial recognition problems; graph embedding techniques; iterative technique; linear extension of graph embedding k-means-based singular value decomposition; robust sparse representation classification; sparse coefficients; sparse representation dictionary; sparsity-based classifier; Contamination; Dictionaries; Image reconstruction; Manifolds; Principal component analysis; Sparse matrices; Training; Dimensionality reduction; activity recognition; facial analysis; manifold learning; sparse representation (ID#:14-2881) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6728639&isnumber=6742656
  • Bong-Nam Kang; Jongmin Yoon; Hyunsung Park; Daijin Kim, "Face Recognition Using Affine Dense SURF-Like Descriptors," Consumer Electronics (ICCE), 2014 IEEE International Conference on, pp. 129-130, 10-13 Jan. 2014. doi: 10.1109/ICCE.2014.6775938 In this paper, we propose a method for pose- and facial-expression-invariant face recognition using affine dense SURF-like descriptors. The proposed method consists of four steps: 1) normalize the face image using a face and eye detector; 2) apply affine simulation to synthesize face images at various poses; 3) build a descriptor on overlapping block-based grid keypoints; 4) compare a probe image with the reference images using nearest-neighbor matching. To improve the recognition rate, we use the keypoint distance ratio and the false-matched keypoint ratio. The proposed method showed better recognition rates than conventional methods. Keywords: face recognition; probes; affine dense SURF-like descriptors; eye detector; face detector; facial expression invariant face recognition; false matched keypoint ratio; keypoint distance ratio; nearest neighbor matching; overlapping block-based grid keypoints; pose face images; probe image; recognition rate; recognition rates; Computer vision; Conferences; Educational institutions; Face; Face recognition; Probes; Vectors (ID#:14-2882) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6775938&isnumber=6775879
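
Illustrative Code Sketches

The short Python sketches below illustrate several of the techniques cited above. Each is a simplified sketch under stated assumptions, not a reproduction of the cited authors' implementations.

For the PCA-based recognition described in Meher and Maben, the following is a minimal eigenfaces-style sketch: project mean-centered face vectors onto the leading principal axes and match a probe to its nearest gallery neighbor. The image sizes, component count, and 1-NN matching rule are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal eigenfaces-style PCA sketch (illustrative, not the cited implementation).
import numpy as np

def fit_pca(gallery, n_components=20):
    """gallery: (n_images, n_pixels) matrix of vectorized, equally sized face images."""
    mean_face = gallery.mean(axis=0)
    centered = gallery - mean_face
    # SVD of the centered data gives the principal axes ("eigenfaces").
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]            # (n_components, n_pixels)
    weights = centered @ eigenfaces.T         # projections of the gallery images
    return mean_face, eigenfaces, weights

def recognize(probe, mean_face, eigenfaces, weights, labels):
    """Project a probe image into the face space and return the nearest gallery label."""
    w = (probe - mean_face) @ eigenfaces.T
    distances = np.linalg.norm(weights - w, axis=1)
    return labels[int(np.argmin(distances))]

# Toy usage with random data standing in for aligned grayscale face images.
rng = np.random.default_rng(0)
gallery = rng.random((40, 64 * 64))
labels = np.repeat(np.arange(10), 4)
mean_face, eigenfaces, weights = fit_pca(gallery)
print(recognize(gallery[7], mean_face, eigenfaces, weights, labels))
```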
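
The HMM stage in Vijayalakshmi and Senthil treats expression dimension levels as hidden states and first-stage classifier outputs as observations. The toy Viterbi decoder below only illustrates how a most-likely hidden-state path is recovered from an observation sequence; the transition and emission probabilities are invented for illustration rather than learned from training labels as in the paper.

```python
# Toy Viterbi decoding over a two-state HMM (probabilities are invented for illustration).
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """obs: sequence of observation indices; returns the most likely hidden-state path."""
    n_states = len(start_p)
    T = len(obs)
    logp = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    logp[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = logp[t - 1] + np.log(trans_p[:, s])
            back[t, s] = int(np.argmax(scores))
            logp[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
    # Backtrack from the best final state.
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

start = np.array([0.6, 0.4])
trans = np.array([[0.8, 0.2], [0.3, 0.7]])            # P(next state | state)
emit = np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])   # P(observation | state)
print(viterbi([0, 0, 1, 2, 2], start, trans, emit))
```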
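
The transform domain modular (TDM) approach of Chehata et al. divides a face into sub-images and compresses each with a 2-D DCT. The sketch below shows only that block-wise DCT step with non-overlapping windows; the block size and the number of retained coefficients are assumptions, and the paper's overlapping windows and diagonal representation are not reproduced.

```python
# Block-wise 2-D DCT feature sketch (non-overlapping windows only; parameters are assumptions).
import numpy as np
from scipy.fft import dctn

def block_dct_features(image, block=8, keep=6):
    """image: 2-D array with dimensions divisible by `block`; returns concatenated features."""
    h, w = image.shape
    feats = []
    for r in range(0, h, block):
        for c in range(0, w, block):
            coeffs = dctn(image[r:r + block, c:c + block], norm="ortho")
            # Keep the top-left (low-frequency) corner of each block's spectrum.
            feats.append(coeffs[:keep, :keep].ravel())
    return np.concatenate(feats)

face = np.random.default_rng(1).random((64, 64))   # stand-in for a grayscale face image
print(block_dct_features(face).shape)
```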
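
Aldhahab et al. combine multiwavelet filter banks, the Radon transform, and a 2-D DWT. The simplified sketch below substitutes an ordinary single-level 2-D DWT for the multiwavelet step and omits the neural-network classifier; the libraries (scikit-image, PyWavelets), the Haar wavelet, and the angle sampling are assumptions.

```python
# Simplified Radon + 2-D DWT feature sketch (a stand-in for the paper's multiwavelet pipeline).
import numpy as np
import pywt
from skimage.transform import radon

def mra_features(face):
    # Project the image along a set of angles to align directional energy.
    sinogram = radon(face, theta=np.arange(0.0, 180.0, 4.0), circle=False)
    # Localize the projected information into the low-frequency band of a 2-D DWT.
    approx, _details = pywt.dwt2(sinogram, "haar")
    return approx.ravel()

face = np.random.default_rng(6).random((64, 64))   # stand-in for a face image
print(mra_features(face).shape)
```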
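
For the Leventic et al. framework, the sketch below chains OpenCV's stock Viola-Jones frontal-face cascade with a circular Hough transform inside each detected face region to locate round features such as the eyes. The input file name, thresholds, and radius limits are illustrative guesses, and the paper's facial-ratio computation is not reproduced.

```python
# Viola-Jones face detection followed by Hough circle search inside the face region.
import cv2

def detect_face_and_circles(gray):
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    results = []
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        # Search for circular features (e.g. eyes) within the detected face; radii are guesses.
        circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1.2, minDist=w // 8,
                                   param1=100, param2=30,
                                   minRadius=w // 20, maxRadius=w // 8)
        results.append(((x, y, w, h), circles))
    return results

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
if gray is not None:
    for box, circles in detect_face_and_circles(gray):
        print(box, None if circles is None else circles.shape)
```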
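
Exemplar Codes (Wilber et al.) normalize linear-classifier scores with probability normalization from extreme value theory. The rough sketch below fits a Weibull distribution to the largest non-match scores and maps a raw score through its CDF; the tail size and the use of SciPy's weibull_min are assumptions, and this is not the cited implementation.

```python
# EVT-style score normalization sketch: Weibull fit on the non-match score tail.
import numpy as np
from scipy.stats import weibull_min

def evt_normalizer(nonmatch_scores, tail=20):
    """Return a function mapping a raw score to a calibrated value via the fitted Weibull CDF."""
    top_tail = np.sort(nonmatch_scores)[-tail:]        # largest scores from non-matching samples
    shape, loc, scale = weibull_min.fit(top_tail)
    return lambda s: float(weibull_min.cdf(s, shape, loc=loc, scale=scale))

rng = np.random.default_rng(5)
nonmatch = rng.normal(0.0, 1.0, 500)        # stand-in decision scores for non-matching samples
normalize = evt_normalizer(nonmatch)
print(normalize(0.5), normalize(3.5))       # low vs. high score after normalization
```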
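
The identity subspace decomposition of Chi and Hu replaces identity-sensitive AAM feature values with the average over k individuals. The minimal sketch below performs that k-anonymity averaging on a generic feature matrix; which dimensions count as identity sensitive, and the grouping of faces into consecutive blocks of k, are assumptions.

```python
# k-anonymity averaging over an assumed identity-sensitive feature subset.
import numpy as np

def k_anonymize(features, sensitive_dims, k=3):
    """features: (n_faces, n_dims); sensitive_dims: indices of identity-sensitive features."""
    out = features.copy()
    n = len(features)
    for start in range(0, n, k):
        idx = np.arange(start, min(start + k, n))
        # Every face in the group receives the group mean on the sensitive dimensions.
        mean_vals = features[np.ix_(idx, sensitive_dims)].mean(axis=0)
        out[np.ix_(idx, sensitive_dims)] = mean_vals
    return out

rng = np.random.default_rng(2)
feats = rng.random((9, 12))
deid = k_anonymize(feats, sensitive_dims=[0, 1, 2, 3], k=3)
print(np.allclose(deid[0, :4], deid[1, :4]))   # True: group members now share these values
```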
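
Ptucha and Savakis build on sparse representation classification. The sketch below is a generic SRC baseline rather than LGE-KSVD itself: code a probe over a dictionary of training faces with orthogonal matching pursuit, then assign the class whose atoms give the smallest reconstruction residual. The dictionary size and sparsity level are illustrative.

```python
# Generic sparse-representation-classification (SRC) sketch, not the LGE-KSVD method.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(dictionary, labels, probe, n_nonzero=5):
    """dictionary: (n_dims, n_atoms) column-wise training samples; labels: (n_atoms,)."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(dictionary, probe)
    coef = omp.coef_
    residuals = {}
    for cls in np.unique(labels):
        mask = labels == cls
        # Reconstruct the probe using only this class's atoms and coefficients.
        recon = dictionary[:, mask] @ coef[mask]
        residuals[cls] = np.linalg.norm(probe - recon)
    return min(residuals, key=residuals.get)

rng = np.random.default_rng(3)
D = rng.standard_normal((50, 30))            # 30 training faces with 50-dim features
y = np.repeat(np.arange(6), 5)
probe = D[:, 4] + 0.05 * rng.standard_normal(50)
print(src_classify(D, y, probe))
```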
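
Kang et al. compare probe and reference descriptors with nearest-neighbor matching and a keypoint distance ratio. The sketch below applies a Lowe-style ratio test to generic descriptor arrays; the dense SURF-like descriptor extraction and affine pose simulation are not reproduced, and the ratio threshold is an assumption.

```python
# Nearest-neighbor descriptor matching with a distance-ratio test.
import numpy as np

def ratio_test_matches(desc_probe, desc_ref, ratio=0.8):
    """desc_*: (n_keypoints, dim) descriptor arrays; returns a list of (probe_idx, ref_idx)."""
    matches = []
    for i, d in enumerate(desc_probe):
        dists = np.linalg.norm(desc_ref - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        # Accept the match only if the best neighbor is clearly closer than the runner-up.
        if best < ratio * second:
            matches.append((i, int(order[0])))
    return matches

rng = np.random.default_rng(4)
ref = rng.random((40, 64))
probe = ref + 0.01 * rng.standard_normal((40, 64))   # slightly perturbed copy of the reference
print(len(ratio_test_matches(probe, ref)))
```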

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of links or modification of specific citations via email to SoS.Project (at) SecureDataBank.net, and include the ID# of the specific citation in your correspondence.