Biblio

Found 150 results

Filters: Keyword is face recognition
2023-07-21
R, Sowmiya, G, Sivakamasundari, V, Archana.  2022.  Facial Emotion Recognition using Deep Learning Approach. 2022 International Conference on Automation, Computing and Renewable Systems (ICACRS). :1064—1069.
Human facial emotion recognition has a variety of applications in society. The basic idea of facial emotion recognition is to map different facial expressions to a variety of emotional states. Conventional facial emotion recognition consists of two processes: feature extraction and feature selection. Nowadays, among deep learning algorithms, Convolutional Neural Networks are primarily used for facial emotion recognition because their hidden layers extract features from images automatically. The standard Convolutional Neural Network usually has a simple learning algorithm with a finite number of feature extraction layers. A drawback of earlier approaches is that they were validated only on frontal views, even though images may be captured from different angles. This research work uses a deep Convolutional Neural Network with DenseNet-169 as a backbone network for recognizing facial emotions. The emotion recognition dataset was used to recognize the emotions with an accuracy of 96%.
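As a rough illustration of the backbone idea in this abstract, a DenseNet-169 feature extractor can be topped with a small emotion-classification head in Keras. This is a minimal sketch, not the authors' implementation; the input size, dropout rate, and seven-class head are assumptions.

```python
import tensorflow as tf

# Hypothetical seven-class facial-emotion head on a DenseNet-169 backbone.
backbone = tf.keras.applications.DenseNet169(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(7, activation="softmax"),  # e.g. the 7 basic emotions
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```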
Giri, Sarwesh, Singh, Gurchetan, Kumar, Babul, Singh, Mehakpreet, Vashisht, Deepanker, Sharma, Sonu, Jain, Prince.  2022.  Emotion Detection with Facial Feature Recognition Using CNN & OpenCV. 2022 2nd International Conference on Advance Computing and Innovative Technologies in Engineering (ICACITE). :230—232.
Emotion detection through facial feature recognition is an active domain of research in the field of human-computer interaction (HCI). Humans are able to convey multiple emotions and feelings through their facial gestures and body language. In this project, in order to detect live emotions from human facial gestures, we use an algorithm that allows the computer to automatically recognize human emotions with the help of a Convolutional Neural Network (CNN) and OpenCV. Ultimately, emotion detection is an integration of information obtained from multiple patterns. If computers are able to understand more of human emotions, the gap between humans and computers will narrow. In this research paper, we demonstrate an effective way to detect emotions such as neutral, happy, sad, surprise, angry, fear, and disgust from the frontal facial expression of a person in front of a live webcam.
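A common way to wire such a live pipeline together is a Haar-cascade face detector feeding a trained CNN, as in the hedged sketch below; the model file name, 48x48 grayscale input, and label order are assumptions, not the authors' code.

```python
import cv2
import numpy as np
import tensorflow as tf

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
model = tf.keras.models.load_model("emotion_cnn.h5")   # hypothetical trained emotion CNN
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                               # live webcam feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
        probs = model.predict(face.reshape(1, 48, 48, 1), verbose=0)[0]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, EMOTIONS[int(np.argmax(probs))], (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("emotion", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```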
Udeh, Chinonso Paschal, Chen, Luefeng, Du, Sheng, Li, Min, Wu, Min.  2022.  A Co-regularization Facial Emotion Recognition Based on Multi-Task Facial Action Unit Recognition. 2022 41st Chinese Control Conference (CCC). :6806—6810.
Facial emotion recognition feeds the growth of future artificial intelligence through the development of emotion recognition, learning, and analysis of different angles of the human face and head pose. The recent pandemic gave rise to the rapid deployment of facial recognition in a number of applications, while emotion recognition still remains within experimental boundaries. Current challenges in facial emotion recognition (FER) include background noise and related sources of variation. Robots are increasingly expected to take a significant role in human perception, attention, memory, decision-making, and human-robot interaction (HRI). This work merges head pose estimation with FER to boost robustness in understanding emotions using convolutional neural networks (CNN). Stochastic gradient descent with a comprehensive model is adopted by applying multi-task learning, which is capable of implicit parallelism and acts as an inherently better global optimizer for finding network weights. A multi-task learning model was trained on two independent datasets, and the FER and head-pose multi-view co-regularization frameworks were subsequently merged and assessed on validation accuracy.
Sivasangari, A., Gomathi, R. M., Anandhi, T., Roobini, Roobini, Ajitha, P..  2022.  Facial Recognition System using Decision Tree Algorithm. 2022 3rd International Conference on Electronics and Sustainable Communication Systems (ICESC). :1542—1546.
Face recognition technology is widely employed in a variety of applications, including public security, criminal identification, multimedia data management, and so on. Because of its importance for practical applications and theoretical issues, the facial recognition system has received a lot of attention. Furthermore, numerous strategies have been offered, each of which has been shown to be a significant benefit in the field of facial and pattern recognition systems. Despite these advancements, face recognition still faces substantial hurdles in unconstrained settings. Deep learning techniques for facial recognition are presented in this paper for accurate detection and identification of facial images. The primary goal of facial recognition is to recognize and validate facial features. The database consists of 500 color images of people that have been pre-processed, with features extracted using Linear Discriminant Analysis. These features are split 70 percent for training and 30 percent for testing of decision tree classifiers to compute face recognition system performance.
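The LDA-plus-decision-tree pipeline with a 70/30 split described above can be sketched in scikit-learn; the synthetic 500-sample data below is a placeholder, not the paper's image database.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the 500-image database: 10 identities, 50 samples each.
rng = np.random.default_rng(0)
y = np.repeat(np.arange(10), 50)
X = rng.normal(size=(y.size, 1024)) + y[:, None]   # placeholder "pre-processed" face vectors

# Feature extraction with Linear Discriminant Analysis, then a 70/30 split.
X_lda = LinearDiscriminantAnalysis(n_components=9).fit_transform(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(
    X_lda, y, train_size=0.7, test_size=0.3, stratify=y, random_state=42)

clf = DecisionTreeClassifier(random_state=42).fit(X_tr, y_tr)
print("face recognition accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```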
Sadikoğlu, Fahreddin M., Idle Mohamed, Mohamed.  2022.  Facial Expression Recognition Using CNN. 2022 International Conference on Artificial Intelligence in Everything (AIE). :95—99.
The face is the most dynamic part of the human body and conveys information about emotions. The diversity in facial geometry and appearance makes it possible to detect various human expressions. To differentiate among numerous facial expressions of emotion, it is crucial to identify the classes of facial expressions. The methodology used in this article is based on convolutional neural networks (CNN). In this paper, a deep-learning CNN is used to examine the AlexNet architecture. Improvements were achieved by applying the transfer learning approach and modifying the fully connected layer with a Support Vector Machine (SVM) classifier. The system achieved satisfactory results on the iCV-MEFED dataset. The improved models achieved a recognition rate of around 64.29% for the classification of the selected expressions. The results obtained are acceptable, are comparable to relevant systems in the literature, and provide a background for further improvements.
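The pattern of swapping a CNN's fully connected classifier for an SVM can be sketched as below. Keras ships no built-in AlexNet, so MobileNetV2 stands in as the pre-trained feature extractor, and the face crops and labels are random placeholders; this is an illustration of the transfer-learning idea, not the authors' setup.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# Pre-trained feature extractor (MobileNetV2 stands in for AlexNet here).
extractor = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg")

def embed(images):
    # images: float array in [0, 255] with shape (n, 224, 224, 3)
    return extractor.predict(
        tf.keras.applications.mobilenet_v2.preprocess_input(images), verbose=0)

# Toy stand-ins for face crops and their expression labels.
rng = np.random.default_rng(0)
X_train, y_train = rng.uniform(0, 255, (32, 224, 224, 3)), rng.integers(0, 6, 32)
X_test, y_test = rng.uniform(0, 255, (8, 224, 224, 3)), rng.integers(0, 6, 8)

# Transfer learning: frozen deep features feed an SVM classifier instead of a dense layer.
svm = SVC(kernel="linear").fit(embed(X_train), y_train)
print("expression recognition rate:", svm.score(embed(X_test), y_test))
```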
Shiomi, Takanori, Nomiya, Hiroki, Hochin, Teruhisa.  2022.  Facial Expression Intensity Estimation Considering Change Characteristic of Facial Feature Values for Each Facial Expression. 2022 23rd ACIS International Summer Virtual Conference on Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD-Summer). :15—21.
Facial expression intensity, which quantifies the degree of a facial expression, has been proposed. It is calculated based on how much facial feature values change compared to an expressionless face. The estimation has two aspects: one is to classify facial expressions, and the other is to estimate their intensity. However, it is difficult to do both at the same time. Therefore, in this work, the estimation of intensity and the classification of expression are separated. We suggest an explicit method and an implicit method. In the explicit one, a classifier determines which type of expression the input is, and a per-expression regressor determines its intensity. In the implicit one, we give zero or non-zero values to the regressors for each type of facial expression as ground truth, depending on whether or not an input image shows the corresponding facial expression. We evaluated the two methods and found that they are effective for facial expression recognition.
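The "explicit" scheme (one classifier to pick the expression, one regressor per expression to score its intensity) can be illustrated with generic scikit-learn estimators; the random feature vectors, labels, and intensities below are toy placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(1)
expressions = ["happy", "sad", "surprise"]
X = rng.normal(size=(300, 16))                 # toy facial feature-change vectors
y_expr = rng.choice(expressions, size=300)     # expression class labels
y_int = rng.uniform(0, 1, size=300)            # toy ground-truth intensities

# Explicit method: a classifier picks the expression, a per-expression regressor scores intensity.
clf = RandomForestClassifier(random_state=0).fit(X, y_expr)
regs = {e: RandomForestRegressor(random_state=0).fit(X[y_expr == e], y_int[y_expr == e])
        for e in expressions}

def estimate(x):
    expr = clf.predict(x[None, :])[0]
    return expr, float(regs[expr].predict(x[None, :])[0])

print(estimate(X[0]))
```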
Lee, Gwo-Chuan, Li, Zi-Yang, Li, Tsai-Wei.  2022.  Ensemble Algorithm of Convolution Neural Networks for Enhancing Facial Expression Recognition. 2022 IEEE 5th International Conference on Knowledge Innovation and Invention (ICKII). :111—115.
Artificial intelligence (AI) cooperates with multiple industries to improve the overall industry framework. In particular, human emotion recognition plays an indispensable role in supporting medical care, psychological counseling, crime prevention and detection, and crime investigation. Research on emotion recognition includes emotion-specific intonation patterns, literal expressions of emotions, and facial expressions. Recently, deep learning models for facial emotion recognition have aimed to capture tiny changes in facial muscles to provide greater recognition accuracy. Hybrid models for facial expression recognition have been constantly proposed in recent years to improve the performance of deep learning models. In this study, we propose an ensemble learning algorithm to improve the accuracy of a facial emotion recognition model built from three deep learning models: VGG16, InceptionResNetV2, and EfficientNetB0. To enhance the performance of these benchmark models, we applied transfer learning, fine-tuning, and data augmentation for training and validation on the Facial Expression Recognition 2013 (FER-2013) dataset. The developed algorithm finds the best prediction by prioritizing InceptionResNetV2. The experimental results show that the proposed priority-based ensemble learning algorithm raises identification accuracy by 2.81%. Future extensions of this study venture into the Internet of Things (IoT), medical care, and crime detection and prevention.
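One plausible reading of "prioritizing InceptionResNetV2" is a confidence-gated fusion rule, sketched below on toy softmax outputs; the exact priority rule and the 0.5 threshold are assumptions, not taken from the paper.

```python
import numpy as np

def priority_ensemble(probs_incres, probs_vgg16, probs_effb0, threshold=0.5):
    """Hypothetical priority rule: trust InceptionResNetV2 when it is confident,
    otherwise fall back to the average of all three models' softmax outputs."""
    if probs_incres.max() >= threshold:
        return int(np.argmax(probs_incres))
    avg = (probs_incres + probs_vgg16 + probs_effb0) / 3.0
    return int(np.argmax(avg))

# Toy softmax outputs over the 7 FER-2013 classes.
p_incres = np.array([0.05, 0.05, 0.60, 0.10, 0.10, 0.05, 0.05])   # InceptionResNetV2
p_vgg16  = np.array([0.20, 0.10, 0.30, 0.10, 0.10, 0.10, 0.10])   # VGG16
p_effb0  = np.array([0.10, 0.10, 0.40, 0.10, 0.10, 0.10, 0.10])   # EfficientNetB0
print(priority_ensemble(p_incres, p_vgg16, p_effb0))
```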
Churaev, Egor, Savchenko, Andrey V..  2022.  Multi-user facial emotion recognition in video based on user-dependent neural network adaptation. 2022 VIII International Conference on Information Technology and Nanotechnology (ITNT). :1—5.
In this paper, multi-user video-based facial emotion recognition is examined in the presence of a small data set of end-user emotions. Borrowing the idea of speaker-dependent speech recognition, we propose a novel approach to this task when labeled video data from end users is available. During the training stage, a deep convolutional neural network is trained for user-independent emotion classification. Next, this classifier is adapted (fine-tuned) on the emotional video of a concrete person. During the recognition stage, the user is identified with face recognition techniques, and the emotional model of the recognized user is applied. It is experimentally shown that this approach improves the accuracy of emotion recognition by more than 20% on the RAVDESS dataset.
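The user-dependent adaptation step can be sketched as freezing the early layers of a user-independent model and fine-tuning the rest on one person's labeled frames. The tiny network, layer split, learning rate, and random user data below are assumptions made for illustration only.

```python
import numpy as np
import tensorflow as tf

# Stage 1 (assumed already trained elsewhere): a user-independent emotion classifier.
base = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(48, 48, 1)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),
])

# Stage 2: adapt (fine-tune) on one end user's small labeled video set,
# freezing the early feature layers and retraining only the top of the network.
for layer in base.layers[:-2]:
    layer.trainable = False
base.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
             loss="sparse_categorical_crossentropy", metrics=["accuracy"])

rng = np.random.default_rng(0)
user_frames = rng.uniform(0, 1, (64, 48, 48, 1))   # toy frames of one specific user
user_labels = rng.integers(0, 7, 64)               # toy emotion labels for those frames
base.fit(user_frames, user_labels, epochs=3, batch_size=16, verbose=0)
base.save("user_42_fer.h5")   # at run time, face identification selects this user's model
```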
Avula, Himaja, R, Ranjith, S Pillai, Anju.  2022.  CNN based Recognition of Emotion and Speech from Gestures and Facial Expressions. 2022 6th International Conference on Electronics, Communication and Aerospace Technology. :1360—1365.
The major mode of communication between hearing-impaired or mute people and others is sign language. Previously, most sign language recognition systems were designed simply to recognize hand signs and convey them as text. The proposed model, however, tries to provide speech to the mute. First, hand gestures for sign language recognition and facial emotions are trained using a CNN (Convolutional Neural Network), followed by training an emotion-to-speech model. Finally, hand gestures and facial emotions are combined to realize both the emotion and the speech.
Abbasi, Nida Itrat, Song, Siyang, Gunes, Hatice.  2022.  Statistical, Spectral and Graph Representations for Video-Based Facial Expression Recognition in Children. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :1725—1729.
Child facial expression recognition is a relatively less investigated area within affective computing. Children’s facial expressions differ significantly from those of adults; thus, it is necessary to develop emotion recognition frameworks that are more objective, descriptive and specific to this target user group. In this paper we propose the first approach that (i) constructs video-level heterogeneous graph representations for facial expression recognition in children, and (ii) predicts children’s facial expressions using automatically detected Action Units (AUs). To this aim, we construct three separate length-independent representations, namely statistical, spectral and graph representations at video level, for detailed multi-level facial behaviour decoding (AU activation status, AU temporal dynamics and spatio-temporal AU activation patterns, respectively). Our experimental results on the LIRIS Children Spontaneous Facial Expression Video Database demonstrate that combining these three feature representations provides the highest accuracy for expression recognition in children.
Hamzah, Anwer Sattar, Abdul-Rahaim, Laith Ali.  2022.  Smart Homes Automation System Using Cloud Computing Based Enhancement Security. 2022 5th International Conference on Engineering Technology and its Applications (IICETA). :164—169.
Smart home automation is one of the prominent topics of the current era and has attracted the attention of researchers for several years, because it contributes many capabilities that have a real and vital impact on our daily lives, such as comfort, energy conservation, environment, and security. Home security is one of the most important of these capabilities, and many research efforts have focused on this area due to the increased rate of crime and theft. The present paper aims to build a practically implemented smart home that enhances home control management and monitors all home entrances, which are often vulnerable to intrusion by intruders and thieves. The proposed system identifies a person using face detection and recognition together with Radio Frequency Identification (RFID) as mechanisms to enhance the performance of home security systems. The cloud server analyzes the received member identification to retrieve the permission to enter the home. The system showed effectiveness and speed of response in transmitting live captures of any illegal intrusive activity at the door or windows of the house. As the concept of smart homes grows and expands, the amount of transmitted information, information security weaknesses, and response-time disturbances also grow; to reduce latency and data storage needs and to maintain information security, a fog computing architecture is employed in the smart home as a broker between the IoT layer, the cloud servers, and the user layer.
2023-06-09
Sain, Mangal, Normurodov, Oloviddin, Hong, Chen, Hui, Kueh Lee.  2022.  A Survey on the Security in Cyber Physical System with Multi-Factor Authentication. 2022 24th International Conference on Advanced Communication Technology (ICACT). :1—8.
Cyber-physical systems can be defined as complex networked control systems, normally developed by combining several physical components with the cyber space. Cyber-physical systems are already a part of our daily life, and because of this they also face great potential security threats and can be vulnerable to various cyber-attacks without showing any direct sign of component failure. Protecting user security and privacy is a fundamental concern of any kind of system, whether it is a simple web application or a sophisticated professional system. Digital multi-factor authentication is one of the best ways to make authentication secure. It covers many different areas of a cyber-connected world, including online payments, communications, access rights management, etc. Most of the time, multi-factor authentication is a little complex because it requires extra steps from users. This paper discusses the evolution from single authentication to Multi-Factor Authentication (MFA), starting from Single-Factor Authentication (SFA) and moving through Two-Factor Authentication (2FA). It seeks to analyze and evaluate the most prominent authentication techniques based on accuracy, cost, and feasibility of implementation. We also suggest several authentication schemes that incorporate multi-factor authentication for CPS.
2023-04-14
Raavi, Rupendra, Alqarni, Mansour, Hung, Patrick C.K.  2022.  Implementation of Machine Learning for CAPTCHAs Authentication Using Facial Recognition. 2022 IEEE International Conference on Data Science and Information System (ICDSIS). :1–5.
Web-based technologies are evolving day by day and becoming more interactive and secure. The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is one of the security features that helps detect automated bots on the Web. Earlier CAPTCHAs were complex text-based designs, but optical character recognition algorithms can be used to crack them, which is why CAPTCHA systems became image-based. With the arrival of strong image recognition algorithms, however, image-based CAPTCHAs can also be cracked nowadays. In this paper, we propose a new CAPTCHA system that can be used to differentiate real humans from bots on the Web. We use advanced deep layers with pre-trained machine learning models for CAPTCHA authentication using a facial recognition system.
2023-03-31
Magfirawaty, Magfirawaty, Budi Setiawan, Fauzan, Yusuf, Muhammad, Kurniandi, Rizki, Nafis, Raihan Fauzan, Hayati, Nur.  2022.  Principal Component Analysis and Data Encryption Model for Face Recognition System. 2022 2nd International Conference on Electronic and Electrical Engineering and Intelligent System (ICE3IS). :381–386.

Face recognition is a biometric technique that uses a computer or machine to facilitate the recognition of human faces. The advantage of this technique is that it can detect faces without direct contact with the device. In practice, however, the security of face recognition data systems is still not given much attention. Therefore, this study proposes a technique for securing data stored in the face recognition system database. It implements the Viola-Jones algorithm, the Kanade-Lucas-Tomasi (KLT) algorithm, and the Principal Component Analysis (PCA) algorithm, and applies a database security scheme using XOR encryption. Several tests and analyses have been performed with this method. The histogram analysis shows that the encrypted images reveal no visual information related to the plain images. In addition, the correlation between the encrypted and plain images is weak, so the scheme has high security against statistical attacks, with an entropy value of around 7.9. The average time required to carry out the recognition process is 0.7896 s.
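The XOR-encryption and entropy checks mentioned above are easy to illustrate on a toy grayscale image; the keystream construction and image size below are illustrative assumptions rather than the paper's exact scheme.

```python
import numpy as np

def xor_encrypt(image, key_seed=1234):
    """XOR a grayscale image with a pseudo-random keystream (the same call also decrypts)."""
    rng = np.random.default_rng(key_seed)
    keystream = rng.integers(0, 256, size=image.shape, dtype=np.uint8)
    return image ^ keystream

def entropy(image):
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

face = np.random.default_rng(0).integers(0, 256, size=(112, 92), dtype=np.uint8)  # toy face image
cipher = xor_encrypt(face)
assert np.array_equal(xor_encrypt(cipher), face)           # XOR is its own inverse
print("ciphertext entropy ~", round(entropy(cipher), 2))   # close to 8 bits/pixel, cf. ~7.9 in the paper
```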

Bauspieß, Pia, Olafsson, Jonas, Kolberg, Jascha, Drozdowski, Pawel, Rathgeb, Christian, Busch, Christoph.  2022.  Improved Homomorphically Encrypted Biometric Identification Using Coefficient Packing. 2022 International Workshop on Biometrics and Forensics (IWBF). :1–6.

Efficient large-scale biometric identification is a challenging open problem in biometrics today. Adding biometric information protection by cryptographic techniques increases the computational workload even further. Therefore, this paper proposes an efficient and improved use of coefficient packing for homomorphically protected biometric templates, allowing for the evaluation of multiple biometric comparisons at the cost of one. In combination with feature dimensionality reduction, the proposed technique facilitates a quadratic computational workload reduction for biometric identification, while long-term protection of the sensitive biometric data is maintained throughout the system. In previous works on using coefficient packing, only a linear speed-up was reported. In an experimental evaluation on a public face database, efficient identification in the encrypted domain is achieved on off-the-shelf hardware with no loss in recognition performance. In particular, the proposed improved use of coefficient packing allows for a computational workload reduction down to 1.6% of a conventional homomorphically protected identification system without improved packing.
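The packing idea above can be illustrated in plaintext: packing several reference templates into one coefficient vector lets a single polynomial multiplication return every inner-product comparison score at once. The numpy sketch below only shows the packing arithmetic with toy integer templates and a plain convolution; in the actual scheme this multiplication is performed on homomorphically encrypted polynomials.

```python
import numpy as np

d, k = 4, 3                                    # feature dimension, number of enrolled references
rng = np.random.default_rng(0)
probe = rng.integers(-3, 4, d)                 # toy probe template (integer features)
refs = rng.integers(-3, 4, (k, d))             # k toy reference templates

packed_refs = refs.reshape(-1)                 # pack all references into one coefficient vector
packed_probe = probe[::-1]                     # reverse the probe so products align as inner products

# One polynomial multiplication (plain convolution here) scores every reference at once.
product = np.convolve(packed_refs, packed_probe)
scores = product[d - 1::d][:k]                 # inner products appear at coefficients d-1, 2d-1, ...

assert np.array_equal(scores, refs @ probe)
print(scores)
```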

Román, Roberto, Arjona, Rosario, López-González, Paula, Baturone, Iluminada.  2022.  A Quantum-Resistant Face Template Protection Scheme using Kyber and Saber Public Key Encryption Algorithms. 2022 International Conference of the Biometrics Special Interest Group (BIOSIG). :1–5.

Considered sensitive information by the ISO/IEC 24745, biometric data should be stored and used in a protected way. If not, privacy and security of end-users can be compromised. Also, the advent of quantum computers demands quantum-resistant solutions. This work proposes the use of Kyber and Saber public key encryption (PKE) algorithms together with homomorphic encryption (HE) in a face recognition system. Kyber and Saber, both based on lattice cryptography, were two finalists of the third round of NIST post-quantum cryptography standardization process. After the third round was completed, Kyber was selected as the PKE algorithm to be standardized. Experimental results show that recognition performance of the non-protected face recognition system is preserved with the protection, achieving smaller sizes of protected templates and keys, and shorter execution times than other HE schemes reported in literature that employ lattices. The parameter sets considered achieve security levels of 128, 192 and 256 bits.

ISSN: 1617-5468

Hofbauer, Heinz, Martínez-Díaz, Yoanna, Luevano, Luis Santiago, Méndez-Vázquez, Heydi, Uhl, Andreas.  2022.  Utilizing CNNs for Cryptanalysis of Selective Biometric Face Sample Encryption. 2022 26th International Conference on Pattern Recognition (ICPR). :892–899.

When storing face biometric samples in accordance with ISO/IEC 19794 as JPEG2000 encoded images, it is necessary to encrypt them for the sake of users’ privacy. The literature suggests selective encryption of JPEG2000 images as a fast and efficient method for encryption; the trade-off is that some information is left in plaintext. This could be used by an attacker in case the encrypted biometric samples are leaked. In this work, we attempt to utilize a convolutional neural network to perform cryptanalysis of the encryption scheme. That is, we want to assess whether any information left in plaintext in the selectively encrypted face images can be used to identify the person. The chosen approach is to train CNNs for biometric face recognition not only with plaintext face samples but additionally to conduct a refinement training with partially encrypted data. If this system can successfully utilize encrypted face samples for biometric matching, we can show that the information left in encrypted biometric face samples is actually usable for biometric recognition. The method works, and we can show that a supposedly secure biometric sample still contains identifying information on average over the whole database.

ISSN: 2831-7475

Kahla, Mostafa, Chen, Si, Just, Hoang Anh, Jia, Ruoxi.  2022.  Label-Only Model Inversion Attacks via Boundary Repulsion. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :15025–15033.
Recent studies show that the state-of-the-art deep neural networks are vulnerable to model inversion attacks, in which access to a model is abused to reconstruct private training data of any given target class. Existing attacks rely on having access to either the complete target model (whitebox) or the model's soft-labels (blackbox). However, no prior work has been done in the harder but more practical scenario, in which the attacker only has access to the model's predicted label, without a confidence measure. In this paper, we introduce an algorithm, Boundary-Repelling Model Inversion (BREP-MI), to invert private training data using only the target model's predicted labels. The key idea of our algorithm is to evaluate the model's predicted labels over a sphere and then estimate the direction to reach the target class's centroid. Using the example of face recognition, we show that the images reconstructed by BREP-MI successfully reproduce the semantics of the private training data for various datasets and target model architectures. We compare BREP-MI with the state-of-the-art white-box and blackbox model inversion attacks, and the results show that despite assuming less knowledge about the target model, BREP-MI outperforms the blackbox attack and achieves comparable results to the whitebox attack. Our code is available online: https://github.com/m-kahla/Label-Only-Model-Inversion-Attacks-via-Boundary-Repulsion
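The authors link their own implementation above. Purely as a loose, hypothetical sketch of the boundary-repulsion idea (query labels on a sphere, then move away from directions that leave the target class), a label-only update step might look like the following; the helper names, parameters, and the toy nearest-centroid "model" are all illustrative assumptions.

```python
import numpy as np

def brep_mi_step(z, predict_label, target, radius=2.0, n_dirs=32, step=0.5, rng=None):
    """One boundary-repulsion update in latent space (loose sketch of the BREP-MI idea)."""
    rng = np.random.default_rng() if rng is None else rng
    dirs = rng.normal(size=(n_dirs, z.shape[0]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)       # unit directions on a sphere
    on_target = np.array([predict_label(z + radius * d) == target for d in dirs])
    if on_target.all():                                       # sphere fully inside the target region
        return z, radius * 2                                  # grow the sphere and keep going
    repel = -dirs[~on_target].mean(axis=0)                    # push away from non-target directions
    repel /= np.linalg.norm(repel) + 1e-12
    return z + step * repel, radius

# Toy hard-label "model": nearest class centroid in a 2-D latent space.
centroids = np.array([[0.0, 0.0], [6.0, 6.0]])
predict = lambda z: int(np.argmin(np.linalg.norm(centroids - z, axis=1)))

z, r = np.array([3.5, 3.5]), 1.0                              # start just inside class 1
for _ in range(20):
    z, r = brep_mi_step(z, predict, target=1, radius=r, rng=np.random.default_rng(0))
print(z)   # drifts deeper into the target class region using labels only, no confidences or gradients
```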
2023-03-17
Liu, Qingyan, Albina, Erlito M..  2022.  Application of Face Recognition Technology in Mobile Payment. 2022 IEEE 12th International Conference on RFID Technology and Applications (RFID-TA). :217–219.
Face recognition technology has rapidly entered public life, from unlocking cell phones with the face to mobile payment, and has brought a lot of convenience to daily life. However, it is undeniable that it also brings security challenges. In this paper, we discuss the risks of face recognition in mobile payment and put forward relevant suggestions.
2023-02-03
Doshi, Om B., Bendale, Hitesh N., Chavan, Aarti M., More, Shraddha S..  2022.  A Smart Door Lock Security System using Internet of Things. 2022 International Conference on Applied Artificial Intelligence and Computing (ICAAIC). :1457–1463.
Security is a key concern across the world, and it has been a common thread for all critical sectors. Nowadays, security can be regarded as a backbone that is absolutely necessary for personal safety. The most important requirements of security systems for individuals are protection against theft and trespassing. CCTV cameras are often employed for security purposes, but their biggest disadvantages are their high cost and the need for a trustworthy individual to monitor them. As a result, a solution that is simple, cost-effective, and secure has been devised. The smart door lock is built on Raspberry Pi technology; it captures a picture through the Pi Camera module, detects a visitor's face, and then decides whether to allow entry. A local binary pattern approach is used for face recognition. Remote picture viewing and notifications on a mobile device are possible with an IoT-based application. The proposed system may be installed at front doors, lockers, offices, and other locations where security is required. The proposed system has an accuracy of 89%, with an average processing time of 20 seconds for the overall process.
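OpenCV's LBPH recognizer (from the opencv-contrib-python package) is one off-the-shelf way to realize the local-binary-pattern matching step; the toy training images, identities, and the acceptance threshold of 60 below are assumptions for illustration, not the paper's configuration.

```python
import cv2
import numpy as np

# LBPH face recognizer (requires opencv-contrib-python).
recognizer = cv2.face.LBPHFaceRecognizer_create()

# Toy stand-ins for grayscale face crops of household members and their integer identities.
rng = np.random.default_rng(0)
faces = [rng.integers(0, 256, (100, 100), dtype=np.uint8) for _ in range(4)]
ids = np.array([0, 0, 1, 1], dtype=np.int32)
recognizer.train(faces, ids)

label, confidence = recognizer.predict(faces[0])   # lower confidence value = closer match
unlock = confidence < 60                           # hypothetical acceptance threshold
print(label, confidence, "open door" if unlock else "keep locked")
```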
2023-01-06
Jagadeesha, Nishchal.  2022.  Facial Privacy Preservation using FGSM and Universal Perturbation attacks. 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON). 1:46—52.
Research done in facial privacy so far has established that race, age, and gender, which are classifiable and compliant biometric attributes, can be gleaned from a human's facial image. Noticeable distortions, morphing, and face-swapping are some of the techniques that have been researched to restore consumers' privacy. By fooling face recognition models, these techniques cater superficially to the needs of user privacy; however, the presence of visible manipulations negatively affects the aesthetics of the image. The objective of this work is to highlight common adversarial techniques that can be used to introduce granular pixel distortions using white-box and black-box perturbation algorithms, ensuring the privacy of users' sensitive or personal data in face images and fooling AI facial recognition models while maintaining the aesthetics and visual integrity of the image.
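FGSM, one of the white-box perturbation algorithms referenced above, adds a signed-gradient distortion bounded by a small epsilon. The sketch below uses a toy untrained network and a random "face" purely to show the mechanics; the model, input size, and epsilon are assumptions.

```python
import tensorflow as tf

def fgsm_perturb(model, image, true_label, eps=0.01):
    """Minimal white-box FGSM sketch: add a signed-gradient pixel distortion to one image."""
    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    y = tf.convert_to_tensor([true_label])
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x), from_logits=True)
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)[0]   # small, granular distortion

# Toy stand-in for a face recognition model and one normalized face image.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(112, 112, 3)),
    tf.keras.layers.Dense(10),                  # logits over 10 hypothetical identities
])
face = tf.random.uniform((112, 112, 3)).numpy()
adv_face = fgsm_perturb(model, face, true_label=3, eps=0.01)
print(float(tf.reduce_max(tf.abs(adv_face - face))))   # perturbation magnitude bounded by eps
```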
Abbasi, Wisam, Mori, Paolo, Saracino, Andrea, Frascolla, Valerio.  2022.  Privacy vs Accuracy Trade-Off in Privacy Aware Face Recognition in Smart Systems. 2022 IEEE Symposium on Computers and Communications (ISCC). :1—8.
This paper proposes a novel approach for privacy-preserving face recognition that formally defines a trade-off optimization criterion between data privacy and algorithm accuracy. In our methodology, real-world face images are anonymized with Gaussian blurring for privacy preservation. The anonymized images are then processed for face detection, face alignment, face representation, and face verification. The proposed methodology has been validated with a set of experiments on a well-known dataset and three face recognition classifiers. The results demonstrate the effectiveness of our approach in correctly verifying face images at different levels of privacy and accuracy, and in maximizing privacy with the least negative impact on face detection and face verification accuracy.
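Gaussian-blur anonymization at increasing privacy levels is straightforward to sketch with OpenCV; the kernel-size rule, privacy levels, and synthetic face crop below are illustrative assumptions, not the paper's parameters.

```python
import cv2
import numpy as np

def anonymize(face_bgr, privacy_level=3):
    """Gaussian blurring for privacy preservation: a higher level means stronger anonymization."""
    k = 2 * privacy_level + 1                      # Gaussian kernel size must be odd
    return cv2.GaussianBlur(face_bgr, (k, k), 0)

face = np.random.randint(0, 256, (160, 160, 3), dtype=np.uint8)   # stand-in for a real face crop
for level in (1, 3, 7, 15):                        # sweep privacy levels for the trade-off study
    blurred = anonymize(face, level)
    # each variant would then go through detection, alignment, representation and verification
    # to measure how recognition accuracy degrades as privacy increases
    print(level, float(np.mean(np.abs(blurred.astype(int) - face.astype(int)))))
```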
2022-12-20
Singh, Inderjeet, Araki, Toshinori, Kakizaki, Kazuya.  2022.  Powerful Physical Adversarial Examples Against Practical Face Recognition Systems. 2022 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW). :301–310.
It is well known that most existing machine learning (ML)-based safety-critical applications are vulnerable to carefully crafted input instances called adversarial examples (AXs). An adversary can conveniently attack these target systems from the digital as well as the physical world. This paper addresses the generation of robust physical AXs against face recognition systems. We present a novel smoothness loss function and a patch-noise combo attack for realizing powerful physical AXs. The smoothness loss introduces the concept of delayed constraints during the attack generation process, thereby handling optimization complexity better and yielding smoother AXs for the physical domain. The patch-noise combo attack combines patch noise and imperceptibly small noises from different distributions to generate powerful registration-based physical AXs. An extensive experimental analysis found that our smoothness loss results in more robust and more transferable digital and physical AXs than conventional techniques. Notably, our smoothness loss results in 1.17 and 1.97 times better mean attack success rate (ASR) in physical white-box and black-box attacks, respectively. Our patch-noise combo attack furthers the performance gains and results in 2.39 and 4.74 times higher mean ASR than conventional techniques in physical-world white-box and black-box attacks, respectively.
ISSN: 2690-621X
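The specific smoothness loss with delayed constraints described in the entry above is not reproduced here; as a generic stand-in, a total-variation style smoothness penalty of the kind commonly added to adversarial patch objectives can be sketched as follows, with a hypothetical patch variable.

```python
import tensorflow as tf

def smoothness_loss(patch):
    """Generic total-variation style smoothness penalty on an adversarial patch."""
    dh = patch[:, 1:, :, :] - patch[:, :-1, :, :]   # differences between vertical neighbours
    dw = patch[:, :, 1:, :] - patch[:, :, :-1, :]   # differences between horizontal neighbours
    return tf.reduce_mean(tf.square(dh)) + tf.reduce_mean(tf.square(dw))

patch = tf.Variable(tf.random.uniform((1, 64, 64, 3)))   # hypothetical physical patch being optimized
print(float(smoothness_loss(patch)))                     # added to the attack objective with a weight
```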
Liu, Xiaolei, Li, Xiaoyu, Zheng, Desheng, Bai, Jiayu, Peng, Yu, Zhang, Shibin.  2022.  Automatic Selection Attacks Framework for Hard Label Black-Box Models. IEEE INFOCOM 2022 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1–7.

Current adversarial attacks against machine learning models can be divided into white-box attacks and black-box attacks. Black-box attacks can be further subdivided into soft-label and hard-label attacks; the latter has the drawback of only returning the class with the highest prediction probability, which makes gradient estimation difficult. Nevertheless, because of its wide applicability, exploring hard-label black-box attacks is of great research significance and application value. This paper proposes an Automatic Selection Attacks Framework (ASAF) for hard-label black-box models, which can be explained in two aspects based on existing attack methods. First, ASAF applies model equivalence to select substitute models automatically, generates adversarial examples on them, and then completes black-box attacks based on their transferability. Second, specified feature selection and a parallel attack method are proposed to shorten the attack time and improve the attack success rate. The experimental results show that ASAF can achieve a success rate of more than 90% for non-targeted attacks on common models trained on traditional datasets, ResNet-101 (CIFAR10) and InceptionV4 (ImageNet). Meanwhile, compared with FGSM and other attack algorithms, the attack time is reduced by at least 89.7% and 87.8%, respectively, on the two traditional datasets. Besides, it can achieve a 90% attack success rate on an online model, BaiduAI digital recognition. In conclusion, ASAF is the first automatic selection attacks framework for hard-label black-box models, in which specified feature selection and parallel attack methods speed up automatic attacks.

2022-12-01
Srikanth, K S, Ramesh, T K, Palaniswamy, Suja, Srinivasan, Ranganathan.  2022.  XAI based model evaluation by applying domain knowledge. 2022 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT). :1—6.
Artificial intelligence(AI) is used in decision support systems which learn and perceive features as a function of the number of layers and the weights computed during training. Due to their inherent black box nature, it is insufficient to consider accuracy, precision and recall as metrices for evaluating a model's performance. Domain knowledge is also essential to identify features that are significant by the model to arrive at its decision. In this paper, we consider a use case of face mask recognition to explain the application and benefits of XAI. Eight models used to solve the face mask recognition problem were selected. GradCAM Explainable AI (XAI) is used to explain the state-of-art models. Models that were selecting incorrect features were eliminated even though, they had a high accuracy. Domain knowledge relevant to face mask recognition viz., facial feature importance is applied to identify the model that picked the most appropriate features to arrive at the decision. We demonstrate that models with high accuracies need not be necessarily select the right features. In applications requiring rapid deployment, this method can act as a deciding factor in shortlisting models with a guarantee that the models are looking at the right features for arriving at the classification. Furthermore, the outcomes of the model can be explained to the user enhancing their confidence on the AI model being deployed in the field.