Title | Facial Privacy Preservation using FGSM and Universal Perturbation attacks |
Publication Type | Conference Paper |
Year of Publication | 2022 |
Authors | Jagadeesha, Nishchal |
Conference Name | 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON) |
Date Published | May
Keywords | Adversarial Machine Learning, AI, black-box attack, data privacy, DeepFool algorithm, face recognition, Facial Aesthetic preservation, Facial Privacy, Fast Gradient Sign Method (FGSM), human factors, parallel processing, Perturbation methods, Prediction algorithms, privacy, Privacy attributes, pubcrawl, resilience, Resiliency, Scalability, Universal Perturbation, visualization, White-Box attack |
Abstract | Research done in facial privacy so far has established that race, age, and gender, which are classifiable biometric attributes, can be gleaned from a human's facial image. Noticeable distortions, morphing, and face-swapping are among the techniques that have been researched to restore consumers' privacy. By fooling face recognition models, these techniques cater superficially to the needs of user privacy; however, the presence of visible manipulations negatively affects the aesthetics of the image. The objective of this work is to highlight common adversarial techniques, based on white-box and black-box perturbation algorithms, that introduce granular pixel distortions to ensure the privacy of users' sensitive or personal data in face images, fooling AI facial recognition models while maintaining the aesthetics and visual integrity of the image. |
DOI | 10.1109/COM-IT-CON54601.2022.9850531 |
Citation Key | jagadeesha_facial_2022 |
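
The Fast Gradient Sign Method (FGSM) named in the title can be sketched as follows. This is a minimal, illustrative example on a toy logistic classifier, not the paper's implementation; the model, weights, and epsilon value are assumptions chosen only to show the perturbation step `x_adv = x + epsilon * sign(∇x loss)`:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """One FGSM step against a logistic model p = sigmoid(w.x + b).

    Moves each pixel by +/- epsilon in the direction that increases
    the cross-entropy loss, then clips back to the valid pixel range.
    """
    p = sigmoid(w @ x + b)            # model's predicted probability
    grad_x = (p - y) * w              # gradient of the loss w.r.t. the input
    x_adv = x + epsilon * np.sign(grad_x)
    return np.clip(x_adv, 0.0, 1.0)   # keep pixels in [0, 1]

# Toy data standing in for a flattened face image (illustrative only).
rng = np.random.default_rng(0)
x = rng.random(16)
w = rng.standard_normal(16)
x_adv = fgsm_perturb(x, y=1.0, w=w, b=0.0, epsilon=0.03)
```

Because each pixel moves by at most epsilon, the perturbation stays visually imperceptible for small epsilon, which is the property the paper relies on to preserve image aesthetics.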