
Title: Facial Privacy Preservation using FGSM and Universal Perturbation attacks
Publication Type: Conference Paper
Year of Publication: 2022
Authors: Jagadeesha, Nishchal
Conference Name: 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON)
Date Published: May
Keywords: Adversarial Machine Learning, AI, black-box attack, data privacy, DeepFool algorithm, face recognition, Facial Aesthetic preservation, Facial Privacy, Fast Gradient Sign Method (FGSM), human factors, parallel processing, Perturbation methods, Prediction algorithms, privacy, Privacy attributes, pubcrawl, resilience, Resiliency, Scalability, Universal Perturbation, visualization, White-Box attack
Abstract: Research done in facial privacy so far has established that race, age, and gender, which are classifiable and compliant biometric attributes, can be gleaned from a human's facial image. Noticeable distortions, morphing, and face-swapping are some of the techniques that have been researched to restore consumers' privacy. By fooling face recognition models, these techniques cater superficially to the needs of user privacy; however, the presence of visible manipulations negatively affects the aesthetic of the image. The objective of this work is to highlight common adversarial techniques that introduce granular pixel distortions using white-box and black-box perturbation algorithms, protecting users' sensitive or personal data in face images and fooling AI facial recognition models while maintaining the aesthetics and visual integrity of the image.
DOI: 10.1109/COM-IT-CON54601.2022.9850531
Citation Key: jagadeesha_facial_2022
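
For context on the white-box technique named in the title, the Fast Gradient Sign Method perturbs an image by a single step of size epsilon along the sign of the loss gradient with respect to the input pixels, which can flip the model's prediction while keeping the image visually close to the original. The sketch below is a minimal illustration of that idea in PyTorch, assuming a generic face-identity classifier; the model, tensor shapes, and the function name fgsm_perturb are assumptions for the example, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Illustrative one-step FGSM (not the paper's code).

    Assumes `model` maps a float image tensor with values in [0, 1]
    to identity logits and `label` is the true identity class index.
    """
    # Enable gradient tracking on the input pixels.
    image = image.clone().detach().requires_grad_(True)
    # Loss of the classifier on the true identity.
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # FGSM step: nudge each pixel by epsilon in the direction that
    # increases the loss, then clamp back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage: a batch of face images `faces` with labels `ids`.
# adv_faces = fgsm_perturb(face_model, faces, ids, epsilon=0.03)
```

Smaller epsilon values trade attack strength for visual fidelity, which is the aesthetics-preservation trade-off the abstract emphasizes.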