Biblio

Filters: Keyword is White-Box attack
2023-01-06
Jagadeesha, Nishchal.  2022.  Facial Privacy Preservation using FGSM and Universal Perturbation attacks. 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON). 1:46-52.
Research in facial privacy has so far established that race, age, and gender, which are classifiable biometric attributes, can be gleaned from a human facial image. Noticeable distortions, morphing, and face swapping are some of the techniques that have been researched to restore consumers' privacy. By fooling face recognition models, these techniques cater superficially to the needs of user privacy; however, the presence of visible manipulations degrades the aesthetics of the image. The objective of this work is to highlight common adversarial techniques that introduce granular pixel distortions using white-box and black-box perturbation algorithms, ensuring the privacy of users' sensitive or personal data in face images and fooling AI facial recognition models while maintaining the aesthetics and visual integrity of the image.
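The white-box attack named in this entry, FGSM, perturbs each pixel in the direction of the sign of the loss gradient. Below is a minimal PyTorch sketch of that step, assuming a generic pretrained face classifier, an input image scaled to [0, 1], and an illustrative epsilon; none of these reflect the authors' exact setup.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (white-box FGSM).

    `model`, `image`, `label`, and `epsilon` are illustrative assumptions."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixel range.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()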
2022-11-18
Khoshavi, Navid, Sargolzaei, Saman, Bi, Yu, Roohi, Arman.  2021.  Entropy-Based Modeling for Estimating Adversarial Bit-flip Attack Impact on Binarized Neural Network. 2021 26th Asia and South Pacific Design Automation Conference (ASP-DAC). :493–498.
Over the past years, the high demand to efficiently process deep learning (DL) models has driven the market for chip design companies. However, the new Deep Chip architectures, a common term for DL hardware accelerators, have paid little attention to the security requirements of quantized neural networks (QNNs), even though black-box and white-box adversarial attacks can jeopardize the integrity of the inference accelerator. This paper therefore presents a comprehensive study of the resiliency of QNN topologies to black-box attacks. Herein, different attack scenarios are performed on an FPGA-processor co-design, and the collected results are extensively analyzed to estimate the degree of impact that different types of attacks have on the QNN topology. Specifically, we evaluated the sensitivity of the QNN accelerator to a range of bit-flip attacks (BFAs) that might occur over the operational lifetime of the device. The BFAs are injected at uniformly distributed times, either across the entire QNN or per individual layer, during image classification. The acquired results are used to build an entropy-based model that can be leveraged to construct QNN architectures resilient to bit-flip attacks.
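For illustration, a bit-flip attack of the kind evaluated here can be modeled in software by toggling randomly chosen bits in a quantized weight tensor and re-measuring accuracy. The sketch below assumes 8-bit integer weights stored in a NumPy array and an experimenter-chosen flip count; it is not the paper's FPGA-processor co-design or its entropy model.

import numpy as np

def inject_bit_flips(q_weights: np.ndarray, n_flips: int, n_bits: int = 8,
                     rng=np.random.default_rng(0)):
    """Flip `n_flips` randomly chosen bits in a quantized (integer) weight array."""
    flat = q_weights.ravel().astype(np.uint8)          # work on an 8-bit copy
    idx = rng.integers(0, flat.size, size=n_flips)     # which weights to hit
    bit = rng.integers(0, n_bits, size=n_flips)        # which bit in each weight
    flat[idx] ^= (1 << bit).astype(np.uint8)           # XOR toggles the chosen bit
    return flat.reshape(q_weights.shape)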
Tian, Pu, Hatcher, William Grant, Liao, Weixian, Yu, Wei, Blasch, Erik.  2021.  FALIoTSE: Towards Federated Adversarial Learning for IoT Search Engine Resiliency. 2021 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :290–297.
To improve efficiency and resource usage in data retrieval, an Internet of Things (IoT) search engine organizes a vast amount of scattered data and responds to client queries with processed results. Machine learning provides a deep understanding of complex patterns and enables enhanced feedback to users through well-trained models. Nonetheless, machine learning models are prone to adversarial attacks via the injection of elaborate perturbations, resulting in subverted outputs. In particular, adversarial attacks on time-series data demand urgent attention, as sensors in IoT systems are collecting an increasing volume of sequential data. This paper investigates adversarial attacks on time-series analysis in an IoT search engine (IoTSE) system. Specifically, we consider the Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) as our base model, implemented in a simulated federated learning scheme. We propose Federated Adversarial Learning for IoT Search Engine (FALIoTSE), which exploits the shared parameters of the federated model as the target for adversarial example generation and resiliency. Using a real-world smart parking garage dataset, the impact of an attack on FALIoTSE is demonstrated under various levels of perturbation. The experiments show that the training error increases significantly with noise injected into the gradients.
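As a rough illustration of the attack surface FALIoTSE targets, the sketch below crafts an FGSM-style perturbation of a time series using gradients taken through a shared (global) LSTM model, as in a federated setup. The LSTM dimensions, MSE loss, and epsilon are assumptions for the example, not the paper's configuration.

import torch
import torch.nn as nn

class SeqRegressor(nn.Module):
    """Illustrative LSTM regressor standing in for the shared federated model."""
    def __init__(self, n_features=1, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # predict the next value from the last step

def perturb_series(global_model, series, target, epsilon=0.05):
    """White-box, gradient-sign perturbation of a time series via the shared model."""
    series = series.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(global_model(series), target)
    loss.backward()
    return (series + epsilon * series.grad.sign()).detach()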
2019-01-16
Bai, X., Niu, W., Liu, J., Gao, X., Xiang, Y., Liu, J..  2018.  Adversarial Examples Construction Towards White-Box Q Table Variation in DQN Pathfinding Training. 2018 IEEE Third International Conference on Data Science in Cyberspace (DSC). :781–787.
As a new research hotspot in the field of artificial intelligence, deep reinforcement learning (DRL) has achieved success in fields such as robot control, computer vision, and natural language processing. At the same time, whether its applications can be attacked, and whether they offer strong resistance to such attacks, has become a hot topic in recent years. We therefore select the representative Deep Q Network (DQN) algorithm in deep reinforcement learning, use robotic automatic pathfinding as the adversarial application scenario for the first time, and attack the DQN algorithm through its vulnerability to adversarial samples. In this paper, we first use DQN to find the optimal path and analyze the rules of DQN pathfinding. Then, we propose a method that can effectively find vulnerable points via white-box Q-table variation during DQN pathfinding training. Finally, we build a simulation environment as a basic experimental platform to test our method. Across multiple experiments we successfully find adversarial examples, and the results show that the proposed supervised method is effective.
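As an illustration of what a white-box Q-table variation can look like, the sketch below takes a tabular Q function for a grid-world pathfinding task and nudges the runner-up action's value just above the greedy action at a chosen state, which changes the path the agent follows. The grid setup, action set, and perturbation rule are hypothetical, not the authors' exact attack.

import numpy as np

def misleading_perturbation(q_table: np.ndarray, state: int, delta: float = 1e-3):
    """Return a perturbed copy of the Q table whose greedy action at `state`
    differs from the original, using the smallest margin needed for the swap."""
    q = q_table.copy()
    best = int(np.argmax(q[state]))             # action the agent would take
    runner_up = int(np.argsort(q[state])[-2])   # second-best action
    margin = q[state, best] - q[state, runner_up]
    # Raise the runner-up just above the original best action.
    q[state, runner_up] += margin + delta
    return q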