Biblio

Filters: Keyword is black-box
2022-12-20
Lin, Xuanwei, Dong, Chen, Liu, Ximeng, Zhang, Yuanyuan.  2022.  SPA: An Efficient Adversarial Attack on Spiking Neural Networks using Spike Probabilistic. 2022 22nd IEEE International Symposium on Cluster, Cloud and Internet Computing (CCGrid). :366–375.
In the coming 6G era, spiking neural networks (SNNs) can serve as powerful processing tools in areas such as biometric recognition, AI robotics, autonomous driving, and healthcare, thanks to their strong artificial intelligence (AI) processing capabilities. However, within Cyber-Physical Systems (CPS), SNNs are surprisingly vulnerable to adversarial examples generated from benign samples with human-imperceptible noise, which can lead to serious consequences such as face-recognition anomalies, loss of control in autonomous driving, and wrong medical diagnoses. Only by fully understanding the principles of adversarial attacks can we defend against them. Most existing adversarial attacks cause severe accuracy degradation in trained SNNs, but they generate adversarial samples only by randomly adding, deleting, and flipping spike trains, making them easy to identify by filters or even by the human eye; their attack performance and speed can also be improved further. Hence, this paper presents the Spike Probabilistic Attack (SPA), which aims to generate adversarial samples with smaller perturbations, greater model-accuracy degradation, and faster iteration. SPA uses Poisson coding to generate spikes as probabilities, directly converting input data into spikes for faster speed and generating uniformly distributed perturbations for better attack performance. Moreover, an objective function is constructed to keep perturbations small while maintaining the attack success rate, and convergence is accelerated by adjusting its parameters. Experiments in both white-box and black-box settings evaluate the merits of SPA. The results show that the model's accuracy under white-box attack decreases by 9.25%–31.15%, outperforming other attacks, and the average success rate is 74.87% in the black-box setting. These results indicate that SPA achieves better attack performance than other existing attacks in the white-box setting and better transferability in the black-box setting.
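As a rough illustration of the Poisson rate coding the abstract describes, here is a minimal sketch (not the authors' implementation): normalized input intensities are converted into spike trains, where each neuron fires at each timestep with probability equal to its intensity. Perturbing these firing probabilities, rather than flipping individual spikes, is the core intuition behind SPA.

```python
import numpy as np

def poisson_encode(x, timesteps=100, rng=None):
    """Encode normalized intensities x in [0, 1] as spike trains: at each
    timestep, neuron i fires with probability x[i] (a Bernoulli
    approximation of Poisson rate coding)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.asarray(x, dtype=float), 0.0, 1.0)
    return (rng.random((timesteps,) + x.shape) < x).astype(np.uint8)

x = np.array([0.1, 0.5, 0.9])             # e.g., normalized pixel intensities
spikes = poisson_encode(x, timesteps=50)  # shape: (50, 3), entries 0 or 1
print(spikes.mean(axis=0))                # empirical rates approximate x
```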
2022-01-31
Wang, Xiying, Ni, Rongrong, Li, Wenjie, Zhao, Yao.  2021.  Adversarial Attack on Fake-Faces Detectors Under White and Black Box Scenarios. 2021 IEEE International Conference on Image Processing (ICIP). :3627–3631.
Generative Adversarial Network (GAN) models have been widely used in various fields. More recently, StyleGAN and StyleGAN2 have been developed to synthesize faces that are indistinguishable to the human eye, which could pose a threat to public security. Recent work has shown that it is possible to identify such fakes using powerful CNN classifiers, but the reliability of these techniques is unknown. In this paper, we therefore focus on generating content-preserving images from fake faces to spoof classifiers. Two GAN-based frameworks are proposed to achieve this goal in the white-box and black-box scenarios. For the white-box scenario, a network without up/down-sampling is proposed to generate face images that confuse the classifier. In the black-box scenario (where the classifier is unknown), real data is introduced to guide the GAN structure and make it adversarial, and a Real Extractor serves as an auxiliary network to constrain the feature distance between the generated images and the real data, enhancing the adversarial capability. Experimental results show that the proposed method effectively reduces the detection accuracy of forensic models, with good transferability.
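A minimal sketch of the feature-distance objective the abstract outlines for the black-box scenario; the stand-in modules and exact loss terms below are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins: in the paper these are the proposed generator and
# the auxiliary "Real Extractor" feature network.
generator = nn.Sequential(nn.Conv2d(3, 3, 3, padding=1))
extractor = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1))

def spoofing_loss(fake_faces, real_batch):
    adv = fake_faces + generator(fake_faces)        # content-preserving residual
    feat_adv = extractor(adv)                       # features of adversarial images
    feat_real = extractor(real_batch).detach()      # features of guiding real data
    feature_loss = F.mse_loss(feat_adv, feat_real)  # pull toward real features
    content_loss = F.l1_loss(adv, fake_faces)       # stay close to the input
    return feature_loss + content_loss

loss = spoofing_loss(torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))
loss.backward()   # backprop through the generator; real features are detached
print(float(loss))
```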
2021-03-09
Guibene, K., Ayaida, M., Khoukhi, L., Messai, N..  2020.  Black-box System Identification of CPS Protected by a Watermark-based Detector. 2020 IEEE 45th Conference on Local Computer Networks (LCN). :341–344.

The involvement of Cyber-Physical Systems (CPS) in critical infrastructures (e.g., smart grids, water distribution networks) has introduced new security issues and vulnerabilities to those systems. In this paper, we demonstrate that black-box system identification using Support Vector Regression (SVR) can efficiently build a model of a given industrial system even when that system is protected by a watermark-based detector. First, we briefly describe the Tennessee Eastman Process used in this study. Then, we present the principle of the detection scheme and the theory behind SVR. Finally, we design an efficient black-box SVR algorithm for the Tennessee Eastman Process. Extensive simulations demonstrate the efficiency of the proposed algorithm.
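A minimal sketch of black-box system identification with SVR, assuming logged input/output pairs from the plant and a NARX-style regressor that maps a short history window to the next output; the stand-in plant below is hypothetical, not the Tennessee Eastman Process.

```python
import numpy as np
from sklearn.svm import SVR

def build_dataset(u, y, lag=3):
    """Turn input/output logs (u_t, y_t) into supervised pairs: a window of
    past inputs and outputs predicts the next output."""
    X, Y = [], []
    for t in range(lag, len(y)):
        X.append(np.concatenate([u[t - lag:t], y[t - lag:t]]))
        Y.append(y[t])
    return np.array(X), np.array(Y)

u = np.random.rand(500)                            # hypothetical actuator commands
y = np.convolve(u, [0.5, 0.3, 0.2], mode="same")   # stand-in plant response
X, Y = build_dataset(u, y)
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, Y)
print("training fit R^2:", model.score(X, Y))
```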

2021-03-04
Algehed, M., Flanagan, C..  2020.  Transparent IFC Enforcement: Possibility and (In)Efficiency Results. 2020 IEEE 33rd Computer Security Foundations Symposium (CSF). :65–78.

Information Flow Control (IFC) is a collection of techniques for ensuring a no-write-down, no-read-up style security policy known as noninterference. Traditional methods for both static (e.g., type systems) and dynamic (e.g., runtime monitors) IFC suffer from untenable numbers of false alarms on real-world programs. Secure Multi-Execution (SME) promises to provide secure information flow control without modifying the behaviour of already-secure programs, a property commonly referred to as transparency. Implementations of SME exist for the web in the form of the FlowFox browser and as plug-ins to several programming languages. Furthermore, SME can in theory work in a black-box manner, meaning that it can be programming-language agnostic, making it well suited to securing legacy or third-party systems. As such, SME and its variants, such as Multiple Facets (MF) and Faceted Secure Multi-Execution (FSME), appear to be a family of panaceas for the security engineer. Given all these advantages, why are these techniques not ubiquitous in practice? The answer lies, partially, in runtime and memory overhead: SME and its variants are prohibitively expensive to deploy in many non-trivial situations. Why is this the case? On the surface, the reason is simple: the techniques in the SME family all rely on multi-execution, running all or parts of a program multiple times to achieve noninterference, which naturally incurs overhead. However, the predominant thinking in the IFC community has been that these overheads can be overcome. In this paper we argue that there are fundamental reasons to expect otherwise and prove two key theorems: (1) all transparent enforcement is polynomial-time equivalent to multi-execution, and (2) all black-box enforcement takes time exponential in the number of principals in the security lattice. Our methods also allow us to answer, in the affirmative, an open question about the possibility of secure and transparent enforcement of a security condition known as Termination-Insensitive Noninterference.
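A toy sketch of the multi-execution idea underlying SME, assuming a two-level lattice {LOW, HIGH}: the program runs once per level, and the LOW run sees a default value in place of the HIGH input, so the LOW output cannot depend on the secret. Each additional principal adds another run, which is the intuition behind the paper's exponential cost result for black-box enforcement.

```python
DEFAULT = 0  # public placeholder fed to the LOW run in place of the secret

def sme(program, low_input, high_input):
    """Run `program` once per security level and route outputs per level."""
    low_out, _ = program(low_input, DEFAULT)      # LOW run: secret replaced
    _, high_out = program(low_input, high_input)  # HIGH run: sees everything
    return low_out, high_out

def leaky(low, high):
    # The LOW output would leak `high` if the program ran only once.
    return (low + high, high * 2)

print(sme(leaky, 3, 42))  # (3, 84): LOW output computed from DEFAULT, not 42
```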

2020-09-04
Taori, Rohan, Kamsetty, Amog, Chu, Brenton, Vemuri, Nikita.  2019.  Targeted Adversarial Examples for Black Box Audio Systems. 2019 IEEE Security and Privacy Workshops (SPW). :15–20.
The application of deep recurrent networks to audio transcription has led to impressive gains in automatic speech recognition (ASR) systems. Many have demonstrated that small adversarial perturbations can fool deep neural networks into incorrectly predicting a specified target with high confidence. Current work on fooling ASR systems has focused on white-box attacks, in which the model architecture and parameters are known. In this paper, we adopt a black-box approach to adversarial generation, combining genetic algorithms and gradient estimation to solve the task. We achieve an 89.25% targeted attack similarity, with a 35% targeted attack success rate, after 3000 generations while maintaining 94.6% audio file similarity.
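A hedged sketch of the genetic-algorithm stage the abstract mentions (the paper additionally combines it with gradient estimation); `fitness` is an assumed black-box oracle, e.g. the similarity between the ASR transcription of a candidate waveform and the target phrase.

```python
import numpy as np

def genetic_attack(audio, fitness, pop_size=20, generations=100,
                   eps=0.005, elite=5, rng=None):
    """Evolve small waveform perturbations scored by a black-box fitness."""
    rng = np.random.default_rng() if rng is None else rng
    pop = [rng.normal(0.0, eps, audio.shape) for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(audio + p) for p in pop]
        best = [pop[i] for i in np.argsort(scores)[-elite:]]  # keep elites
        children = []
        while len(children) < pop_size - elite:
            i, j = rng.choice(elite, size=2, replace=False)   # pick two parents
            mask = rng.random(audio.shape) < 0.5              # uniform crossover
            child = np.where(mask, best[i], best[j])
            child += rng.normal(0.0, eps * 0.1, audio.shape)  # mutation
            children.append(np.clip(child, -eps, eps))        # bound perturbation
        pop = best + children
    return audio + max(pop, key=lambda p: fitness(audio + p))
```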
2019-01-16
Peake, Georgina, Wang, Jun.  2018.  Explanation Mining: Post Hoc Interpretability of Latent Factor Models for Recommendation Systems. Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. :2060–2069.
The widespread use of machine learning algorithms to drive decision-making has highlighted the critical importance of ensuring the interpretability of such models in order to engender trust in their output. State-of-the-art recommendation systems use black-box latent factor models that provide no explanation of why a recommendation has been made, as they abstract their decision processes to a high-dimensional latent space which is beyond the direct comprehension of humans. We propose a novel approach for extracting explanations from latent factor recommendation systems by training association rules on the output of a matrix factorisation black-box model. By taking advantage of the interpretable structure of association rules, we demonstrate that the predictive accuracy of the recommendation model can be maintained whilst yielding explanations with high fidelity to the black-box model on a unique industry dataset. Our approach mitigates the accuracy-interpretability trade-off whilst avoiding the need to sacrifice flexibility or use external data sources. We also contribute to the ill-defined problem of evaluating interpretability.
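As a rough illustration of mining association rules from a black-box recommender's output, here is a minimal sketch under the assumption that each user contributes a transaction of (liked items, the model's top-N recommendations); it is simplified to one-to-one rules and is not the authors' pipeline.

```python
from collections import Counter

def mine_rules(transactions, min_support=0.1, min_confidence=0.5):
    """Mine rules 'liked a => recommended r' from per-user transactions of
    (liked items, black-box model's recommendations)."""
    n = len(transactions)
    antecedents, pairs = Counter(), Counter()
    for liked, recommended in transactions:
        for a in liked:
            antecedents[a] += 1
            for r in recommended:
                pairs[(a, r)] += 1
    rules = {}
    for (a, r), c in pairs.items():
        if c / n >= min_support and c / antecedents[a] >= min_confidence:
            rules[(a, r)] = c / antecedents[a]   # rule confidence
    return rules

# Transactions would come from the trained MF model's top-N lists.
demo = [({"A", "B"}, {"X"}), ({"A"}, {"X", "Y"}), ({"B"}, {"Y"})]
print(mine_rules(demo))
```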
2017-11-01
Ben Jaballah, Wafa, Kheir, Nizar.  2016.  A Grey-Box Approach for Detecting Malicious User Interactions in Web Applications. Proceedings of the 8th ACM CCS International Workshop on Managing Insider Security Threats. :1–12.
Web applications are the core enabler for most Internet services today. Their standard interfaces allow them to be composed together in different ways in order to support different service workflows. While the modular composition of applications has considerably simplified the provisioning of new Internet services, it has also added new security challenges: the impact of a security breach can propagate through the chain far beyond the vulnerable application. To secure web applications, two distinct approaches have been commonly used in the literature. First, white-box approaches leverage the source code in order to detect and fix unintended flaws. Although they cover well the intrinsic flaws within each application, they can barely capture logic flaws that arise when connecting multiple applications within the same service. On the other hand, black-box approaches analyze the workflow of a service through a set of user interactions, while assuming only little information about its embedded applications. These approaches may have better coverage, but suffer from a high false-positive rate. So far, to the best of our knowledge, there is not yet a single solution that combines both approaches into a common framework. In this paper, we present a new grey-box approach that leverages the advantages of both white-box and black-box approaches. The core component of our system is a semi-supervised learning framework that first learns the nominal behavior of the service using a set of elementary user interactions, and then prunes from this nominal behavior any attacks that may have occurred during the learning phase. To do so, we leverage a graph-based representation of known attack scenarios that is built using a white-box approach. We demonstrate the use of our system through a practical use case, including real-world attack scenarios that we were able to detect and qualify using our approach.
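A toy sketch of the grey-box idea, under the assumption that sessions are sequences of page/action identifiers: learn a nominal transition graph from user interactions, then prune transitions that match a white-box-derived attack graph, so that attacks observed during learning are not whitelisted.

```python
from collections import defaultdict

def learn_nominal_graph(sessions):
    """Build a graph of observed page/action transitions from user sessions."""
    graph = defaultdict(set)
    for session in sessions:
        for src, dst in zip(session, session[1:]):
            graph[src].add(dst)
    return graph

def prune_attacks(nominal, attack_edges):
    """Remove transitions that match known (white-box-derived) attack edges."""
    for src, dst in attack_edges:
        nominal[src].discard(dst)
    return nominal

sessions = [["login", "search", "cart"], ["login", "admin", "dump_db"]]
attacks = {("admin", "dump_db")}   # hypothetical attack scenario edge
print(prune_attacks(learn_nominal_graph(sessions), attacks))
```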
2015-05-06
Ravindran, K., Rabby, M., Adiththan, A..  2014.  Model-based control of device replication for trusted data collection. 2014 Workshop on Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES). :1–6.

Voting among replicated data-collection devices is a means to achieve dependable data delivery to the end-user in a hostile environment. Failures may occur during the data-collection process, such as data corruption by malicious devices and security/bandwidth attacks on data paths. For a voting system, the QoS is characterized by how often correct data is delivered to the user in a timely manner and with low overhead. Prior works have focused on algorithm-correctness issues and performance engineering of the voting protocol mechanisms. In this paper, we study methods for autonomic management of device replication in the voting system to deal with situations where the available network bandwidth fluctuates, the fault parameters change unpredictably, and the devices have battery-energy constraints. We treat the voting system as a 'black box' with programmable I/O behaviors. A management module exercises macroscopic control of the voting box using situational inputs such as application priorities, network resources, battery energy, and external threat levels.
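A minimal sketch of the "voting box as black box" view, with hypothetical names and thresholds: a majority voter over replicated readings, plus a management rule that adjusts the replica count from situational inputs such as threat level and energy budget.

```python
from collections import Counter

def vote(readings):
    """Return the majority reading, or None if no strict majority exists."""
    value, count = Counter(readings).most_common(1)[0]
    return value if count > len(readings) / 2 else None

def adjust_replicas(n, threat_level, energy_budget):
    """Macroscopic control rule: trade replica count against energy."""
    if threat_level > 0.5 and energy_budget > n:
        return n + 2   # add replicas to mask more faulty devices
    if threat_level < 0.2 and n > 3:
        return n - 2   # shed replicas to save battery energy
    return n

readings = [42, 42, 17, 42, 42]   # one malicious device reports 17
print(vote(readings), adjust_replicas(5, threat_level=0.7, energy_budget=10))
```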