Biblio

Filters: Keyword is black-box models
2021-03-09
Rojas-Dueñas, G., Riba, J., Kahalerras, K., Moreno-Eguilaz, M., Kadechkar, A., Gomez-Pau, A.  2020.  Black-Box Modelling of a DC-DC Buck Converter Based on a Recurrent Neural Network. 2020 IEEE International Conference on Industrial Technology (ICIT). :456–461.
Artificial neural networks allow the identification of black-box models. This paper proposes a method for replicating the static and dynamic behavior of a DC-DC power converter using a recurrent nonlinear autoregressive exogenous (NARX) neural network. The proposed method trains the network on the measured inputs and outputs (currents and voltages) of a Buck converter. The approach is validated with simulated data from a realistic non-synchronous Buck converter model programmed in Simulink, as well as with experimental measurements. The network's predictions are compared against the actual system outputs to quantify the method's accuracy. Both simulation and experimental results show the feasibility and accuracy of the proposed black-box approach.
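The NARX structure named in the abstract is straightforward to prototype. Below is a minimal PyTorch sketch, not the authors' implementation: it assumes logged converter waveforms have already been arranged into lagged windows (y_hist, u_hist, and y_next are hypothetical tensor names), and it uses the standard series-parallel NARX form, in which measured past outputs are fed back as regressor inputs during training.

import torch
import torch.nn as nn

# Minimal series-parallel NARX regressor: predicts the next output sample
# y[k] from n_y past output samples and n_u past exogenous input samples.
class NARXNet(nn.Module):
    def __init__(self, n_y=3, n_u=3, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_y + n_u, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, y_hist, u_hist):
        # y_hist: (batch, n_y) lagged output voltages
        # u_hist: (batch, n_u) lagged input samples (e.g. duty cycle, input voltage)
        return self.net(torch.cat([y_hist, u_hist], dim=1))

def train_narx(model, y_hist, u_hist, y_next, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(y_hist, u_hist), y_next)
        loss.backward()
        opt.step()
    return model

Once trained, such a model can be run in closed loop (feeding its own predictions back as the lagged outputs) to emulate the converter's dynamic response.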
Cui, W., Li, X., Huang, J., Wang, W., Wang, S., Chen, J.  2020.  Substitute Model Generation for Black-Box Adversarial Attack Based on Knowledge Distillation. 2020 IEEE International Conference on Image Processing (ICIP). :648–652.
Although deep convolutional neural networks (CNNs) perform well in many computer vision tasks, their classification mechanism is highly vulnerable to the perturbations of adversarial attacks. In this paper, we propose a new algorithm that uses knowledge distillation to generate a substitute model for black-box CNN models. The proposed algorithm distills multiple CNN teacher models into a compact student model, which serves as a substitute for the black-box CNN model to be attacked. Black-box adversarial samples can then be generated on this substitute model using various white-box attack methods. In our experiments on ResNet18 and DenseNet121, training the substitute model with knowledge distillation boosts the attack success rate (ASR) by 20%.
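The distillation step at the core of this approach can be sketched compactly. The following PyTorch fragment is illustrative, not the paper's code: it uses the standard soft-label distillation loss (Hinton-style, with temperature T), and the multi-teacher aggregation by averaging is an assumption made here for illustration.

import torch
import torch.nn.functional as F

# Soft-label distillation loss: KL divergence between the temperature-softened
# teacher and student distributions, scaled by T^2 as in Hinton et al.
def distillation_loss(student_logits, teacher_logits, T=4.0):
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

# One plausible way to combine several teachers (an assumption here, not
# necessarily the paper's exact scheme): average their logits before softening.
def multi_teacher_loss(student_logits, teacher_logits_list, T=4.0):
    avg_teacher = torch.stack(teacher_logits_list).mean(dim=0)
    return distillation_loss(student_logits, avg_teacher, T)

The trained student then acts as the white-box surrogate on which transferable adversarial examples are crafted with standard gradient-based attacks.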
2021-03-01
Kuppa, A., Le-Khac, N.-A.  2020.  Black Box Attacks on Explainable Artificial Intelligence (XAI) Methods in Cyber Security. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
The cybersecurity community is slowly leveraging Machine Learning (ML) to combat ever-evolving threats. One of the biggest drivers for successful adoption of these models is how well domain experts and users are able to understand and trust their functionality. As these black-box models are employed to make important predictions, stakeholders are increasingly demanding transparency and explainability. Explanations supporting the output of ML models are crucial in cybersecurity, where experts require far more information from the model than a simple binary output for their analysis. Recent approaches in the literature have focused on three areas: (a) creating and improving explainability methods that help users better understand the internal workings of ML models and their outputs; (b) attacks on interpreters in the white-box setting; and (c) defining the exact properties and metrics of the explanations generated by models. However, they have not covered the security properties and threat models relevant to the cybersecurity domain, nor attacks on explainable models in black-box settings. In this paper, we bridge this gap by proposing a taxonomy for Explainable Artificial Intelligence (XAI) methods that covers the security properties and threat models relevant to the cybersecurity domain. We design a novel black-box attack for analyzing the consistency, correctness, and confidence security properties of gradient-based XAI methods. We validate our proposed system on three security-relevant datasets and models, and demonstrate that the attack achieves the attacker's goals of misleading both the classifier and the explanation report, and of misleading only the explainability method without affecting the classifier output. Our evaluation of the proposed approach shows promising results and can help in designing secure and robust XAI methods.
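To make the threat model concrete, here is a minimal query-only sketch of an explanation-targeting attack. It is not the paper's method, which specifically analyzes the consistency, correctness, and confidence properties of gradient-based XAI methods; the sketch only illustrates the attacker's core constraint of shifting the explanation while leaving the classifier's prediction unchanged. The names model and explain, and the random-search strategy, are assumptions made here for illustration.

import torch

# Query-only random-search attack on an explanation method: look for a small
# perturbation that shifts the attribution map while the classifier's
# predicted label stays fixed. `model` and `explain` are treated as black
# boxes that can only be queried.
def attack_explanation(model, explain, x, eps=0.05, queries=500):
    # x: a single input, shape (1, ...), values assumed to lie in [0, 1]
    label = model(x).argmax(dim=1)
    base_expl = explain(x)
    best, best_shift = x, 0.0
    for _ in range(queries):
        cand = (x + eps * torch.empty_like(x).uniform_(-1.0, 1.0)).clamp(0.0, 1.0)
        if model(cand).argmax(dim=1).item() != label.item():
            continue  # attacker's constraint: classifier output must not change
        shift = (explain(cand) - base_expl).abs().mean().item()
        if shift > best_shift:
            best, best_shift = cand, shift
    return best, best_shift

A defender can invert the same check: if small, label-preserving perturbations produce large swings in the attribution map, the explanation method fails the consistency property the paper examines.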