Biblio

Filters: Author is Khalid, F.
2021-04-27
Marchisio, A., Nanfa, G., Khalid, F., Hanif, M. A., Martina, M., Shafique, M.  2020.  Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks. 2020 International Joint Conference on Neural Networks (IJCNN). :1–8.
Spiking Neural Networks (SNNs) are claimed to offer advantages in biological plausibility and energy efficiency over standard Deep Neural Networks (DNNs). Recent works have shown that DNNs are vulnerable to adversarial attacks, i.e., small perturbations added to the input data can lead to targeted or random misclassifications. In this paper, we investigate the key research question: "Are SNNs secure?" Towards this, we perform a comparative study of the security vulnerabilities of SNNs and DNNs w.r.t. adversarial noise. Afterwards, we propose a novel black-box attack methodology, i.e., one requiring no knowledge of the internal structure of the SNN, which employs a greedy heuristic to automatically generate imperceptible and robust adversarial examples (i.e., attack images) for a given SNN. We perform an in-depth evaluation of a Spiking Deep Belief Network (SDBN) and a DNN with the same number of layers and neurons (for a fair comparison) to study the efficiency of our methodology and to understand the differences between SNNs and DNNs w.r.t. adversarial examples. Our work opens new avenues of research toward the robustness of SNNs, considering their similarities to the human brain's functionality.
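To make the attack setting concrete, below is a minimal sketch of the kind of greedy, query-only black-box search the abstract describes; the query interface, pixel-wise perturbation step, and query budget are illustrative assumptions, not the authors' published algorithm.

```python
import numpy as np

def greedy_blackbox_attack(query_fn, image, true_label,
                           eps=0.02, max_queries=1000):
    """Greedy black-box search for an adversarial example (sketch).

    query_fn(image) -> class probabilities; only input/output access
    to the model is assumed (no knowledge of the SNN's internals).
    `eps` bounds each per-pixel change to keep the noise small.
    """
    adv = image.copy()
    rng = np.random.default_rng(0)
    for _ in range(max_queries):
        probs = query_fn(adv)
        if probs.argmax() != true_label:
            return adv  # misclassification achieved
        # Perturb a random pixel in both directions and keep whichever
        # candidate lowers the true-class probability most (greedy step).
        i = rng.integers(adv.size)
        best, best_p = adv, probs[true_label]
        for delta in (eps, -eps):
            cand = adv.copy()
            cand.flat[i] = np.clip(cand.flat[i] + delta, 0.0, 1.0)
            p = query_fn(cand)[true_label]
            if p < best_p:
                best, best_p = cand, p
        adv = best
    return adv  # best effort within the query budget
```

Bounding each per-pixel change by eps is one simple way to approximate the imperceptibility constraint the paper emphasizes, while the query budget models the cost of probing a black-box classifier.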
2020-11-04
Khalid, F., Hanif, M. A., Rehman, S., Ahmed, R., Shafique, M.  2019.  TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks. 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS). :188–193.

Most of the data manipulation attacks on deep neural networks (DNNs) during the training stage introduce a perceptible noise that can be mitigated by preprocessing during inference or identified during the validation phase. Therefore, data poisoning attacks during inference (e.g., adversarial attacks) are becoming more popular. However, many of them do not consider the imperceptibility factor in their optimization algorithms and can be detected by correlation and structural similarity analysis, or are noticeable (e.g., by humans) in multi-level security systems. Moreover, the majority of inference attacks rely on some knowledge about the training dataset. In this paper, we propose a novel methodology which automatically generates imperceptible attack images by using the back-propagation algorithm on pre-trained DNNs, without requiring any information about the training dataset (i.e., completely training data-unaware). We present a case study on traffic sign detection using VGGNet trained on the German Traffic Sign Recognition Benchmarks dataset in an autonomous driving use case. Our results demonstrate that the generated attack images successfully perform misclassification while remaining imperceptible in both “subjective” and “objective” quality tests.
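As a rough, hedged illustration of a training-data-unaware attack driven by back-propagation on a pre-trained model, the PyTorch sketch below optimizes a single input image toward a target class while penalizing distortion; the Adam optimizer, loss weighting, and the L2 distortion term (a stand-in for the paper's correlation and structural-similarity imperceptibility checks) are assumptions, not the TrISec algorithm itself.

```python
import torch
import torch.nn.functional as F

def trisec_style_attack(model, image, target_class,
                        lr=0.01, steps=200, distortion_weight=10.0):
    """Training-data-unaware attack sketch in the spirit of TrISec.

    Only the pre-trained `model` and one input `image` are needed;
    no training data is used. The loss trades off misclassification
    against an L2 penalty that proxies imperceptibility.
    """
    model.eval()
    adv = image.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([adv], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = model(adv.unsqueeze(0))
        # Push the prediction toward the attacker-chosen target class...
        attack_loss = F.cross_entropy(logits, torch.tensor([target_class]))
        # ...while keeping the perturbation small (imperceptibility proxy).
        distortion = torch.norm(adv - image)
        loss = attack_loss + distortion_weight * distortion
        loss.backward()
        opt.step()
        with torch.no_grad():
            adv.clamp_(0.0, 1.0)  # keep the image in valid pixel range
    return adv.detach()
```

Note that everything here is computed from the pre-trained network and one image, which is what makes the approach training data-unaware.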

2018-04-11
Khalid, F., Hasan, S. R., Hasan, O., Awwad, F.  2017.  Behavior Profiling of Power Distribution Networks for Runtime Hardware Trojan Detection. 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS). :1316–1319.

Runtime hardware Trojan detection techniques are required in third-party IP-based SoCs as a last line of defense. Traditional techniques rely on a golden data model or on exotic signal processing techniques, such as those utilizing chaos theory or machine learning. Due to the cumbersome implementation of such techniques, it is highly impractical to embed them in hardware, which is a requirement in some mission-critical applications. In this paper, we propose a methodology that generates a digital power profile during the manufacturing test phase of the circuit under test. A simple processing mechanism, which requires minimal computation on the measured power signals, is proposed. As a proof of concept, we have applied the proposed methodology to a classical Advanced Encryption Standard circuit with 21 available Trojans. The experimental results show that the proposed methodology is able to detect 75% of the intrusions, with the potential of implementing the detection mechanism on-chip with minimal overhead compared to state-of-the-art techniques.
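The sketch below illustrates the general two-phase idea the abstract outlines: build a digital power profile from manufacturing-test traces, then flag runtime traces that deviate from it using only simple thresholding; the statistics, threshold parameters, and interfaces are illustrative assumptions rather than the paper's actual mechanism.

```python
import numpy as np

def build_power_profile(test_traces):
    """Digital power profile from manufacturing-test measurements:
    per-sample mean and spread across the recorded traces."""
    traces = np.asarray(test_traces)
    return traces.mean(axis=0), traces.std(axis=0)

def trojan_suspected(runtime_trace, profile, k=3.0, frac=0.05):
    """Flag a runtime power trace if more than `frac` of its samples
    fall outside mean +/- k*std of the stored profile. Plain
    thresholding keeps the on-chip computation minimal."""
    mean, std = profile
    deviations = np.abs(np.asarray(runtime_trace) - mean) > k * std + 1e-12
    return deviations.mean() > frac
```

Because the check reduces to a comparison against precomputed per-sample bounds, it avoids the heavyweight signal processing that makes chaos-theory or machine-learning detectors impractical to embed on-chip.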