Biblio

Found 104 results

Filters: Keyword is Perturbation methods
2022-02-04
Caskey, Susan A., Gunda, Thushara, Wingo, Jamie, Williams, Adam D..  2021.  Leveraging Resilience Metrics to Support Security System Analysis. 2021 IEEE International Symposium on Technologies for Homeland Security (HST). :1–7.
Resilience has been defined as a priority for the US critical infrastructure. This paper presents a process for incorporating resiliency-derived metrics into security system evaluations. To support this analysis, we used a multi-layer network model (MLN) reflecting the defined security system of a hypothetical nuclear power plant to define what metrics would be useful in understanding a system’s ability to absorb perturbation (i.e., system resilience). We defined measures focusing on criticality, rapidity, diversity, and confidence at each network layer, for each simulated adversary path, and for the system as a whole, as a basis for understanding the system’s resilience. For this hypothetical system, our metrics indicated the importance of physical infrastructure to overall system criticality, the relative confidence of physical sensors, and the lack of diversity in assessment activities (i.e., dependence on human evaluations). Refined model design and data outputs will enable more nuanced evaluations of temporal, geospatial, and human behavior considerations. Future studies can also extend these methodologies to capture the respond and recover aspects of resilience, further supporting the protection of critical infrastructure.
2022-01-31
Dai, Wei, Berleant, Daniel.  2021.  Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation. 2021 IEEE International Conference on Big Data (Big Data). :5085–5094.
Deep learning (DL) classifiers are often unstable in that they may change significantly when retested on perturbed or low-quality images. This paper adds to the fundamental body of work on the robustness of DL classifiers. We introduce a new two-dimensional benchmarking matrix to evaluate robustness of DL classifiers, and we also introduce a four-quadrant statistical visualization tool, including minimum accuracy, maximum accuracy, mean accuracy, and coefficient of variation, for benchmarking robustness of DL classifiers. To measure the robustness of DL classifiers, we create 69 comprehensive benchmarking image sets, including a clean set, sets with single-factor perturbations, and sets with two-factor perturbation conditions. After collecting experimental results, we first report that using two-factor perturbed images improves both robustness and accuracy of DL classifiers. The two-factor perturbation includes (1) two digital perturbations (salt & pepper noise and Gaussian noise) applied in both sequences, and (2) one digital perturbation (salt & pepper noise) and a geometric perturbation (rotation) applied in both sequences. All source code, related image sets, and results are shared on GitHub at https://github.com/caperock/robustai to support future academic research and industry projects.
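As an illustration of the kind of perturbation pipeline and four-quadrant statistics this abstract describes, the following sketch (illustrative only, not the authors' released code; function names and noise parameters are assumed) builds a two-factor perturbed image and summarizes accuracies with min, max, mean, and coefficient of variation:

    import numpy as np

    def add_salt_pepper(img, amount=0.02, rng=None):
        # Flip a random fraction of pixels to 0 or 255 (salt & pepper noise).
        rng = rng or np.random.default_rng(0)
        out = img.copy()
        mask = rng.random(img.shape) < amount
        out[mask] = rng.choice([0, 255], size=int(mask.sum()))
        return out

    def add_gaussian(img, sigma=8.0, rng=None):
        # Add zero-mean Gaussian noise, then clip back to the valid pixel range.
        rng = rng or np.random.default_rng(1)
        noisy = img.astype(float) + rng.normal(0.0, sigma, img.shape)
        return np.clip(noisy, 0, 255).astype(img.dtype)

    def two_factor(img):
        # One two-factor condition: two digital perturbations applied in sequence.
        return add_gaussian(add_salt_pepper(img))

    def robustness_summary(accuracies):
        # Min/max/mean accuracy and coefficient of variation across benchmark sets.
        acc = np.asarray(accuracies, dtype=float)
        return {"min": acc.min(), "max": acc.max(),
                "mean": acc.mean(), "cv": acc.std() / acc.mean()}

    # Hypothetical accuracies measured on clean, single-factor, and two-factor sets.
    image = np.random.default_rng(2).integers(0, 256, (32, 32), dtype=np.uint8)
    perturbed = two_factor(image)
    print(robustness_summary([0.91, 0.84, 0.77, 0.88]))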
Kumová, Věra, Pilát, Martin.  2021.  Beating White-Box Defenses with Black-Box Attacks. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Deep learning has achieved great results in the last decade; however, it is sensitive to so-called adversarial attacks - small perturbations of the input that cause the network to classify incorrectly. In recent years, a number of attacks and defenses against them have been described. Most of the defenses, however, focus on defending against gradient-based attacks. In this paper, we describe an evolutionary attack and show that the adversarial examples produced by the attack have different features than those from gradient-based attacks. We also show that these features mean that one of the state-of-the-art defenses fails to detect such attacks.
2022-01-25
Sun, Hao, Xu, Yanjie, Kuang, Gangyao, Chen, Jin.  2021.  Adversarial Robustness Evaluation of Deep Convolutional Neural Network Based SAR ATR Algorithm. 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS. :5263–5266.
Robustness, both to accidental and to malevolent perturbations, is a crucial determinant of the successful deployment of deep convolutional neural network based SAR ATR systems in various security-sensitive applications. This paper performs a detailed adversarial robustness evaluation of deep convolutional neural network based SAR ATR models across two publicly available SAR target recognition datasets. For each model, seven different adversarial perturbations, ranging from gradient-based optimization to self-supervised feature distortion, are generated for each testing image. Besides average recognition accuracy under adversarial attack, feature attribution techniques have also been adopted to analyze the feature diffusion effect of adversarial attacks, which promotes the understanding of the vulnerability of deep learning models.
2022-01-11
McCarthy, Andrew, Andriotis, Panagiotis, Ghadafi, Essam, Legg, Phil.  2021.  Feature Vulnerability and Robustness Assessment against Adversarial Machine Learning Attacks. 2021 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA). :1–8.
Whilst machine learning has been widely adopted across various domains, it is important to consider how such techniques may be susceptible to malicious users through adversarial attacks. Given a trained classifier, a malicious attack may attempt to craft a data observation whereby the data features purposefully trigger the classifier to yield incorrect responses. This has been observed in various image classification tasks, including falsifying road sign detection and facial recognition, which could have severe consequences in real-world deployment. In this work, we investigate how these attacks could impact network traffic analysis, and how a system could be induced to misclassify common network attacks such as DDoS. Using the CICIDS2017 data, we examine how vulnerable the data features used for intrusion detection are to perturbation attacks using FGSM adversarial examples. As a result, our method provides a defensive approach for assessing feature robustness that seeks to balance classification accuracy against minimising the attack surface of the feature space.
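For readers unfamiliar with FGSM, a minimal sketch of the perturbation step is shown below (a generic illustration, not the authors' experiment code; loss_grad is an assumed placeholder for the gradient of the model's loss with respect to the input features):

    import numpy as np

    def fgsm(x, y, loss_grad, epsilon=0.1, lower=0.0, upper=1.0):
        # Perturb x by epsilon in the direction of the sign of the loss gradient.
        grad = loss_grad(x, y)                  # placeholder gradient oracle
        x_adv = x + epsilon * np.sign(grad)     # one-step FGSM perturbation
        return np.clip(x_adv, lower, upper)     # keep features in a valid range

    # Toy usage with a made-up linear "loss gradient" (purely illustrative).
    w = np.array([0.5, -1.0, 2.0])
    toy_grad = lambda x, y: (1 - 2 * y) * w
    print(fgsm(np.array([0.2, 0.4, 0.6]), 1, toy_grad, epsilon=0.05))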
2022-01-10
Roy, Kashob Kumar, Roy, Amit, Mahbubur Rahman, A K M, Amin, M Ashraful, Ali, Amin Ahsan.  2021.  Structure-Aware Hierarchical Graph Pooling using Information Bottleneck. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Graph pooling is an essential ingredient of Graph Neural Networks (GNNs) in graph classification and regression tasks. For these tasks, different pooling strategies have been proposed to generate a graph-level representation by downsampling and summarizing nodes' features in a graph. However, most existing pooling methods are unable to capture distinguishable structural information effectively, and they are prone to adversarial attacks. In this work, we propose a novel pooling method named HIBPool, in which we leverage the Information Bottleneck (IB) principle that optimally balances the expressiveness and robustness of a model to learn representations of input data. Furthermore, we introduce a novel structure-aware Discriminative Pooling Readout (DiP-Readout) function to capture the informative local subgraph structures in the graph. Finally, our experimental results show that our model significantly outperforms other state-of-the-art methods on several graph classification benchmarks and is more resilient to feature-perturbation attacks than existing pooling methods. Source code at: https://github.com/forkkr/HIBPool.
2021-12-20
Sahay, Rajeev, Brinton, Christopher G., Love, David J..  2021.  Frequency-based Automated Modulation Classification in the Presence of Adversaries. ICC 2021 - IEEE International Conference on Communications. :1–6.
Automatic modulation classification (AMC) aims to improve the efficiency of crowded radio spectrums by automatically predicting the modulation constellation of wireless RF signals. Recent work has demonstrated the ability of deep learning to achieve robust AMC performance using raw in-phase and quadrature (IQ) time samples. Yet, deep learning models are highly susceptible to adversarial interference, which causes intelligent prediction models to misclassify received samples with high confidence. Furthermore, adversarial interference is often transferable, allowing an adversary to attack multiple deep learning models with a single perturbation crafted for a particular classification network. In this work, we present a novel receiver architecture consisting of deep learning models capable of withstanding transferable adversarial interference. Specifically, we show that adversarial attacks crafted to fool models trained on time-domain features are not easily transferable to models trained using frequency-domain features. In this capacity, we demonstrate classification performance improvements greater than 30% on recurrent neural networks (RNNs) and greater than 50% on convolutional neural networks (CNNs). We further demonstrate that our frequency feature-based classification models achieve accuracies greater than 99% in the absence of attacks.
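A minimal sketch of the frequency-domain feature extraction this abstract alludes to (an assumed formulation; the authors' exact preprocessing may differ) converts a frame of raw IQ samples into magnitude and phase spectra via the FFT:

    import numpy as np

    def iq_to_frequency_features(i_samples, q_samples):
        # Build a complex frame from I/Q samples and take its centered spectrum.
        frame = np.asarray(i_samples) + 1j * np.asarray(q_samples)
        spectrum = np.fft.fftshift(np.fft.fft(frame))
        magnitude = np.abs(spectrum) / len(frame)
        phase = np.angle(spectrum)
        return np.stack([magnitude, phase])    # shape (2, N) feature map

    # Hypothetical usage on a clean complex tone.
    t = np.arange(1024) / 1024
    i, q = np.cos(2 * np.pi * 50 * t), np.sin(2 * np.pi * 50 * t)
    print(iq_to_frequency_features(i, q).shape)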
2021-11-30
Kserawi, Fawaz, Malluhi, Qutaibah M..  2020.  Privacy Preservation of Aggregated Data Using Virtual Battery in the Smart Grid. 2020 IEEE 6th International Conference on Dependability in Sensor, Cloud and Big Data Systems and Application (DependSys). :106–111.
Smart Meters (SM) are IoT end devices used to collect user utility consumption with limited processing power on the edge of the smart grid (SG). While SMs have great applications in providing data analysis to the utility provider and consumers, private user information can be inferred from SM readings. To preserve user privacy, a number of methods have been developed that use perturbation, adding noise to alter the user load and hide consumer data. Most methods limit the amount of perturbation noise using differential privacy to preserve the benefits of data analysis. However, additive noise perturbation may have an undesirable effect on billing. Additionally, users may desire complete privacy without giving consent to having their data analyzed. We present a virtual battery model that uses perturbation with additive noise obtained from a virtual chargeable battery. The level of noise can be set either to make user data differentially private, preserving statistics, or to break differential privacy, discarding the benefits of data analysis in exchange for more privacy. Our model uses fog aggregation with authentication and encryption that employs lightweight cryptographic primitives. We use Diffie-Hellman key exchange for symmetric encryption of transferred data and a two-way challenge-response method for authentication.
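A minimal sketch of the virtual-battery perturbation idea follows (illustrative assumptions only: Laplace noise as used in differential privacy, with the cumulative offset clipped to a battery "capacity" so that billing totals stay bounded; the paper's exact model may differ):

    import numpy as np

    def perturb_readings(readings, scale=0.5, capacity=5.0, seed=0):
        rng = np.random.default_rng(seed)
        charge = 0.0                   # net energy borrowed from the virtual battery
        out = []
        for r in readings:
            noise = rng.laplace(0.0, scale)
            # Clip the noise so the cumulative offset stays within battery capacity.
            noise = np.clip(noise, -capacity - charge, capacity - charge)
            charge += noise
            out.append(r + noise)
        return np.array(out)

    # Hypothetical meter readings (kWh per interval).
    print(perturb_readings([1.2, 0.9, 1.5, 2.0]))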
2021-11-29
Yilmaz, Ibrahim, Siraj, Ambareen, Ulybyshev, Denis.  2020.  Improving DGA-Based Malicious Domain Classifiers for Malware Defense with Adversarial Machine Learning. 2020 IEEE 4th Conference on Information Communication Technology (CICT). :1–6.
Domain Generation Algorithms (DGAs) are used by adversaries to establish Command and Control (C&C) server communications during cyber attacks. Blacklists of known/identified C&C domains are used as one of the defense mechanisms. However, static blacklists generated by signature-based approaches can neither keep up with nor detect never-seen-before malicious domain names. To address this weakness, we applied a DGA-based malicious domain classifier using the Long Short-Term Memory (LSTM) method with a novel feature engineering technique. Our model achieves greater accuracy than a previously reported model. Additionally, we propose a new adversarial machine learning-based method to generate never-before-seen malware-related domain families. We augment the training dataset with these new samples to make the training of the models more effective in detecting never-before-seen malicious domain names. To protect blacklists of malicious domain names against adversarial access and modification, we devise secure data containers to store and transfer blacklists.
2021-11-08
Wilhjelm, Carl, Younis, Awad A..  2020.  A Threat Analysis Methodology for Security Requirements Elicitation in Machine Learning Based Systems. 2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C). :426–433.
Machine learning (ML) models are now a key component for many applications. However, machine learning based systems (MLBSs), those systems that incorporate them, have proven vulnerable to various new attacks as a result. Currently, there exists no systematic process for eliciting security requirements for MLBSs that incorporates the identification of adversarial machine learning (AML) threats with those of a traditional non-MLBS. In this research study, we explore the applicability of traditional threat modeling and existing attack libraries in addressing MLBS security in the requirements phase. Using an example MLBS, we examined the applicability of 1) DFD and STRIDE in enumerating AML threats; 2) Microsoft SDL AI/ML Bug Bar in ranking the impact of the identified threats; and 3) the Microsoft AML attack library in eliciting threat mitigations to MLBSs. Such a method has the potential to assist team members, even with only domain specific knowledge, to collaboratively mitigate MLBS threats.
2021-10-12
Gouk, Henry, Hospedales, Timothy M..  2020.  Optimising Network Architectures for Provable Adversarial Robustness. 2020 Sensor Signal Processing for Defence Conference (SSPD). :1–5.
Existing Lipschitz-based provable defences to adversarial examples only cover the L2 threat model. We introduce the first bound that makes use of Lipschitz continuity to provide a more general guarantee for threat models based on any Lp norm. Additionally, a new strategy is proposed for designing network architectures that exhibit superior provable adversarial robustness over conventional convolutional neural networks. Experiments are conducted to validate our theoretical contributions, show that the assumptions made during the design of our novel architecture hold in practice, and quantify the empirical robustness of several Lipschitz-based adversarial defence methods.
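The Lipschitz-based certification idea underlying such defenses can be stated in a general form (a standard illustration, not necessarily the paper's exact bound for arbitrary $\ell_p$ norms): if each logit difference $g_j(x) = f_y(x) - f_j(x)$ is Lipschitz with constant $L_j$ under the chosen $\ell_p$ norm, then the predicted class $y$ cannot change under any perturbation $\delta$ with

$$\|\delta\|_p < \min_{j \neq y} \frac{f_y(x) - f_j(x)}{L_j},$$

since $g_j(x+\delta) \geq g_j(x) - L_j\|\delta\|_p > 0$ for every $j \neq y$.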
Zhao, Haojun, Lin, Yun, Gao, Song, Yu, Shui.  2020.  Evaluating and Improving Adversarial Attacks on DNN-Based Modulation Recognition. GLOBECOM 2020 - 2020 IEEE Global Communications Conference. :1–5.
The discovery of adversarial examples poses a serious risk to deep neural networks (DNNs). By adding a subtle perturbation that is imperceptible to the human eye, a well-behaved DNN model can be easily fooled and completely change the prediction categories of the input samples. However, research on adversarial attacks in the field of modulation recognition mainly focuses on increasing the prediction error of the classifier, while ignoring the perceptual invisibility of the attack. Aiming at the task of DNN-based modulation recognition, this study designs the Fitting Difference as a metric to measure the perturbed waveforms and proposes a new method, the Nesterov Adam Iterative Method, to generate adversarial examples. We show that the proposed algorithm not only performs excellent white-box attacks but can also initiate attacks on a black-box model. Moreover, our method improves the perceptual invisibility of the perturbed waveforms to a certain degree, thereby reducing the risk of an attack being detected.
Zhong, Zhenyu, Hu, Zhisheng, Chen, Xiaowei.  2020.  Quantifying DNN Model Robustness to the Real-World Threats. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN). :150–157.
DNN models have suffered from adversarial example attacks, which lead to inconsistent prediction results. As opposed to gradient-based attacks, which assume white-box access to the model by the attacker, we focus on more realistic input perturbations from the real world and their actual impact on model robustness without any presence of attackers. In this work, we promote a standardized framework to quantify robustness against real-world threats. It is composed of a set of safety properties associated with common violations, a group of metrics to measure the minimal perturbation that causes each violation, and various criteria that reflect different aspects of model robustness. By revealing comparison results through this framework among 13 pre-trained ImageNet classifiers, three state-of-the-art object detectors, and three cloud-based content moderators, we deliver the status quo of real-world model robustness. Beyond that, we provide robustness benchmarking datasets for the community.
Chen, Jianbo, Jordan, Michael I., Wainwright, Martin J..  2020.  HopSkipJumpAttack: A Query-Efficient Decision-Based Attack. 2020 IEEE Symposium on Security and Privacy (SP). :1277–1294.
The goal of a decision-based adversarial attack on a trained model is to generate adversarial examples based solely on observing output labels returned by the targeted model. We develop HopSkipJumpAttack, a family of algorithms based on a novel estimate of the gradient direction using binary information at the decision boundary. The proposed family includes both untargeted and targeted attacks optimized for $\ell_2$ and $\ell_\infty$ similarity metrics respectively. Theoretical analysis is provided for the proposed algorithms and the gradient direction estimate. Experiments show HopSkipJumpAttack requires significantly fewer model queries than several state-of-the-art decision-based adversarial attacks. It also achieves competitive performance in attacking several widely-used defense mechanisms.
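The core of the attack is the label-only gradient-direction estimate at the decision boundary; a minimal sketch of that one step is given below (the full algorithm's step-size schedule and binary-search projection are omitted, and is_adversarial is an assumed label-only oracle):

    import numpy as np

    def estimate_direction(x_boundary, is_adversarial, delta=0.01, n_samples=100, seed=0):
        # Monte-Carlo estimate of the boundary normal from binary label feedback.
        rng = np.random.default_rng(seed)
        u = rng.normal(size=(n_samples,) + x_boundary.shape)
        u /= np.linalg.norm(u.reshape(n_samples, -1), axis=1).reshape(
            (n_samples,) + (1,) * x_boundary.ndim)
        # +1 if the perturbed point is still adversarial, -1 otherwise.
        phi = np.array([1.0 if is_adversarial(x_boundary + delta * ui) else -1.0
                        for ui in u])
        phi -= phi.mean()              # baseline to reduce estimator variance
        direction = (phi.reshape((n_samples,) + (1,) * x_boundary.ndim) * u).mean(axis=0)
        return direction / (np.linalg.norm(direction) + 1e-12)

    # Hypothetical oracle: the "adversarial" region is the half-space x[0] > 1.
    oracle = lambda x: x[0] > 1.0
    print(estimate_direction(np.array([1.0, 0.0]), oracle))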
2021-10-04
Liu, Yuan, Zhou, Pingqiang.  2020.  Defending Against Adversarial Attacks in Deep Learning with Robust Auxiliary Classifiers Utilizing Bit Plane Slicing. 2020 Asian Hardware Oriented Security and Trust Symposium (AsianHOST). :1–4.
Deep Neural Networks (DNNs) have been widely used in a variety of fields with great success. However, recent research indicates that DNNs are susceptible to adversarial attacks, which can easily fool well-trained DNNs without being detected by human eyes. In this paper, we propose to combine the target DNN model with robust bit plane classifiers to defend against adversarial attacks. This comes from our finding that successful attacks generate imperceptible perturbations, which mainly affect the low-order bits of pixel values in clean images. Hence, using bit planes instead of traditional RGB channels for convolution can effectively reduce the channel modification rate. We conduct experiments on the CIFAR-10 and GTSRB datasets. The results show that our defense method can effectively increase the model accuracy under attack on CIFAR-10 from 8.72% to 85.99% on average, without sacrificing accuracy on clean images.
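Bit-plane slicing itself is straightforward; a minimal sketch (illustrative, not the paper's classifier pipeline) decomposes an 8-bit image into its eight binary planes so that convolutions can operate on the high-order planes that small perturbations rarely reach:

    import numpy as np

    def bit_planes(img_uint8):
        # Return an array of shape (8, H, W); plane 0 is the least significant bit.
        img = np.asarray(img_uint8, dtype=np.uint8)
        return np.stack([(img >> b) & 1 for b in range(8)])

    img = np.array([[200, 13], [7, 255]], dtype=np.uint8)
    planes = bit_planes(img)
    print(planes.shape, planes[7])     # plane 7 holds the most significant bit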
2021-08-31
Di Noia, Tommaso, Malitesta, Daniele, Merra, Felice Antonio.  2020.  TAaMR: Targeted Adversarial Attack against Multimedia Recommender Systems. 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :1–8.
Deep learning classifiers are hugely vulnerable to adversarial examples, and their existence has raised cybersecurity concerns in many tasks, with an emphasis on malware detection, computer vision, and speech recognition. While there is considerable effort to investigate attacks and defense strategies in these tasks, only limited work explores the influence of targeted attacks on input data (e.g., images, textual descriptions, audio) used in multimedia recommender systems (MR). In this work, we examine the consequences of applying targeted adversarial attacks against the product images of a visual-based MR. We propose a novel adversarial attack approach, called Targeted Adversarial Attack against Multimedia Recommender Systems (TAaMR), to investigate how MR behavior changes when the images of a category of low-recommended products (e.g., socks) are perturbed, with slight, human-level image alterations, so that the deep neural classifier misclassifies them towards the class of more-recommended products (e.g., running shoes). We explore the TAaMR approach by studying the effect of two targeted adversarial attacks (i.e., FGSM and PGD) against input pictures of two state-of-the-art MRs (i.e., VBPR and AMR). Extensive experiments on two real-world recommender fashion datasets confirm the effectiveness of TAaMR in terms of changes to the recommendation lists while keeping the original human judgment of the perturbed images.
2021-08-02
Bouniot, Quentin, Audigier, Romaric, Loesch, Angélique.  2020.  Vulnerability of Person Re-Identification Models to Metric Adversarial Attacks. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW). :3450—3459.
Person re-identification (re-ID) is a key problem in smart supervision of camera networks. Over the past years, models using deep learning have become state of the art. However, it has been shown that deep neural networks are flawed with adversarial examples, i.e. human-imperceptible perturbations. Extensively studied for the task of image closed-set classification, this problem can also appear in the case of open-set retrieval tasks. Indeed, recent work has shown that we can also generate adversarial examples for metric learning systems such as re-ID ones. These models remain vulnerable: when faced with adversarial examples, they fail to correctly recognize a person, which represents a security breach. These attacks are all the more dangerous as they are impossible to detect for a human operator. Attacking a metric consists in altering the distances between the features of an attacked image and those of reference images, i.e. guides. In this article, we investigate different possible attacks depending on the number and type of guides available. From this metric attack family, two particularly effective attacks stand out. The first one, called Self Metric Attack, is a strong attack that does not need any image apart from the attacked image. The second one, called Furthest Negative Attack, makes full use of a set of images. Attacks are evaluated on commonly used datasets: Market1501 and DukeMTMC. Finally, we propose an efficient extension of the adversarial training protocol, adapted to metric learning, as a defense that increases the robustness of re-ID models.
Peng, Ye, Fu, Guobin, Luo, Yingguang, Yu, Qi, Li, Bin, Hu, Jia.  2020.  A Two-Layer Moving Target Defense for Image Classification in Adversarial Environment. 2020 IEEE 6th International Conference on Computer and Communications (ICCC). :410—414.
Deep learning plays an increasingly important role in various fields due to its superior performance, and it also achieves advanced recognition performance in the field of image classification. However, the vulnerability of deep learning in adversarial environments cannot be ignored, and the prediction result of the model is likely to be affected by small perturbations added to the samples by the adversary. In this paper, we propose a two-layer dynamic defense method based on a defensive techniques pool and a retrained branch model pool. First, we randomly select defense methods from the defense pool to process the input. The perturbation ability of adversarial samples preprocessed by different defense methods changes, which produces different classification results. In addition, we conduct adversarial training based on the original model and dynamically generate multiple branch models. The classification results of these branch models for the same adversarial sample are inconsistent. We can detect adversarial samples by using the inconsistencies in the output results of the two layers. The experimental results show that the two-layer dynamic defense method we designed achieves a good defense effect.
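A minimal sketch of the two-layer structure described above (assumed interfaces; the defense functions and branch models are stand-ins, not the paper's implementation): a defense is sampled at random from the pool, and the input is flagged as adversarial when the retrained branch models disagree on its label.

    import random

    def detect_adversarial(x, defense_pool, branch_models):
        defense = random.choice(defense_pool)                 # layer 1: random preprocessing
        x_proc = defense(x)
        labels = {model(x_proc) for model in branch_models}   # layer 2: branch-model vote
        consistent = len(labels) == 1
        return (not consistent), (labels.pop() if consistent else None)

    # Hypothetical defenses (e.g. quantization) and toy branch "models" on a scalar input.
    defenses = [lambda x: round(x, 1), lambda x: x]
    branches = [lambda x: int(x > 0.5), lambda x: int(x > 0.55)]
    # Whether 0.52 is flagged depends on the sampled defense; that randomness is the
    # moving-target element of the scheme.
    print(detect_adversarial(0.52, defenses, branches))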
2021-07-27
Bao, Zhida, Zhao, Haojun.  2020.  Evaluation of Adversarial Attacks Based on DL in Communication Networks. 2020 7th International Conference on Dependable Systems and Their Applications (DSA). :251–252.
Deep Neural Networks (DNNs) have strong capabilities for memorization, feature identification, and automatic analysis, solving various complex problems. However, DNN classifiers are notably fragile: adding several unnoticeable perturbations to the original examples leads to errors in classifier identification. In the field of communications, adversarial examples will greatly reduce the accuracy of signal identification, causing great information security risks. Considering that adversarial examples pose a serious threat to the security of DNN models, studying their generation mechanisms and testing their attack effects are critical to ensuring the information security of communication networks. This paper studies the generation of adversarial examples and their influence on the accuracy of DNN-based communication signal identification. Meanwhile, this paper studies the influence of adversarial examples under white-box and black-box models, and explores how factors such as perturbation levels and iteration steps affect the adversarial attack. The insights of this study would be helpful for ensuring the security of information networks and designing robust DNN communication networks.
Xu, Jiahui, Wang, Chen, Li, Tingting, Xiang, Fengtao.  2020.  Improved Adversarial Attack against Black-box Machine Learning Models. 2020 Chinese Automation Congress (CAC). :5907–5912.
The existence of adversarial samples calls into question the security of machine learning models in practical applications, especially black-box adversarial attacks, which are very close to actual application scenarios. Efficient search for black-box attack samples is helpful for training more robust models. We discuss the setting in which the attacker can obtain nothing except the final predicted label. For this problem, the current state-of-the-art method is Boundary Attack (BA) and its variants, such as Biased Boundary Attack (BBA); however, it still requires a large number of queries and consumes a lot of time. In this paper, we propose a novel method to address these shortcomings. First, we improve the algorithm for generating initial adversarial samples with smaller L2 distance. Second, we innovatively combine a swarm intelligence algorithm, Particle Swarm Optimization (PSO), with Biased Boundary Attack and propose the PSO-BBA method. Finally, we experiment on the ImageNet dataset and compare our algorithm with the baseline algorithm. The results show that: (1) our improved initial point selection algorithm effectively reduces the number of queries; (2) compared with the most advanced methods, our PSO-BBA method improves convergence speed while maintaining attack accuracy; (3) our method is effective for both targeted and untargeted attacks.
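A generic sketch of label-only black-box search with particle swarm optimization follows (illustrative only: the fitness is the L2 distance to the original sample, candidates that lose the adversarial label are penalized, and the biased-sampling details of the authors' PSO-BBA are omitted):

    import numpy as np

    def pso_blackbox(x_orig, x_adv_init, is_adversarial, n_particles=20, iters=50, seed=0):
        rng = np.random.default_rng(seed)
        dim = x_orig.size
        pos = x_adv_init + 0.01 * rng.normal(size=(n_particles, dim))
        vel = np.zeros_like(pos)
        # Minimize distance to the original while remaining adversarial.
        fitness = lambda x: (np.linalg.norm(x - x_orig)
                             if is_adversarial(x) else np.inf)
        pbest = pos.copy()
        pbest_f = np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_f.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            vel = 0.6 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel
            f = np.array([fitness(p) for p in pos])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = pos[improved], f[improved]
            gbest = pbest[pbest_f.argmin()].copy()
        return gbest

    # Hypothetical label-only oracle: "adversarial" means the first coordinate exceeds 1.
    oracle = lambda x: x[0] > 1.0
    print(pso_blackbox(np.zeros(3), np.array([2.0, 0.5, -0.5]), oracle))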
2021-07-08
Chaturvedi, Amit Kumar, Chahar, Meetendra Singh, Sharma, Kalpana.  2020.  Proposing Innovative Perturbation Algorithm for Securing Portable Data on Cloud Servers. 2020 9th International Conference System Modeling and Advancement in Research Trends (SMART). :360—364.
Cloud computing provides an open architecture and resource-sharing computing platform with a pay-per-use model. It is now a popular computing platform, and most new internet-based computing services are built on this innovation-supporting environment. We consider it innovation-supporting because developers here are focused on service design rather than on arranging the infrastructure, network, resource management, etc.; all of these are available in cloud computing on a hired basis. A big question that arises here is the security and privacy of data, because the service provider itself uses infrastructure, network, storage, processors, and other resources from third parties. The security and privacy of the portable user's data is therefore the main motivation for this research paper. In this paper, we propose an innovative perturbation algorithm, MAP(), to secure the portable user's data on the cloud server.
2021-06-30
DelVecchio, Matthew, Flowers, Bryse, Headley, William C..  2020.  Effects of Forward Error Correction on Communications Aware Evasion Attacks. 2020 IEEE 31st Annual International Symposium on Personal, Indoor and Mobile Radio Communications. :1—7.
Recent work has shown the impact of adversarial machine learning on deep neural networks (DNNs) developed for Radio Frequency Machine Learning (RFML) applications. While these attacks have been shown to be successful in disrupting the performance of an eavesdropper, they fail to fully support the primary goal of successful intended communication. To remedy this, a communications-aware attack framework was recently developed that allows for a more effective balance between the opposing goals of evasion and intended communication through the novel use of a DNN to intelligently create the adversarial communication signal. Given the near-ubiquitous usage of forward error correction (FEC) coding in the majority of deployed systems to correct errors that arise, incorporating FEC in this framework is a natural extension of this prior work and will allow for improved performance in more adverse environments. This work therefore contributes to the framework through improved loss functions and design considerations that incorporate inherent knowledge of the usage of FEC codes within the transmitted signal. Performance analysis shows that FEC coding improves the communications-aware adversarial attack even if no explicit knowledge of the coding scheme is assumed, and allows for improved performance over the prior art in balancing the opposing goals of evasion and intended communication.
Biroon, Roghieh A., Pisu, Pierluigi, Abdollahi, Zoleikha.  2020.  Real-time False Data Injection Attack Detection in Connected Vehicle Systems with PDE modeling. 2020 American Control Conference (ACC). :3267—3272.
Connected vehicles, as a promising concept of Intelligent Transportation Systems (ITS), are a potential solution to some of the existing challenges of emissions, traffic congestion, and fuel consumption. To achieve these goals, connectivity among vehicles through a wireless communication network is essential. However, vehicular communication networks suffer from reliability and security issues. Cyber-attacks aimed at disrupting the performance of connected vehicles can lead to catastrophic collisions and traffic congestion. In this study, we consider a platoon of connected vehicles equipped with Cooperative Adaptive Cruise Control (CACC) which are subjected to a specific type of cyber-attack, namely a "False Data Injection" attack. We develop a novel method to model the attack, with ghost vehicles injected into the connected vehicle network to disrupt the performance of the whole system. To aid the analysis, we use a Partial Differential Equation (PDE) model. Furthermore, we present a PDE model-based diagnostic scheme capable of detecting the false data injection attack and isolating the injection point of the attack in the platoon system. The proposed scheme is designed based on a PDE observer with measured velocity and acceleration feedback. Lyapunov stability theory is utilized to analytically verify the convergence of the observer under the no-attack scenario. Finally, the effectiveness of the proposed algorithm is evaluated in a simulation study.
2021-06-28
Wei, Wenqi, Liu, Ling, Loper, Margaret, Chow, Ka-Ho, Gursoy, Mehmet Emre, Truex, Stacey, Wu, Yanzhao.  2020.  Adversarial Deception in Deep Learning: Analysis and Mitigation. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :236–245.
The burgeoning success of deep learning has raised security and privacy concerns as more and more tasks involve sensitive data. Adversarial attacks in deep learning have emerged as one of the dominating security threats to a range of mission-critical deep learning systems and applications. This paper takes a holistic view to characterize adversarial examples in deep learning by studying their adverse effects, and presents an attack-independent countermeasure with three original contributions. First, we provide a general formulation of adversarial examples and elaborate on the basic principles of adversarial attack algorithm design. Then, we evaluate 15 adversarial attacks with a variety of evaluation metrics to study their adverse effects and costs. We further conduct three case studies to analyze the effectiveness of adversarial examples and to demonstrate their divergence across attack instances. We take advantage of the instance-level divergence of adversarial examples and propose a strategic input transformation teaming defense. The proposed defense methodology is attack-independent and capable of auto-repairing and auto-verifying the prediction decision made on the adversarial input. We show that the strategic input transformation teaming defense can achieve high defense success rates and is more robust, with high attack prevention success rates and low benign false-positive rates, compared to existing representative defense methods.
2021-06-02
Xiong, Yi, Li, Zhongkui.  2020.  Privacy Preserving Average Consensus by Adding Edge-based Perturbation Signals. 2020 IEEE Conference on Control Technology and Applications (CCTA). :712—717.
In this paper, the privacy-preserving average consensus problem for multi-agent systems with a strongly connected and weight-balanced graph is considered. In most existing consensus algorithms, the agents need to exchange their state information, which leads to the disclosure of their initial states. This might be undesirable because agents' initial states may contain important and sensitive information. To solve the problem, we propose a novel distributed algorithm, which can guarantee average consensus while preserving the agents' privacy. This algorithm assigns additive perturbation signals to the communication edges, and these perturbation signals are added to the original true states before information is exchanged. This ensures that direct disclosure of initial states is avoided. A rigorous analysis of our algorithm's privacy-preserving performance is then provided. For any individual agent in the network, we present a necessary and sufficient condition under which its privacy is preserved. The effectiveness of our algorithm is demonstrated by a numerical simulation.
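A simplified numerical sketch of the underlying idea follows (illustrative only: here zero-sum perturbations are added to the initial states, whereas the paper assigns perturbation signals to the communication edges; the weight matrix is an assumed doubly-stochastic example):

    import numpy as np

    def private_average_consensus(x0, W, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        n = len(x0)
        eta = rng.normal(size=n)
        eta -= eta.mean()              # zero-sum perturbations preserve the average
        x = np.asarray(x0, float) + eta  # exchanged values mask the true initial states
        for _ in range(iters):
            x = W @ x                  # standard consensus step with weight matrix W
        return x

    # Hypothetical 3-agent network with a doubly-stochastic weight matrix.
    W = np.array([[0.5, 0.25, 0.25], [0.25, 0.5, 0.25], [0.25, 0.25, 0.5]])
    x0 = [4.0, 1.0, 7.0]
    print(private_average_consensus(x0, W), np.mean(x0))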