Biblio

Filters: Keyword is Adversary Models
2021-12-20
Ebrahimabadi, Mohammad, Younis, Mohamed, Lalouani, Wassila, Karimi, Naghmeh.  2021.  A Novel Modeling-Attack Resilient Arbiter-PUF Design. 2021 34th International Conference on VLSI Design and 2021 20th International Conference on Embedded Systems (VLSID). :123–128.
Physically Unclonable Functions (PUFs) have been considered as promising lightweight primitives for random number generation and device authentication. Thanks to the imperfections occurring during the fabrication process of integrated circuits, each PUF generates a unique signature which can be used for chip identification. Although supposed to be unclonable, PUFs have been shown to be vulnerable to modeling attacks where a set of collected challenge response pairs are used for training a machine learning model to predict the PUF response to unseen challenges. Challenge obfuscation has been proposed to tackle the modeling attacks in recent years. However, knowing the obfuscation algorithm can help the adversary to model the PUF. This paper proposes a modeling-resilient arbiter-PUF architecture that benefits from the randomness provided by PUFs in concealing the obfuscation scheme. The experimental results confirm the effectiveness of the proposed structure in countering PUF modeling attacks.
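To make the modeling-attack threat concrete, the sketch below (a hedged illustration, not the paper's design) mounts the standard additive-delay-model attack on a plain, unprotected arbiter PUF; the simulated PUF, challenge length, and CRP count are assumptions chosen for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_stages, n_crps = 64, 20000                        # illustrative sizes

def parity_features(challenges):
    # Standard arbiter-PUF feature map: phi_i = prod_{j >= i} (1 - 2*c_j), plus a bias term.
    signs = 1 - 2 * challenges                      # map bits {0,1} to signs {+1,-1}
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((challenges.shape[0], 1))])

# Simulated PUF: responses follow a linear additive delay model with Gaussian stage delays.
true_w = rng.normal(size=n_stages + 1)
challenges = rng.integers(0, 2, size=(n_crps, n_stages))
responses = (parity_features(challenges) @ true_w > 0).astype(int)

# Modeling attack: train on half of the collected CRPs, predict responses to unseen challenges.
half = n_crps // 2
attacker = LogisticRegression(max_iter=1000)
attacker.fit(parity_features(challenges[:half]), responses[:half])
accuracy = attacker.score(parity_features(challenges[half:]), responses[half:])
print(f"prediction accuracy on unseen challenges: {accuracy:.3f}")   # typically well above 0.95
```

A challenge-obfuscation or concealment scheme such as the one proposed in the paper aims to break exactly this near-linear relation between challenges and responses.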
Sahay, Rajeev, Brinton, Christopher G., Love, David J..  2021.  Frequency-based Automated Modulation Classification in the Presence of Adversaries. ICC 2021 - IEEE International Conference on Communications. :1–6.
Automatic modulation classification (AMC) aims to improve the efficiency of crowded radio spectrums by automatically predicting the modulation constellation of wireless RF signals. Recent work has demonstrated the ability of deep learning to achieve robust AMC performance using raw in-phase and quadrature (IQ) time samples. Yet, deep learning models are highly susceptible to adversarial interference, which causes intelligent prediction models to misclassify received samples with high confidence. Furthermore, adversarial interference is often transferable, allowing an adversary to attack multiple deep learning models with a single perturbation crafted for a particular classification network. In this work, we present a novel receiver architecture consisting of deep learning models capable of withstanding transferable adversarial interference. Specifically, we show that adversarial attacks crafted to fool models trained on time-domain features are not easily transferable to models trained using frequency-domain features. In this capacity, we demonstrate classification performance improvements greater than 30% on recurrent neural networks (RNNs) and greater than 50% on convolutional neural networks (CNNs). We further demonstrate that our frequency feature-based classification models achieve accuracies greater than 99% in the absence of attacks.
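As a rough illustration of the frequency-domain idea (not the authors' architecture), the snippet below converts raw IQ samples into FFT magnitude/phase features that a downstream classifier could consume; the synthetic QPSK-like burst and its parameters are assumptions.

```python
import numpy as np

def iq_to_frequency_features(iq):
    """Convert complex IQ samples (shape [n_examples, n_samples]) to frequency-domain features:
    per-bin magnitude and phase of the FFT. The abstract's point is that perturbations crafted
    against time-domain models transfer poorly to classifiers trained on such features."""
    spectrum = np.fft.fft(iq, axis=-1)
    return np.stack([np.abs(spectrum), np.angle(spectrum)], axis=-1)

# Illustrative usage with a synthetic QPSK-like burst plus noise.
rng = np.random.default_rng(1)
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=(4, 128)) / np.sqrt(2)
noisy_iq = symbols + 0.05 * (rng.normal(size=symbols.shape) + 1j * rng.normal(size=symbols.shape))
features = iq_to_frequency_features(noisy_iq)
print(features.shape)   # (4, 128, 2): magnitude and phase per frequency bin
```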
Masuda, Hiroki, Kita, Kentaro, Koizumi, Yuki, Takemasa, Junji, Hasegawa, Toru.  2021.  Model Fragmentation, Shuffle and Aggregation to Mitigate Model Inversion in Federated Learning. 2021 IEEE International Symposium on Local and Metropolitan Area Networks (LANMAN). :1–6.
Federated learning is a privacy-preserving learning system where participants locally update a shared model with their own training data. Despite the advantage that training data are not sent to a server, there is still a risk that a state-of-the-art model inversion attack, which may be conducted by the server, infers training data from the models updated by the participants, referred to as individual models. A solution to prevent such attacks is differential privacy, where each participant adds noise to the individual model before sending it to the server. Differential privacy, however, sacrifices the quality of the shared model in exchange for ensuring that participants' training data are not leaked. This paper proposes a federated learning system that is resistant to model inversion attacks without sacrificing the quality of the shared model. The core idea is that each participant divides the individual model into model fragments, shuffles, and aggregates them to prevent adversaries from inferring training data. The other benefit of the proposed system is that the resulting shared model is identical to the shared model generated with naive federated learning.
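A minimal sketch of the fragment-shuffle-aggregate idea described above: each participant splits its update into additive fragments whose sum equals the original, so the aggregate is unchanged while no single fragment reveals an individual model. The participant count, fragment count, and dimensions are illustrative, and in the real system the shuffling happens across participants rather than in one local list.

```python
import numpy as np

rng = np.random.default_rng(2)
n_participants, n_fragments, dim = 5, 4, 10   # illustrative sizes

# Each participant's individual model update (e.g., flattened gradients).
updates = [rng.normal(size=dim) for _ in range(n_participants)]

def fragment(update, k):
    """Split an update into k additive fragments that sum back to the original."""
    parts = rng.normal(size=(k - 1, update.size))
    return np.vstack([parts, update - parts.sum(axis=0)])

# Fragments from all participants are pooled and shuffled, so no ordered subset
# can be attributed to an individual model.
all_fragments = np.vstack([fragment(u, n_fragments) for u in updates])
rng.shuffle(all_fragments)

# Aggregation: the shared model update equals naive federated averaging.
aggregate = all_fragments.sum(axis=0) / n_participants
assert np.allclose(aggregate, np.mean(updates, axis=0))
```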
Luo, Xinjian, Wu, Yuncheng, Xiao, Xiaokui, Ooi, Beng Chin.  2021.  Feature Inference Attack on Model Predictions in Vertical Federated Learning. 2021 IEEE 37th International Conference on Data Engineering (ICDE). :181–192.
Federated learning (FL) is an emerging paradigm for facilitating multiple organizations' data collaboration without revealing their private data to each other. Recently, vertical FL, where the participating organizations hold the same set of samples but with disjoint features and only one organization owns the labels, has received increased attention. This paper presents several feature inference attack methods to investigate the potential privacy leakages in the model prediction stage of vertical FL. The attack methods consider the most stringent setting, in which the adversary controls only the trained vertical FL model and the model predictions, relying on no background information about the attack target's data distribution. We first propose two specific attacks on the logistic regression (LR) and decision tree (DT) models, according to individual prediction output. We further design a general attack method based on multiple prediction outputs accumulated by the adversary to handle complex models, such as neural networks (NN) and random forest (RF) models. Experimental evaluations demonstrate the effectiveness of the proposed attacks and highlight the need for designing private mechanisms to protect the prediction outputs in vertical FL.
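To see why even a single prediction can leak information in vertical FL, here is a toy sketch for logistic regression under the abstract's strongest setting (the adversary controls the trained model and observes predictions). The party split and dimensions are assumptions, and the paper's general attacks on NN/RF models are far more involved.

```python
import numpy as np

rng = np.random.default_rng(3)

# Vertical FL logistic regression: adversary (party A) holds w_a, x_a; victim (party B) holds w_b, x_b.
w_a, w_b = rng.normal(size=3), rng.normal(size=2)
x_a, x_b = rng.normal(size=3), rng.normal(size=2)
bias = 0.1

logit = w_a @ x_a + w_b @ x_b + bias
prediction = 1.0 / (1.0 + np.exp(-logit))          # output of the joint model

# The adversary knows w_a, w_b, bias (it controls the trained model) and sees the prediction.
# Inverting the sigmoid isolates the victim's contribution w_b @ x_b exactly.
recovered_contribution = np.log(prediction / (1 - prediction)) - w_a @ x_a - bias
print(recovered_contribution, w_b @ x_b)           # identical up to floating point

# With a single-feature victim, or with predictions accumulated over many queries,
# individual feature values can then be inferred.
```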
Khorasgani, Hamidreza Amini, Maji, Hemanta K., Wang, Mingyuan.  2021.  Optimally-secure Coin-tossing against a Byzantine Adversary. 2021 IEEE International Symposium on Information Theory (ISIT). :2858–2863.
Ben-Or and Linial (1985) introduced the full information model for coin-tossing protocols involving n processors with unbounded computational power using a common broadcast channel for all their communications. For most adversarial settings, the characterization of the exact or asymptotically optimal protocols remains open. Furthermore, even for the settings where near-optimal asymptotic constructions are known, the exact constants or poly-logarithmic multiplicative factors involved are not entirely well-understood. This work studies n-processor coin-tossing protocols where every processor broadcasts an arbitrary-length message once. An adaptive Byzantine adversary, based on the messages broadcast so far, can corrupt k=1 processor. A bias-X coin-tossing protocol outputs 1 with probability X; otherwise, it outputs 0 with probability (1-X). A coin-tossing protocol's insecurity is the maximum change in the output distribution (in the statistical distance) that a Byzantine adversary can cause. Our objective is to identify bias-X coin-tossing protocols achieving near-optimal minimum insecurity for every X ∈ [0,1]. Lichtenstein, Linial, and Saks (1989) studied bias-X coin-tossing protocols in this adversarial model where each party broadcasts an independent and uniformly random bit. They proved that the elegant “threshold coin-tossing protocols” are optimal for all n and k. Furthermore, Goldwasser, Kalai, and Park (2015), Kalai, Komargodski, and Raz (2018), and Haitner and Karidi-Heller (2020) prove that k = O(√n · polylog(n)) corruptions suffice to fix the output of any bias-X coin-tossing protocol. These results encompass parties who send arbitrary-length messages, and each processor has multiple turns to reveal its entire message. We use an inductive approach to constructing coin-tossing protocols using a potential function as a proxy for measuring any bias-X coin-tossing protocol's susceptibility to attacks in our adversarial model. Our technique is inherently constructive and yields protocols that minimize the potential function. It is incidentally the case that the threshold protocols minimize the potential function, even for arbitrary-length messages. We demonstrate that these coin-tossing protocols' insecurity is a 2-approximation of the optimal protocol in our adversarial model. For any other X ∈ [0,1] that threshold protocols cannot realize, we prove that an appropriate (convex) combination of the threshold protocols is a 4-approximation of the optimal protocol. Finally, these results entail new (vertex) isoperimetric inequalities for density-X subsets of product spaces of arbitrary-size alphabets.
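For intuition, the sketch below implements a small threshold coin-tossing protocol in the one-uniform-bit-per-processor setting and computes its insecurity by exact recursion, under one simple adaptive-corruption model in which the adversary must fix the corrupted processor's bit before seeing its coin. The protocol size, threshold, and this particular adversary model are assumptions made for illustration, not the paper's exact setting.

```python
from functools import lru_cache
from math import comb

n, t = 7, 4          # illustrative: 7 processors, output 1 iff at least 4 broadcast a 1

def honest_probability():
    # Bias X of the unattacked threshold protocol.
    return sum(comb(n, j) for j in range(t, n + 1)) / 2 ** n

@lru_cache(maxsize=None)
def best(i, ones, used, maximize):
    """Best achievable Pr[output = 1] after i broadcasts with `ones` ones seen so far,
    for an adaptive adversary that may corrupt at most one processor (`used` = budget spent)."""
    if i == n:
        return 1.0 if ones >= t else 0.0
    honest = 0.5 * best(i + 1, ones + 1, used, maximize) + 0.5 * best(i + 1, ones, used, maximize)
    if used:
        return honest
    pick = max if maximize else min
    corrupt = pick(best(i + 1, ones + 1, True, maximize), best(i + 1, ones, True, maximize))
    return pick(honest, corrupt)

x = honest_probability()
insecurity = max(best(0, 0, False, True) - x, x - best(0, 0, False, False))
print(f"bias X = {x:.4f}, insecurity against one adaptive corruption = {insecurity:.4f}")
```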
Künnemann, Robert, Garg, Deepak, Backes, Michael.  2021.  Accountability in the Decentralised-Adversary Setting. 2021 IEEE 34th Computer Security Foundations Symposium (CSF). :1–16.
A promising paradigm in protocol design is to hold parties accountable for misbehavior, instead of postulating that they are trustworthy. Recent approaches in defining this property, called accountability, characterized malicious behavior as a deviation from the protocol that causes a violation of the desired security property, but did so under the assumption that all deviating parties are controlled by a single, centralized adversary. In this work, we investigate the setting where multiple parties can deviate with or without coordination in a variant of the applied-π calculus. We first demonstrate that, under realistic assumptions, it is impossible to determine all misbehaving parties; however, we show that accountability can be relaxed to exclude causal dependencies that arise from the behavior of deviating parties, and not from the protocol as specified. We map out the design space for the relaxation, point out protocol classes separating these notions and define conditions under which we can guarantee fairness and completeness. Most importantly, we discover under which circumstances it is correct to consider accountability in the single-adversary setting, where this property can be verified with off-the-shelf protocol verification tools.
Nasr, Milad, Song, Shuang, Thakurta, Abhradeep, Papernot, Nicolas, Carlini, Nicholas.  2021.  Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning. 2021 IEEE Symposium on Security and Privacy (SP). :866–882.
Differentially private (DP) machine learning allows us to train models on private data while limiting data leakage. DP formalizes this data leakage through a cryptographic game, where an adversary must predict if a model was trained on a dataset D, or a dataset D′ that differs in just one example. If observing the training algorithm does not meaningfully increase the adversary's odds of successfully guessing which dataset the model was trained on, then the algorithm is said to be differentially private. Hence, the purpose of privacy analysis is to upper bound the probability that any adversary could successfully guess which dataset the model was trained on. In our paper, we instantiate this hypothetical adversary in order to establish lower bounds on the probability that this distinguishing game can be won. We use this adversary to evaluate the importance of the adversary capabilities allowed in the privacy analysis of DP training algorithms. For DP-SGD, the most common method for training neural networks with differential privacy, our lower bounds are tight and match the theoretical upper bound. This implies that in order to prove better upper bounds, it will be necessary to make use of additional assumptions. Fortunately, we find that our attacks are significantly weaker when additional (realistic) restrictions are put in place on the adversary's capabilities. Thus, in the practical setting common to many real-world deployments, there is a gap between our lower bounds and the upper bounds provided by the analysis: differential privacy is conservative and adversaries may not be able to leak as much information as suggested by the theoretical bound.
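The distinguishing game above has a standard auditing-style consequence that instantiated adversaries exploit: if an attack wins with true-positive rate TPR and false-positive rate FPR, then (ε, δ)-DP forces TPR ≤ e^ε·FPR + δ, so any observed attack yields a lower bound on ε. The sketch below computes that bound for made-up attack rates; it illustrates the calculation only, not the paper's specific adversaries.

```python
import math

def empirical_epsilon_lower_bound(tpr, fpr, delta=1e-5):
    """Lower bound on epsilon implied by an attack that wins the D vs D' distinguishing game
    with the given rates: (eps, delta)-DP requires tpr <= exp(eps) * fpr + delta."""
    if fpr <= 0 or tpr <= delta:
        return float("inf")
    return math.log((tpr - delta) / fpr)

# Hypothetical attack outcomes against a DP-SGD model (illustrative numbers only).
print(empirical_epsilon_lower_bound(tpr=0.60, fpr=0.05))   # ~2.48: the analysis cannot claim eps below this
print(empirical_epsilon_lower_bound(tpr=0.12, fpr=0.05))   # ~0.88: a weaker adversary gives a weaker bound
```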
Suresh, Vinayak, Ruzomberka, Eric, Love, David J..  2021.  Stochastic-Adversarial Channels: Online Adversaries With Feedback Snooping. 2021 IEEE International Symposium on Information Theory (ISIT). :497–502.
The growing need for reliable communication over untrusted networks has caused a renewed interest in adversarial channel models, which often behave much differently than traditional stochastic channel models. Of particular practical use is the assumption of a causal or online adversary who is limited to causal knowledge of the transmitted codeword. In this work, we consider stochastic-adversarial mixed noise models. In the setup considered, a transmit node (Alice) attempts to communicate with a receive node (Bob) over a binary erasure channel (BEC) or binary symmetric channel (BSC) in the presence of an online adversary (Calvin) who can erase or flip up to a certain number of bits at the input of the channel. Calvin knows the encoding scheme and has strict causal access to Bob's reception through feedback snooping. For erasures, we provide a complete capacity characterization with and without transmitter feedback. For bit-flips, we provide converse and achievability bounds.
Buccafurri, Francesco, De Angelis, Vincenzo, Idone, Maria Francesca, Labrini, Cecilia.  2021.  A Distributed Location Trusted Service Achieving k-Anonymity against the Global Adversary. 2021 22nd IEEE International Conference on Mobile Data Management (MDM). :133–138.
When location-based services (LBS) are delivered, location data should be protected against honest-but-curious LBS providers, since such data act as quasi-identifiers. One of the existing approaches to achieving this goal is location k-anonymity, which leverages the presence of a trusted party, called location trusted service (LTS), playing the role of anonymizer. A drawback of this approach is that the location trusted service is a single point of failure and traces all the users. Moreover, the protection is completely nullified if a global passive adversary is allowed, able to monitor the flow of messages, as the source of the query can be identified despite location k-anonymity. In this paper, we propose a distributed and hierarchical LTS model, overcoming both the above drawbacks. Moreover, position notification is used as cover traffic to hide queries and multicast is minimally adopted to hide responses, to preserve k-anonymity against the global adversary as well, thus enabling LBS to be delivered within social networks.
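A toy sketch of the basic location k-anonymity primitive an anonymizer provides (the paper's contribution is to distribute this role hierarchically and add cover traffic, which this sketch does not attempt): the querier's position is hidden inside a set of k plausible positions. User positions and k are assumptions.

```python
import random

def k_anonymous_cloak(querier_pos, nearby_positions, k=5):
    """Return a set of k positions containing the querier's, so the LBS provider
    cannot single out the query source among the k users."""
    by_distance = sorted(nearby_positions,
                         key=lambda p: (p[0] - querier_pos[0]) ** 2 + (p[1] - querier_pos[1]) ** 2)
    cloak = [querier_pos] + by_distance[:k - 1]
    random.shuffle(cloak)                    # do not reveal which entry is the real querier
    return cloak

random.seed(0)
users = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20)]
print(k_anonymous_cloak((50.0, 50.0), users, k=5))
```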
Janapriya, N., Anuradha, K., Srilakshmi, V..  2021.  Adversarial Deep Learning Models With Multiple Adversaries. 2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA). :522–525.
Adversarial machine learning algorithms address adversarial example generation, producing false input data capable of fooling machine learning models. As the term implies, an adversary is an opponent or rival of the model. To motivate ways of strengthening machine learning models, this work discusses their weaknesses and how misclassification can be induced during the learning cycle. Existing approaches, such as building adversarial models and devising more robust ML algorithms, frequently ignore the semantics and overall structure of the ML pipeline. This research develops an adversarial learning algorithm that considers a coordinated representation of all input characteristics, with Convolutional Neural Networks (CNNs) treated explicitly. The algorithm expresses minimal perturbations to the data, represented over positive and negative class labels, that yield inputs subsequently misclassified by the CNN. The final results point toward a combination of game theory and evolutionary computation that is well suited to protecting deep learning models against such weaknesses, which are reproduced as attack scenarios against various adversaries.
2021-01-28
Pham, L. H., Albanese, M., Chadha, R., Chiang, C.-Y. J., Venkatesan, S., Kamhoua, C., Leslie, N..  2020.  A Quantitative Framework to Model Reconnaissance by Stealthy Attackers and Support Deception-Based Defenses. :1—9.

In recent years, persistent cyber adversaries have developed increasingly sophisticated techniques to evade detection. Once adversaries have established a foothold within the target network, using seemingly-limited passive reconnaissance techniques, they can develop significant network reconnaissance capabilities. Cyber deception has been recognized as a critical capability to defend against such adversaries, but, without an accurate model of the adversary's reconnaissance behavior, current approaches are ineffective against advanced adversaries. To address this gap, we propose a novel model to capture how advanced, stealthy adversaries acquire knowledge about the target network and establish and expand their foothold within the system. This model quantifies the cost and reward, from the adversary's perspective, of compromising and maintaining control over target nodes. We evaluate our model through simulations in the CyberVAN testbed, and indicate how it can guide the development and deployment of future defensive capabilities, including high-interaction honeypots, so as to influence the behavior of adversaries and steer them away from critical resources.

Kariyappa, S., Qureshi, M. K..  2020.  Defending Against Model Stealing Attacks With Adaptive Misinformation. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). :767—775.

Deep Neural Networks (DNNs) are susceptible to model stealing attacks, which allows a data-limited adversary with no knowledge of the training dataset to clone the functionality of a target model, just by using black-box query access. Such attacks are typically carried out by querying the target model using inputs that are synthetically generated or sampled from a surrogate dataset to construct a labeled dataset. The adversary can use this labeled dataset to train a clone model, which achieves a classification accuracy comparable to that of the target model. We propose "Adaptive Misinformation" to defend against such model stealing attacks. We identify that all existing model stealing attacks invariably query the target model with Out-Of-Distribution (OOD) inputs. By selectively sending incorrect predictions for OOD queries, our defense substantially degrades the accuracy of the attacker's clone model (by up to 40%), while minimally impacting the accuracy (< 0.5%) for benign users. Compared to existing defenses, our defense has a significantly better security vs accuracy trade-off and incurs minimal computational overhead.
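The defensive idea can be sketched in a few lines. This is an illustration only: the actual defense adaptively blends misinformation based on an OOD detector, rather than hard-switching on a softmax threshold as the toy below does, and the threshold and scores here are assumptions.

```python
import numpy as np

def serve_prediction(probs, ood_threshold=0.5):
    """Answer in-distribution queries honestly and suspected OOD queries with a misleading
    distribution; maximum softmax confidence serves as the OOD score purely for illustration."""
    probs = np.asarray(probs, dtype=float)
    if probs.max() >= ood_threshold:
        return probs                                  # looks in-distribution: answer honestly
    misleading = np.ones_like(probs)
    misleading[np.argmax(probs)] = 0.0                # suppress the likely-correct class
    return misleading / misleading.sum()              # poisons labels collected by a cloning adversary

print(serve_prediction([0.05, 0.85, 0.10]))           # confident query -> honest output
print(serve_prediction([0.30, 0.36, 0.34]))           # low-confidence (likely OOD) -> misinformation
```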

Collins, B. C., Brown, P. N..  2020.  Exploiting an Adversary’s Intentions in Graphical Coordination Games. 2020 American Control Conference (ACC). :4638—4643.

How does information regarding an adversary's intentions affect optimal system design? This paper addresses this question in the context of graphical coordination games where an adversary can indirectly influence the behavior of agents by modifying their payoffs. We study a situation in which a system operator must select a graph topology in anticipation of the action of an unknown adversary. The designer can limit her worst-case losses by playing a security strategy, effectively planning for an adversary which intends maximum harm. However, fine-grained information regarding the adversary's intention may help the system operator to fine-tune the defenses and obtain better system performance. In a simple model of adversarial behavior, this paper asks how much a system operator can gain by fine-tuning a defense for known adversarial intent. We find that if the adversary is weak, a security strategy is approximately optimal for any adversary type; however, for moderately-strong adversaries, security strategies are far from optimal.

Ganji, F., Amir, S., Tajik, S., Forte, D., Seifert, J.-P..  2020.  Pitfalls in Machine Learning-based Adversary Modeling for Hardware Systems. 2020 Design, Automation Test in Europe Conference Exhibition (DATE). :514—519.

The concept of the adversary model has been widely applied in the context of cryptography. When designing a cryptographic scheme or protocol, the adversary model plays a crucial role in the formalization of the capabilities and limitations of potential attackers. These models further enable the designer to verify the security of the scheme or protocol under investigation. Although being well established for conventional cryptanalysis attacks, adversary models associated with attackers enjoying the advantages of machine learning techniques have not yet been developed thoroughly. In particular, when it comes to composed hardware, often being security-critical, the lack of such models has become increasingly noticeable in the face of advanced, machine learning-enabled attacks. This paper aims at exploring the adversary models from the machine learning perspective. In this regard, we provide examples of machine learning-based attacks against hardware primitives, e.g., obfuscation schemes and hardware root-of-trust, claimed to be infeasible. We demonstrate, however, that this assumption becomes invalid because inaccurate adversary models have been considered in the literature.

Nweke, L. O., Weldehawaryat, G. Kahsay, Wolthusen, S. D..  2020.  Adversary Model for Attacks Against IEC 61850 Real-Time Communication Protocols. 2020 16th International Conference on the Design of Reliable Communication Networks DRCN 2020. :1—8.

Adversarial models are well-established for cryptographic protocols, but distributed real-time protocols have requirements that these abstractions are not intended to cover. The IEEE/IEC 61850 standard for communication networks and systems for power utility automation in particular not only requires distributed processing, but in case of the generic object oriented substation events and sampled value (GOOSE/SV) protocols also hard real-time characteristics. This motivates the desire to include both quality of service (QoS) and explicit network topology in an adversary model built on a π-calculus process algebraic formalism from earlier work. This allows reasoning over process states, placement of adversarial entities and communication behaviour. We demonstrate the use of our model for the simple case of a replay attack against the publish/subscribe GOOSE/SV subprotocol, showing bounds for non-detectability of such an attack.
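As a concrete example of the kind of attack and check the model is meant to reason about, the sketch below implements a naive subscriber-side replay filter over the GOOSE state/sequence counters (stNum/sqNum). The frame representation is a simplified assumption, and whether such a heuristic survives an adversary with the QoS and topology capabilities in the paper's model is exactly what their formalism is designed to analyse.

```python
from dataclasses import dataclass

@dataclass
class GooseFrame:
    st_num: int      # increments when the published data changes (a new event)
    sq_num: int      # increments on each retransmission of the same event
    timestamp: float

class ReplayMonitor:
    """Flag GOOSE frames whose counters do not advance monotonically -- a simple
    subscriber-side heuristic against replayed publications (illustrative only;
    it does not address an adversary that can also delay or reorder traffic)."""
    def __init__(self):
        self.last = None

    def accept(self, frame: GooseFrame) -> bool:
        if self.last is None:
            self.last = frame
            return True
        fresh = (frame.st_num > self.last.st_num or
                 (frame.st_num == self.last.st_num and frame.sq_num > self.last.sq_num))
        if fresh:
            self.last = frame
        return fresh

monitor = ReplayMonitor()
print(monitor.accept(GooseFrame(5, 0, 0.000)))   # True: new event
print(monitor.accept(GooseFrame(5, 1, 0.004)))   # True: retransmission
print(monitor.accept(GooseFrame(5, 0, 0.008)))   # False: replayed earlier frame
```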

Bhattacharya, A., Ramachandran, T., Banik, S., Dowling, C. P., Bopardikar, S. D..  2020.  Automated Adversary Emulation for Cyber-Physical Systems via Reinforcement Learning. 2020 IEEE International Conference on Intelligence and Security Informatics (ISI). :1—6.

Adversary emulation is an offensive exercise that provides a comprehensive assessment of a system’s resilience against cyber attacks. However, adversary emulation is typically a manual process, making it costly and hard to deploy in cyber-physical systems (CPS) with complex dynamics, vulnerabilities, and operational uncertainties. In this paper, we develop an automated, domain-aware approach to adversary emulation for CPS. We formulate a Markov Decision Process (MDP) model to determine an optimal attack sequence over a hybrid attack graph with cyber (discrete) and physical (continuous) components and related physical dynamics. We apply model-based and model-free reinforcement learning (RL) methods to solve the discrete-continuous MDP in a tractable fashion. As a baseline, we also develop a greedy attack algorithm and compare it with the RL procedures. We summarize our findings through a numerical study on sensor deception attacks in buildings to compare the performance and solution quality of the proposed algorithms.
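To ground the greedy baseline mentioned above, here is a toy version over a made-up hybrid attack graph with per-action cost and reward. The graph, budget, and numbers are assumptions, and the paper's RL agents solve a much richer MDP that also includes continuous physical dynamics.

```python
# A toy hybrid attack graph: nodes are attacker footholds, edges are exploit actions
# annotated with (cost, reward). The greedy baseline repeatedly takes the reachable
# action with the highest reward-to-cost ratio until the budget is spent.
attack_graph = {
    "internet":     [("webserver", 2.0, 3.0)],
    "webserver":    [("historian", 3.0, 4.0), ("hmi", 4.0, 6.0)],
    "hmi":          [("plc_setpoint", 5.0, 10.0)],
    "historian":    [],
    "plc_setpoint": [],
}

def greedy_emulation(graph, start="internet", budget=10.0):
    owned, trace, spent = {start}, [], 0.0
    while True:
        frontier = [(dst, c, r) for src in owned for dst, c, r in graph[src]
                    if dst not in owned and spent + c <= budget]
        if not frontier:
            return trace
        dst, c, r = max(frontier, key=lambda e: e[2] / e[1])
        owned.add(dst)
        spent += c
        trace.append((dst, c, r))

print(greedy_emulation(attack_graph))
# Greedy trace, e.g. webserver -> hmi -> historian under the 10.0 budget; an RL agent would
# instead learn a policy over the same MDP, including the physical dynamics the toy graph omits.
```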

Beemer, A., Graves, E., Kliewer, J., Kosut, O., Yu, P..  2020.  Authentication with Mildly Myopic Adversaries. 2020 IEEE International Symposium on Information Theory (ISIT). :984—989.

In unsecured communications settings, ascertaining the trustworthiness of received information, called authentication, is paramount. We consider keyless authentication over an arbitrarily-varying channel, where channel states are chosen by a malicious adversary with access to noisy versions of transmitted sequences. We have shown previously that a channel condition termed U-overwritability is a sufficient condition for zero authentication capacity over such a channel, and also that with a deterministic encoder, a sufficiently clear-eyed adversary is essentially omniscient. In this paper, we show that even if the authentication capacity with a deterministic encoder and an essentially omniscient adversary is zero, allowing a stochastic encoder can result in a positive authentication capacity. Furthermore, the authentication capacity with a stochastic encoder can be equal to the no-adversary capacity of the underlying channel in this case. We illustrate this for a binary channel model, which provides insight into the more general case.

Seiler, M., Trautmann, H., Kerschke, P..  2020.  Enhancing Resilience of Deep Learning Networks By Means of Transferable Adversaries. 2020 International Joint Conference on Neural Networks (IJCNN). :1—8.

Artificial neural networks in general and deep learning networks in particular established themselves as popular and powerful machine learning algorithms. While the often tremendous sizes of these networks are beneficial when solving complex tasks, the tremendous number of parameters also causes such networks to be vulnerable to malicious behavior such as adversarial perturbations. These perturbations can change a model's classification decision. Moreover, while single-step adversaries can easily be transferred from network to network, the transfer of more powerful multi-step adversaries has usually been rather difficult. In this work, we introduce a method for generating strong adversaries that can easily (and frequently) be transferred between different models. This method is then used to generate a large set of adversaries, based on which the effects of selected defense methods are experimentally assessed. At last, we introduce a novel, simple, yet effective approach to enhance the resilience of neural networks against adversaries and benchmark it against established defense methods. In contrast to the already existing methods, our proposed defense approach is much more efficient as it only requires a single additional forward-pass to achieve comparable performance results.
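To illustrate why single-step adversaries transfer so readily (the easy case the abstract contrasts with multi-step transfer), the sketch below crafts FGSM-style perturbations against one simple linear model and evaluates them on an independently trained one. The data, models, and ε are assumptions, and transfer is expected here precisely because both stand-ins are linear; the paper's contribution concerns the harder multi-step, cross-architecture case.

```python
import numpy as np

rng = np.random.default_rng(7)

def train_logreg(X, y, epochs=300, lr=0.5):
    # Plain gradient-descent logistic regression (a stand-in for a "network").
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

X = rng.normal(size=(400, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
w_source = train_logreg(X + 0.05 * rng.normal(size=X.shape), y)   # adversary's surrogate model
w_target = train_logreg(X + 0.05 * rng.normal(size=X.shape), y)   # independently trained victim

def fgsm(X, y, w, eps=0.5):
    # Single-step adversary: move each input along the sign of the loss gradient.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = np.outer(p - y, w)              # d(loss)/d(input) for the logistic loss
    return X + eps * np.sign(grad)

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0).astype(float) == y))

X_adv = fgsm(X, y, w_source)
print(accuracy(w_source, X, y), accuracy(w_source, X_adv, y))   # source model: clean vs attacked
print(accuracy(w_target, X, y), accuracy(w_target, X_adv, y))   # the single-step attack transfers
```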

Wang, W., Tang, B., Zhu, C., Liu, B., Li, A., Ding, Z..  2020.  Clustering Using a Similarity Measure Approach Based on Semantic Analysis of Adversary Behaviors. 2020 IEEE Fifth International Conference on Data Science in Cyberspace (DSC). :1—7.

Rapidly growing shared information for threat intelligence not only helps security analysts reduce time on tracking attacks, but also brings possibilities for research on adversaries' thinking and decisions, which is important for the further analysis of attackers' habits and preferences. In this paper, we analyze current models and frameworks used in threat intelligence that suited to different modeling goals, and propose a three-layer model (Goal, Behavior, Capability) to study the statistical characteristics of APT groups. Based on the proposed model, we construct a knowledge network composed of adversary behaviors, and introduce a similarity measure approach to capture similarity degree by considering different semantic links between groups. After calculating similarity degrees, we take advantage of the Girvan-Newman algorithm to discover community groups; the clustering results show that community structures and boundaries do exist among the behaviors of APT groups.
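A condensed sketch of the pipeline the abstract describes, with made-up groups and behaviors, Jaccard overlap standing in for the paper's semantic similarity measure, and networkx supplying Girvan-Newman community detection.

```python
import networkx as nx
from networkx.algorithms.community import girvan_newman

# Hypothetical APT groups described by sets of observed behaviors (a crude stand-in for the
# paper's Goal/Behavior/Capability knowledge network).
groups = {
    "APT-A": {"spearphishing", "powershell", "credential_dumping"},
    "APT-B": {"spearphishing", "powershell", "lateral_tool_transfer"},
    "APT-C": {"supply_chain", "signed_binary_proxy", "credential_dumping"},
    "APT-D": {"supply_chain", "signed_binary_proxy", "dns_tunneling"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

g = nx.Graph()
names = list(groups)
for i, u in enumerate(names):
    for v in names[i + 1:]:
        similarity = jaccard(groups[u], groups[v])
        if similarity > 0:
            g.add_edge(u, v, weight=similarity)

# Girvan-Newman repeatedly removes the highest-betweenness edge until communities split apart.
communities = next(girvan_newman(g))
print([sorted(c) for c in communities])    # e.g. [['APT-A', 'APT-B'], ['APT-C', 'APT-D']]
```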

Drašar, M., Moskal, S., Yang, S., Zat'ko, P..  2020.  Session-level Adversary Intent-Driven Cyberattack Simulator. 2020 IEEE/ACM 24th International Symposium on Distributed Simulation and Real Time Applications (DS-RT). :1—9.

Recognizing the need for proactive analysis of cyber adversary behavior, this paper presents a new event-driven simulation model and implementation to reveal the efforts needed by attackers who have various entry points into a network. Unlike previous models which focus on the impact of attackers' actions on the defender's infrastructure, this work focuses on the attackers' strategies and actions. By operating on a request-response session level, our model provides an abstraction of how the network infrastructure reacts to access credentials the adversary might have obtained through a variety of strategies. We present the current capabilities of the simulator by showing three variants of Bronze Butler APT on a network with different user access levels.

2020-08-03
Juuti, Mika, Szyller, Sebastian, Marchal, Samuel, Asokan, N..  2019.  PRADA: Protecting Against DNN Model Stealing Attacks. 2019 IEEE European Symposium on Security and Privacy (EuroS P). :512–527.
Machine learning (ML) applications are increasingly prevalent. Protecting the confidentiality of ML models becomes paramount for two reasons: (a) a model can be a business advantage to its owner, and (b) an adversary may use a stolen model to find transferable adversarial examples that can evade classification by the original model. Access to the model can be restricted to be only via well-defined prediction APIs. Nevertheless, prediction APIs still provide enough information to allow an adversary to mount model extraction attacks by sending repeated queries via the prediction API. In this paper, we describe new model extraction attacks using novel approaches for generating synthetic queries, and optimizing training hyperparameters. Our attacks outperform state-of-the-art model extraction in terms of transferability of both targeted and non-targeted adversarial examples (up to +29-44 percentage points, pp), and prediction accuracy (up to +46 pp) on two datasets. We provide take-aways on how to perform effective model extraction attacks. We then propose PRADA, the first step towards generic and effective detection of DNN model extraction attacks. It analyzes the distribution of consecutive API queries and raises an alarm when this distribution deviates from benign behavior. We show that PRADA can detect all prior model extraction attacks with no false positives.
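A simplified sketch in the spirit of the detection criterion described above: track the minimum distance of each new query to the client's previous queries and raise an alarm when that distance distribution stops looking normal (Shapiro-Wilk test). The window size, threshold, and synthetic query streams are assumptions, not PRADA's tuned parameters.

```python
import numpy as np
from scipy.stats import shapiro

class ExtractionDetector:
    """Raise an alarm when the distribution of nearest-neighbour distances between a
    client's queries deviates from normality (illustrative thresholds)."""
    def __init__(self, w_threshold=0.90, min_distances=30):
        self.queries, self.distances = [], []
        self.w_threshold, self.min_distances = w_threshold, min_distances

    def observe(self, query):
        query = np.asarray(query, dtype=float)
        if self.queries:
            self.distances.append(min(np.linalg.norm(query - q) for q in self.queries))
        self.queries.append(query)
        if len(self.distances) < self.min_distances:
            return False                        # not enough evidence yet
        w_stat, _ = shapiro(self.distances)
        return w_stat < self.w_threshold        # True -> suspected model extraction

rng = np.random.default_rng(5)
detector = ExtractionDetector()
for _ in range(100):                            # natural-looking queries
    flag = detector.observe(rng.normal(size=16))
print("alarm after benign stream:", flag)       # usually False
for i in range(60):                             # widely spaced synthetic queries
    flag = detector.observe(5.0 * i * np.ones(16))
print("alarm after synthetic stream:", flag)    # distance distribution turns bimodal, alarm likely
```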
Parmar, Manisha, Domingo, Alberto.  2019.  On the Use of Cyber Threat Intelligence (CTI) in Support of Developing the Commander's Understanding of the Adversary. MILCOM 2019 - 2019 IEEE Military Communications Conference (MILCOM). :1–6.
Cyber Threat Intelligence (CTI) is a rapidly developing field which has evolved in direct response to exponential growth in cyber related crimes and attacks. CTI supports Communication and Information System (CIS) Security in order to bolster defenses and aids in the development of threat models that inform an organization's decision making process. In a military organization like NATO, CTI additionally supports Cyberspace Operations by providing the Commander with essential intelligence about the adversary, their capabilities and objectives while operating in and through cyberspace. There have been many contributions to the CTI field; a noteworthy contribution is the ATT&CK® framework by the Mitre Corporation. ATT&CK® contains a comprehensive list of adversary tactics and techniques linked to custom or publicly known Advanced Persistent Threats (APT) which aids an analyst in the characterization of Indicators of Compromise (IOCs). The ATT&CK® framework also demonstrates possibility of supporting an organization with linking observed tactics and techniques to specific APT behavior, which may assist with adversary characterization and identification, necessary steps towards attribution. The NATO Allied Command Transformation (ACT) and the NATO Communication and Information Agency (NCI Agency) have been experimenting with the use of deception techniques (including decoys) to increase the collection of adversary related data. The collected data is mapped to the tactics and techniques described in the ATT&CK® framework, in order to derive evidence to support adversary characterization; this intelligence is pivotal for the Commander to support mission planning and determine the best possible multi-domain courses of action. This paper describes the approach, methodology, outcomes and next steps for the conducted experiments.
Xiong, Chen, Chen, Hua, Cai, Ming, Gao, Jing.  2019.  A Vehicle Trajectory Adversary Model Based on VLPR Data. 2019 5th International Conference on Transportation Information and Safety (ICTIS). :903–912.
Although transport agencies have employed desensitization techniques to deal with the privacy information when publicizing vehicle license plate recognition (VLPR) data, the adversaries can still eavesdrop on vehicle trajectories by certain means and further acquire the associated person and vehicle information through background knowledge. In this work, a privacy attacking method by using the desensitized VLPR data is proposed to link the vehicle trajectory. First, the road average speed is evaluated by analyzing the changes of traffic flow, which is used to estimate the vehicle's travel time to the next VLPR system. Then the vehicle suspicion list is constructed through the time relevance of neighboring VLPR systems. Finally, since vehicles may have the same features like color, type, etc., the target trajectory will be located by filtering the suspected list by the rule of qualified identifier (QI) attributes and closest time method. Based on the Foshan City's VLPR data, the method is tested and results show that correct vehicle trajectory can be linked, which proves that the current VLPR data publication way has the risk of privacy disclosure. At last, the effects of related parameters on the proposed method are discussed and effective suggestions are made for publicizing VLPR data in the future.
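The linking step described above can be sketched as a simple filter: estimate the travel time to the next checkpoint from the road's average speed, keep sightings inside a time window, filter by quasi-identifier attributes, and take the closest-time match. The record format, speeds, window, and data below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    plate_hash: str      # desensitized plate (not directly linkable)
    color: str
    vehicle_type: str
    time: float          # seconds since the target passed the previous checkpoint

def link_next_sighting(target_color, target_type, distance_m, avg_speed_mps,
                       sightings, tolerance=60.0):
    """Pick the sighting at the next VLPR checkpoint most consistent with the target:
    estimated travel time from the road's average speed, then QI-attribute filtering,
    then closest-time selection."""
    expected = distance_m / avg_speed_mps
    candidates = [s for s in sightings
                  if abs(s.time - expected) <= tolerance
                  and s.color == target_color and s.vehicle_type == target_type]
    return min(candidates, key=lambda s: abs(s.time - expected), default=None)

sightings = [Sighting("a1", "white", "sedan", 170.0),
             Sighting("b2", "white", "sedan", 205.0),
             Sighting("c3", "black", "suv", 190.0)]
print(link_next_sighting("white", "sedan", 2500.0, 13.0, sightings))
# Expected travel time is ~192 s, so the white sedan seen at 205 s is linked as the likely continuation.
```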
2020-06-19
Eziama, Elvin, Ahmed, Saneeha, Ahmed, Sabbir, Awin, Faroq, Tepe, Kemal.  2019.  Detection of Adversary Nodes in Machine-To-Machine Communication Using Machine Learning Based Trust Model. 2019 IEEE International Symposium on Signal Processing and Information Technology (ISSPIT). :1—6.

Security challenges present in Machine-to-Machine Communication (M2M-C) and the big data paradigm are fundamentally different from conventional network security challenges. In M2M-C paradigms, “Trust” is a vital constituent of security solutions that address security threats, and for such solutions it is important to quantify and evaluate the amount of trust in the information and its source. In this work, we focus on a Machine Learning (ML) Based Trust (MLBT) evaluation model for detecting malicious activities in a vehicular Based M2M-C (VBM2M-C) network. In particular, we present an Entropy Based Feature Engineering (EBFE) coupled Extreme Gradient Boosting (XGBoost) model which is optimized with the Binary Particle Swarm Optimization technique. Based on three performance metrics, i.e., Accuracy Rate (AR), True Positive Rate (TPR), False Positive Rate (FPR), the effectiveness of the proposed method is evaluated in comparison to the state-of-the-art ensemble models, such as XGBoost and Random Forest. The simulation results demonstrate the superiority of the proposed model, with approximately 10% improvement in accuracy, TPR and FPR at an attacker density of 30%, compared with the state-of-the-art algorithms.
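A compact sketch of the classification stage described above, on synthetic traffic: Shannon-entropy features per node feed a gradient-boosted classifier. sklearn's GradientBoostingClassifier stands in for the BPSO-tuned XGBoost model to keep the sketch dependency-light, and the feature construction, data, and sizes are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

def entropy_features(message_counts):
    """Entropy-based feature engineering (a simplified stand-in for the paper's EBFE):
    Shannon entropy of each node's message distribution plus its total message volume."""
    p = message_counts / message_counts.sum(axis=1, keepdims=True)
    safe_p = np.where(p > 0, p, 1.0)                 # log2(1) = 0, so empty bins contribute nothing
    entropy = -np.sum(p * np.log2(safe_p), axis=1)
    return np.column_stack([entropy, message_counts.sum(axis=1)])

# Synthetic VBM2M-C traffic: honest nodes spread messages broadly (high entropy),
# malicious nodes flood a few targets (low entropy, high volume).
honest = rng.poisson(5.0, size=(300, 8)).astype(float)
malicious = np.hstack([rng.poisson(40.0, size=(100, 2)), rng.poisson(0.5, size=(100, 6))]).astype(float)
X = entropy_features(np.vstack([honest, malicious]))
y = np.r_[np.zeros(300), np.ones(100)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```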

2020-03-12
Shamsi, Kaveh, Pan, David Z., Jin, Yier.  2019.  On the Impossibility of Approximation-Resilient Circuit Locking. 2019 IEEE International Symposium on Hardware Oriented Security and Trust (HOST). :161–170.

Logic locking, and Integrated Circuit (IC) Camouflaging, are techniques that try to hide the design of an IC from a malicious foundry or end-user by introducing ambiguity into the netlist of the circuit. While over the past decade an array of such techniques have been proposed, their security has been constantly challenged by algorithmic attacks. This may in part be due to a lack of formally defined notions of security in the first place, and hence a lack of security guarantees based on long-standing hardness assumptions. In this paper we take a formal approach. We define the problem of circuit locking (cL) as transforming an original circuit to a locked one which is “unintelligible” without a secret key (this can model camouflaging and split-manufacturing in addition to logic locking). We define several notions of security for cL under different adversary models. Using long standing results from computational learning theory we show the impossibility of exponentially approximation-resilient locking in the presence of an oracle for large classes of Boolean circuits. We then show how exact-recovery-resiliency and a more relaxed notion of security that we coin “best-possible” approximation-resiliency can be provably guaranteed with polynomial overhead. Our theoretical analysis directly results in stronger attacks and defenses which we demonstrate through experimental results on benchmark circuits.
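For readers new to the construction being formalized, the toy below shows what XOR key-gate locking does operationally: the locked circuit matches the original function only under the correct key. The two-gate circuit and key placement are illustrative assumptions; the abstract's results concern which security notions such transformations can provably achieve, not this particular construction.

```python
import itertools

def original_circuit(a, b, c):
    return (a & b) ^ c                               # toy design to protect

def locked_circuit(a, b, c, k0, k1):
    """XOR key-gate locking: each key bit XORs an internal wire, so the circuit computes
    the original function only for the correct key (here k0 = 1, k1 = 0)."""
    w = (a & b) ^ k0 ^ 1                             # key gate (plus inverter) on the AND output
    return w ^ c ^ k1                                # key gate on the primary output

correct_key, wrong_key = (1, 0), (0, 0)
inputs = list(itertools.product([0, 1], repeat=3))
print(all(locked_circuit(*x, *correct_key) == original_circuit(*x) for x in inputs))  # True
print(sum(locked_circuit(*x, *wrong_key) != original_circuit(*x) for x in inputs))    # 8: wrong on every input
```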