Bibliography
Experts often design security and privacy technology with specific use cases and threat models in mind. In practice, however, end users are not aware of these threats and potential countermeasures. Furthermore, misconceptions about the benefits and limitations of security and privacy technology inhibit large-scale adoption by end users. In this paper, we address this challenge and contribute a qualitative study on end users' and security experts' perceptions of threat models and potential countermeasures. We follow an inductive research approach to explore the perceptions and mental models of both security experts and end users. We conducted semi-structured interviews with 8 security experts and 13 end users. Our results suggest that, in contrast to security experts, end users neglect acquaintances and friends as attackers in their threat models. Our findings highlight that experts value technical countermeasures, whereas end users try to implement trust-based defensive methods.
The cybersecurity community is slowly leveraging Machine Learning (ML) to combat ever-evolving threats. One of the biggest drivers for successful adoption of these models is how well domain experts and users are able to understand and trust their functionality. As these black-box models are being employed to make important predictions, the demand from stakeholders for transparency and explainability is increasing. Explanations supporting the output of ML models are crucial in cyber security, where experts require far more information from the model than a simple binary output for their analysis. Recent approaches in the literature have focused on three different areas: (a) creating and improving explainability methods that help users better understand the internal workings of ML models and their outputs; (b) attacks on interpreters in white-box settings; and (c) defining the exact properties and metrics of the explanations generated by models. However, they have not covered the security properties and threat models relevant to the cybersecurity domain, nor attacks on explainable models in black-box settings. In this paper, we bridge this gap by proposing a taxonomy for Explainable Artificial Intelligence (XAI) methods that covers the security properties and threat models relevant to the cyber security domain. We design a novel black-box attack for analyzing the consistency, correctness, and confidence security properties of gradient-based XAI methods. We validate our proposed system on three security-relevant datasets and models, and demonstrate that the method achieves the attacker's goal of misleading either both the classifier and the explanation report, or only the explainability method without affecting the classifier output. Our evaluation of the proposed approach shows promising results that can help in designing secure and robust XAI methods.
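As a rough illustration of the attack setting this abstract describes, the sketch below perturbs an input so that a gradient-style explanation drifts while the classifier's predicted label stays fixed, using only query access to the model. The toy logistic model, the finite-difference saliency estimate, and the random search are all illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch (assumptions throughout, not the paper's attack): a
# black-box perturbation search that shifts a gradient-style saliency
# explanation while preserving the classifier's output label.
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box classifier: logistic model over 10 features (assumed).
W = rng.normal(size=10)

def predict(x):
    """Black-box access: returns only a class probability."""
    return 1.0 / (1.0 + np.exp(-W @ x))

def saliency(x, eps=1e-3):
    """Finite-difference estimate of a gradient-based explanation."""
    grads = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        grads[i] = (predict(x + e) - predict(x - e)) / (2 * eps)
    return grads

def attack(x, budget=500, step=0.05):
    """Random search for a perturbation that maximizes explanation drift
    without flipping the predicted label (a consistency violation)."""
    base_label = predict(x) > 0.5
    base_expl = saliency(x)
    best, best_drift = x, 0.0
    for _ in range(budget):
        cand = best + step * rng.normal(size=x.shape)
        if (predict(cand) > 0.5) != base_label:
            continue  # the label must stay unchanged
        drift = np.linalg.norm(saliency(cand) - base_expl)
        if drift > best_drift:
            best, best_drift = cand, drift
    return best, best_drift

x0 = rng.normal(size=10)
x_adv, drift = attack(x0)
print(f"label preserved: {(predict(x_adv) > 0.5) == (predict(x0) > 0.5)}, "
      f"explanation drift: {drift:.4f}")
```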
The dependability of Cyber Physical Systems (CPS) rests on the secure and reliable functionality of their backbone, the computing platform. The security of this platform is threatened not only by vulnerabilities in the software peripherals, but also by vulnerabilities in the hardware internals. Such threats can arise from malicious modifications to the integrated circuit (IC) based computing hardware, which can disable the system, leak information, or produce malfunctions. Such modifications are made possible by the globalization of the IC industry, where a computing chip can be manufactured anywhere in the world. In the complex computing environment of CPS, such modifications can be stealthier and harder to detect. Under these circumstances, the design of such malicious modifications, and eventually their detection, will be tied to the functionality and operation of the CPS. It is therefore imperative to address such threats by incorporating security awareness into computing hardware design in a comprehensive manner that takes the entire system into consideration. In this paper, we present a study of the influence of hardware Trojans on closed-loop systems, which form the basis of CPS, and establish threat models. Using these models, we perform a case study on a critical CPS application: a gas pipeline based SCADA system. Through this process, we establish a completely virtual simulation platform along with a hardware-in-the-loop based simulation platform for implementation and testing.
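To make the closed-loop threat concrete, here is a minimal sketch (assumed dynamics and trigger, not the paper's testbed): a PI-controlled first-order plant with a hypothetical hardware Trojan in the sensor path. The controller silently compensates a small measurement bias into a real offset of the physical process, which is exactly the kind of stealthy influence on closed-loop behavior the abstract points at.

```python
# Illustrative sketch (all parameters assumed): a first-order plant under
# PI control, with a hypothetical Trojan in the sensor path that activates
# on a rare trigger and biases the measurement.
import numpy as np

def simulate(trojan=False, steps=600, dt=0.1):
    x, setpoint = 0.0, 1.0          # plant state and reference
    kp, ki, integ = 2.0, 0.5, 0.0   # PI gains and integrator state
    for k in range(steps):
        meas = x
        # Trojan trigger (assumed): activates after a step count, then adds
        # a small bias the controller unknowingly tracks away from truth.
        if trojan and k > 300:
            meas += 0.3
        err = setpoint - meas
        integ += err * dt
        u = kp * err + ki * integ
        x += dt * (-x + u)          # first-order plant dynamics dx/dt = -x + u
    return x

print(f"steady state clean: {simulate(False):.3f}, "
      f"under Trojan: {simulate(True):.3f}")
```

Because the controller drives the biased measurement to the setpoint, the true process state settles roughly 0.3 below the reference, with no fault visible from the controller's perspective.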
Cloud storage brokerage systems abstract away cloud storage complexities by mediating technical and business relationships between cloud stakeholders, while providing value-added services. This, however, raises security challenges pertaining to the integration of disparate components with sometimes conflicting security policies and architectural complexities. Assessing the security risks of these challenges is therefore important for Cloud Storage Brokers (CSBs). In this paper, we present a threat modeling schema to analyze and identify threats and risks in cloud brokerage systems. Our threat modeling schema works by generating attack trees, attack graphs, and data flow diagrams that represent the interconnections between identified security risks. Our proof-of-concept implementation employs the Common Configuration Scoring System (CCSS) to support the threat modeling schema, since current schemes lack sufficient security metrics, which are imperative for comprehensive risk assessments. We demonstrate the efficiency of our proposal by deriving CCSS base scores for two attacks commonly launched against cloud storage systems: the Cloud Storage Enumeration Attack and the Cloud Storage Exploitation Attack. These metrics are then combined with CVSS-based metrics to assign probabilities in an attack tree. Thus, we show the possibility of combining CVSS and CCSS for comprehensive threat modeling, and also show that our schema can be used to improve cloud security.
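The sketch below illustrates, under assumed scores and tree structure, how normalized CCSS/CVSS base scores could be propagated through the AND/OR gates of an attack tree to obtain a rough attack probability. It is not the paper's schema; the leaf scores and the naive score-to-probability mapping are hypothetical.

```python
# Minimal sketch (assumed structure and scores): attack-tree leaves carry
# CVSS/CCSS base scores (0-10), normalized to [0, 1] and propagated upward.
# OR gate: complement of all children failing; AND gate: product of children.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    gate: str = "LEAF"            # "LEAF", "AND", or "OR"
    score: float = 0.0            # base score for leaves (0-10)
    children: list = field(default_factory=list)

    def probability(self) -> float:
        if self.gate == "LEAF":
            return self.score / 10.0      # naive normalization (assumed)
        probs = [c.probability() for c in self.children]
        if self.gate == "AND":
            p = 1.0
            for q in probs:
                p *= q
            return p
        p = 1.0                            # OR gate
        for q in probs:
            p *= (1.0 - q)
        return 1.0 - p

# Hypothetical scores for the two attacks named in the abstract.
tree = Node("Compromise cloud storage", "OR", children=[
    Node("Cloud Storage Enumeration Attack", score=6.5),   # CCSS-style (assumed)
    Node("Cloud Storage Exploitation Attack", "AND", children=[
        Node("Weak configuration found", score=7.0),       # CCSS (assumed)
        Node("Exploitable software flaw", score=8.1),      # CVSS (assumed)
    ]),
])
print(f"root attack probability: {tree.probability():.3f}")
```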
Much research on the security of cyber physical systems focuses on threat models in which an attacker can spoof sensor readings by compromising the communication channel; little attention is given to attacks on physical components. In this paper, a method to detect potential attacks on physical components in a Cyber Physical System (CPS) is proposed. Physical attacks are detected by comparing the noise pattern in sensor measurements against a reference noise pattern. If an adversary has physically modified or replaced a sensor, the proposed method issues an alert indicating that the sensor is probably compromised or defective. The reference noise pattern is established from sensor data using a deterministic model and is referred to as the fingerprint of the corresponding sensor. The fingerprint so derived is used as a reference to identify measured data during the operation of a CPS. Extensive experimentation with ultrasonic level sensors in a realistic water treatment testbed points to the effectiveness of the proposed fingerprinting method in detecting physical attacks.
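A minimal sketch of the fingerprinting idea, under assumed process dynamics and noise profiles: the residual noise left after subtracting a deterministic model serves as the fingerprint, and a sensor whose residual statistics drift past a threshold raises an alert. The features, threshold, and noise models below are illustrative assumptions, not the authors' exact method.

```python
# Illustrative sketch (assumed signal model): fingerprint a level sensor
# from the noise remaining after a deterministic process model is removed,
# then flag a modified/replaced sensor whose noise statistics drift.
import numpy as np

rng = np.random.default_rng(1)

# Deterministic process model: a slowly filling tank (assumed dynamics).
t = np.linspace(0.0, 100.0, 2000)
model = 0.01 * t

def noise_features(readings):
    """Residual noise statistics used as the fingerprint (assumed features)."""
    resid = readings - model
    return np.array([resid.std(), np.abs(resid).mean()])

# Reference and fresh traces from the genuine sensor (Gaussian noise), and a
# trace from a hypothetical swapped sensor with a different noise profile.
genuine_ref = model + rng.normal(0.0, 0.02, t.size)
genuine_new = model + rng.normal(0.0, 0.02, t.size)
swapped = model + rng.laplace(0.0, 0.05, t.size)

fingerprint = noise_features(genuine_ref)

def check(readings, threshold=0.01):
    """Alert when noise statistics drift too far from the fingerprint."""
    dist = np.linalg.norm(noise_features(readings) - fingerprint)
    return dist > threshold, dist

for name, data in [("genuine", genuine_new), ("swapped", swapped)]:
    alert, dist = check(data)
    print(f"{name}: distance={dist:.4f} alert={alert}")
```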