Biblio

2020-10-12
Amjad Ibrahim, Simon Rehwald, Antoine Scemama, Florian Andres, Alexander Pretschner.  2020.  Causal Model Extraction from Attack Trees to Attribute Malicious Insiders Attacks. The Seventh International Workshop on Graphical Models for Security.

In the context of insiders, preventive security measures have a high likelihood of failing because insiders must have sufficient privileges to perform their jobs. Instead, in this paper, we propose to address the insider threat with a detective measure that holds an insider accountable in case of violations. However, to enable accountability, we need to create causal models that support reasoning about the causality of a violation. Current security models (e.g., attack trees) do not allow that, yet they are a useful source for creating causal models. In this paper, we discuss the value added by causal models in the security context. Then, we capture the interaction between attack trees and causal models by proposing an automated approach to extract the latter from the former. Our approach considers insider-specific attack classes such as collusion attacks and causal-model-specific properties like preemption relations. We present an evaluation of the resulting causal models’ validity and effectiveness, in addition to the efficiency of the extraction process.
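To make the connection between the two formalisms concrete, the AND/OR gates of an attack tree can be read directly as boolean structural equations of a causal model. The following is a minimal illustrative sketch, not the paper's extraction algorithm; the tree shape and node names are invented for the example.

```python
# Illustrative sketch (not the paper's algorithm): reading a small
# AND/OR attack tree as boolean structural equations of a causal model.
from itertools import product

# Attack tree: the root goal "steal_data" is reached either by an
# external breach or by an insider attack, where the insider attack
# requires both access and intent (an AND gate).
tree = {
    "steal_data": ("OR", ["external_breach", "insider_attack"]),
    "insider_attack": ("AND", ["has_access", "malicious_intent"]),
}
leaves = ["external_breach", "has_access", "malicious_intent"]

def evaluate(node, assignment):
    """Evaluate a tree node as a boolean structural equation."""
    if node in assignment:                      # leaf (exogenous) variable
        return assignment[node]
    gate, children = tree[node]
    values = [evaluate(child, assignment) for child in children]
    return all(values) if gate == "AND" else any(values)

# Enumerate the leaf settings under which the root goal is achieved.
attack_paths = [
    dict(zip(leaves, bits))
    for bits in product([False, True], repeat=len(leaves))
    if evaluate("steal_data", dict(zip(leaves, bits)))
]
print(len(attack_paths))  # 5 of the 8 leaf assignments reach the goal
```

Once each gate is expressed as an equation in this way, standard causal reasoning (e.g., counterfactual queries about which insider actions caused a violation) becomes possible on top of the tree.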

2019-08-21
Amjad Ibrahim, Simon Rehwald, Alexander Pretschner.  2019.  Efficiently Checking Actual Causality with SAT Solving. Lecture Notes of the 2018 Marktoberdorf Summer School on Software Engineering. To appear.

Recent formal approaches towards causality have made the concept ready for incorporation into the technical world. However, causality reasoning is computationally hard, and no general algorithmic approach exists that efficiently infers the causes for effects. Thus, checking causality in the context of complex, multi-agent, and distributed socio-technical systems is a significant challenge. Therefore, we conceptualize an intelligent and novel algorithmic approach towards checking causality in acyclic causal models with binary variables, utilizing the optimization power of Boolean Satisfiability (SAT) solvers. We present two SAT encodings, and an empirical evaluation of their efficiency and scalability. We show that causality is computed efficiently, in less than 5 seconds, for models that consist of more than 4000 variables.
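To illustrate the kind of query the paper answers at scale with SAT solvers, here is a naive brute-force "but-for" (counterfactual) check over a tiny binary causal model. This is only an illustrative sketch, not the paper's SAT encoding or the full Halpern-Pearl definition; the variable names are invented.

```python
# Illustrative sketch: a naive counterfactual ("but-for") causality check
# over a tiny binary, acyclic causal model. The paper's contribution is
# answering such queries efficiently via SAT encodings; this brute-force
# version merely shows the shape of the question.

# Structural equations: each endogenous variable is a boolean function
# of the other variables.
equations = {
    "forest_fire": lambda v: v["lightning"] or v["arson"],
}

def solve(context):
    """Propagate exogenous values through the structural equations."""
    values = dict(context)
    for var, fn in equations.items():
        values[var] = fn(values)
    return values

def is_but_for_cause(var, effect, context):
    """var is a but-for cause of effect if flipping var flips the effect."""
    actual = solve(context)
    if not actual[effect]:
        return False            # the effect did not occur at all
    flipped = dict(context)
    flipped[var] = not flipped[var]
    return not solve(flipped)[effect]

context = {"lightning": True, "arson": False}
print(is_but_for_cause("lightning", "forest_fire", context))  # True
print(is_but_for_cause("arson", "forest_fire", context))      # False
```

Note that when both lightning and arson occur, neither is a but-for cause of the fire; handling such overdetermined and preempted cases is exactly why the full actual-causality definition, and hence an efficient encoding of it, is needed.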