Bibliography
Inferring unknown opinions from uncertain, adversarial (e.g., incorrect or conflicting) evidence in large datasets is a non-trivial task. Without proper handling, such evidence can easily mislead decision making in data mining tasks. In this work, we propose a highly scalable probabilistic opinion inference model, Adversarial Collective Opinion Inference (Adv-COI), which infers unknown opinions scalably and robustly in the presence of uncertain, adversarial evidence. Adv-COI enhances Collective Subjective Logic (CSL), which combines Subjective Logic (SL) with Probabilistic Soft Logic (PSL). The key idea behind Adv-COI is to learn a model that is robust against uncertain, adversarial evidence, which is formulated as a min-max problem. We validate Adv-COI against baseline models and competitive counterparts under adversarial attacks on logic-rule-based structured data, as well as white-box and black-box adversarial attacks, on both clean and perturbed semi-synthetic and real-world datasets across three real-world applications. The results show that Adv-COI achieves the lowest mean absolute error in the expected truth probability while also having the lowest running time among all compared methods.
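To make the min-max idea concrete, here is a minimal sketch of adversarial (min-max) training in Python on a toy surrogate loss. The loss function, the projected-gradient inner loop, and all names (adv_train, grad_evidence, etc.) are illustrative assumptions, not the authors' actual Adv-COI formulation: the inner maximization perturbs the observed evidence within an epsilon-ball, and the outer minimization fits the model parameters against that worst case.

import numpy as np

# Illustrative min-max sketch (hypothetical; not the Adv-COI model itself).
# Inner loop: maximize the loss by perturbing the evidence within an
# L-infinity epsilon-ball. Outer loop: minimize the loss over parameters
# evaluated at the worst-case perturbation.

def loss(theta, evidence):
    # Toy squared-error surrogate for an opinion-inference loss.
    return np.mean((evidence @ theta - 1.0) ** 2)

def grad_theta(theta, evidence):
    residual = evidence @ theta - 1.0
    return 2.0 * evidence.T @ residual / len(residual)

def grad_evidence(theta, evidence):
    residual = (evidence @ theta - 1.0)[:, None]
    return 2.0 * residual * theta[None, :] / len(residual)

def adv_train(evidence, epsilon=0.1, steps=200, lr=0.05, inner_steps=5):
    theta = np.zeros(evidence.shape[1])
    for _ in range(steps):
        # Inner max: gradient ascent on the evidence, projected back
        # into the epsilon-ball around the clean observations.
        delta = np.zeros_like(evidence)
        for _ in range(inner_steps):
            delta += lr * grad_evidence(theta, evidence + delta)
            delta = np.clip(delta, -epsilon, epsilon)
        # Outer min: gradient descent on the parameters at the worst case.
        theta -= lr * grad_theta(theta, evidence + delta)
    return theta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
print(adv_train(X))

The same two-level structure carries over to richer models: only the loss and its two gradients change, while the alternation between worst-case perturbation and parameter update stays the same.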
High-end vehicles incorporate about one hundred computers, both physical and virtualized; self-driving vehicles incorporate even more. This enables a plethora of attack combinations. This paper demonstrates how to assess the exploitability risks of vehicular on-board networks via automatically generated and analyzed attack graphs. Our stochastic model and algorithm combine all possible attack vectors and consider attacker resources more efficiently than Bayesian networks. We designed and implemented an algorithm that assesses a compilation of real vehicle development documents within only two CPU minutes, using an average of about 100 MB of RAM. Our proof of concept, "Security Analyzer for Exploitability Risks" (SAlfER), is 200 to 5,000 times faster and 40 to 200 times more memory-efficient than an implementation with UnBBayes. Our approach aids vehicle development by automatically re-checking the architecture for attack combinations that may have been enabled by mistake and that are not trivial for a human developer to spot. The approach is intended for, and relevant to, industrial application: our research is part of a collaboration with a globally operating automotive manufacturer and aims to support the security of autonomous, connected, electrified, and shared vehicles.
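As an illustration of attack-graph-based exploitability assessment, the following toy sketch propagates per-edge compromise probabilities through a small on-board-network graph. The topology, the probability values, and the independence assumption between attack steps are invented for exposition; they are not SAlfER's actual stochastic model or algorithm.

from functools import lru_cache

# Toy attack-graph exploitability sketch (hypothetical example).
# Each edge carries the probability that an attacker controlling the
# source ECU can compromise the target ECU. We estimate the chance the
# attacker reaches a safety-critical ECU from an external interface,
# treating attack steps as independent (a simplifying assumption).

GRAPH = {
    "cellular_modem": [("infotainment", 0.6)],
    "infotainment":   [("gateway", 0.3)],
    "obd_port":       [("gateway", 0.5)],
    "gateway":        [("brake_ecu", 0.1)],
    "brake_ecu":      [],
}

@lru_cache(maxsize=None)
def p_compromise(node, target):
    if node == target:
        return 1.0
    # P(at least one outgoing attack step succeeds and leads to target).
    p_fail_all = 1.0
    for nxt, p_edge in GRAPH[node]:
        p_fail_all *= 1.0 - p_edge * p_compromise(nxt, target)
    return 1.0 - p_fail_all

print(p_compromise("cellular_modem", "brake_ecu"))  # 0.6 * 0.3 * 0.1 = 0.018

Because the graph is acyclic here, memoized recursion suffices; real on-board networks require handling cycles, shared attacker resources, and correlated steps, which is where a dedicated stochastic model pays off over naive enumeration.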
This paper offers a new approach to modelling the effect of cyber-attacks on the reliability of software used in industrial control applications. The model is based on the view that successful cyber-attacks introduce failure regions that are not present in non-compromised software. The model is then extended to cover fault-tolerant architectures, such as 1-out-of-2 software, popular for building industrial protection systems. The model is used to study the effectiveness of software maintenance policies such as patching and "cleansing" ("proactive recovery") under different adversary models, ranging from independent attacks to sophisticated synchronized attacks on the channels. We demonstrate that the effect of attacks on the reliability of diverse software depends significantly on the adversary model: under synchronized attacks, system reliability may be more than an order of magnitude worse than under independent attacks on the channels. These findings, although not surprising, highlight the importance of using an adequate adversary model when assessing the effectiveness of various cyber-security controls.
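The gap between the adversary models can be illustrated with a back-of-the-envelope simulation: a 1-out-of-2 system fails only when both channels fail together, so independent attacks with per-channel compromise probability p yield a joint failure probability near p squared, while a synchronized attack that compromises both channels at once yields roughly p. The sketch below uses invented parameter values, not figures or the actual model from the paper.

import random

# Back-of-the-envelope comparison of 1-out-of-2 system failure under
# independent vs. synchronized attacks (illustrative; the probability
# and trial counts are assumptions, not taken from the paper).

P_COMPROMISE = 0.01   # per-interval probability a channel is compromised (assumed)
N_TRIALS = 1_000_000

def simulate(synchronized):
    failures = 0
    for _ in range(N_TRIALS):
        if synchronized:
            # One attack event compromises both channels at once.
            attacked = random.random() < P_COMPROMISE
            a, b = attacked, attacked
        else:
            # The channels are attacked by independent events.
            a = random.random() < P_COMPROMISE
            b = random.random() < P_COMPROMISE
        # The 1-out-of-2 system fails only if both channels are compromised.
        failures += a and b
    return failures / N_TRIALS

print("independent :", simulate(False))   # close to P_COMPROMISE**2 = 1e-4
print("synchronized:", simulate(True))    # close to P_COMPROMISE    = 1e-2

With these assumed numbers the synchronized adversary degrades system reliability by two orders of magnitude relative to independent attacks, consistent with the paper's qualitative finding that the choice of adversary model dominates the assessment.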