Biblio

Atighetchi, Michael, Yaman, Fusun, Last, David, Paltzer, Captain Nicholas, Caiazzo, Meghan, Raio, Stephen.  2017.  A Flexible Approach Towards Security Validation. Proceedings of the 2017 Workshop on Automated Decision Making for Active Cyber Defense. :7–13.
Validating security properties of complex distributed systems is a challenging problem by itself, let alone when the work needs to be performed under tight budget and time constraints on prototype systems with components at various maturity levels. This paper describes a tailored approach to security evaluations involving a strategic combination of model-based quantification, emulation, and logical argumentation. By customizing the evaluation to fit the available budget and timeline, validators can arrive at the most appropriate validation process, trading off fidelity against coverage across a number of different defense components at different maturity levels. We successfully applied this process to the validation of an overlay proxy network, analyzing the impact of five different defense attributes (together with combinations thereof) on access path establishment and anonymity.
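The fidelity-versus-coverage tradeoff described in this abstract can be illustrated with a small sketch. The component names, maturity scores, technique costs, and the plan_validation helper below are hypothetical, not taken from the paper; they simply show how a validator might assign the highest-fidelity technique (emulation, model-based quantification, or logical argumentation) that a component's maturity and the remaining budget allow.

```python
# Hypothetical sketch of the fidelity-vs-coverage tradeoff: pick a validation
# technique per defense component based on its maturity and the budget that
# remains. Names and cost figures are illustrative only.

# Techniques ordered from highest to lowest fidelity, with notional costs
# (e.g., person-days) attached to each.
TECHNIQUES = [
    ("emulation", 10),                 # highest fidelity, most expensive
    ("model-based quantification", 4),
    ("logical argumentation", 1),      # lowest fidelity, cheapest
]

def plan_validation(components, budget):
    """Assign the highest-fidelity technique each component's maturity and
    the remaining budget allow. `components` maps name -> maturity (0..1)."""
    plan = {}
    # Validate mature components first: high-fidelity results there matter most.
    for name, maturity in sorted(components.items(), key=lambda kv: -kv[1]):
        for technique, cost in TECHNIQUES:
            # Emulation only pays off for reasonably mature prototypes.
            if technique == "emulation" and maturity < 0.7:
                continue
            if cost <= budget:
                plan[name] = technique
                budget -= cost
                break
        else:
            plan[name] = "deferred"  # no budget left even for argumentation
    return plan

if __name__ == "__main__":
    components = {"proxy overlay": 0.9, "anonymity layer": 0.5, "path setup": 0.3}
    print(plan_validation(components, budget=12))
```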
Atighetchi, Michael, Simidchieva, Borislava, Carvalho, Marco, Last, David.  2016.  Experimentation Support for Cyber Security Evaluations. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :5:1–5:7.
To improve the information assurance of mission execution over modern IT infrastructure, new cyber defenses need not only to provide security benefits but also to perform within a given cost regime. Current approaches for validating and integrating cyber defenses rely heavily on manual trial and error, without a clear and systematic understanding of security versus cost tradeoffs. Recent work on model-based analysis of cyber defenses has led to quantitative measures of the attack surface of a distributed system hosting mission-critical applications. These metrics show great promise, but the cost of manually creating the underlying models is an impediment to their wider adoption. This paper describes an experimentation framework for automating multiple activities associated with model construction and validation, including creating ontological system models from real systems, measuring and recording distributions of resource impact and end-to-end performance overhead values, executing real attacks to validate theoretical attack vectors found through analytic reasoning, and creating and managing multi-variable experiments.
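As a rough illustration of the multi-variable experiments and overhead distributions mentioned in this abstract, the sketch below sweeps a grid of hypothetical defense configurations and summarizes repeated end-to-end overhead measurements. The factor names, the run_trial stand-in, and the synthetic timing model are assumptions for illustration only, not part of the paper's framework.

```python
# Illustrative sketch (not the paper's framework) of managing a multi-variable
# experiment: sweep a grid of defense configurations, collect repeated
# end-to-end overhead measurements, and summarize the resulting distributions.
import itertools
import random
import statistics

# Hypothetical experiment factors and their levels.
FACTORS = {
    "num_proxies": [1, 3, 5],
    "encryption": ["none", "tls"],
}

def run_trial(config):
    """Stand-in for launching a real run and measuring end-to-end overhead (ms).
    A real framework would drive emulated or live components here."""
    base = 20 + 5 * config["num_proxies"]
    penalty = 8 if config["encryption"] == "tls" else 0
    return base + penalty + random.gauss(0, 2)  # synthetic measurement noise

def run_experiment(trials_per_config=10):
    """Run every combination of factor levels and record mean/stdev overhead."""
    results = {}
    names = list(FACTORS)
    for levels in itertools.product(*(FACTORS[n] for n in names)):
        config = dict(zip(names, levels))
        samples = [run_trial(config) for _ in range(trials_per_config)]
        results[levels] = (statistics.mean(samples), statistics.stdev(samples))
    return results

if __name__ == "__main__":
    for config, (mean, stdev) in run_experiment().items():
        print(config, f"mean={mean:.1f} ms  stdev={stdev:.1f} ms")
```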