Biblio

Filters: Author is Carvalho, Marco
Pal, Partha, Soule, Nathaniel, Lageman, Nate, Clark, Shane S., Carvalho, Marco, Granados, Adrian, Alves, Anthony.  2017.  Adaptive Resource Management Enabling Deception (ARMED). Proceedings of the 12th International Conference on Availability, Reliability and Security. :52:1–52:8.
Distributed Denial of Service (DDoS) attacks routinely disrupt access to critical services. Mitigation of these attacks often relies on planned over-provisioning or elastic provisioning of resources, and on third-party monitoring, analysis, and scrubbing of network traffic. While volumetric attacks that saturate a victim's network are most common, non-volumetric, low-and-slow DDoS attacks can achieve their goals without high traffic volume by targeting vulnerable network protocols or protocol implementations. Non-volumetric attacks, unlike their noisy counterparts, require more sophisticated detection mechanisms and typically have only post-facto, narrowly targeted protocol- or application-level mitigations. In this paper, we introduce our work under the Adaptive Resource Management Enabling Deception (ARMED) effort, which is developing a network-level approach to automatically mitigate sophisticated DDoS attacks through deception-focused adaptive maneuvering. We describe the concept, implementation, and initial evaluation of the ARMED Network Actors (ANAs) that facilitate transparent interception, sensing, analysis, and mounting of adaptive responses that can disrupt the adversary's decision process.
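The abstract's distinction between noisy volumetric attacks and low-and-slow ones hinges on detection: a slow attack holds connections open while sending almost nothing. A minimal sketch of that idea (not the ARMED/ANA implementation; the class name, window, and rate threshold are all assumptions for illustration) is to flag connections whose byte rate stays abnormally low over a long observation window:

```python
# Hypothetical sketch, NOT the ARMED implementation: flag "low and slow"
# connections (e.g., Slowloris-style) by byte rate over an observation window.
import time
from collections import defaultdict

WINDOW_S = 60       # assumed: minimum connection age before judging it
MIN_RATE_BPS = 100  # assumed: connections slower than this are suspect

class SlowConnDetector:
    """Tracks bytes received per connection and flags starvation-style flows."""

    def __init__(self, now=time.monotonic):
        self.now = now                      # injectable clock for testing
        self.first_seen = {}                # conn_id -> first observation time
        self.bytes_seen = defaultdict(int)  # conn_id -> cumulative bytes

    def observe(self, conn_id, nbytes):
        """Record nbytes received on conn_id at the current time."""
        self.first_seen.setdefault(conn_id, self.now())
        self.bytes_seen[conn_id] += nbytes

    def suspects(self):
        """Connections open at least WINDOW_S whose byte rate is below threshold."""
        t = self.now()
        flagged = []
        for cid, start in self.first_seen.items():
            age = t - start
            if age >= WINDOW_S and self.bytes_seen[cid] / age < MIN_RATE_BPS:
                flagged.append(cid)
        return flagged
```

In practice a sensor like this would sit at the interception point the abstract describes, feeding the analysis stage that decides which adaptive (or deceptive) response to mount.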
Atighetchi, Michael, Simidchieva, Borislava, Carvalho, Marco, Last, David.  2016.  Experimentation Support for Cyber Security Evaluations. Proceedings of the 11th Annual Cyber and Information Security Research Conference. :5:1–5:7.
To improve the information assurance of mission execution over modern IT infrastructure, new cyber defenses need to not only provide security benefits, but also perform within a given cost regime. Current approaches for validating and integrating cyber defenses rely heavily on manual trial-and-error, without a clear and systematic understanding of security-versus-cost tradeoffs. Recent work on model-based analysis of cyber defenses has led to quantitative measures of the attack surface of a distributed system hosting mission-critical applications. These metrics show great promise, but the cost of manually creating the underlying models is an impediment to their wider adoption. This paper describes an experimentation framework for automating multiple activities associated with model construction and validation, including creating ontological system models from real systems, measuring and recording distributions of resource impact and end-to-end performance overhead values, executing real attacks to validate theoretic attack vectors found through analytic reasoning, and creating and managing multi-variable experiments.
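One way to make the abstract's "theoretic attack vectors found through analytic reasoning" concrete is path enumeration over a system model: given a connectivity graph, each simple path from an external entry point to a critical asset is a candidate vector, and the path count is a crude attack-surface metric. The sketch below is a hypothetical illustration under that reading, not the paper's framework; the toy model and all names are assumptions:

```python
# Hypothetical sketch, NOT the paper's framework: enumerate candidate attack
# vectors as simple paths from an external entry point to a critical asset
# in a host-connectivity model.
from collections import deque

def attack_paths(edges, entry, target):
    """BFS over the model; returns every simple path entry -> target."""
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
    paths, queue = [], deque([[entry]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:  # keep paths simple (no revisits)
                queue.append(path + [nxt])
    return paths

# Toy model: an internet-facing proxy and a VPN both lead to the app server,
# which reaches the database asset.
model = [("internet", "proxy"), ("proxy", "app"), ("app", "db"),
         ("internet", "vpn"), ("vpn", "app")]
vectors = attack_paths(model, "internet", "db")
```

The framework the abstract describes would then validate such analytically derived vectors by executing real attacks against the modeled system, closing the loop between model and measurement.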