Biblio
Filters: Keyword is persistent engagement
Problems of Poison: New Paradigms and "Agreed" Competition in the Era of AI-Enabled Cyber Operations. 2020 12th International Conference on Cyber Conflict (CyCon). 1300:215–232.
2020. Few developments seem as poised to alter the characteristics of security in the digital age as the advent of artificial intelligence (AI) technologies. For national defense establishments, the emergence of AI techniques is particularly worrisome, not least because prototype applications already exist. Cyber attacks augmented by AI portend the tailored manipulation of human vectors within the attack surface of important societal systems at great scale, as well as opportunities for calamity resulting from the secondment of technical skill from the hacker to the algorithm. Arguably most important, however, is the fact that AI-enabled cyber campaigns contain great potential for operational obfuscation and strategic misdirection. At the operational level, techniques for piggybacking onto routine activities and for adaptive evasion of security protocols add uncertainty, complicating the defensive mission, particularly where adversarial learning tools are employed in offense. Strategically, AI-enabled cyber operations cut in two directions. On the one hand, attempts to persistently shape the spectrum of cyber contention may be able to pursue conflict outcomes beyond the expected scope of adversary operations. On the other, AI-augmented cyber defenses incorporated into national defense postures are likely to be vulnerable to "poisoning" attacks that predict, manipulate and subvert the functionality of defensive algorithms. This article takes on two primary tasks. First, it considers and categorizes the primary ways in which AI technologies are likely to augment offensive cyber operations, including the shape of cyber activities designed to target AI systems. Then, it frames a discussion of implications for deterrence in cyberspace by referring to the policies of persistent engagement, agreed competition and forward defense promulgated in 2018 by the United States. Here, it is argued that the centrality of cyberspace to the deployment and operation of soon-to-be-ubiquitous AI systems implies new motivations for operation within the domain, complicating numerous assumptions that underlie current approaches. In particular, AI cyber operations pose unique measurement issues for the policy regime.
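The article itself contains no code; the following minimal Python sketch is not drawn from the paper and uses a hypothetical toy dataset and a simple logistic-regression "detector" purely to illustrate the kind of training-data "poisoning" the abstract refers to: relabeling a fraction of malicious training samples as benign degrades the detector's ability to flag malicious activity.

# Illustrative sketch only: not from the cited paper. A hypothetical toy detector
# (logistic regression over two synthetic "telemetry" features) is trained twice,
# once on clean labels and once after an attacker flips part of the malicious
# training labels to benign, to show how label poisoning erodes detection.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=1000):
    # Benign samples cluster near the origin; malicious samples are shifted.
    benign = rng.normal(0.0, 1.0, size=(n, 2))
    malicious = rng.normal(2.5, 1.0, size=(n, 2))
    X = np.vstack([benign, malicious])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    idx = rng.permutation(len(y))
    return X[idx], y[idx]

def train_logreg(X, y, lr=0.1, epochs=500):
    # Plain batch gradient descent on the logistic loss, bias folded into the weights.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def detection_rate(w, X, y):
    # Fraction of truly malicious test samples the detector flags.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    preds = (Xb @ w) > 0
    return preds[y == 1].mean()

X_train, y_train = make_data()
X_test, y_test = make_data()

w_clean = train_logreg(X_train, y_train)

# Poisoning step: relabel 40% of malicious training samples as benign, teaching
# the "defense" to wave similar malicious traffic through.
y_poisoned = y_train.copy()
malicious_idx = np.where(y_train == 1)[0]
flipped = rng.choice(malicious_idx, size=int(0.4 * len(malicious_idx)), replace=False)
y_poisoned[flipped] = 0
w_poisoned = train_logreg(X_train, y_poisoned)

print(f"clean detector    - malicious detection rate: {detection_rate(w_clean, X_test, y_test):.3f}")
print(f"poisoned detector - malicious detection rate: {detection_rate(w_poisoned, X_test, y_test):.3f}")

On this toy data the poisoned detector typically misses a noticeably larger share of malicious samples than the clean one; the exact numbers depend entirely on the assumed parameters above.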
Rough-and-Ready: A Policy Framework to Determine if Cyber Deterrence is Working or Failing. 2019 11th International Conference on Cyber Conflict (CyCon). 900:1–20.
2019. This paper addresses the recent shift in United States policy toward forward defense and deterrence, which aims to “intercept and halt” adversary cyber operations. Supporters believe these actions should significantly reduce attacks against the United States, while critics worry that they may incite more adversary activity. As there is no standard methodology to measure which is the case, this paper introduces a transparent framework to better assess whether the new U.S. policy and actions are suppressing or encouraging attacks. Determining correlation and causation will be difficult due to the hidden nature of cyber attacks, the veiled motivations of differing actors, and other factors. However, even if causation may never be clear, changes in the direction and magnitude of cyber attacks can be suggestive of the success or failure of these new policies, particularly as their proponents suggest they should be especially effective. Rough-and-ready metrics can be helpful to assess the impacts of policymaking, can lay the groundwork for more comprehensive measurements, and may also provide insight into academic theories of persistent engagement and deterrence.