Biblio

Filters: Keyword is artificial intelligence
2021-10-26
[Anonymous].  2021.  AI Next Campaign.

AI technologies have demonstrated great value to missions as diverse as space-based imagery analysis, cyberattack warning, supply chain logistics, and analysis of microbiologic systems. At the same time, the failure modes of AI technologies are poorly understood. DARPA is working to address this shortfall with focused R&D, both analytic and empirical. DARPA’s success is essential for the Department to deploy AI technologies, particularly to the tactical edge, where reliable performance is required.

2019-09-24
Federico Pistono, Roman V. Yampolskiy.  2016.  Unethical Research: How to Create a Malevolent Artificial Intelligence.

Cybersecurity research involves publishing papers about malicious exploits as much as publishing information on how to design tools to protect cyber-infrastructure. It is this information exchange between ethical hackers and security experts that results in a well-balanced cyber-ecosystem. In the blooming domain of AI Safety Engineering, hundreds of papers have been published on proposals aimed at the creation of a safe machine, yet nothing, to our knowledge, has been published on how to design a malevolent machine. Availability of such information would be of great value particularly to computer scientists, mathematicians, and others who have an interest in AI safety and who are attempting to avoid the spontaneous emergence or the deliberate creation of a dangerous AI, which could negatively affect human activities and, in the worst case, cause the complete obliteration of the human species. This paper provides some general guidelines for the creation of a Malevolent Artificial Intelligence (MAI).

2019-09-20
Sunny Fugate, Kimberly Ferguson-Walter.  2019.  Artificial Intelligence and Game Theory Models for Defending Critical Networks with Cyber Deception. AI Magazine. 40(1):49-62.

Traditional cyber security techniques have led to an asymmetric disadvantage for defenders. The defender must detect all possible threats at all times from all attackers and defend all systems against all possible exploitation. In contrast, an attacker needs only to find a single path to the defender's critical information. In this article, we discuss how this asymmetry can be rebalanced using cyber deception to change the attacker's perception of the network environment and lead attackers to false beliefs about which systems contain critical information or are critical to a defender's computing infrastructure. We introduce game theory concepts and models to represent and reason over the use of cyber deception by the defender and the effect it has on attacker perception. Finally, we discuss techniques for combining artificial intelligence algorithms with game theory models to estimate hidden states of the attacker, using feedback through payoffs to learn how best to defend the system using cyber deception. It is our opinion that adaptive cyber deception is a necessary component of future information systems and networks. The techniques we present can simultaneously decrease the risks and impacts suffered by defenders and dramatically increase the costs and risks of detection for attackers. Such techniques are likely to play a pivotal role in defending national and international security concerns.
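
The interaction the article describes can be illustrated with a toy model. The sketch below is an assumption-laden illustration, not the authors' model: a defender keeps a Bayesian belief over a hidden attacker type, updates it from observed signals, and picks the deception action with the highest expected payoff. All type names, likelihoods, and payoff numbers are made up for the example.

```python
# Minimal sketch (not the paper's model): a defender maintains a belief over
# a hidden attacker type, updates it from observed signals, and chooses the
# cyber-deception action with the highest expected payoff. All names and
# numbers below are illustrative assumptions.

TYPES = ["scripted", "targeted"]  # hidden attacker types

# P(loud probe observed | attacker type) -- assumed signal likelihoods.
LIKELIHOOD = {"scripted": 0.8, "targeted": 0.2}

# Defender payoff[action][attacker type] -- assumed values.
PAYOFF = {
    "honeypot": {"scripted": 5.0, "targeted": 2.0},
    "real":     {"scripted": -1.0, "targeted": -6.0},
}

def update_belief(prior, saw_loud_probe):
    """One Bayes update of P(type) after observing a network signal."""
    post = {}
    for t in TYPES:
        like = LIKELIHOOD[t] if saw_loud_probe else 1.0 - LIKELIHOOD[t]
        post[t] = prior[t] * like
    z = sum(post.values())
    return {t: p / z for t, p in post.items()}

def best_action(belief):
    """Deception action maximizing expected defender payoff under the belief."""
    expected = lambda a: sum(belief[t] * PAYOFF[a][t] for t in TYPES)
    return max(PAYOFF, key=expected)

belief = {"scripted": 0.5, "targeted": 0.5}  # uninformative prior
for signal in (True, True, False):           # a toy observation stream
    belief = update_belief(belief, signal)
    print(f"belief={belief}  ->  act={best_action(belief)}")
```

As the belief shifts toward a noisy, scripted adversary, the honeypot becomes the higher-value play; the article's emphasis is on learning such policies from payoff feedback rather than hand-coding them.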

2019-09-12
Patricia L. McDermott, Cynthia O. Dominguez, Nicholas Kasdaglis, Matthew H. Ryan, Isabel M. Trahan, Alexander Nelson.  2018.  Human-Machine Teaming Systems Engineering Guide.

With the explosion of Automation, Autonomy, and AI technology development today, amid encouragement to put humans at the center of AI, systems engineers and user story/requirements developers need research-based guidance on how to design for human-machine teaming (HMT). Insights from more than two decades of human-automation interaction research, applied in the systems engineering process, provide building blocks for designing automation, autonomy, and AI-based systems that are effective teammates for people.

The HMT Systems Engineering Guide provides this guidance based on a 2016-17 literature search and analysis of applied research. The guide offers a framework organizing HMT research, along with a methodology for engaging with users of a system to elicit user stories and/or requirements that reflect applied research findings. The framework uses the organizing themes of Observability, Predictability, Directing Attention, Exploring the Solution Space, Directability, Adaptability, Common Ground, Calibrated Trust, Design Process, and Information Presentation.

The guide includes practice-oriented resources that can be used to bridge the gap between research and design, including a tailorable HMT Knowledge Audit interview methodology, step-by-step instructions for planning and conducting data collection sessions, and a set of general cognitive interface requirements that can be adapted to specific applications based on the domain-specific data collected.
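
As a rough illustration of how the framework's themes might be operationalized during requirements elicitation, the sketch below encodes the ten themes as a checklist generator. The audit questions are illustrative assumptions, not text from the guide.

```python
# Illustrative only: the guide's ten organizing themes rendered as a simple
# requirements-elicitation checklist. Each question is an assumed example,
# not taken from the guide itself.
HMT_THEMES = {
    "Observability": "Can the user see what the automation is doing and why?",
    "Predictability": "Can the user anticipate the automation's next actions?",
    "Directing Attention": "Does the system cue the user to what matters now?",
    "Exploring the Solution Space": "Can the user compare alternative courses of action?",
    "Directability": "Can the user redirect the automation mid-task?",
    "Adaptability": "Does system behavior adjust to changing context?",
    "Common Ground": "Do human and machine share task status and intent?",
    "Calibrated Trust": "Does the system convey when it is or is not reliable?",
    "Design Process": "Were users engaged throughout design iterations?",
    "Information Presentation": "Is information displayed in a directly usable form?",
}

def audit_checklist(system_name: str) -> list[str]:
    """One theme-tagged prompt per theme for a knowledge-audit session."""
    return [f"[{theme}] {system_name}: {q}" for theme, q in HMT_THEMES.items()]

for item in audit_checklist("UAV ground station"):
    print(item)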

2019-09-11
Devin Coldewey.  2019.  To Detect Fake News, This AI First Learned to Write It. TechCrunch.

Naturally, Grover is best at detecting its own fake articles, since in a way the agent knows its own processes. But it can also detect those made by other models, such as OpenAI's GPT-2, with high accuracy.
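
Grover pairs its generator with a discriminator, which is not reproduced here; but the underlying intuition, that a language model's own statistics are a strong signal for spotting machine-generated text, can be sketched with an off-the-shelf model. Below, text that GPT-2 finds unusually predictable (low perplexity) is flagged as possibly machine-written; the model choice and threshold are assumptions for illustration, not Grover's method.

```python
# Illustrative detect-by-perplexity sketch (not Grover's actual detector):
# score text with a pretrained LM; suspiciously low perplexity suggests the
# text may be machine-generated. Threshold and model are assumed choices.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means more 'model-like'."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean per-token cross-entropy
    return float(torch.exp(loss))

THRESHOLD = 25.0  # assumed cutoff; in practice tuned on labeled examples
sample = "The quick brown fox jumps over the lazy dog."
ppl = perplexity(sample)
verdict = "machine-suspect" if ppl < THRESHOLD else "human-like"
print(f"perplexity={ppl:.1f} -> {verdict}")
```

A full detector like Grover goes further, training a discriminator against generator outputs, which is why it is strongest on articles from its own generator.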

Clint Watts.  2019.  The National Security Challenges of Artificial Intelligence, Manipulated Media, and 'Deepfakes'. Foreign Policy Research Institute.

The spread of deepfakes via social media platforms leads to disinformation and misinformation. There are ways in which the government and social media companies can work to prevent deepfakes.

2019-09-10
[Anonymous].  2019.  Can AI help to end fake news? Horizon Magazine.

Artificial intelligence (AI) has been used to generate deepfakes. However, researchers have shown that AI can also be used to fight misinformation.