Bibliography
AI technologies have demonstrated great value in missions as diverse as space-based imagery analysis, cyberattack warning, supply-chain logistics, and the analysis of microbiological systems. At the same time, the failure modes of AI technologies remain poorly understood. DARPA is working to address this shortfall with focused analytic and empirical R&D. DARPA’s success is essential if the Department is to deploy AI technologies, particularly at the tactical edge, where reliable performance is required.
Traditional cyber security techniques have led to an asymmetric disadvantage for defenders. The defender must detect all possible threats at all times from all attackers and defend all systems against all possible exploitation. In contrast, an attacker needs only to find a single path to the defender's critical information. In this article, we discuss how this asymmetry can be rebalanced using cyber deception to change the attacker's perception of the network environment and lead attackers to false beliefs about which systems contain critical information or are critical to a defender's computing infrastructure. We introduce game theory concepts and models to represent and reason over the use of cyber deception by the defender and the effect it has on attacker perception. Finally, we discuss techniques for combining artificial intelligence algorithms with game theory models to estimate hidden states of the attacker, using feedback through payoffs, to learn how best to defend the system using cyber deception. It is our opinion that adaptive cyber deception is a necessary component of future information systems and networks. The techniques we present can simultaneously decrease the risks and impacts suffered by defenders and dramatically increase the costs and risks of detection for attackers. Such techniques are likely to play a pivotal role in defending national and international security concerns.
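To make the interplay between game-theoretic reasoning and learning concrete, here is a minimal sketch, not the authors' model: a repeated deception game in which the defender maintains a Bayesian belief over a hidden attacker type and updates that belief from noisy payoff feedback. The type names, actions, payoff values, and noise model are all illustrative assumptions.

```python
# Minimal sketch (not the article's model): a defender estimates a hidden
# attacker type from payoff feedback via Bayes' rule, then greedily picks
# the deception action with the highest expected payoff under its belief.
import random

TYPES = ["cautious", "aggressive"]          # hidden attacker types (assumed)
ACTIONS = ["no_deception", "honeypot"]      # defender actions (assumed)

# Defender payoff[type][action]: honeypots pay off against aggressive
# attackers but cost more than they return against cautious ones.
PAYOFF = {
    "cautious":   {"no_deception": -1.0, "honeypot": -2.0},
    "aggressive": {"no_deception": -5.0, "honeypot": +3.0},
}

def likelihood(obs, atk_type, action, noise=1.0):
    """Crude P(observed payoff | type, action): closer to the expected
    payoff for that type means more likely."""
    diff = abs(obs - PAYOFF[atk_type][action])
    return max(1e-9, 1.0 - min(diff / (4 * noise), 0.999))

belief = {t: 0.5 for t in TYPES}            # uniform prior over attacker types
true_type = random.choice(TYPES)            # unknown to the defender

for round_ in range(20):
    # Act greedily on the current belief (a richer policy could be swapped in).
    action = max(ACTIONS,
                 key=lambda a: sum(belief[t] * PAYOFF[t][a] for t in TYPES))
    # Observe a noisy payoff generated by the true, hidden attacker type.
    obs = PAYOFF[true_type][action] + random.gauss(0, 1.0)
    # Bayesian update of the belief from the payoff feedback.
    post = {t: belief[t] * likelihood(obs, t, action) for t in TYPES}
    z = sum(post.values())
    belief = {t: p / z for t, p in post.items()}

print(f"true type: {true_type}, learned belief: "
      + ", ".join(f"{t}={p:.2f}" for t, p in belief.items()))
```

In a real deployment the belief would range over a much richer hidden attacker state and the greedy rule would be replaced by a strategy computed from the game model, but the payoff-feedback loop the article describes has the same shape.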
Naturally, Grover is best at detecting its own fake articles, since the model in effect knows its own generative process. But it can also detect articles produced by other models, such as OpenAI's GPT-2, with high accuracy.
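As a minimal sketch of that intuition, not Grover's actual discriminator (Grover fine-tunes the generator itself as a classifier), one can score text with a public generator's own likelihood: text to which GPT-2 assigns unusually low perplexity is a hint of machine generation. The snippet below uses Hugging Face's transformers library; the threshold value is an assumption that would need tuning on labeled data.

```python
# Illustrative only: flag text whose perplexity under GPT-2 is suspiciously
# low, on the intuition that a generator "knows" its own output distribution.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2 (lower = more GPT-2-like)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)  # loss = mean NLL
    return torch.exp(out.loss).item()

THRESHOLD = 40.0  # assumed cutoff, not a published value
article = "The quick brown fox jumps over the lazy dog."
if perplexity(article) < THRESHOLD:
    print("possibly machine-generated")
else:
    print("likely human-written")
```

Perplexity thresholding alone is a weak detector; Grover's key result is that fine-tuning the generator as a discriminator works far better, which this sketch does not attempt.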
Artificial intelligence (AI) has been used to generate deep fakes. However, researchers have shown that AI can also be used to fight the resulting misinformation.
Researchers at the NYU Tandon School of Engineering have developed a technique to counter the sophisticated alteration of photos and videos used to produce deep fakes, which are often weaponized to influence people. Their technique uses artificial intelligence (AI) to determine the authenticity of images and videos.