Biblio
This article pertains to cognitive security. There is deep concern about the growing ability to create deepfakes, and about their malicious use to manipulate how people perceive a public figure.
Computational models of cognitive processes may be employed in cyber-security tools, experiments, and simulations to address human agency and effective decision-making in keeping computational networks secure. Cognitive modeling can address multi-disciplinary cyber-security challenges that require cross-cutting approaches spanning the human and computational sciences, such as: (a) adversarial reasoning and behavioral game theory to predict attacker subjective utilities and decision likelihood distributions; (b) human factors of cyber tools, addressing human-system integration challenges, estimation of defender cognitive states, and opportunities for automation; (c) dynamic simulations involving attacker, defender, and user models to enhance studies of cyber epidemiology and cyber hygiene; and (d) training-effectiveness research and training scenarios addressing human cyber-security performance, maturation of cyber-security skill sets, and effective decision-making. Models may initially be constructed at the group level from the mean tendencies of each subject's subgroup, based on known statistics such as specific skill proficiencies, demographic characteristics, and cultural factors. For more precise and accurate predictions, cognitive models may be fine-tuned to each individual attacker, defender, or user profile and updated over time, based on recorded behavior, via techniques such as model tracing and dynamic parameter fitting.
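The dynamic parameter fitting mentioned above can be illustrated with a minimal sketch: a softmax choice model with a single "rationality" parameter (here called `beta`) is fitted by maximum likelihood to an individual's recorded decisions, then refitted as new behavior is observed. The utilities, choices, and function names below are illustrative assumptions, not part of the framework the entry describes.

```python
import math

def softmax_probs(utilities, beta):
    """Probability of each available action under a softmax decision rule."""
    exps = [math.exp(beta * u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def neg_log_likelihood(observations, beta):
    """Negative log-likelihood of observed (utilities, chosen_index) pairs."""
    nll = 0.0
    for utilities, chosen in observations:
        nll -= math.log(softmax_probs(utilities, beta)[chosen])
    return nll

def fit_beta(observations, grid=None):
    """Grid-search maximum-likelihood estimate of beta over (0, 5]."""
    grid = grid or [i / 10 for i in range(1, 51)]
    return min(grid, key=lambda b: neg_log_likelihood(observations, b))

# Each observation: (subjective utilities of the actions offered, action taken).
history = [([1.0, 0.2, 0.5], 0), ([0.3, 0.9, 0.1], 1)]
beta_hat = fit_beta(history)          # initial individual-level estimate
history.append(([0.4, 0.4, 1.2], 2))  # newly recorded behavior
beta_hat = fit_beta(history)          # updated estimate
```

In practice the per-individual parameter would be re-estimated each time new behavior is logged, which is the sense in which the fit is "dynamic"; richer cognitive architectures fit many parameters jointly, but the update loop has the same shape.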
Dagger is a modeling and visualization framework that addresses the challenge of representing knowledge and information for decision-makers, enabling them to better comprehend the operational context of network security data. It allows users to answer critical questions such as “Given that I care about mission X, is there any reason I should be worried about what is going on in cyberspace?” or “If this system fails, will I still be able to accomplish my mission?”.
This article pertains to cognitive security. Detecting manipulated photos, or "deepfakes," can be difficult. Deepfakes have become a major concern as their use in disinformation campaigns, social media manipulation, and propaganda grows worldwide.
This article pertains to cognitive security and human behavior. Facebook recently announced the takedown of 51 Facebook accounts, 36 Facebook pages, seven Facebook groups, and three Instagram accounts that it says were all involved in coordinated "inauthentic behavior." Facebook says the activity originated geographically from Iran.
Researchers at the NYU Tandon School of Engineering developed a technique to counter sophisticated methods of altering photos and videos to produce deepfakes, which are often weaponized to influence people. The technique uses artificial intelligence (AI) to determine the authenticity of images and videos.
In this article, we review previous work on biometric security under a recent framework proposed in the field of adversarial machine learning. This allows us to highlight novel insights on the security of biometric systems when operating in the presence of intelligent and adaptive attackers that manipulate data to compromise normal system operation. We show how this framework enables the categorization of known and novel vulnerabilities of biometric recognition systems, along with the corresponding attacks, countermeasures, and defense mechanisms. We report two application examples, respectively showing how to fabricate a more effective face spoofing attack, and how to counter an attack that exploits an unknown vulnerability of an adaptive face-recognition system to compromise its face templates.