Biblio

Filters: Keyword is Articles of Interest

2019-09-09
Veksler, Vladislav D., Buchler, Norbou, Hoffman, Blaine E., Cassenti, Daniel N., Sample, Char, Sugrim, Shridat.  2018.  Simulations in Cyber-Security: A Review of Cognitive Modeling of Network Attackers, Defenders, and Users. Frontiers in Psychology. 9:691.

Computational models of cognitive processes may be employed in cyber-security tools, experiments, and simulations to address human agency and effective decision-making in keeping computational networks secure. Cognitive modeling can address multi-disciplinary cyber-security challenges requiring cross-cutting approaches over the human and computational sciences such as the following: (a) adversarial reasoning and behavioral game theory to predict attacker subjective utilities and decision likelihood distributions, (b) human factors of cyber tools to address human system integration challenges, estimation of defender cognitive states, and opportunities for automation, (c) dynamic simulations involving attacker, defender, and user models to enhance studies of cyber epidemiology and cyber hygiene, and (d) training effectiveness research and training scenarios to address human cyber-security performance, maturation of cyber-security skill sets, and effective decision-making. Models may be initially constructed at the group level from the mean tendencies of each subject's subgroup, using known statistics such as specific skill proficiencies, demographic characteristics, and cultural factors. For more precise and accurate predictions, cognitive models may be fine-tuned to each individual attacker, defender, or user profile, and updated over time (based on recorded behavior) via techniques such as model tracing and dynamic parameter fitting.
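
As an illustration of the individual-level tuning the abstract describes, the sketch below fits a single free parameter of a toy softmax decision model to one person's recorded choices by maximum likelihood. The decision rule, the utilities, and the data are assumptions made for this example, not material from the paper.

```python
# Illustrative sketch (not from the paper): fitting one cognitive-model
# parameter to an individual's observed choices, in the spirit of the
# "dynamic parameter fitting" the abstract mentions.
import numpy as np
from scipy.optimize import minimize_scalar

# Subjective utilities of two actions (e.g., "patch now" vs. "defer")
# on each of four observed decisions -- hypothetical numbers.
utilities = np.array([[1.0, 0.4],
                      [0.2, 0.9],
                      [0.8, 0.7],
                      [0.3, 1.1]])
choices = np.array([0, 1, 0, 1])  # which action the person actually took

def neg_log_likelihood(temperature):
    """Negative log-likelihood of the choices under a softmax rule."""
    scaled = utilities / temperature
    probs = np.exp(scaled) / np.exp(scaled).sum(axis=1, keepdims=True)
    return -np.log(probs[np.arange(len(choices)), choices]).sum()

# Fit the temperature to this individual; refitting as new behavior is
# recorded is one simple way to update the model over time.
fit = minimize_scalar(neg_log_likelihood, bounds=(0.01, 10.0), method="bounded")
print(f"fitted decision noise (temperature): {fit.x:.3f}")
```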

E. Peterson.  2016.  Dagger: Modeling and visualization for mission impact situation awareness. MILCOM 2016 - 2016 IEEE Military Communications Conference. :25-30.

Dagger is a modeling and visualization framework that addresses the challenge of representing knowledge and information for decision-makers, enabling them to better comprehend the operational context of network security data. It allows users to answer critical questions such as “Given that I care about mission X, is there any reason I should be worried about what is going on in cyberspace?” or “If this system fails, will I still be able to accomplish my mission?”
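
A minimal sketch of the kind of mission-dependency reasoning such questions imply, assuming each mission task lists the systems able to support it. All names are hypothetical, and this is not Dagger's actual data model.

```python
# Toy mission-impact check: "if this system fails, can I still
# accomplish my mission?" Each task maps to the set of systems that
# can support it; one surviving system per task keeps the task alive.
mission_tasks = {
    "collect_intel": {"sensor_net", "backup_sensor"},   # either suffices
    "relay_orders": {"comms_sat"},
    "plan_logistics": {"logistics_db", "manual_process"},
}

def mission_feasible(failed_systems):
    """A task survives if at least one supporting system is still up;
    the mission survives only if every task does."""
    return all(task_systems - failed_systems
               for task_systems in mission_tasks.values())

print(mission_feasible({"backup_sensor"}))  # True: sensor_net still covers the task
print(mission_feasible({"comms_sat"}))      # False: relay_orders has no fallback
```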

2019-09-06
Gregory Barber.  2019.  Deepfakes Are Getting Better, But They're Still Easy to Spot. Wired.

This article pertains to cognitive security. There is deep concern about the growing ability to create deepfakes, as well as about their malicious use to change how people perceive public figures.

Lily Hay Newman.  2019.  To Fight Deepfakes, Researchers Built a Smarter Camera. Wired.

This article pertains to cognitive security. Detecting manipulated photos, or "deepfakes," can be difficult. Deepfakes have become a major concern as their use in disinformation campaigns, social media manipulation, and propaganda grows worldwide.

Lily Hay Newman.  2019.  Facebook Removes a Fresh Batch of Innovative, Iran-Linked Fake Accounts. Wired.

This article pertains to cognitive security and human behavior. Facebook announced the recent takedown of 51 Facebook accounts, 36 Facebook pages, seven Facebook groups, and three Instagram accounts, all of which it says were involved in coordinated "inauthentic behavior" originating from Iran.

Pawel Korus, Nasir Memon.  2019.  Outsmarting deep fakes: AI-driven imaging system protects authenticity. Science Daily.

Researchers at the NYU Tandon School of Engineering developed a technique to counter the sophisticated alteration of photos and videos used to produce deep fakes, which are often weaponized to influence people. Their technique uses artificial intelligence (AI) to determine the authenticity of images and videos.

2018-08-06
B. Biggio, G. Fumera, P. Russu, L. Didaci, F. Roli.  2015.  Adversarial Biometric Recognition: A review on biometric system security from the adversarial machine-learning perspective. IEEE Signal Processing Magazine. 32:31-41.

In this article, we review previous work on biometric security under a recent framework proposed in the field of adversarial machine learning. This allows us to highlight novel insights on the security of biometric systems when operating in the presence of intelligent and adaptive attackers that manipulate data to compromise normal system operation. We show how this framework enables the categorization of known and novel vulnerabilities of biometric recognition systems, along with the corresponding attacks, countermeasures, and defense mechanisms. We report two application examples, respectively showing how to fabricate a more effective face spoofing attack, and how to counter an attack that exploits an unknown vulnerability of an adaptive face-recognition system to compromise its face templates.
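
The adversarial machine-learning framework the review builds on characterizes an attack by the attacker's goal, knowledge, and capability. The sketch below is my own illustration of that categorization, not code from the paper; the class layout and wording are assumptions.

```python
# Hedged sketch of the attacker model used in adversarial ML frameworks:
# an attack is described along three axes -- goal, knowledge, capability.
from dataclasses import dataclass
from enum import Enum

class Goal(Enum):
    INTEGRITY = "cause misclassification of attack samples"
    AVAILABILITY = "degrade the system for legitimate users"
    PRIVACY = "recover confidential information, e.g. stored templates"

class Knowledge(Enum):
    PERFECT = "full knowledge of features, model, and data"
    LIMITED = "partial or surrogate knowledge"

class Capability(Enum):
    EXPLORATORY = "manipulate test-time samples only"
    CAUSATIVE = "also manipulate training data"

@dataclass
class AttackModel:
    goal: Goal
    knowledge: Knowledge
    capability: Capability

# A face-spoofing attempt is a test-time integrity violation:
spoofing = AttackModel(Goal.INTEGRITY, Knowledge.LIMITED, Capability.EXPLORATORY)
print(spoofing)
```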

Y. Cao, J. Yang.  2015.  Towards Making Systems Forget with Machine Unlearning. 2015 IEEE Symposium on Security and Privacy. :463-480.

Today's systems produce a rapidly exploding amount of data, and the data further derives more data, forming a complex data propagation network that we call the data's lineage. There are many reasons that users want systems to forget certain data including its lineage. From a privacy perspective, users who become concerned with new privacy risks of a system often want the system to forget their data and lineage. From a security perspective, if an attacker pollutes an anomaly detector by injecting manually crafted data into the training data set, the detector must forget the injected data to regain security. From a usability perspective, a user can remove noise and incorrect entries so that a recommendation engine gives useful recommendations. Therefore, we envision forgetting systems, capable of forgetting certain data and their lineages, completely and quickly. This paper focuses on making learning systems forget, the process of which we call machine unlearning, or simply unlearning. We present a general, efficient unlearning approach by transforming learning algorithms used by a system into a summation form. To forget a training data sample, our approach simply updates a small number of summations – asymptotically faster than retraining from scratch. Our approach is general, because the summation form is from the statistical query learning in which many machine learning algorithms can be implemented. Our approach also applies to all stages of machine learning, including feature selection and modeling. Our evaluation, on four diverse learning systems and real-world workloads, shows that our approach is general, effective, fast, and easy to use.
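
As one concrete instance of the summation-form idea, the sketch below uses ordinary least squares, whose sufficient statistics are plain sums over the training data; forgetting a sample subtracts its contribution from those sums instead of retraining from scratch. The model choice and synthetic data are assumptions for illustration, not the paper's evaluation systems.

```python
# Summation-form unlearning for OLS: the statistics A = sum(x x^T) and
# b = sum(x * y) fully determine the model, so removing one sample is an
# O(d^2) update to the sums rather than a pass over all n samples.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=1000)

# "Learning": accumulate the summations once.
A = X.T @ X
b = X.T @ y
theta = np.linalg.solve(A, b)

def unlearn(A, b, x, y_val):
    """Remove one training sample (x, y_val) from the summations."""
    A = A - np.outer(x, x)
    b = b - x * y_val
    return A, b, np.linalg.solve(A, b)

A, b, theta_forgotten = unlearn(A, b, X[0], y[0])

# Sanity check: matches retraining from scratch without the first sample.
theta_retrained = np.linalg.lstsq(X[1:], y[1:], rcond=None)[0]
assert np.allclose(theta_forgotten, theta_retrained)
```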

L. Chen, Y. Ye, T. Bourlai.  2017.  Adversarial Machine Learning in Malware Detection: Arms Race between Evasion Attack and Defense. 2017 European Intelligence and Security Informatics Conference (EISIC). :99-106.

Since malware has caused serious damages and evolving threats to computer and Internet users, its detection is of great interest to both anti-malware industry and researchers. In recent years, machine learning-based systems have been successfully deployed in malware detection, in which different kinds of classifiers are built based on the training samples using different feature representations. Unfortunately, as classifiers become more widely deployed, the incentive for defeating them increases. In this paper, we explore the adversarial machine learning in malware detection. In particular, on the basis of a learning-based classifier with the input of Windows Application Programming Interface (API) calls extracted from the Portable Executable (PE) files, we present an effective evasion attack model (named EvnAttack) by considering different contributions of the features to the classification problem. To be resilient against the evasion attack, we further propose a secure-learning paradigm for malware detection (named SecDefender), which not only adopts classifier retraining technique but also introduces the security regularization term which considers the evasion cost of feature manipulations by attackers to enhance the system security. Comprehensive experimental results on the real sample collections from Comodo Cloud Security Center demonstrate the effectiveness of our proposed methods.
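
The sketch below illustrates a generic feature-manipulation evasion attack of this flavor against a linear classifier over binary API-call features: it hides the calls that contribute most to the malicious score, subject to an edit budget. It is a simplified stand-in for EvnAttack, with hypothetical weights and budget; the SecDefender side (retraining plus a security regularization term reflecting the attacker's manipulation cost) is omitted.

```python
# Greedy evasion sketch: flip the present features with the largest
# positive weights until the sample scores benign or the budget runs out.
import numpy as np

weights = np.array([1.8, 0.9, -0.3, 1.2, 0.1])  # hypothetical learned weights
bias = -1.0
x = np.array([1, 1, 0, 1, 1])                    # API calls present in a PE file

def evade(x, weights, bias, budget):
    """Hide the highest-contribution API calls until the sample is
    classified benign (score <= 0) or the modification budget is spent."""
    x = x.copy()
    # Present features that push the score toward "malicious",
    # ordered by how much removing them lowers the score.
    candidates = sorted(np.flatnonzero((x == 1) & (weights > 0)),
                        key=lambda i: -weights[i])
    for i in candidates[:budget]:
        x[i] = 0
        if x @ weights + bias <= 0:
            break
    return x

x_adv = evade(x, weights, bias, budget=2)
print(x_adv, "score:", x_adv @ weights + bias)
```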