Biblio

Found 116 results

Filters: Keyword is Cognitive Security
2019-09-10
Peter Dizikes.  2019.  Could this be the solution to stop the spread of fake news? World Economic Forum.

This article pertains to cognitive security. False news is a growing problem. A study found that a crowdsourcing approach could help detect fake news sources.

Casey Newton.  2019.  People older than 65 share the most fake news, a new study finds. The Verge.

This article pertains to cognitive security. Older users shared more fake news than younger ones regardless of education, sex, race, income, or how many links they shared. In fact, age predicted their behavior better than any other characteristic -- including party affiliation.

[Anonymous].  2019.  From viruses to social bots, researchers unearth the structure of attacked networks. Science Daily.

Researchers have developed a machine learning model of the protein interaction network to explore how viruses operate. This research can be applied to different types of attacks and network models across different fields, including network security. The research has also been used to explore how trolls and bots influence users on social media platforms.

Gregory Barber.  2019.  Deepfakes Are Getting Better, But They're Still Easy to Spot. Wired.

This article pertains to cognitive security. There are deep concerns about the growing ability to create deepfakes. There is also deep concern about the malicious use of deepfakes to change how people see a public figure.

2019-09-09
Veksler, Vladislav D., Buchler, Norbou, Hoffman, Blaine E., Cassenti, Daniel N., Sample, Char, Sugrim, Shridat.  2018.  Simulations in Cyber-Security: A Review of Cognitive Modeling of Network Attackers, Defenders, and Users. Frontiers in Psychology. 9:691.

Computational models of cognitive processes may be employed in cyber-security tools, experiments, and simulations to address human agency and effective decision-making in keeping computational networks secure. Cognitive modeling can address multi-disciplinary cyber-security challenges requiring cross-cutting approaches over the human and computational sciences such as the following: (a) adversarial reasoning and behavioral game theory to predict attacker subjective utilities and decision likelihood distributions, (b) human factors of cyber tools to address human system integration challenges, estimation of defender cognitive states, and opportunities for automation, (c) dynamic simulations involving attacker, defender, and user models to enhance studies of cyber epidemiology and cyber hygiene, and (d) training effectiveness research and training scenarios to address human cyber-security performance, maturation of cyber-security skill sets, and effective decision-making. Models may be initially constructed at the group-level based on mean tendencies of each subject's subgroup, based on known statistics such as specific skill proficiencies, demographic characteristics, and cultural factors. For more precise and accurate predictions, cognitive models may be fine-tuned to each individual attacker, defender, or user profile, and updated over time (based on recorded behavior) via techniques such as model tracing and dynamic parameter fitting.
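
As a rough illustration of the dynamic parameter fitting the abstract mentions, the following Python sketch tunes a single decision-noise parameter to one individual's recorded choices; the softmax choice rule, the temperature parameter, and all names are illustrative assumptions rather than the paper's actual cognitive models.

import math

def choice_prob(utility_attack, utility_wait, temperature):
    # Softmax probability of choosing "attack" over "wait" given two subjective
    # utilities. The softmax rule and single temperature parameter are assumed
    # for illustration; the ACT-R-style models the paper reviews are far richer.
    a = math.exp(utility_attack / temperature)
    w = math.exp(utility_wait / temperature)
    return a / (a + w)

def fit_temperature(observations, candidates=(0.25, 0.5, 1.0, 2.0, 4.0)):
    # Pick the candidate temperature that maximizes the likelihood of one
    # individual's recorded choices, a crude stand-in for fine-tuning a
    # cognitive model to a specific attacker, defender, or user over time.
    def log_likelihood(t):
        total = 0.0
        for utility_attack, utility_wait, chose_attack in observations:
            p = choice_prob(utility_attack, utility_wait, t)
            total += math.log(p if chose_attack else 1.0 - p)
        return total
    return max(candidates, key=log_likelihood)

# Recorded behavior of one hypothetical attacker:
# (utility of attacking, utility of waiting, whether they attacked).
history = [(1.0, 0.5, True), (0.2, 0.8, False), (0.9, 0.4, True)]
print("fitted temperature:", fit_temperature(history))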

Edward A. Cranford, Christian Lebiere, Cleotilde Gonzalez, Sarah Cooney, Phebe Vayanos, Milind Tambe.  2018.  Learning about Cyber Deception through Simulations: Predictions of Human Decision Making with Deceptive Signals in Stackelberg Security Games. CogSci.

To improve cyber defense, researchers have developed algorithms to allocate limited defense resources optimally. Through signaling theory, we have learned that it is possible to trick the human mind when using deceptive signals. The present work is an initial step towards developing a psychological theory of cyber deception. We use simulations to investigate how humans might make decisions under various conditions of deceptive signals in cyber-attack scenarios. We created an Instance-Based Learning (IBL) model of the attacker decisions using the ACT-R cognitive architecture. We ran simulations against the optimal deceptive signaling algorithm and against four alternative deceptive signal schemes. Our results show that the optimal deceptive algorithm is more effective at reducing the probability of attack and protecting assets compared to other signaling conditions, but it is not perfect. These results shed some light on the expected effectiveness of deceptive signals for defense. The implications of these findings are discussed. 
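
As a loose, simplified sketch of the instance-based learning idea behind such attacker models (not the authors' ACT-R implementation; the decay parameter, activation weighting, and toy payoffs below are all assumptions), past outcomes can be blended by memory activation to produce a simulated decision.

import math

DECAY = 0.5  # assumed memory decay parameter

def activation(presentation_times, now):
    # ACT-R-style base-level activation from the times an instance was experienced.
    return math.log(sum((now - t) ** (-DECAY) for t in presentation_times))

def blended_value(instances, now):
    # Blend stored outcomes, weighting each instance by its activation
    # (a simplification of retrieval probability in full IBL models).
    weights = [math.exp(activation(times, now)) for times, _ in instances]
    total = sum(weights)
    return sum((w / total) * outcome for w, (_, outcome) in zip(weights, instances))

# Toy instances for an attacker deciding whether to attack a possibly
# deceptive (honeypot-signaled) node: (presentation times, experienced payoff).
attack_instances = [([1, 4], 10.0), ([2], -5.0)]  # success vs. caught by the defender
wait_instances = [([3], 0.0)]

now = 6
decision = "attack" if blended_value(attack_instances, now) > blended_value(wait_instances, now) else "wait"
print("simulated decision:", decision)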

Gutzwiller, Robert, Ferguson-Walter, Kimberly, Fugate, Sunny, Rogers, Andrew.  2018.  “Oh, look, a butterfly!” A framework for distracting attackers to improve cyber defense.

Inverting human factors can aid in cyber defense by flipping well-known guidelines and using them to degrade and disrupt the performance of a cyber attacker. There has been significant research on how we perform cyber defense tasks and how we should present information to operators, cyber defenders, and analysts to make them more efficient and more effective. We can create these performance-degrading situations for attackers just as easily as we can mitigate them for defenders. Oppositional human factors are a new way to apply well-known research on human attention allocation to disrupt potential cyber attackers and provide much-needed asymmetric benefits to the defender.

C. Wang, Z. Lu.  2018.  Cyber Deception: Overview and the Road Ahead. IEEE Security & Privacy. 16:80-85.

Since the concept of deception for cybersecurity was introduced decades ago, several primitive systems, such as honeypots, have been attempted. More recently, research on adaptive cyber defense techniques has gained momentum. The new research interests in this area motivate us to provide a high-level overview of cyber deception. We analyze potential strategies of cyber deception and its unique aspects. We discuss the research challenges of creating effective cyber deception-based techniques and identify future research directions.

E. Peterson.  2016.  Dagger: Modeling and visualization for mission impact situation awareness. MILCOM 2016 - 2016 IEEE Military Communications Conference. :25-30.

Dagger is a modeling and visualization framework that addresses the challenge of representing knowledge and information for decision-makers, enabling them to better comprehend the operational context of network security data. It allows users to answer critical questions such as “Given that I care about mission X, is there any reason I should be worried about what is going on in cyberspace?” or “If this system fails, will I still be able to accomplish my mission?”.

2019-09-06
Lily Hay Newman.  2019.  To Fight Deepfakes, Researchers Built a Smarter Camera. Wired.

This article pertains to cognitive security. Detecting manipulated photos, or "deepfakes," can be difficult. Deepfakes have become a major concern as their use in disinformation campaigns, social media manipulation, and propaganda grows worldwide.

Lily Hay Newman.  2019.  Facebook Removes a Fresh Batch of Innovative, Iran-Linked Fake Accounts. Wired.

This article pertains to cognitive security and human behavior. Facebook announced a recent takedown of 51 Facebook accounts, 36 Facebook pages, seven Facebook groups and three Instagram accounts that it says were all involved in coordinated "inauthentic behavior." Facebook says the activity originated geographically from Iran.

Pawel Korus, Nasir Memon.  2019.  Outsmarting deep fakes: AI-driven imaging system protects authenticity. Science Daily.

Researchers at the NYU Tandon School of Engineering developed a technique to counter sophisticated methods of altering photos and videos to produce deep fakes, which are often weaponized to influence people. The technique uses artificial intelligence (AI) to determine the authenticity of images and videos.

2018-08-06
B. Biggio, G. Fumera, P. Russu, L. Didaci, F. Roli.  2015.  Adversarial Biometric Recognition: A review on biometric system security from the adversarial machine-learning perspective. IEEE Signal Processing Magazine. 32:31-41.

In this article, we review previous work on biometric security under a recent framework proposed in the field of adversarial machine learning. This allows us to highlight novel insights on the security of biometric systems when operating in the presence of intelligent and adaptive attackers that manipulate data to compromise normal system operation. We show how this framework enables the categorization of known and novel vulnerabilities of biometric recognition systems, along with the corresponding attacks, countermeasures, and defense mechanisms. We report two application examples, respectively showing how to fabricate a more effective face spoofing attack, and how to counter an attack that exploits an unknown vulnerability of an adaptive face-recognition system to compromise its face templates.

Y. Cao, J. Yang.  2015.  Towards Making Systems Forget with Machine Unlearning. 2015 IEEE Symposium on Security and Privacy. :463-480.

Today's systems produce a rapidly exploding amount of data, and the data further derives more data, forming a complex data propagation network that we call the data's lineage. There are many reasons that users want systems to forget certain data including its lineage. From a privacy perspective, users who become concerned with new privacy risks of a system often want the system to forget their data and lineage. From a security perspective, if an attacker pollutes an anomaly detector by injecting manually crafted data into the training data set, the detector must forget the injected data to regain security. From a usability perspective, a user can remove noise and incorrect entries so that a recommendation engine gives useful recommendations. Therefore, we envision forgetting systems, capable of forgetting certain data and their lineages, completely and quickly. This paper focuses on making learning systems forget, the process of which we call machine unlearning, or simply unlearning. We present a general, efficient unlearning approach by transforming learning algorithms used by a system into a summation form. To forget a training data sample, our approach simply updates a small number of summations – asymptotically faster than retraining from scratch. Our approach is general, because the summation form is from the statistical query learning in which many machine learning algorithms can be implemented. Our approach also applies to all stages of machine learning, including feature selection and modeling. Our evaluation, on four diverse learning systems and real-world workloads, shows that our approach is general, effective, fast, and easy to use.
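
To make the summation idea concrete, here is a minimal, hedged sketch: a toy count-based classifier (not any of the paper's evaluated systems; all names and values are illustrative) whose entire state is a set of sums over training samples, so forgetting a sample means subtracting its contribution rather than retraining from scratch.

from collections import defaultdict

class CountModel:
    # Toy classifier whose state is a set of summations over the training data,
    # so a sample (e.g. attacker-injected poison) can be forgotten by
    # subtracting its contribution.
    def __init__(self):
        self.class_counts = defaultdict(int)
        self.feature_counts = defaultdict(int)  # keyed by (label, feature)

    def learn(self, features, label):
        self.class_counts[label] += 1
        for f in features:
            self.feature_counts[(label, f)] += 1

    def unlearn(self, features, label):
        # Forget one training sample in time proportional to its feature count,
        # instead of retraining on the whole remaining data set.
        self.class_counts[label] -= 1
        for f in features:
            self.feature_counts[(label, f)] -= 1

    def score(self, features, label):
        # Unnormalized score: class count times smoothed feature counts.
        return self.class_counts[label] * sum(self.feature_counts[(label, f)] + 1 for f in features)

model = CountModel()
model.learn({"ReadFile", "Connect"}, "benign")
model.learn({"Connect", "WriteRemote"}, "anomalous")
model.unlearn({"Connect", "WriteRemote"}, "anomalous")  # forget the injected sample
print(model.class_counts["anomalous"])  # back to 0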

L. Chen, Y. Ye, T. Bourlai.  2017.  Adversarial Machine Learning in Malware Detection: Arms Race between Evasion Attack and Defense. 2017 European Intelligence and Security Informatics Conference (EISIC). :99-106.

Since malware has caused serious damages and evolving threats to computer and Internet users, its detection is of great interest to both anti-malware industry and researchers. In recent years, machine learning-based systems have been successfully deployed in malware detection, in which different kinds of classifiers are built based on the training samples using different feature representations. Unfortunately, as classifiers become more widely deployed, the incentive for defeating them increases. In this paper, we explore the adversarial machine learning in malware detection. In particular, on the basis of a learning-based classifier with the input of Windows Application Programming Interface (API) calls extracted from the Portable Executable (PE) files, we present an effective evasion attack model (named EvnAttack) by considering different contributions of the features to the classification problem. To be resilient against the evasion attack, we further propose a secure-learning paradigm for malware detection (named SecDefender), which not only adopts classifier retraining technique but also introduces the security regularization term which considers the evasion cost of feature manipulations by attackers to enhance the system security. Comprehensive experimental results on the real sample collections from Comodo Cloud Security Center demonstrate the effectiveness of our proposed methods.
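
As a hedged illustration of the general idea of feature-manipulation evasion against a linear malware classifier over API-call features (the weights, API names, and greedy strategy below are assumptions, not EvnAttack or SecDefender themselves):

def greedy_evasion(api_calls, weights, budget):
    # Remove or disguise the API-call features that contribute most strongly
    # to a "malware" score, up to a manipulation budget, then rescore.
    present = set(api_calls)
    ranked = sorted(present, key=lambda f: weights.get(f, 0.0), reverse=True)
    for f in ranked[:budget]:
        if weights.get(f, 0.0) > 0:
            present.discard(f)
    score = sum(weights.get(f, 0.0) for f in present)
    return present, score

# Illustrative linear weights learned over API-call features (assumed values).
weights = {"CreateRemoteThread": 2.0, "WriteProcessMemory": 1.5, "ReadFile": -0.5}
sample = ["CreateRemoteThread", "WriteProcessMemory", "ReadFile"]

evaded, score = greedy_evasion(sample, weights, budget=2)
print(sorted(evaded), score, "detected" if score > 0 else "evades")

As the abstract notes, the proposed SecDefender defense pairs classifier retraining with a security regularization term that accounts for the attacker's cost of such feature manipulations.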