Biblio
Artificial intelligence (AI) has been used to generate deepfakes. However, researchers have shown that AI can also be used to fight misinformation.
Experts at Ohio State University suggest that policymakers and diplomats further explore the psychological aspects of disinformation campaigns in order to stop nation-states from spreading false information on social media platforms. More focus needs to be placed on why people fall for "fake news".
This article pertains to cognitive security. Twitter accounts deployed by Russia's troll factory in 2016 did not only spread disinformation meant to influence the U.S. presidential election; a small handful also tried to make money.
Social media echo chambers, in which beliefs are significantly amplified and opposing views are easily blocked, can have real-life consequences. Communication between groups should still take place despite differences in views. The growth of echo chambers around the anti-vaccination movement has been blamed on those who seek to profit from ignorance and fear.
An MIT study suggests the use of crowdsourcing to devalue false news stories and misinformation online. Despite differences in political opinion, groups across the spectrum agree that fake and hyperpartisan sites are untrustworthy.
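As a rough illustration of how such crowdsourced ratings might be put to work, the sketch below averages layperson trust ratings per domain and uses them to down-rank stories in a feed. The domains, ratings, and weighting scheme are hypothetical, not the study's actual method.

```python
# Minimal sketch: aggregate crowdsourced trust ratings per news domain and
# scale a story's relevance by its source's mean rating. All data and the
# weighting scheme are hypothetical illustrations.
from statistics import mean

# Each rater scores a domain's trustworthiness from 0 (untrustworthy) to 1.
ratings = {
    "example-news.com":  [0.9, 0.8, 0.85, 0.9],
    "hyperpartisan.net": [0.2, 0.3, 0.1, 0.25],
}

def trust_score(domain: str) -> float:
    """Mean layperson rating; unknown domains get a neutral 0.5."""
    return mean(ratings[domain]) if domain in ratings else 0.5

def rank_feed(stories):
    """Sort (headline, domain, relevance) tuples by relevance scaled by trust."""
    return sorted(stories, key=lambda s: s[2] * trust_score(s[1]), reverse=True)

feed = [
    ("Vaccine study replicated", "example-news.com", 0.7),
    ("Shocking secret cure!", "hyperpartisan.net", 0.9),
]
for headline, domain, _ in rank_feed(feed):
    print(f"{trust_score(domain):.2f}  {headline}")
```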
A surge in deepfakes is expected during the 2020 presidential campaign. According to experts, little has been done to prepare for fake videos that depict candidates unfavorably in order to sway public perception.
This article pertains to cognitive security and human behavior. Social media users with ties to Iran are shifting their disinformation efforts by imitating real people, including U.S. congressional candidates.
Avaaz, a U.K.-based global citizen activist organization, conducted an investigation that revealed the spread of disinformation within Europe via Facebook ahead of the EU elections. According to Avaaz, the pages it identified were posting false and misleading content. These disinformation networks are considered weapons because of their size and complexity.
Facebook's response to an altered video of Nancy Pelosi has sparked a debate as to whether social media platforms should take down videos that are considered to be "fake". The definition of "fake" is also discussed.
This article pertains to cognitive security. The Microsoft Edge mobile browser will use software called NewsGuard, which rates sites on a variety of criteria including their use of deceptive headlines, whether they repeatedly publish false content, and their transparency regarding ownership and financing.
This article pertains to cognitive security. To help sort fake news from truth, programmers are building automated systems that judge the veracity of online stories.
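The article does not detail these systems' internals. One common approach is supervised text classification, sketched below with scikit-learn on a toy labeled corpus; the stories and labels are invented stand-ins, and real systems train on large fact-checked datasets and combine many more signals (source reputation, propagation patterns, metadata).

```python
# Minimal sketch of one common approach to automated veracity scoring:
# a supervised text classifier over story text. The tiny corpus below is
# a toy stand-in, not a real training set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

stories = [
    "Scientists publish peer-reviewed study on vaccine safety",
    "Official statistics show unemployment fell last quarter",
    "Miracle cure doctors don't want you to know about",
    "Shocking secret proves the moon landing was a hoax",
]
labels = [1, 1, 0, 0]  # 1 = credible, 0 = fake/hyperpartisan

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(stories, labels)

# Higher probability = judged more credible by the toy model.
print(model.predict_proba(["Secret miracle cure shocks doctors"])[:, 1])
```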
This article pertains to cognitive security. False news is a growing problem. A study found that a crowdsourcing approach could help detect fake news sources.
This article pertains to cognitive security. Older users shared more fake news than younger ones regardless of education, sex, race, income, or how many links they shared. In fact, age predicted this behavior better than any other characteristic, including party affiliation.
Researchers have developed a machine learning model of the protein interaction network to explore how viruses operate. This research can be applied to other types of attacks and network models across different fields, including network security. It has also been used to explore how trolls and bots influence users on social media platforms.
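The paper's model is not reproduced here. As a loose analogy for influence spreading over a network, the sketch below runs a minimal independent-cascade simulation in which a hypothetical seed account propagates influence through a follower graph; the graph, seed, and activation probability are all invented for illustration.

```python
# Loose analogy only: a minimal independent-cascade simulation of how a few
# seed accounts (e.g., bots) might spread influence through a follower graph.
# The graph, seeds, and activation probability are hypothetical.
import random

followers = {  # edge u -> v means v sees u's posts
    "bot1": ["a", "b"], "a": ["c", "d"], "b": ["d"], "c": [], "d": ["e"], "e": [],
}

def cascade(seeds, p=0.5, seed=0):
    """Each newly influenced node influences each follower once with prob p."""
    rng = random.Random(seed)
    influenced, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nxt in followers.get(node, []):
            if nxt not in influenced and rng.random() < p:
                influenced.add(nxt)
                frontier.append(nxt)
    return influenced

print(cascade(["bot1"]))
```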
This article pertains to cognitive security. There are deep concerns about the growing ability to create deepfakes, and about their malicious use to change how people perceive a public figure.
Computational models of cognitive processes may be employed in cyber-security tools, experiments, and simulations to address human agency and effective decision-making in keeping computational networks secure. Cognitive modeling can address multi-disciplinary cyber-security challenges requiring cross-cutting approaches over the human and computational sciences, such as the following: (a) adversarial reasoning and behavioral game theory to predict attacker subjective utilities and decision likelihood distributions, (b) human factors of cyber tools to address human-system integration challenges, estimation of defender cognitive states, and opportunities for automation, (c) dynamic simulations involving attacker, defender, and user models to enhance studies of cyber epidemiology and cyber hygiene, and (d) training effectiveness research and training scenarios to address human cyber-security performance, maturation of cyber-security skill sets, and effective decision-making. Models may initially be constructed at the group level from mean tendencies of each subject's subgroup, using known statistics such as specific skill proficiencies, demographic characteristics, and cultural factors. For more precise and accurate predictions, cognitive models may be fine-tuned to each individual attacker, defender, or user profile, and updated over time (based on recorded behavior) via techniques such as model tracing and dynamic parameter fitting.
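As a concrete illustration of the "dynamic parameter fitting" step, the sketch below fits a softmax decision-noise parameter (temperature) to one subject's recorded choices by maximum likelihood. The utilities, the choice log, and the grid search are all hypothetical, not taken from the abstract.

```python
# Minimal sketch of fine-tuning a cognitive model to an individual: fit a
# softmax temperature to a subject's recorded choices by maximum likelihood.
# Utilities and the observed choice log are hypothetical.
import math

utilities = {"patch": 2.0, "ignore": 1.0}         # model's utility estimates
observed = ["patch", "patch", "ignore", "patch"]  # subject's recorded choices

def choice_prob(action, temp):
    """Softmax choice rule: higher temp = noisier, more random choices."""
    z = sum(math.exp(u / temp) for u in utilities.values())
    return math.exp(utilities[action] / temp) / z

def log_likelihood(temp):
    return sum(math.log(choice_prob(a, temp)) for a in observed)

# Grid search over temperature; a finer optimizer works the same way.
best = max((t / 10 for t in range(1, 50)), key=log_likelihood)
print(f"fitted temperature: {best:.1f}")
```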
To improve cyber defense, researchers have developed algorithms to allocate limited defense resources optimally. Through signaling theory, we have learned that deceptive signals can be used to trick the human mind. The present work is an initial step toward developing a psychological theory of cyber deception. We use simulations to investigate how humans might make decisions under various conditions of deceptive signals in cyber-attack scenarios. We created an Instance-Based Learning (IBL) model of attacker decisions using the ACT-R cognitive architecture. We ran simulations against the optimal deceptive signaling algorithm and against four alternative deceptive signaling schemes. Our results show that the optimal deceptive algorithm is more effective at reducing the probability of attack and protecting assets than the other signaling conditions, but it is not perfect. These results shed some light on the expected effectiveness of deceptive signals for defense. The implications of these findings are discussed.
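The paper's IBL model is built in the ACT-R architecture; the sketch below captures only the core IBL idea, activation-weighted blending over past instances, with hypothetical decay, noise, actions, and payoffs, and is not the authors' model.

```python
# Loose sketch of an Instance-Based Learning (IBL) attacker choice in the
# spirit of ACT-R activation and blending. Parameters and payoffs are
# hypothetical; the paper's model is far richer.
import math, random

d, sigma = 0.5, 0.25   # decay and activation-noise parameters (assumed)
memory = []            # stored instances: (timestamp, action, outcome)
rng = random.Random(0)

def activation(t, now):
    """Base-level activation of one instance, plus Gaussian noise."""
    return math.log((now - t) ** -d) + rng.gauss(0, sigma)

def blended_value(action, now):
    """Past outcomes of this action, weighted by retrieval probability."""
    inst = [(t, o) for t, a, o in memory if a == action]
    if not inst:
        return 1.0  # optimistic default encourages initial exploration
    acts = [activation(t, now) for t, _ in inst]
    z = sum(math.exp(a) for a in acts)
    return sum(math.exp(a) / z * o for a, (_, o) in zip(acts, inst))

for now in range(1, 20):
    action = max(["attack", "withdraw"], key=lambda a: blended_value(a, now))
    # A (hypothetical) deceptive signal makes attacking costly half the time.
    outcome = rng.choice([10, -20]) if action == "attack" else 0
    memory.append((now, action, outcome))
print(memory[-3:])
```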
Inverting human factors can aid cyber defense: well-known guidelines can be flipped and used to degrade and disrupt the performance of a cyber attacker. There has been significant research on how cyber defense tasks are performed and how information should be presented to operators, cyber defenders, and analysts to make them more efficient and effective. The same adverse conditions that research teaches us to mitigate for defenders can be created just as easily for attackers. Oppositional human factors are a new way to apply well-known research on human attention allocation to disrupt potential cyber attackers and provide much-needed asymmetric benefits to the defender.
Since the concept of deception for cybersecurity was introduced decades ago, several primitive systems, such as honeypots, have been attempted. More recently, research on adaptive cyber defense techniques has gained momentum. The new research interests in this area motivate us to provide a high-level overview of cyber deception. We analyze potential strategies of cyber deception and its unique aspects. We discuss the research challenges of creating effective cyber deception-based techniques and identify future research directions.
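As a concrete example of the "primitive" deception systems mentioned above, the sketch below is a minimal honeypot: a listener on an unused port that logs connection attempts and presents a fake service banner. The port and banner are arbitrary choices; real honeypots emulate full services.

```python
# Minimal honeypot sketch: listen on an unused port, log every connection
# attempt, and present a fake SSH banner. Port and banner are arbitrary.
import socket, datetime

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", 2222))  # unused port masquerading as SSH
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            print(f"{datetime.datetime.now().isoformat()} probe from {addr}")
            conn.sendall(b"SSH-2.0-OpenSSH_7.4\r\n")  # fake service banner
```

Even this trivial version yields the core deception benefit: any traffic it receives is, by construction, suspicious.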
Technology’s role in the fight against malicious cyber-attacks is critical to the increasingly networked world of today. Yet, technology does not exist in isolation: the human factor is an aspect of cyber-defense operations with increasingly recognized importance. Thus, the human factors community has a unique responsibility to help create and validate cyber defense systems according to basic principles and design philosophy. Concurrently, the collective science must advance. These goals are not mutually exclusive pursuits; therefore, toward both ends, this research provides cyber-cognitive links between cyber defense challenges and major human factors and ergonomics (HFE) research areas that offer solutions and instructive paths forward. In each area, there exist cyber research opportunities and realms of core HFE science to explore. We present the cyber defense domain to the HFE community at large as a sprawling area for scientific discovery and contribution.
Alex Endert's dissertation "Semantic Interaction for Visual Analytics: Inferring Analytical Reasoning for Model Steering" described semantic interaction, a user interaction methodology for visual analytics (VA). It showed that user interaction embodies users' analytic process and can thus be mapped to model-steering functionality for "human-in-the-loop" system design. The dissertation contributed a framework (or pipeline) that describes such a process, a prototype VA system to test semantic interaction, and a user evaluation to demonstrate semantic interaction's impact on the analytic process. This research is influencing current VA research and has implications for future VA research.
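A heavily simplified sketch of the semantic-interaction idea follows: when a user drags two documents together in the spatial layout, the system boosts the weights of terms they share, so the similarity model absorbs the user's implicit reasoning. The documents, terms, and update rule are hypothetical simplifications, not the dissertation's actual pipeline.

```python
# Minimal sketch of semantic interaction: a drag gesture steers the
# underlying similarity model. Documents and the update rule are invented.
docs = {
    "doc1": {"bank", "fraud", "wire"},
    "doc2": {"bank", "fraud", "offshore"},
    "doc3": {"soccer", "league"},
}
weights = {t: 1.0 for terms in docs.values() for t in terms}

def similarity(a, b):
    """Weighted term overlap between two documents."""
    return sum(weights[t] for t in docs[a] & docs[b])

def drag_together(a, b, boost=1.5):
    """User dragged a and b closer: upweight the terms they share."""
    for t in docs[a] & docs[b]:
        weights[t] *= boost

print("before:", similarity("doc1", "doc2"))
drag_together("doc1", "doc2")
print("after: ", similarity("doc1", "doc2"))
```

Because the boosted weights are global, other document pairs sharing those terms also move closer, which is how the interaction generalizes into model steering rather than a one-off layout tweak.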
Dagger is a modeling and visualization framework that addresses the challenge of representing knowledge and information for decision-makers, enabling them to better comprehend the operational context of network security data. It allows users to answer critical questions such as “Given that I care about mission X, is there any reason I should be worried about what is going on in cyberspace?” or “If this system fails, will I still be able to accomplish my mission?”.
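The quoted mission question reduces to a query over a dependency graph. The sketch below shows one way such a query might look, using a hypothetical AND/OR dependency structure; it is an illustration of the question, not Dagger's implementation.

```python
# Minimal sketch of a mission-to-asset dependency query: can the mission
# still be accomplished if a given system fails? Graph is hypothetical.
depends_on = {
    "mission-X": [["comms", "database"]],    # needs both comms AND database
    "comms":     [["sat-link"], ["fiber"]],  # sat-link OR fiber suffices
    "database":  [["server-1"], ["server-2"]],
}

def operational(node, failed):
    """A node works if it hasn't failed and some AND-group of deps works."""
    if node in failed:
        return False
    groups = depends_on.get(node)
    if not groups:  # leaf asset with no dependencies
        return True
    return any(all(operational(d, failed) for d in grp) for grp in groups)

print(operational("mission-X", failed={"server-1"}))           # True
print(operational("mission-X", failed={"sat-link", "fiber"}))  # False
```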
We propose 10 challenges for making automation components into effective "team players" when they interact with people in significant ways. Our analysis is based on some of the principles of human-centered computing that we have developed individually and jointly over the years, and is adapted from a more comprehensive examination of common ground and coordination.
Coactive Design is a new approach to address the increasingly sophisticated roles that people and robots play as the use of robots expands into new, complex domains. The approach is motivated by the desire for robots to perform less like teleoperated tools or independent automatons and more like interdependent teammates. In this article, we describe what it means to be interdependent, why this is important, and the design implications that follow from this perspective. We argue for a human-robot system model that supports interdependence through careful attention to requirements for observability, predictability, and directability. We present a Coactive Design method and show how it can be a useful approach for developers trying to understand how to translate high-level teamwork concepts into reusable control algorithms, interface elements, and behaviors that enable robots to fulfill their envisioned role as teammates. As an example of the coactive design approach, we present our results from the DARPA Virtual Robotics Challenge, a competition designed to spur development of advanced robots that can assist humans in recovering from natural and man-made disasters. Twenty-six teams from eight countries competed in three different tasks, providing an excellent evaluation of the relative effectiveness of different approaches to human-machine system design.