Bibliography
As infamous hacker Kevin Mitnick describes in his book The Art of Deception, "the human factor is truly security's weakest link". Deception has been used with wide success by hackers for social engineering and by military strategists in kinetic warfare [26]. Deception shapes a target's beliefs, decisions, and behaviors. Similarly, deception is a powerful tool that cyber defenders should employ to protect their systems against humans who wish to penetrate, attack, and harm them.
Defender-attacker Stackelberg security games (SSGs) have been applied to solve many real-world security problems. Recent work in SSGs has incorporated a deceptive signaling scheme into the SSG model, in which the defender strategically reveals information about her defensive strategy to the attacker in order to influence the attacker's decision making for the defender's own benefit. In this work, we study the problem of signaling in security games against a boundedly rational attacker.
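To make the SSG setting concrete, the sketch below solves a two-target game by grid search. The payoff numbers and the solver are illustrative assumptions, not the paper's model, and the signaling layer is omitted; the sketch only shows the core commitment-and-best-response structure.

```python
# Minimal two-target Stackelberg security game (SSG) sketch.
# All payoffs below are hypothetical, chosen only to illustrate the model:
# the defender commits to a coverage probability, the attacker observes it
# and best-responds, and the defender optimizes against that response.

# Per-target payoffs: (defender if covered, defender if uncovered,
#                      attacker if covered, attacker if uncovered)
TARGETS = [
    (2.0, -5.0, -3.0, 4.0),   # target 0: high-value
    (1.0, -2.0, -1.0, 2.0),   # target 1: low-value
]

def solve(steps=1000):
    best_util, best_cov = float("-inf"), None
    for i in range(steps + 1):
        cov = (i / steps, 1.0 - i / steps)  # one defensive resource split across targets
        # Attacker's expected utility for each target under this coverage.
        att = [c * a_cov + (1 - c) * a_unc
               for c, (_, _, a_cov, a_unc) in zip(cov, TARGETS)]
        # Attacker best-responds; ties break by index here (a fine-grained
        # grid approximates the usual defender-favorable tie-breaking).
        t = max(range(len(TARGETS)), key=lambda j: att[j])
        d_cov, d_unc, _, _ = TARGETS[t]
        d_util = cov[t] * d_cov + (1 - cov[t]) * d_unc
        if d_util > best_util:
            best_util, best_cov = d_util, cov
    return best_cov, best_util

coverage, utility = solve()
print(f"coverage={coverage}, defender expected utility={utility:.3f}")
```

In the signaling extension this entry describes, the defender would additionally choose what information to reveal after deploying; that scheme sits on top of the committed strategy computed here.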
Most efforts to improve cyber security focus primarily on incorporating new technological approaches in products and processes. However, a key element of improvement involves acknowledging the importance of human behavior when designing, building, and using cyber security technology. In this survey paper, we describe why incorporating an understanding of human behavior into cyber security products and processes can lead to more effective technology. We present two examples: the first demonstrates how leveraging behavioral science has led to clear improvements, and the second illustrates how behavioral science offers the potential for significant increases in the effectiveness of cyber security. Based on feedback collected from practitioners in preliminary interviews, we narrow our focus to two important behavioral aspects: cognitive load and bias. Next, we identify proven and potential behavioral science findings that have cyber security relevance, not only related to cognitive load and bias but also to heuristics and behavioral science models. We conclude by suggesting several next steps for incorporating behavioral science findings in our technological design, development, and use.
In the wake of the recent shootings in El Paso, TX, and Dayton, OH, the Cybersecurity and Infrastructure Security Agency (CISA) advises users to watch out for possible malicious cyber activity seeking to capitalize on these tragic events. Users should exercise caution in handling emails related to the shootings, even if they appear to originate from trusted sources. It is common for hackers to capitalize on tragic events by launching phishing attacks.
Social engineering is the act of manipulating someone into a specific action through online deception. According to Norton, social engineering attempts typically take one of several forms, including phishing, impersonation, and various types of baiting. Social engineering attacks are on the rise, according to the FBI, which reportedly received some 20,373 complaints in 2018 alone, amounting to $1.2 billion in overall losses.
Google handles more than 90% of the world's online search queries, generating billions in advertising revenue, yet it has emerged that ad-supported Google Maps includes an estimated 11 million falsely listed businesses on any given day.
A malware attack that disrupted the opening ceremony of the 2018 Winter Olympics illustrates the use of false flag operations. The malware, called "Olympic Destroyer," contained code derived from well-known attacks launched by different hacking groups. This led different cybersecurity companies to variously accuse Russia, North Korea, Iran, and China.
According to a report released by Menlo Security, the padlock in a browser's URL bar gives users a false sense of security as cloud hosting services are being used by attackers to host malware droppers. The use of this tactic allows attackers to hide the origin of their attacks and further evade detection. The exploitation of trust is a major component of such attacks.
The curriculum includes "Don't Fall for Fake" activities centered on teaching children the critical thinking skills they need to tell credible news sources from non-credible ones.
Naturally, Grover is best at detecting its own fake articles, since in a way the model knows its own processes. But it can also detect those made by other models, such as OpenAI's GPT-2, with high accuracy.
Deepfakes' most menacing consequence is their ability to make us question what we are seeing. The more popular deepfake technology gets, the less we will be able to trust our own eyes.
Language scholars and machine learning specialists collaborated to create a new application that can detect Twitter bots independent of the language used. Detecting these bots will help decrease the spread of fake news.
The spread of deepfakes via social media platforms leads to disinformation and misinformation. There are ways in which governments and social media companies can prevent the spread of deepfakes.
Ad fraud is becoming more common among websites. It lets fraudsters generate revenue through fake traffic, fake clicks, and fake installs, and it can also help cybercriminals deploy malware on users' computers.
Dr. Margaret Cunningham, psychologist and Principal Research Scientist at Forcepoint, conducted a study examining the impact of six different unconscious human biases on decision-making in cybersecurity. She argues that awareness and understanding of cognitive biases in cybersecurity should be increased, both to reduce biased decision-making in activities such as threat analysis and to prevent the design of systems that perpetuate bias.
The Knight Foundation performed an analysis on the spread of fake news via Twitter before and after the 2016 U.S. election campaign. Evidence suggests that most accounts used to spread fake or conspiracy news during this time were bots or semi-automated accounts.
Mathematicians developed a mathematical model of the spread of fake news, which can be used to help lawmakers mitigate its impact.
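The article does not specify the model's form; as a hedged illustration, fake-news spread is often cast in epidemic-style terms, as in the minimal sketch below (the rates and initial conditions are made-up).

```python
# Illustrative SIR-style model of fake-news spread (an assumption; the
# mathematicians' actual model is not described in the article).
# S: users who haven't seen the story, I: users actively spreading it,
# R: users who saw a correction and stopped sharing.

beta = 0.4    # hypothetical spreading rate per contact per day
gamma = 0.1   # hypothetical correction/fact-checking rate per day

S, I, R = 0.99, 0.01, 0.0          # initial population fractions
dt, days = 0.1, 60.0

for _ in range(int(days / dt)):     # simple Euler integration
    new_spread = beta * S * I * dt
    new_corrected = gamma * I * dt
    S -= new_spread
    I += new_spread - new_corrected
    R += new_corrected

print(f"after {days:.0f} days: still spreading={I:.3f}, corrected={R:.3f}")
# Policy levers map onto the parameters: adding friction to sharing lowers
# beta, faster fact-checking raises gamma, and both shrink the peak.
```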
Twitter bots played a significant role in the spread of misinformation during the 2016 U.S. presidential election. People often deem messages trustworthy when they appear to be shared by many sources. The research behind this discovery highlights the amplification of misinformation through the use of bots.
This article explores how bad actors manipulate social media metadata to build more powerful disinformation campaigns. It argues that such campaigns can be detected and combatted by understanding data craft.
This article pertains to cognitive security. It's difficult to assess how effective YouTube's policies will be, as the company didn't specify how it plans to identify the offending videos, enforce the new rules, or punish offenders.
MIT researchers conducted a study in which they examined automated fake-news detection systems. The study highlights the need for more research into the effectiveness of fake-news detectors.
Researchers at the University of Michigan developed an algorithm-based system that can identify fake news stories based on linguistic cues. The system was found to be better at finding fakes than humans.
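As a rough illustration of classification from linguistic cues (not the Michigan team's actual features or training data, which the article does not detail), a bag-of-words baseline might look like this:

```python
# Toy linguistic-cue fake-news classifier: TF-IDF word/bigram features
# feeding a logistic regression. The training examples and labels are
# fabricated for illustration; real systems train on large labeled corpora.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Scientists publish peer-reviewed study on vaccine efficacy",
    "City council approves budget after public hearing",
    "SHOCKING secret THEY don't want you to know!!!",
    "Miracle cure doctors hate, share before it's deleted!",
]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = fake

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Hyperbole and clickbait phrasing should push this toward the fake class.
print(model.predict(["You won't BELIEVE this one weird trick!"]))
```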
A study by researchers at Ohio University calls for improved information literacy, as it found that most people do not take the time to verify whether information is accurate before sharing it on social media. The study uses information literacy factors and a theoretical lens to develop an understanding of why people share "fake news" on social media.
This article pertains to cognitive security. YouTube is going to remove videos that deny the Holocaust or other "well-documented violent events," as well as videos that glorify Nazi ideology or promote groups that claim superiority over others to justify discrimination.
The Anti-Defamation League (ADL) partnered with the Network Contagion Research Institute (NCRI) to examine the ways in which extremism and hate are spread on social media and to develop methods for combatting the spread of both.