Biblio


2019-09-12
Kimberly Ferguson-Walter, Sunny Fugate, Justin Mauger, Maxine Major.  2019.  Game Theory for Adaptive Defensive Cyber Deception. ACM Digital Library.

As infamous hacker Kevin Mitnick describes in his book The Art of Deception, "the human factor is truly security's weakest link". Deception has been widely successful when used by hackers for social engineering and by military strategists in kinetic warfare [26]. Deception affects a human's beliefs, decisions, and behaviors. Similarly, deception is a powerful tool that cyber defenders should employ to protect our systems against the humans who wish to penetrate, attack, and harm them.

Sarah Cooney, Phebe Vayanos, Thanh H. Nguyen, Cleotilde Gonzalez, Christian Lebiere, Edward A. Cranford, Milind Tambe.  2019.  Warning Time: Optimizing Strategic Signaling for Security Against Boundedly Rational Adversaries. Teamcore, USC.

Defender-attacker Stackelberg security games (SSGs) have been applied for solving many real-world security problems. Recent work in SSGs has incorporated a deceptive signaling scheme into the SSG model, where the defender strategically reveals information about her defensive strategy to the attacker, in order to influence the attacker’s decision making for the defender’s own benefit. In this work, we study the problem of signaling in security games against a boundedly rational attacker. 

Shari Lawrence Pfleeger, Deanna Caputo.  2012.  Leveraging behavioral science to mitigate cyber security risk. Science Direct. 31(4):597-611.

Most efforts to improve cyber security focus primarily on incorporating new technological approaches in products and processes. However, a key element of improvement involves acknowledging the importance of human behavior when designing, building and using cyber security technology. In this survey paper, we describe why incorporating an understanding of human behavior into cyber security products and processes can lead to more effective technology. We present two examples: the first demonstrates how leveraging behavioral science leads to clear improvements, and the other illustrates how behavioral science offers the potential for significant increases in the effectiveness of cyber security. Based on feedback collected from practitioners in preliminary interviews, we narrow our focus to two important behavioral aspects: cognitive load and bias. Next, we identify proven and potential behavioral science findings that have cyber security relevance, not only related to cognitive load and bias but also to heuristics and behavioral science models. We conclude by suggesting several next steps for incorporating behavioral science findings in our technological design, development and use. 

2019-09-11
[Anonymous].  2019.  El Paso and Dayton Tragedy-Related Scams and Malware Campaigns. CISA.

In the wake of the recent shootings in El Paso, TX, and Dayton, OH, the Cybersecurity and Infrastructure Security Agency (CISA) advises users to watch out for possible malicious cyber activity seeking to capitalize on these tragic events. Users should exercise caution in handling emails related to the shootings, even if they appear to originate from trusted sources, as it is common for attackers to exploit tragic events to conduct phishing attacks.

Lucas Ropek.  2019.  Social Engineering Attack Nets $1.7M in Government Funds. Government Technology.

Social engineering is the act of manipulating someone into a specific action through online deception. According to Norton, social engineering attempts typically take one of several forms, including phishing, impersonation, and various types of baiting. Social engineering attacks are on the rise, according to the FBI, which reportedly received some 20,373 complaints in 2018 alone, amounting to $1.2 billion in overall losses.

[Anonymous].  2019.  Millions of fake businesses list on Google Maps. WARC News.

Google handles more than 90% of the world's online search queries, generating billions in advertising revenue, yet it has emerged that ad-supported Google Maps includes an estimated 11 million falsely listed businesses on any given day.

Chris Bing.  2018.  Winter Olympics hack shows how advanced groups can fake attribution. Cyber Scoop.

A malware attack that disrupted the opening ceremony of the 2018 Winter Olympics highlights the problem of false flag operations. The malware, called "Olympic Destroyer", contained code derived from other well-known attacks launched by different hacking groups, which led different cybersecurity companies to attribute it variously to Russia, North Korea, Iran, or China.

James Sanders.  2018.  Attackers are using cloud services to mask attack origin and build false trust. Tech Republic.

According to a report released by Menlo Security, the padlock in a browser's URL bar gives users a false sense of security as cloud hosting services are being used by attackers to host malware droppers. The use of this tactic allows attackers to hide the origin of their attacks and further evade detection. The exploitation of trust is a major component of such attacks.

Nicole Lee.  2019.  Google’s new curriculum teaches kids how to detect disinformation. Engadget.

The curriculum includes "Don't Fall for Fake" activities centered on teaching children the critical thinking skills they need to tell credible news sources from non-credible ones.

Devin Coldewey.  2019.  To Detect Fake News, This AI First Learned to Write it. Tech Crunch.

Grover is an AI model that learned to generate fake news articles in order to detect them. Naturally, Grover is best at detecting its own fake articles, since in a way the agent knows its own processes. But it can also detect those made by other models, such as OpenAI's GPT2, with high accuracy.

Caleb Townsend.  2019.  Deepfake Technology: Implications for the Future. U.S. Cybersecurity Magazine.

Deepfakes' most menacing consequence is their ability to make us question what we are seeing. The more popular deepfake technology gets, the less we will be able to trust our own eyes.

[Anonymous].  2019.  Researchers develop app to detect Twitter bots in any language. Help Net Security.

Language scholars and machine learning specialists collaborated to create a new application that can detect Twitter bots independent of the language used. The detection of bots will help in decreasing the spread of fake news.

Clint Watts.  2019.  The National Security Challenges of Artificial Intelligence, Manipulated Media, and 'Deepfakes'. Foreign Policy Research Institute.

The spread of deepfakes via social media platforms fuels disinformation and misinformation. There are ways in which the government and social media companies can act to prevent the spread of deepfakes.

2019-09-10
[Anonymous].  2019.  What is digital ad fraud and how does it work? Cyware.

Ad fraud is becoming more common among websites. It lets fraudsters generate revenue through fake traffic, fake clicks, and fake installs, and it can also help cybercriminals deploy malware on users' computers.

Zeljka Zorz.  2019.  How human bias impacts cybersecurity decision making. Help Net Security.

Dr. Margaret Cunningham, psychologist and Principal Research Scientist at Forcepoint, conducted a study examining the impact of six different unconscious human biases on decision-making in cybersecurity. Awareness and understanding of cognitive biases in cybersecurity should be increased in order to reduce biased decision-making in activities such as threat analysis and to prevent the design of systems that perpetuate biases.

[Anonymous].  2018.  Disinformation, 'Fake News' and Influence Campaigns on Twitter. Knight Foundation.

The Knight Foundation performed an analysis on the spread of fake news via Twitter before and after the 2016 U.S. election campaign. Evidence suggests that most accounts used to spread fake or conspiracy news during this time were bots or semi-automated accounts.

Dorje Brody, David Meier.  2018.  Mathematicians to Help Solve the Fake News Voting Conundrum. University of Surrey News.

Mathematicians have developed a mathematical model of the spread of fake news. The model can be used to help lawmakers mitigate the impact of fake news.

Filippo Menczer.  2018.  Study: Twitter bots played disproportionate role spreading misinformation during 2016 election. News at IU Bloomington.

Twitter bots played a significant role in the spread of misinformation during the 2016 U.S. presidential election. People often deem messages trustworthy when they appear to be shared by many sources. The research behind this discovery highlights the amplification of misinformation through the use of bots.

Amelia Acker.  2018.  Data craft: the manipulation of social media metadata. Analysis and Policy Observatory.

This report explores how bad actors manipulate social media metadata to build more powerful disinformation campaigns. It argues that such campaigns can be detected and combatted by understanding data craft.

Paris Martineau.  2019.  YouTube Is Banning Extremist Videos. Will It Work? Wired.

This article pertains to cognitive security. It's difficult to assess how effective YouTube's policies will be, as the company didn't specify how it plans to identify the offending videos, enforce the new rules, or punish offenders.

[Anonymous].  2019.  Peering under the hood of fake-news detectors. Science Daily.

MIT researchers conducted a study in which they examined automated fake-news detection systems. The study highlights the need for more research into the effectiveness of fake-news detectors.

Rada Mihalcea.  2018.  Fake news detector algorithm works better than a human. University of Michigan News.

Researchers at the University of Michigan developed an algorithm-based system that can identify fake news stories based on linguistic cues. The system was found to be better at finding fakes than humans.

Mikaela Ashburn.  2019.  Ohio University study states that information literacy must be improved to stop spread of ‘fake news’. Ohio University News.

A study by researchers at Ohio University calls for the improvement of information literacy, finding that most people do not take time to verify whether information is accurate before sharing it on social media. The study uses information literacy factors and a theoretical lens to develop an understanding of why people share "fake news" on social media.

Paresh Dave.  2019.  https://www.reuters.com/article/us-alphabet-youtube-hatespeech-idUSKCN1T623X. Reuters.

This article pertains to cognitive security. YouTube is going to remove videos that deny the Holocaust and other "well-documented violent events", as well as videos that glorify Nazi ideology or that promote groups claiming superiority over others to justify discrimination.

[Anonymous].  2019.  ADL Partners with Network Contagion Research Institute to Study How Hate and Extremism Spread on Social Media. ADL.

The Anti-Defamation League (ADL) partnered with the Network Contagion Research Institute (NCRI) to examine the ways in which extremism and hate spread on social media, and to develop methods for combatting the spread of both.