Biblio

Found 107 results

Filters: Keyword is Cognitive Security in Cyber
2019-09-11
Clint Watts.  2019.  The National Security Challenges of Artificial Intelligence, Manipulated Media, and 'Deepfakes'. Foreign Policy Research Institute.

The spread of deepfakes via social media platforms fuels disinformation and misinformation. There are ways in which governments and social media companies can work to prevent deepfakes.

2019-09-10
[Anonymous].  2019.  What is digital ad fraud and how does it work? Cyware.

Ad fraud is becoming more common on websites. It lets fraudsters generate revenue through fake traffic, fake clicks, and fake installs, and can also help cybercriminals deploy malware on users' computers.

Zeljka Zorz.  2019.  How human bias impacts cybersecurity decision making. Help Net Security.

Psychologist and Principal Research Scientist at Forcepoint, Dr. Margaret Cunningham, conducted a study examining the impact of six different unconscious human biases on decision-making in cybersecurity. Awareness and understanding of cognitive biases in cybersecurity should be increased in order to reduce biased decision-making in activities such as threat analysis and to prevent the design of systems that perpetuate bias.

[Anonymous].  2018.  Disinformation, 'Fake News' and Influence Campaigns on Twitter. Knight Foundation.

The Knight Foundation performed an analysis on the spread of fake news via Twitter before and after the 2016 U.S. election campaign. Evidence suggests that most accounts used to spread fake or conspiracy news during this time were bots or semi-automated accounts.

Dorje Brody, David Meier.  2018.  Mathematicians to Help Solve the Fake News Voting Conundrum. University of Surrey News.

Mathematicians have developed a mathematical model of the spread of fake news. This model can be used to help lawmakers mitigate the impact of fake news.

Filippo Menczer.  2018.  Study: Twitter bots played disproportionate role spreading misinformation during 2016 election. News at IU Bloomington.

Twitter bots played a significant role in the spread of misinformation during the 2016 U.S. presidential election. People often deem messages trustworthy when they appear to be shared by many sources. The research behind this discovery highlights the amplification of misinformation through the use of bots.

Amelia Acker.  2018.  Data craft: the manipulation of social media metadata. Analysis and Policy Observatory.

This report explores how bad actors manipulate social media metadata to build more powerful disinformation campaigns. It argues that disinformation campaigns can be detected and combatted by understanding data craft.

Paris Martineau.  2019.  YouTube Is Banning Extremist Videos. Will It Work? Wired.

This article pertains to cognitive security. It's difficult to assess how effective YouTube's policies will be, as the company didn't specify how it plans to identify the offending videos, enforce the new rules, or punish offenders.

[Anonymous].  2019.  Peering under the hood of fake-news detectors. Science Daily.

MIT researchers conducted a study in which they examined automated fake-news detection systems. The study highlights the need for more research into the effectiveness of fake-news detectors.

Rada Mihalcea.  2018.  Fake news detector algorithm works better than a human. University of Michigan News.

Researchers at the University of Michigan developed an algorithm-based system that can identify fake news stories based on linguistic cues. The system was found to be better at finding fakes than humans.

Mikaela Ashburn.  2019.  Ohio University study states that information literacy must be improved to stop spread of ‘fake news’. Ohio University News.

A study by researchers at Ohio University calls for improved information literacy, finding that most people do not take time to verify whether information is accurate before sharing it on social media. The study uses information literacy factors and a theoretical lens to develop an understanding of why people share "fake news" on social media.

Paresh Dave.  2019.  https://www.reuters.com/article/us-alphabet-youtube-hatespeech-idUSKCN1T623X. Reuters.

YouTube is going to remove videos that deny the Holocaust and other "well-documented violent events," as well as videos that glorify Nazi ideology or that promote groups that claim superiority over others to justify several forms of discrimination.

[Anonymous].  2019.  ADL Partners with Network Contagion Research Institute to Study How Hate and Extremism Spread on Social Media. ADL.

The Anti-Defamation League (ADL) partnered with the Network Contagion Research Institute (NCRI) to examine the ways in which extremism and hate are spread on social media. The partnership also supports the development of methods for combatting the spread of both.

[Anonymous].  2019.  Can AI help to end fake news? Horizon Magazine.

Artificial intelligence (AI) has been used to generate deepfakes. However, researchers have shown that AI can also be used to fight misinformation.

Jeff Grabmeier.  2019.  Tech fixes can’t protect us from disinformation campaigns. Ohio State News.

Experts at Ohio State University suggest that policymakers and diplomats further explore the psychological aspects associated with disinformation campaigns in order to stop the spread of false information on social media platforms by countries. More focus needs to be placed on why people fall for "fake news".

Jeff Stone.  2019.  Russian Twitter bots lay dormant for months before impersonating activists. Cyber Scoop.

Twitter accounts deployed by Russia's troll factory in 2016 didn't only spread disinformation meant to influence the U.S. presidential election; a small handful tried making money too.

Rachel Alter, Tonay Flattum-Riemers.  2019.  Breaking Down the Anti-Vaccine Echo Chamber. State of the Planet.

Social media echo chambers in which beliefs are significantly amplified and opposing views are easily blocked can have real-life consequences. Communication between groups should still take place despite differences in views. Blame has been placed on those who seek to profit off ignorance and fear for the growth of echo chambers in relation to the anti-vaccination movement.

Peter Dizikes.  2019.  Want to squelch fake news? Let the readers take charge. MIT News.

An MIT study suggests the use of crowdsourcing to devalue false news stories and misinformation online. Despite differences in political opinions, all groups can agree that fake and hyperpartisan sites are untrustworthy.

Kaveh Waddell.  2019.  The 2020 campaigns aren't ready for deepfakes. Axios.

There is expected to be a surge in deepfakes during the 2020 presidential campaigns. According to experts, little has been done to prepare for fake videos in which candidates are depicted unfavorably in order to sway public perception.

Shannon Vavra.  2019.  Middle East-linked social media accounts impersonated U.S. candidates before 2018 elections. Cyber Scoop.

Social media users with ties to Iran are shifting their disinformation efforts by imitating real people, including U.S. congressional candidates.

[Anonymous].  2019.  Sprawling disinformation networks discovered across Europe ahead of EU elections. Homeland Security News Wire.

Avaaz, a U.K.-based global citizen activist organization, conducted an investigation that revealed the spread of disinformation within Europe via Facebook pages ahead of the EU elections. According to Avaaz, these pages posted false and misleading content. The disinformation networks behind them are considered weapons, as they are significant in size and complexity.

Ian Bogost.  2019.  Facebook’s Dystopian Definition of ‘Fake’. The Atlantic.

Facebook's response to an altered video of Nancy Pelosi has sparked a debate as to whether social media platforms should take down videos that are considered to be "fake". The definition of "fake" is also discussed.

Tom Warren.  2019.  Microsoft is trying to fight fake news with its Edge mobile browser. The Verge.

The Microsoft Edge mobile browser will use software called NewsGuard, which rates sites on a variety of criteria, including their use of deceptive headlines, whether they repeatedly publish false content, and transparency regarding ownership and financing.

Maria Temming.  2018.  People are bad at spotting fake news. Can computer programs do better? Science News.

To help sort fake news from truth, programmers are building automated systems that judge the veracity of online stories.

Peter Dizikes.  2019.  Could this be the solution to stop the spread of fake news? World Economic Forum.

False news is a growing problem. During a study, it was found that a crowdsourcing approach could help detect fake news sources.