Bibliography
According to a report released by Menlo Security, the padlock in a browser's URL bar gives users a false sense of security, as attackers are using trusted cloud hosting services to host malware droppers. This tactic allows attackers to hide the origin of their attacks and further evade detection. The exploitation of trust is a major component of such attacks.
The curriculum includes "Don't Fall for Fake" activities centered on teaching children the critical thinking skills they need to distinguish credible from non-credible news sources.
Grover is naturally best at detecting its own fake articles, since the model in effect knows its own generation process, but it can also detect articles produced by other models, such as OpenAI's GPT-2, with high accuracy.
Deepfakes' most menacing consequence is their ability to make us question what we are seeing. The more popular deepfake technology gets, the less we will be able to trust our own eyes.
Language scholars and machine learning specialists collaborated to create a new application that can detect Twitter bots regardless of the language used. Detecting bots will help reduce the spread of fake news.
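As a rough illustration of how detection can work independent of language, a scoring function can rely purely on behavioral signals rather than tweet text. The features, weights, and threshold logic below are illustrative assumptions for exposition, not the researchers' actual method:

    # Illustrative sketch: score an account using only behavioral
    # signals, so the check works regardless of tweet language.
    # Features and weights are assumptions, not the published model.
    from statistics import pstdev

    def bot_score(post_times_sec, retweet_count, total_posts):
        gaps = [b - a for a, b in zip(post_times_sec, post_times_sec[1:])]
        # Near-constant posting intervals suggest automation.
        regularity = 1.0 / (1.0 + pstdev(gaps)) if len(gaps) > 1 else 0.0
        # Accounts that mostly retweet originate little content themselves.
        retweet_ratio = retweet_count / max(total_posts, 1)
        return 0.6 * regularity + 0.4 * retweet_ratio

    # Example: an account posting exactly every 60 seconds, mostly retweets.
    times = [i * 60 for i in range(50)]
    print(round(bot_score(times, retweet_count=45, total_posts=50), 2))  # ~0.96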
The spread of deepfakes via social media platforms leads to disinformation and misinformation. There are ways in which governments and social media companies can work to prevent the spread of deepfakes.
Ad fraud is becoming more common among websites. It lets fraudsters generate revenue through fake traffic, fake clicks, and fake installs, and it can also help cybercriminals deploy malware on users' computers.
Dr. Margaret Cunningham, psychologist and Principal Research Scientist at Forcepoint, conducted a study examining the impact of six different unconscious human biases on decision-making in cybersecurity. Awareness and understanding of cognitive biases in cybersecurity should be increased in order to reduce biased decision-making in activities such as threat analysis and to prevent the design of systems that perpetuate those biases.
The Knight Foundation analyzed the spread of fake news on Twitter before and after the 2016 U.S. election campaign. Evidence suggests that most accounts used to spread fake or conspiracy news during this time were bots or semi-automated accounts.
Mathematicians developed a mathematical model of how fake news spreads. The model can be used to help lawmakers mitigate the impact of fake news.
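The summary does not reproduce the model itself; purely as an illustrative sketch, fake-news spread is often written as a compartmental system in the style of epidemic models, where S is the share of susceptible users and I the share actively spreading a false story:

    % Illustrative SIS-style sketch, an assumption for exposition,
    % not the authors' actual model.
    \[
    \frac{dS}{dt} = -\beta S I + \gamma I, \qquad
    \frac{dI}{dt} = \beta S I - \gamma I
    \]
    % beta: rate at which sharers convince susceptible users;
    % gamma: rate at which sharers lose interest or are corrected.

In a model of this shape, interventions that push the effective ratio beta*S/gamma below 1 cause the story to die out on its own, which is the kind of lever a lawmaker-facing analysis would target.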
Twitter bots played a significant role in the spread of misinformation during the 2016 U.S. presidential election. People often deem messages trustworthy when they appear to be shared by many sources, and the research highlights how bots exploit this tendency to amplify misinformation.
The article explores how bad actors manipulate social media metadata to create more powerful disinformation campaigns. It argues that disinformation campaigns can be detected and combated by understanding data craft.
This article pertains to cognitive security. It's difficult to assess how effective YouTube's policies will be, as the company didn't specify how it plans to identify the offending videos, enforce the new rules, or punish offenders.
MIT researchers conducted a study in which they examined automated fake-news detection systems. The study highlights the need for more research into the effectiveness of fake-news detectors.
Researchers at the University of Michigan developed an algorithm-based system that can identify fake news stories based on linguistic cues. The system was found to be better at finding fakes than humans.
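As a sketch of the general linguistic-cue approach only (the Michigan system's actual features, training data, and classifier are not described in this summary, so everything below is an illustrative assumption), a minimal text classifier over word n-grams:

    # Illustrative linguistic-cue fake-news classifier sketch.
    # Toy data, features, and model choice are assumptions for
    # exposition, not the University of Michigan system itself.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled examples: 1 = fake, 0 = legitimate.
    texts = [
        "SHOCKING miracle cure doctors don't want you to see!!!",
        "You won't BELIEVE what this celebrity did next",
        "City council approves budget after two-hour public hearing",
        "Researchers report modest gains in battery efficiency",
    ]
    labels = [1, 1, 0, 0]

    # Word n-grams stand in for the richer cues (punctuation,
    # readability, syntax) that published systems typically use.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(),
    )
    model.fit(texts, labels)
    print(model.predict(["Miracle cure SHOCKS doctors"]))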
A study by researchers at Ohio University calls for improved information literacy, as it found that most people do not take the time to verify whether information is accurate before sharing it on social media. The study uses information literacy factors and a theoretical lens to develop an understanding of why people share "fake news" on social media.
This article pertains to cognitive security. YouTube is going to remove videos that deny the Holocaust and other well-documented violent events, as well as videos that glorify Nazi ideology or promote groups that claim superiority over others to justify discrimination.
The Anti-Defamation League (ADL) partnered with the Network Contagion Research Institute (NCRI) to examine the ways in which extremism and hate are spread on social media. The partnership also supports the development of methods for combating their spread.
Artificial intelligence (AI) has been used to generate deepfakes. However, researchers have shown that AI can also be used to fight misinformation.
Experts at Ohio State University suggest that policymakers and diplomats further explore the psychological aspects of disinformation campaigns in order to stop nation-states from spreading false information on social media platforms. More focus needs to be placed on why people fall for "fake news".
This article pertains to cognitive security. Twitter accounts deployed by Russia's troll factory in 2016 did not only spread disinformation meant to influence the U.S. presidential election; a small handful tried to make money as well.
Social media echo chambers, in which beliefs are significantly amplified and opposing views are easily blocked, can have real-life consequences. Communication between groups should still take place despite differences in views. The growth of echo chambers around the anti-vaccination movement has been blamed on those who seek to profit from ignorance and fear.
An MIT study suggests using crowdsourcing to devalue false news stories and misinformation online. Despite differences in political opinion, groups across the spectrum agree that fake and hyperpartisan sites are untrustworthy.
There is expected to be a surge in deepfakes during the 2020 presidential campaigns. According to experts, little has been done to prepare for fake videos in which candidates are depicted unfavorably in order to sway public perception.
This article pertains to cognitive security and human behavior. Social media users with ties to Iran are shifting their disinformation efforts by imitating real people, including U.S. congressional candidates.