Bibliography
Today, major corporations and government organizations must face the reality that they will be hacked by malicious actors. In this paper, we consider the case of defending enterprises that have already been successfully hacked by imposing additional a posteriori costs on the attacker. Our idea is simple: for every real document d, we develop methods to automatically generate a set Fake(d) of fake documents that are very similar to d. An attacker who steals documents must then wade through a large number of documents in detail in order to separate the real documents from the fakes. Our FORGE system focuses on technical documents (e.g., engineering/design documents) and involves three major innovations. First, we represent the semantic content of documents via multi-layer graphs (MLGs). Second, we propose a novel concept of “meta-centrality” for multi-layer graphs. Third, we show that the problem of generating the set Fake(d) of fakes can be viewed as an optimization problem. We prove that this problem is NP-complete and then develop efficient heuristics to solve it in practice. In detailed experiments with a panel of 20 human subjects, FORGE generated highly believable fakes.
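To make the FORGE pipeline concrete, the sketch below illustrates the general idea in Python. It is not the authors' implementation: the layers, layer weights, concept names, and substitution table are all hypothetical; the "meta-centrality" shown is a toy weighted sum of per-layer degree centralities; and a greedy top-k concept selection stands in for the paper's NP-complete optimization formulation.

```python
# Illustrative sketch of a FORGE-style fake-document generator (hypothetical,
# not the authors' code): model a document's concepts as a multi-layer graph,
# score concepts with a toy "meta-centrality", and build fakes by substituting
# the highest-scoring concepts with plausible decoys.
import itertools
import networkx as nx

# One graph per layer (e.g., a term co-occurrence layer and an ontology layer).
layers = {
    "cooccurrence": nx.Graph([("alloy", "temperature"), ("alloy", "pressure")]),
    "ontology": nx.Graph([("alloy", "material"), ("temperature", "parameter")]),
}
layer_weights = {"cooccurrence": 0.6, "ontology": 0.4}

def meta_centrality(layers, weights):
    """Toy meta-centrality: weighted sum of per-layer degree centralities."""
    scores = {}
    for name, g in layers.items():
        for node, c in nx.degree_centrality(g).items():
            scores[node] = scores.get(node, 0.0) + weights[name] * c
    return scores

# Hypothetical substitution table mapping real concepts to plausible decoys.
substitutions = {"alloy": ["polymer", "composite"], "temperature": ["voltage"]}

def generate_fakes(doc, layers, weights, k=2):
    """Replace the k most meta-central substitutable concepts in doc."""
    scores = meta_centrality(layers, weights)
    targets = sorted((c for c in scores if c in substitutions),
                     key=scores.get, reverse=True)[:k]
    fakes = []
    for combo in itertools.product(*(substitutions[t] for t in targets)):
        fake = doc
        for real, decoy in zip(targets, combo):
            fake = fake.replace(real, decoy)
        fakes.append(fake)
    return fakes

print(generate_fakes("The alloy fails above a critical temperature.",
                     layers, layer_weights))
```

Running the script prints one decoy variant per substitution combination (two here); the real system would instead choose substitutions by solving the optimization problem over the MLG.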
Google handles more than 90% of the world's online search queries, generating billions in advertising revenue, yet it has emerged that ad-supported Google Maps includes an estimated 11 million falsely listed businesses on any given day.
The curriculum includes "Don't Fall for Fake" activities centered on teaching children the critical-thinking skills they need to tell credible news sources from non-credible ones.
Naturally, Grover is best at detecting its own fake articles, since the model knows its own generation process. But it can also detect fakes produced by other models, such as OpenAI's GPT-2, with high accuracy.
Ad fraud is becoming more common among websites. It allows fraudsters to generate revenue through fake traffic, fake clicks, and fake installs, and it can also be used by cybercriminals to deploy malware on users' computers.
The Knight Foundation analyzed the spread of fake news on Twitter before and after the 2016 U.S. election campaign. The evidence suggests that most accounts used to spread fake or conspiracy news during this time were bots or semi-automated accounts.
Mathematicians developed a mathematical model of how fake news spreads, which can be used to help lawmakers mitigate its impact.
Twitter bots played a significant role in the spread of misinformation during the 2016 U.S. presidential election. People often deem messages trustworthy when they appear to be shared by many sources, and the research shows that bots exploit this tendency by amplifying misinformation.
MIT researchers examined automated fake-news detection systems and highlight the need for more research into how effective such detectors actually are.
Researchers at the University of Michigan developed an algorithm-based system that identifies fake news stories based on linguistic cues. The system outperformed humans at spotting fakes.
A study by researchers at Ohio University calls for improved information literacy after finding that most people do not take the time to verify whether information is accurate before sharing it on social media. The study uses information-literacy factors and a theoretical lens to develop an understanding of why people share "fake news" on social media.
Artificial intelligence (AI) has been used to generate deepfakes, but researchers have shown that AI can also be used to fight misinformation.
Experts at Ohio State University suggest that policymakers and diplomats further explore the psychological aspects of state-sponsored disinformation campaigns in order to stop countries from spreading false information on social media platforms. More focus needs to be placed on why people fall for "fake news".
An MIT study suggests using crowdsourcing to devalue false news stories and misinformation online; it found that, despite differences in political opinion, all groups agree that fake and hyperpartisan sites are untrustworthy.
Avaaz, a U.K.-based global citizen activist organization, conducted an investigation that revealed the spread of disinformation via Facebook across Europe ahead of the EU elections. The Facebook pages identified were found to be posting false and misleading content, and the disinformation networks behind them are considered weapons because of their significant size and complexity.
Facebook's response to an altered video of Nancy Pelosi has sparked a debate over whether social media platforms should take down videos considered "fake", as well as over what counts as "fake" in the first place.
This article pertains to cognitive security. To help sort fake news from truth, programmers are building automated systems that judge the veracity of online stories.
This article pertains to cognitive security. Older users shared more fake news than younger ones regardless of education, sex, race, income, or how many links they shared; in fact, age predicted this behavior better than any other characteristic, including party affiliation.