Biblio

Filters: Keyword is misinformation
2023-07-21
Concepcion, A. R., Sy, C.  2022.  A System Dynamics Model of False News on Social Networking Sites. 2022 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM). :786–790.
Over the years, false news has polluted the online media landscape across the world. In this “post-truth” era, the narratives created by false news have now come to fruition in dismantled democracies, disbelief in science, and hyper-polarized societies. Despite increased efforts in fact-checking and labeling, strengthening detection systems, de-platforming powerful users, and promoting media literacy and awareness of the issue, false news continues to spread exponentially. This study models the behaviors of both the victims of false news and the platforms on which it spreads, using the system dynamics methodology. The model was used to develop a policy design by evaluating existing and proposed solutions. The results recommend actively countering confirmation bias, restructuring social networking sites’ recommendation algorithms, and increasing public trust in news organizations.
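[Editor's note: the abstract above describes a stock-and-flow system dynamics model. As an illustration only — the stocks, rates, and values below are assumptions, not the paper's actual model — a minimal false-news diffusion sketch might look like:]

```python
# Toy stock-and-flow sketch of false-news diffusion (illustrative, NOT the
# paper's model): susceptible users adopt a false narrative through contact
# with believers, and believers slowly move to a "corrected" stock via
# fact-checking. Euler integration over discrete time steps.

def simulate(steps=100, dt=1.0, contact_rate=0.08, correction_rate=0.02,
             susceptible=10_000.0, believers=10.0, corrected=0.0):
    """Integrate three stocks; return final (susceptible, believers, corrected)."""
    total = susceptible + believers + corrected  # population is conserved
    for _ in range(steps):
        # Flow 1: adoption of the false narrative via contact with believers.
        adoption = contact_rate * susceptible * believers / total
        # Flow 2: believers encounter fact-checks and become corrected.
        correction = correction_rate * believers
        susceptible -= adoption * dt
        believers += (adoption - correction) * dt
        corrected += correction * dt
    return susceptible, believers, corrected

print(simulate())
```

Even this toy version reproduces the qualitative behavior the abstract describes: adoption grows roughly exponentially while believers are few, which is why the paper evaluates interventions that change the rates (countering confirmation bias, restructuring recommendation algorithms) rather than only removing content.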
2023-02-17
Caramancion, Kevin Matthe.  2022.  An Exploration of Mis/Disinformation in Audio Format Disseminated in Podcasts: Case Study of Spotify. 2022 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS). :1–6.
This paper examines audio-based social networking platforms and how their environments can affect the persistence of fake news and mis/disinformation in the wider information ecosystem. This is performed through an exploration of their features and how these compare to those of general-purpose multimodal platforms. A case study of Spotify and its recent controversy over free speech and misinformation is the application area of this paper. As a supplement, a demographic analysis of current podcast-streamer statistics is outlined to give an overview of the target audience of possible deception attacks in the future. In conclusion, the paper offers recommendations to policymakers and experts for anticipating the misuse of affordances in social environments that may unintentionally give agents of mis/disinformation the power to create and sow discord and deception.
2020-07-13
Paschalides, Demetris, Christodoulou, Chrysovalantis, Andreou, Rafael, Pallis, George, Dikaiakos, Marios D., Kornilakis, Alexandros, Markatos, Evangelos.  2019.  Check-It: A Plugin for Detecting and Reducing the Spread of Fake News and Misinformation on the Web. 2019 IEEE/WIC/ACM International Conference on Web Intelligence (WI). :298–302.
Over the past few years, we have been witnessing the rise of misinformation on the Internet. People continuously fall victim to fake news and contribute to its propagation, knowingly or inadvertently. Many recent efforts seek to reduce the damage caused by fake news by identifying it automatically with artificial intelligence techniques, using signals from domain flag-lists, online social networks, etc. In this work, we present Check-It, a system that combines a variety of signals into a pipeline for fake news identification. Check-It is developed as a web browser plugin with the objective of efficient and timely fake news detection, while respecting user privacy. In this paper, we present the design, implementation and performance evaluation of Check-It. Experimental results show that it outperforms state-of-the-art methods on commonly used datasets.
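[Editor's note: the abstract describes combining signals such as domain flag-lists and social-network features into one detection pipeline. The signal names and weights below are hypothetical, purely to illustrate the general pattern — they are not Check-It's actual implementation:]

```python
# Illustrative sketch (NOT Check-It's real pipeline): combine several veracity
# signals, each normalized to [0, 1], into a single weighted score. Missing
# signals are skipped and the weights renormalized over those present, so the
# plugin could still score a page when only some signals are available.

SIGNAL_WEIGHTS = {           # hypothetical weights for this sketch
    "domain_flaglist": 0.5,  # 1.0 if the domain appears on a low-credibility list
    "social_propagation": 0.3,  # score from online-social-network spread features
    "linguistic_model": 0.2,    # score from a text classifier over the article
}

def fake_news_score(signals: dict) -> float:
    """Weighted average of the available signals; higher means more suspect."""
    available = {k: w for k, w in SIGNAL_WEIGHTS.items() if k in signals}
    if not available:
        return 0.0  # no evidence either way
    norm = sum(available.values())
    return sum(w * signals[k] for k, w in available.items()) / norm

print(fake_news_score({"domain_flaglist": 1.0, "linguistic_model": 0.8}))
```

A fixed threshold on such a score would then drive the plugin's user-facing warning, keeping all computation client-side in line with the privacy goal the abstract mentions.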
2020-04-13
Horne, Benjamin D., Gruppi, Mauricio, Adali, Sibel.  2019.  Trustworthy Misinformation Mitigation with Soft Information Nudging. 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :245–254.
Research in combating misinformation reports many negative results: facts may not change minds, especially if they come from sources that are not trusted, and individuals can disregard and justify lies told by trusted sources. This problem is made even worse by social recommendation algorithms, which amplify conspiracy theories and information confirming one's own biases because companies optimize for clicks and watch time over individuals' own values and the public good. As a result, more nuanced voices and facts are drowned out by a continuous erosion of trust in better information sources. Most misinformation mitigation techniques assume that discrediting, filtering, or demoting low-veracity information will help news consumers make better information decisions. However, these negative results indicate that some news consumers, particularly extreme or conspiracy news consumers, will not be helped. We argue that, given this background, technology solutions to combating misinformation should not simply seek facts or discredit bad news sources, but instead use more subtle nudges towards better information consumption. Repeated exposure to such nudges can help promote trust in better information sources and improve societal outcomes in the long run. In this article, we discuss technological solutions that can help in developing such an approach and introduce one such model, called Trust Nudging.
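[Editor's note: the abstract argues for small nudges toward better sources rather than confrontational fact-checks. A minimal sketch of that idea — the step size, quality scores, and source names are all assumptions for illustration, not the paper's Trust Nudging model — might look like:]

```python
# Illustrative nudge selection (NOT the paper's actual model): instead of
# recommending the highest-quality source outright (which an extreme news
# consumer may simply distrust), suggest a source only slightly better than
# what the reader already consumes, so each nudge stays small and credible.

def nudge(current_quality: float, candidates: list, max_step: float = 0.15):
    """Return the best candidate source within a small quality step up, or None.

    candidates: list of (source_name, quality) pairs with quality in [0, 1].
    """
    reachable = [(name, q) for name, q in candidates
                 if current_quality < q <= current_quality + max_step]
    if not reachable:
        return None  # no credible nudge available this round
    return max(reachable, key=lambda item: item[1])[0]

# A reader at quality 0.4 is nudged to the nearby 0.5 source, not the
# distant 0.9 one; repeated rounds can then walk them upward gradually.
print(nudge(0.4, [("tabloid", 0.35), ("regional-paper", 0.5), ("wire-service", 0.9)]))
```

Iterating this selection as the reader's baseline rises captures the abstract's "repeated exposure" argument: each individual recommendation is modest, but the sequence trends toward better information sources.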