Biblio

Filters: Keyword is disinformation
2023-07-21
Concepcion, A. R., Sy, C.  2022.  A System Dynamics Model of False News on Social Networking Sites. 2022 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM). :0786–0790.
Over the years, false news has polluted the online media landscape across the world. In this “post-truth” era, the narratives created by false news have now come to fruition through dismantled democracies, disbelief in science, and hyper-polarized societies. Despite increased efforts in fact-checking and labeling, strengthening detection systems, de-platforming powerful users, and promoting media literacy and awareness of the issue, false news continues to spread exponentially. This study models the behaviors of both the victims of false news and the platform on which it spreads through the system dynamics methodology. The model was used to develop a policy design by evaluating existing and proposed solutions. The results recommend actively countering confirmation bias, restructuring social networking sites’ recommendation algorithms, and increasing public trust in news organizations.
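The paper's calibrated model is not reproduced in the abstract, but the core system dynamics idea (stocks of susceptible and believing users coupled by a reinforcing exposure loop and a balancing fact-checking loop) can be sketched as a minimal, hypothetical simulation. All structure and rates below are illustrative assumptions, not the authors' model.

```python
# Minimal illustrative sketch of a system-dynamics-style model of false
# news spread: two stocks (susceptible, believing) with a reinforcing
# exposure loop and a balancing fact-checking loop. All rates are
# hypothetical placeholders, not the paper's calibrated parameters.

def simulate(steps=100, dt=0.1,
             exposure_rate=0.4,      # contact x persuasion (reinforcing loop)
             correction_rate=0.05):  # fact-checking outflow (balancing loop)
    susceptible, believing = 0.99, 0.01
    history = []
    for _ in range(steps):
        new_believers = exposure_rate * susceptible * believing
        corrected = correction_rate * believing
        susceptible += dt * (-new_believers + corrected)
        believing += dt * (new_believers - corrected)
        history.append(believing)
    return history

trajectory = simulate()
print(f"final share believing false news: {trajectory[-1]:.3f}")
```

Even this toy version reproduces the qualitative point of such models: because the exposure loop is multiplicative in the number of existing believers, belief grows rapidly until fact-checking and saturation balance it.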
2023-02-17
Caramancion, Kevin Matthe.  2022.  An Exploration of Mis/Disinformation in Audio Format Disseminated in Podcasts: Case Study of Spotify. 2022 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS). :1–6.
This paper examines audio-based social networking platforms and how their environments can affect the persistence of fake news and mis/disinformation in the wider information ecosystem. It does so by exploring their features and comparing them to those of general-purpose multimodal platforms. A case study of Spotify and its recent controversy over free speech and misinformation is the application area of this paper. As a supplement, a demographic analysis of current podcast-streaming statistics is outlined to give an overview of the likely target audience of future deception attacks. In conclusion, the paper offers a recommendation to policymakers and experts on preparing for future misuse of the affordances of social environments, which may unintentionally give agents of mis/disinformation the power to create and sow discord and deception.
Caramancion, Kevin Matthe.  2022.  Same Form, Different Payloads: A Comparative Vector Assessment of DDoS and Disinformation Attacks. 2022 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS). :1–6.
This paper offers a comparative vector assessment of DDoS and disinformation attacks along five dimensions: (1) the threat agent, (2) attack vector, (3) target, (4) impact, and (5) defense. The results reveal that disinformation attacks, anchored on astroturfing, resemble DDoS's zombie computers in their method of amplification. Although DDoS affects several layers of the OSI model, disinformation attacks exclusively affect the application layer. Furthermore, even though their payloads and objectives differ, their vector paths and network designs are very similar. The paper concludes by strongly recommending the classification of disinformation as an actual cybersecurity threat, to eliminate inconsistencies in the policies of social networking platforms. The intended audiences of this paper are IT and cybersecurity experts, computer and information scientists, policymakers, legal and judicial scholars, and other professionals seeking references on this matter.
2022-10-16
Guo, Zhen, Cho, Jin-Hee.  2021.  Game Theoretic Opinion Models and Their Application in Processing Disinformation. 2021 IEEE Global Communications Conference (GLOBECOM). :01–07.
Disinformation, fake news, and unverified rumors spread quickly in online social networks (OSNs) and manipulate people's opinions and decisions about life events. Solid mathematical treatments of strategic decisions in OSNs have been provided under game theory models incorporating multiple roles and features. This work proposes a game-theoretic opinion framework to model the subjective opinions and behavioral strategies of attackers, users, and a defender. The attackers use information deception models to disseminate disinformation. We investigate how different game-theoretic models of updating people's subjective opinions influence the way people handle disinformation. We compare the opinion dynamics of five different opinion models (i.e., uncertainty, homophily, assertion, herding, and encounter-based), where an opinion is formulated based on Subjective Logic, which offers the capability to deal with uncertain opinions. Through extensive experiments, we observe that the uncertainty-based opinion model performs best in combating disinformation, in that uncertainty-based decisions can significantly help users believe true information more than disinformation.
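The Subjective Logic formalism the paper builds on represents an opinion as a (belief, disbelief, uncertainty) triple with b + d + u = 1 plus a base rate a, with projected probability P = b + a·u. The sketch below shows that representation together with a simple, hypothetical uncertainty-weighted update rule for illustration; it is not one of the paper's five opinion models.

```python
# Illustrative Subjective Logic opinion: (belief, disbelief, uncertainty)
# with b + d + u = 1 and a base rate a. The update rule below is a simple
# hypothetical uncertainty-weighted blend, not the paper's actual models.

from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    def expected_probability(self) -> float:
        # Projected probability in Subjective Logic: P = b + a * u
        return self.belief + self.base_rate * self.uncertainty

def uncertainty_weighted_update(own: Opinion, other: Opinion) -> Opinion:
    # Weight the incoming opinion by how certain it is: a highly uncertain
    # source moves the receiver less (hypothetical rule for illustration).
    w = 1.0 - other.uncertainty
    b = (1 - w) * own.belief + w * other.belief
    d = (1 - w) * own.disbelief + w * other.disbelief
    u = 1.0 - b - d
    return Opinion(b, d, u, own.base_rate)

neutral = Opinion(belief=0.2, disbelief=0.2, uncertainty=0.6)
confident_truth = Opinion(belief=0.8, disbelief=0.1, uncertainty=0.1)
updated = uncertainty_weighted_update(neutral, confident_truth)
print(f"expected probability after update: {updated.expected_probability():.3f}")
```

The key property mirrored here is the one the abstract highlights: explicitly tracking uncertainty lets an agent discount low-certainty (e.g., rumor-borne) opinions instead of treating every incoming claim as equally credible.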
2021-02-03
Aliman, N.-M., Kester, L.  2020.  Malicious Design in AIVR, Falsehood and Cybersecurity-oriented Immersive Defenses. 2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR). :130–137.

Advancements in the AI field unfold tremendous opportunities for society. Simultaneously, it becomes increasingly important to address emerging ramifications. The focus is often set on ethical and safe design forestalling unintentional failures. However, cybersecurity-oriented approaches to AI safety additionally consider instantiations of intentional malice, including unethical malevolent AI design. Recently, an analogous emphasis on malicious actors has been expressed regarding security and safety for virtual reality (VR). In this vein, while the intersection of AI and VR (AIVR) offers a wide array of beneficial cross-fertilization possibilities, it is prudent to anticipate future malicious AIVR design from the outset, given the potential socio-psycho-technological impacts. For a simplified illustration, this paper analyzes the conceivable use case of generative AI (here, deepfake techniques) utilized for disinformation in immersive journalism. In our view, defenses against such future AIVR safety risks related to falsehood in immersive settings should be conceived transdisciplinarily from an immersive co-creation stance. As a first step, we motivate a cybersecurity-oriented procedure to generate defenses via immersive design fictions. Overall, there may be no panacea, but updatable transdisciplinary tools, including AIVR itself, could be used to incrementally defend against malicious actors in AIVR.

2020-07-13
Mahmood, Shah.  2019.  The Anti-Data-Mining (ADM) Framework - Better Privacy on Online Social Networks and Beyond. 2019 IEEE International Conference on Big Data (Big Data). :5780–5788.
The unprecedented and enormous growth of cloud computing, especially online social networks, has resulted in numerous incidents of loss of users' privacy. In this paper, we provide a framework, based on our anti-data-mining (ADM) principle, to enhance users' privacy against adversaries including online social networks, search engines, financial terminal providers, ad networks, eavesdropping governments, and other parties who can monitor users' content from the point where it leaves users' computers to within the data centers of these information accumulators. To achieve this goal, our framework proactively uses the principles of suppression of sensitive data and disinformation. Moreover, we use social bots in a novel way to enhance privacy and provide users with plausible deniability for the photo, audio, and video content they upload online.
2020-04-13
Horne, Benjamin D., Gruppi, Mauricio, Adali, Sibel.  2019.  Trustworthy Misinformation Mitigation with Soft Information Nudging. 2019 First IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :245–254.

Research in combating misinformation reports many negative results: facts may not change minds, especially if they come from sources that are not trusted, and individuals can disregard and justify lies told by trusted sources. This problem is made even worse by social recommendation algorithms, which amplify conspiracy theories and information confirming one's own biases because companies optimize for clicks and watch time over individuals' own values and the public good. As a result, more nuanced voices and facts are drowned out by a continuous erosion of trust in better information sources. Most misinformation mitigation techniques assume that discrediting, filtering, or demoting low-veracity information will help news consumers make better information decisions. However, these negative results indicate that some news consumers, particularly extreme or conspiracy news consumers, will not be helped. We argue that, given this background, technology solutions to combating misinformation should not simply surface facts or discredit bad news sources, but instead use more subtle nudges toward better information consumption. Repeated exposure to such nudges can help promote trust in better information sources and improve societal outcomes in the long run. In this article, we discuss technological solutions that can help in developing such an approach and introduce one such model, called Trust Nudging.