Biblio

Filters: Keyword is fake news
2023-06-29
Wang, Zhichao.  2022.  Deep Learning Methods for Fake News Detection. 2022 IEEE 2nd International Conference on Data Science and Computer Application (ICDSCA). :472–475.

Although social media and online news platforms have made news much more convenient to obtain, the emergence of all kinds of fake news has become a pressing problem that needs to be solved. Current fake news recognition algorithms mainly use GCN, along with less common alternatives such as GRU and CNN. Although these verification algorithms can reach quite high accuracy given sufficient data, there is still room for improvement in unsupervised and semi-supervised learning. Through comparison with other neural network models, this article finds that the GCN method for fake news detection reaches an accuracy of roughly 85%, which is satisfactory. It also observes that the field lacks a unified training dataset and proposes that future fake news detection models focus more on semi-supervised and unsupervised learning.
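
As an illustration of the GCN approach the article evaluates (not code from any of the surveyed papers), the following is a minimal Kipf-and-Welling-style graph convolution over a news propagation graph in plain PyTorch; the two-layer depth, feature sizes, and mean-pooled graph readout are assumptions made for the sketch.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GCNLayer(nn.Module):
        """One graph convolution: H' = A_norm @ H @ W, with A_norm the
        symmetrically normalized adjacency including self-loops."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, x, adj):
            a = adj + torch.eye(adj.size(0))              # add self-loops
            d_inv_sqrt = a.sum(dim=1).pow(-0.5)
            a_norm = d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)
            return self.linear(a_norm @ x)

    class GCNFakeNewsClassifier(nn.Module):
        """Two GCN layers over a propagation graph (nodes could be posts or
        users sharing a story), mean-pooled into one real/fake prediction."""
        def __init__(self, in_dim=64, hidden=32, n_classes=2):
            super().__init__()
            self.g1 = GCNLayer(in_dim, hidden)
            self.g2 = GCNLayer(hidden, hidden)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x, adj):
            h = F.relu(self.g1(x, adj))
            h = F.relu(self.g2(h, adj))
            return self.head(h.mean(dim=0))               # graph-level logits

    x = torch.randn(10, 64)                               # toy node features
    adj = (torch.rand(10, 10) > 0.7).float()
    adj = ((adj + adj.T) > 0).float()                     # symmetrize
    logits = GCNFakeNewsClassifier()(x, adj)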

Mahara, Govind Singh, Gangele, Sharad.  2022.  Fake news detection: A RNN-LSTM, Bi-LSTM based deep learning approach. 2022 IEEE 1st International Conference on Data, Decision and Systems (ICDDS). :01–06.

Fake news is a new phenomenon that promotes misleading information and fraud via internet social media or traditional news sources. It is readily manufactured and transmitted across numerous social media platforms and has a significant influence on the real world, so it is vital to create effective algorithms and tools for detecting misleading information on these platforms. Most modern approaches to identifying fraudulent information are based on machine learning, deep learning, feature engineering, graph mining, image and video analysis, and newly built datasets and online services, and there is a pressing need for a viable approach that readily detects misleading information. In this work, deep learning LSTM and Bi-LSTM models are proposed for detecting fake news. First, the NLTK toolkit is used to remove stop words, punctuation, and special characters from the text; the same toolkit tokenizes and preprocesses it. GloVe word embeddings then supply the RNN-LSTM and Bi-LSTM models with higher-level features of the preprocessed text, capturing long-term relationships between word sequences. The proposed model additionally employs dropout with dense layers to improve efficiency. The proposed RNN Bi-LSTM-based technique obtains its best accuracies of 94% and 93% using the Adam optimizer and the binary cross-entropy loss function with dropout of 0.1 and 0.2, respectively; increasing the dropout further reduces accuracy, which falls to 92% at dropout 0.3.
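
To make the described pipeline concrete, here is a hedged Keras sketch of the same sequence of steps (NLTK cleanup, tokenization, frozen GloVe embeddings, a Bi-LSTM with dropout, Adam, binary cross-entropy); the GloVe file name, dimensions, layer sizes, and toy data are illustrative assumptions, not the authors' exact configuration.

    import os
    import numpy as np
    import nltk
    import tensorflow as tf
    from nltk.corpus import stopwords
    from tensorflow.keras import layers, models
    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    nltk.download("stopwords", quiet=True)
    STOP = set(stopwords.words("english"))

    def clean(text):
        # NLTK-style cleanup: lowercase, keep alphabetic non-stop-words.
        return " ".join(w for w in text.lower().split()
                        if w.isalpha() and w not in STOP)

    texts = [clean("Breaking: the moon is made of cheese, officials say"),
             clean("Parliament passes the annual budget after long debate")]
    labels = np.array([1, 0])                  # toy data: 1 = fake, 0 = real

    VOCAB, MAXLEN, DIM = 20000, 300, 100
    tok = Tokenizer(num_words=VOCAB)
    tok.fit_on_texts(texts)
    X = pad_sequences(tok.texts_to_sequences(texts), maxlen=MAXLEN)

    # Fill the embedding matrix from pretrained GloVe vectors when the file
    # is present; otherwise the (frozen) embeddings stay at zero.
    emb = np.zeros((VOCAB, DIM))
    if os.path.exists("glove.6B.100d.txt"):
        with open("glove.6B.100d.txt", encoding="utf8") as f:
            vectors = {parts[0]: np.asarray(parts[1:], dtype="float32")
                       for parts in (line.split() for line in f)}
        for w, i in tok.word_index.items():
            if i < VOCAB and w in vectors:
                emb[i] = vectors[w]

    model = models.Sequential([
        layers.Embedding(VOCAB, DIM,
                         embeddings_initializer=tf.keras.initializers.Constant(emb),
                         trainable=False),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dropout(0.2),       # the abstract reports best results at 0.1-0.2
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, labels, epochs=2)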

Kanagavalli, N., Priya, S. Baghavathi, Jeyakumar, D..  2022.  Design of Hyperparameter Tuned Deep Learning based Automated Fake News Detection in Social Networking Data. 2022 6th International Conference on Computing Methodologies and Communication (ICCMC). :958–963.

Recently, social networks have become more popular owing to their ability to connect people globally and to share videos, images, and various other types of data. A major security issue in social media is the existence of fake accounts, which are frequently utilized by mischievous users and entities to falsify, distribute, and duplicate fake news and publicity. Because fake news can have serious consequences, numerous research works have focused on the design of automated fake account and fake news detection models. In this context, this study designs a hyperparameter tuned deep learning based automated fake news detection (HDL-FND) technique. The presented HDL-FND technique accomplishes effective detection and classification of fake news through a three-stage process of preprocessing, feature extraction, and Bi-Directional Long Short Term Memory (BiLSTM) based classification. To demonstrate the promising performance of the HDL-FND technique, a series of simulations was performed on the publicly available Kaggle dataset. The experimental outcomes show that the HDL-FND technique outperforms recent approaches in terms of diverse measures.
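
The abstract does not specify which tuning algorithm HDL-FND uses, so the skeleton below shows only a generic random search over plausible BiLSTM hyperparameters; the search space, trial count, and the stubbed scoring function are invented for illustration.

    import random

    SEARCH_SPACE = {
        "lstm_units": [32, 64, 128],
        "dropout": [0.1, 0.2, 0.3, 0.4],
        "learning_rate": [1e-2, 1e-3, 1e-4],
        "batch_size": [32, 64, 128],
    }

    def sample_config(space, rng):
        return {name: rng.choice(options) for name, options in space.items()}

    def train_and_score(config):
        # Stand-in for: build a BiLSTM from `config`, train it on the
        # Kaggle split, return validation accuracy. A dummy score keeps
        # the search loop runnable on its own.
        return random.random()

    rng = random.Random(0)
    best_config, best_score = None, float("-inf")
    for _ in range(20):                          # 20 random trials
        config = sample_config(SEARCH_SPACE, rng)
        score = train_and_score(config)
        if score > best_score:
            best_config, best_score = config, score
    print("best config:", best_config, "score:", round(best_score, 3))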

Rahman, Md. Shahriar, Ashraf, Faisal Bin, Kabir, Md. Rayhan.  2022.  An Efficient Deep Learning Technique for Bangla Fake News Detection. 2022 25th International Conference on Computer and Information Technology (ICCIT). :206–211.

People encounter a plethora of information from many online portals owing to the availability and ease of access of the internet and electronic communication devices. However, news portals sometimes abuse press freedom by manipulating facts, and most of the time people are unable to discriminate between true and false news. It is difficult to avoid the detrimental impact of Bangla fake news spreading quickly through online channels and influencing people's judgment. In this work, we investigated many real and fake news pieces in Bangla to discover a common pattern for determining whether an article is disseminating incorrect information. We developed a deep learning model that was trained and validated on our selected dataset, which contains 48,678 legitimate news articles and 1,299 fraudulent ones. To deal with the imbalanced data, we used random undersampling and then an ensemble to obtain the combined output. In terms of Bangla text processing, our proposed model achieved an accuracy of 98.29% and a recall of 99%.
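
A sketch of the described imbalance handling (not the authors' code): undersample the majority class to the minority-class size several times, train one model per balanced subset, and combine predictions by majority vote. A TF-IDF plus logistic-regression base learner stands in for the paper's deep model, and the toy data is invented.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def undersample_ensemble(texts, labels, n_models=5, seed=0):
        """Train one classifier per balanced subset: all minority (fake)
        samples plus an equal-size random draw of majority samples."""
        rng = np.random.default_rng(seed)
        labels = np.asarray(labels)
        fake_idx = np.where(labels == 1)[0]
        real_idx = np.where(labels == 0)[0]

        vec = TfidfVectorizer(max_features=50000)
        X = vec.fit_transform(texts)

        ensemble = []
        for _ in range(n_models):
            sampled = rng.choice(real_idx, size=len(fake_idx), replace=False)
            idx = np.concatenate([fake_idx, sampled])
            clf = LogisticRegression(max_iter=1000).fit(X[idx], labels[idx])
            ensemble.append(clf)
        return vec, ensemble

    def predict(vec, ensemble, texts):
        X = vec.transform(texts)
        votes = np.stack([m.predict(X) for m in ensemble])
        return (votes.mean(axis=0) >= 0.5).astype(int)     # majority vote

    texts = ["real report on the economy", "fake miracle cure story",
             "city council meeting notes", "celebrity cloned by aliens"] * 10
    labels = [0, 1, 0, 1] * 10
    vec, ensemble = undersample_ensemble(texts, labels, n_models=3)
    print(predict(vec, ensemble, ["miracle cure for the economy"]))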

Matheven, Anand, Kumar, Burra Venkata Durga.  2022.  Fake News Detection Using Deep Learning and Natural Language Processing. 2022 9th International Conference on Soft Computing & Machine Intelligence (ISCMI). :11–14.

The rise of social media has brought the rise of fake news, and this fake news comes with negative consequences. With fake news being such a huge issue, efforts should be made to identify any form of it; however, this is not simple. Manually identifying fake news can be extremely subjective, as determining the accuracy of the information in a story is complex and difficult even for experts. On the other hand, an automated solution requires a good understanding of NLP, which is also complex and may have difficulty producing accurate output. Therefore, the main problem addressed in this project is the viability of developing a system that can effectively and accurately detect and identify fake news. A solution would significantly benefit the media industry, particularly social media, where a large proportion of fake news is published and spread. To this end, this project proposed a fake news identification system using deep learning and natural language processing. The system was developed using a Word2vec model combined with a Long Short-Term Memory model to showcase the compatibility of the two models in a single system. The system was trained and tested on two different dataset collections, each consisting of one real news dataset and one fake news dataset. Furthermore, three independent variables were chosen, namely the number of training cycles, data diversity, and vector size, to analyze the relationship between these variables and the accuracy of the system. All three variables were found to have a significant effect on accuracy. The system was then trained and tested with the optimal variables and achieved the minimum expected accuracy level of 90%. Achieving this accuracy confirms the compatibility of the LSTM and Word2vec models and their capability to be combined into a single system that identifies fake news with a high level of accuracy.

ISSN: 2640-0146
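
One plausible reading of the Word2vec-plus-LSTM pairing described above, as a hedged sketch: train Word2vec on the corpus itself and freeze its vectors into a Keras embedding layer feeding an LSTM. The vector size and sequence length (two of the paper's independent variables) are set to arbitrary example values, and the toy corpus is invented.

    import numpy as np
    import tensorflow as tf
    from gensim.models import Word2Vec
    from tensorflow.keras import layers, models

    corpus = [["stock", "market", "hits", "record", "high"],
              ["aliens", "secretly", "endorse", "presidential", "candidate"]]
    labels = np.array([0, 1])                  # toy data: 0 = real, 1 = fake

    VEC_SIZE, MAXLEN = 100, 20                 # example values only
    w2v = Word2Vec(corpus, vector_size=VEC_SIZE, window=5, min_count=1,
                   epochs=10)

    vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}  # 0 = pad
    emb = np.zeros((len(vocab) + 1, VEC_SIZE))
    for w, i in vocab.items():
        emb[i] = w2v.wv[w]

    X = tf.keras.preprocessing.sequence.pad_sequences(
        [[vocab[w] for w in doc] for doc in corpus], maxlen=MAXLEN)

    model = models.Sequential([
        layers.Embedding(emb.shape[0], VEC_SIZE,
                         embeddings_initializer=tf.keras.initializers.Constant(emb),
                         trainable=False),
        layers.LSTM(64),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, labels, epochs=2)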

2023-05-12
Cavorsi, Matthew, Gil, Stephanie.  2022.  Providing Local Resilience to Vulnerable Areas in Robotic Networks. 2022 International Conference on Robotics and Automation (ICRA). :4929–4935.

We study how information flows through a multi-robot network in order to better understand how to provide resilience to malicious information. While the notion of global resilience is well studied, one way existing methods provide global resilience is by bringing robots closer together to improve the connectivity of the network. However, large changes in network structure can impede the team from performing other functions such as coverage, where the robots need to spread apart. Our goal is to mitigate the trade-off between resilience and network structure preservation by applying resilience locally in areas of the network where it is needed most. We introduce a metric, Influence, to identify vulnerable regions in the network requiring resilience. We design a control law targeting local resilience to the vulnerable areas by improving the connectivity of robots within these areas so that each robot has at least 2F+1 vertex-disjoint communication paths between itself and the high influence robot in the vulnerable area. We demonstrate the performance of our local resilience controller in simulation and in hardware by applying it to a coverage problem and comparing our results with an existing global resilience strategy. For the specific hardware experiments, we show that our control provides local resilience to vulnerable areas in the network while only requiring 9.90% and 15.14% deviations from the desired team formation compared to the global strategy.
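
To illustrate the stated 2F+1 condition (this is not the authors' controller), the sketch below checks whether every robot in a vulnerable region has at least 2F+1 vertex-disjoint communication paths to the high-influence robot, using networkx's approximate count of node-independent paths; the wheel-graph topology and the value of F are toy assumptions, and the Influence metric itself is not reproduced.

    import networkx as nx
    from networkx.algorithms import approximation as approx

    def meets_local_resilience(G, vulnerable_nodes, hub, F):
        """True iff each listed robot has >= 2F+1 vertex-disjoint
        communication paths to the high-influence robot `hub`."""
        required = 2 * F + 1
        return all(approx.local_node_connectivity(G, v, hub) >= required
                   for v in vulnerable_nodes if v != hub)

    # Toy topology: in a wheel graph every rim robot has exactly three
    # vertex-disjoint paths to the center (direct edge plus two rim detours).
    G = nx.wheel_graph(8)                        # node 0 is the center/hub
    print(meets_local_resilience(G, vulnerable_nodes=[3, 4], hub=0, F=1))
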
2023-02-17
Caramancion, Kevin Matthe.  2022.  An Exploration of Mis/Disinformation in Audio Format Disseminated in Podcasts: Case Study of Spotify. 2022 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS). :1–6.

This paper examines audio-based social networking platforms and how their environments can affect the persistence of fake news and mis/disinformation in the whole information ecosystem. It does so by exploring their features and comparing them to those of general-purpose multimodal platforms. A case study of Spotify and its recent free-speech and misinformation controversy is the application area of this paper. As a supplement, a demographic analysis of current podcast-streamer statistics is outlined to give an overview of the target audience of possible future deception attacks. In conclusion, the paper offers recommendations to help policymakers and experts prepare for future misuse of the affordances of social environments, which may unintentionally give agents of mis/disinformation the power to create and sow discord and deception.
2023-01-05
Yang, Haonan, Zhong, Yongchao, Yang, Bo, Yang, Yiyu, Xu, Zifeng, Wang, Longjuan, Zhang, Yuqing.  2022.  An Overview of Sybil Attack Detection Mechanisms in VFC. 2022 52nd Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W). :117–122.

Vehicular Fog Computing (VFC) has been proposed to address the security and response-time issues that Vehicular Ad Hoc Networks (VANETs) face in latency-sensitive vehicular environments because of their frequent interactions with cloud servers. However, the anonymity protection mechanism in VFC may allow an attacker to launch Sybil attacks by fabricating or creating multiple pseudonyms to spread false information in the network, which poses a severe security threat to vehicle driving. Therefore, in this paper, we summarize the different types of Sybil attack detection mechanisms in VFC for the first time and provide a comprehensive comparison of these schemes. In addition, we summarize the possible impacts of different types of Sybil attacks on VFC. Finally, we discuss challenges and prospects for future research on Sybil attack detection mechanisms in VFC.
2020-08-28
Traylor, Terry, Straub, Jeremy, Gurmeet, Snell, Nicholas.  2019.  Classifying Fake News Articles Using Natural Language Processing to Identify In-Article Attribution as a Supervised Learning Estimator. 2019 IEEE 13th International Conference on Semantic Computing (ICSC). :445–449.

Intentionally deceptive content presented under the guise of legitimate journalism is a worldwide information accuracy and integrity problem that affects opinion forming, decision making, and voting patterns. Most so-called "fake news" is initially distributed over social media conduits like Facebook and Twitter and later finds its way onto mainstream media platforms such as traditional television and radio news. The fake news stories that are initially seeded over social media platforms share key linguistic characteristics, such as excessive use of unsubstantiated hyperbole and non-attributed quoted content. In this paper, the results of a fake news identification study that documents the performance of a fake news classifier are presented. The TextBlob, Natural Language Toolkit, and SciPy toolkits were used to develop a novel fake news detector that uses quoted attribution in a Bayesian machine learning system as a key feature to estimate the likelihood that a news article is fake. The resulting process achieves a precision of 63.333% in assessing the likelihood that an article containing quotes is fake. This process, called influence mining, is presented as a technique that can enable detection of fake news and even propaganda. The paper describes the research process, technical analysis, technical linguistics work, and classifier performance and results, and concludes with a discussion of how the current system will evolve into an influence mining system.
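
A hedged reconstruction of the quoted-attribution idea rather than the paper's published feature set: count quoted spans, crudely check whether the article contains attribution verbs at all, and feed those counts to a naive Bayes classifier. The verb list, the two features, and the toy articles are invented for illustration.

    import re
    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    ATTRIBUTION_VERBS = {"said", "stated", "told", "announced", "according"}

    def quote_features(article):
        quotes = re.findall(r'"([^"]+)"', article)
        words = set(article.lower().split())
        # Crude proxy for attribution: any attribution verb in the article.
        attributed = sum(1 for _ in quotes if words & ATTRIBUTION_VERBS)
        return [len(quotes), attributed / max(len(quotes), 1)]

    articles = [
        '"We will lower taxes," the minister said on Tuesday.',
        '"Scientists are hiding the truth" and "they know it".',
    ]
    labels = [0, 1]                        # 0 = real, 1 = fake (toy labels)

    X = np.array([quote_features(a) for a in articles])
    clf = GaussianNB().fit(X, labels)
    print(clf.predict(X))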

2020-07-06
Balouchestani, Arian, Mahdavi, Mojtaba, Hallaj, Yeganeh, Javdani, Delaram.  2019.  SANUB: A new method for Sharing and Analyzing News Using Blockchain. 2019 16th International ISC (Iranian Society of Cryptology) Conference on Information Security and Cryptology (ISCISC). :139–143.

Millions of news items are exchanged among people every day. With the appearance of the Internet, the way news is broadcast has changed and become faster, but this has also caused many problems: for instance, the increased speed of broadcasting news also increases the speed at which fake news is created, and fake news can have a huge impact on societies. Additionally, the existence of a central entity, such as a news agency, can lead to fraud in the news broadcasting process, e.g., generating fake news and publishing it for the agency's own benefit. Since Blockchain technology provides a reliable decentralized network, it can be used to publish news; with the help of decentralized applications and smart contracts, it can also provide a platform in which fake news is detected through public participation. In this paper, we propose SANUB, a new method for sharing and analyzing news to detect fake news using Blockchain. SANUB provides features such as publishing news anonymously, news evaluation, reporter validation, fake news detection, and proof of news ownership. The results of our analysis show that SANUB outperforms the existing methods.
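
SANUB's contract logic is not detailed in the abstract, so the following is only a plain-Python model of the public-evaluation idea: registered news items collect reputation-weighted votes from validators and are flagged as fake when the score drops below a threshold. The names, weights, and threshold are invented, and a real deployment would implement this in a smart contract rather than local Python.

    from dataclasses import dataclass, field

    @dataclass
    class NewsItem:
        author: str          # pseudonymous publisher (anonymity preserved)
        content_hash: str    # hash anchored on-chain as proof of ownership
        votes: dict = field(default_factory=dict)   # validator -> +1 / -1

        def score(self, reputation):
            # Reputation-weighted sum of validator votes.
            return sum(vote * reputation.get(who, 1.0)
                       for who, vote in self.votes.items())

    reputation = {"validator_a": 2.0, "validator_b": 1.0, "validator_c": 1.0}
    item = NewsItem(author="anon_42", content_hash="(toy hash)")

    item.votes["validator_a"] = -1               # flags the item as fake
    item.votes["validator_b"] = -1
    item.votes["validator_c"] = +1

    FAKE_THRESHOLD = 0.0
    print("flagged as fake:", item.score(reputation) < FAKE_THRESHOLD)
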
2019-03-04
Aborisade, O., Anwar, M..  2018.  Classification for Authorship of Tweets by Comparing Logistic Regression and Naive Bayes Classifiers. 2018 IEEE International Conference on Information Reuse and Integration (IRI). :269–276.

At a time when all it takes to open a Twitter account is a mobile phone, authenticating information encountered on social media becomes very complex, especially when we lack measures to verify digital identities in the first place. Because the platform supports anonymity, fake news generated by dubious sources has been observed to travel much faster and farther than real news. Hence, we need valid measures to identify the authors of misinformation to avert these consequences. Researchers have proposed different authorship attribution techniques for this kind of problem; however, because tweets are limited to 280 characters, finding a suitable authorship attribution technique is a challenge. This research classifies the authors of tweets by comparing machine learning methods, namely logistic regression and naive Bayes. The stages of this application are fetching tweets, preprocessing, feature extraction, and developing a machine learning model for classification. In total, 46,895 tweets were used as training and testing data, and unique features specific to Twitter were extracted. The preprocessing phase included removal of short texts, removal of stop words and punctuation, and tokenizing and stemming of texts. The preprocessed data were transformed into a set of feature vectors in Python. Logistic regression and naive Bayes algorithms were applied to the feature vectors for training and testing the classifier. The logistic-regression-based classifier gave the highest accuracy, 91.1%, compared to the naive Bayes classifier with 89.8%.
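
A compact sketch of the reported comparison under toy assumptions: shared preprocessing and TF-IDF features, then logistic regression and multinomial naive Bayes evaluated on the same split. The four toy tweets and the plain TF-IDF features are stand-ins; the paper also extracts Twitter-specific features.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import MultinomialNB

    tweets = ["just landed in DC, big announcement tomorrow",
              "they never went to the moon, wake up people",
              "quarterly results out now, link in bio",
              "share before they delete this, media is silent"] * 25
    authors = ["a", "b", "c", "b"] * 25          # toy author labels

    X = TfidfVectorizer(stop_words="english").fit_transform(tweets)
    X_tr, X_te, y_tr, y_te = train_test_split(X, authors, test_size=0.3,
                                              random_state=42)

    for clf in (LogisticRegression(max_iter=1000), MultinomialNB()):
        clf.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, clf.predict(X_te))
        print(type(clf).__name__, f"accuracy: {acc:.3f}")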