Biblio
False news has become widespread over the last decade across political, economic, and social dimensions, aided by the deep entrenchment of social media networking in these spheres. Platforms such as Facebook and Twitter are known to influence people's behavior significantly: people rely on news and information posted on their favorite social media sites to make purchase decisions, and news posted on mainstream and social media platforms has a significant impact on a country's economic stability and social tranquility. There is therefore a need to develop a deception-detection system that evaluates news items in order to avoid the repercussions of the rapid dispersion of fake news on social media and other online platforms. To achieve this, the proposed system uses the results of the preprocessing stage to assign a specific vector to each word; each vector represents intrinsic characteristics of the word. The resulting word vectors are then applied to RNN models before proceeding to an LSTM model, whose output determines whether the news article is fake or genuine.
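A minimal sketch of this kind of pipeline is shown below, assuming pre-tokenized integer sequences from the preprocessing stage; the layer sizes, vocabulary size, and label convention are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a word-vector -> RNN -> LSTM fake-news classifier (assumed hyperparameters).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000   # assumed vocabulary size after preprocessing
MAX_LEN = 300        # assumed maximum article length in tokens
EMBED_DIM = 100      # assumed word-vector dimensionality

model = models.Sequential([
    # Assigns a dense vector to each word index produced by the preprocessing stage.
    layers.Embedding(VOCAB_SIZE, EMBED_DIM, input_length=MAX_LEN),
    # Simple recurrent layer applied to the word vectors before the LSTM stage.
    layers.SimpleRNN(64, return_sequences=True),
    # LSTM layer whose output drives the final fake/genuine decision.
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # 1 = fake, 0 = genuine (assumed convention)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Dummy call; real use would pass tokenized news articles and their labels.
x_dummy = np.random.randint(0, VOCAB_SIZE, size=(8, MAX_LEN))
y_dummy = np.random.randint(0, 2, size=(8, 1))
model.fit(x_dummy, y_dummy, epochs=1, verbose=0)
```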
The value and size of the information exchanged through dark-web pages are remarkable. Recently, many studies have shown the value of using machine-learning methods to extract security-related knowledge from dark-web pages. Within this scope, our goal in this research is to evaluate the best prediction models while analyzing traffic-level data coming from the dark web. The results and analysis showed that feature selection played an important role in identifying the best models: the right combination of features can increase a model's accuracy. For some combinations of feature sets and classifiers, Src Port and Dst Port both proved to be important features. When available, they were always selected over most other features; when absent, many other features were selected to compensate for the information they provided. The Protocol feature was never selected, regardless of whether Src Port and Dst Port were available.
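The sketch below shows one way such a feature-selection comparison could be run; the CSV path and exact column names ("Src Port", "Dst Port", "Protocol", "Label") are assumptions about a flow-level traffic dump, not the authors' dataset schema, and recursive feature elimination stands in for whichever selection method the study used.

```python
# Hypothetical feature-selection experiment on dark-web traffic-level data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

df = pd.read_csv("darkweb_traffic.csv")      # hypothetical traffic-level dataset
X = df.drop(columns=["Label"])               # flow features such as Src Port, Dst Port, Protocol
y = df["Label"]                               # dark-web vs. benign traffic label (assumed)

# Recursive feature elimination with a tree-based classifier:
# repeatedly drops the weakest features and reports which ones survive.
selector = RFE(RandomForestClassifier(n_estimators=100, random_state=0),
               n_features_to_select=10)
selector.fit(X, y)

print("Selected features:", list(X.columns[selector.support_]))
# Re-running after dropping the "Src Port"/"Dst Port" columns shows how many extra
# features are pulled in to compensate, mirroring the observation in the abstract.
```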
This paper deals with the problem of image forgery detection because of the problems forged images cause. Fake images can lead to social problems, for example misleading public opinion about political or religious figures, defaming celebrities and ordinary people, and, when presented as evidence in a court of law, misleading the court. This work proposes a deep learning approach based on a deep CNN (Convolutional Neural Network) architecture to detect fake images. The network is based on a modified Xception net, a CNN built on depthwise separable convolution layers. After the feature maps are extracted, pooling layers are densely connected with the Xception output to increase the number of feature maps, inspired by the DenseNet architecture. In addition, the work uses the YCbCr color system for the images, which gave a better accuracy of 99.93%, higher than RGB, HSV, Lab, and other color systems.
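A rough sketch of the two ideas in this abstract follows: converting images to YCbCr before classification, and an Xception backbone whose pooled feature maps are concatenated into the classification head. The pooling pattern, layer sizes, and function names here are simplifying assumptions, not the paper's exact modified architecture.

```python
# Hypothetical YCbCr + Xception-based forgery classifier (simplified stand-in).
import numpy as np
from PIL import Image
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import Xception

def load_ycbcr(path, size=(299, 299)):
    """Load an image and convert it from RGB to the YCbCr color system."""
    img = Image.open(path).convert("YCbCr").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

inputs = layers.Input(shape=(299, 299, 3))
backbone = Xception(include_top=False, weights=None, input_tensor=inputs)

# Concatenate two pooled views of the Xception feature maps, a simplified stand-in
# for the DenseNet-style dense connection of pooling layers described above.
feat = backbone.output
avg = layers.GlobalAveragePooling2D()(feat)
mx = layers.GlobalMaxPooling2D()(feat)
merged = layers.Concatenate()([avg, mx])

outputs = layers.Dense(1, activation="sigmoid")(merged)   # fake vs. authentic image
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```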
The rise of social media has brought with it the rise of fake news, and this fake news carries negative consequences. With fake news being such a significant issue, efforts should be made to identify it in all its forms; however, this is not simple. Manually identifying fake news can be extremely subjective, as determining the accuracy of the information in a story is complex and difficult even for experts. On the other hand, an automated solution requires a good understanding of NLP, which is also complex and may have difficulty producing accurate output. Therefore, the main problem addressed in this project is the viability of developing a system that can effectively and accurately detect and identify fake news. Finding a solution would be a significant benefit to the media industry, particularly the social media industry, as this is where a large proportion of fake news is published and spread. To address this problem, this project proposed the development of a fake news identification system using deep learning and natural language processing. The system was developed using a Word2vec model combined with a Long Short-Term Memory (LSTM) model in order to showcase the compatibility of the two models within a single system. The system was trained and tested on two different dataset collections, each consisting of one real-news dataset and one fake-news dataset. Furthermore, three independent variables were chosen: the number of training cycles, data diversity, and vector size. The relationship between these variables and the accuracy of the system was analyzed, and all three were found to have a significant effect on accuracy. The system was then trained and tested with the optimal variable settings and achieved the minimum expected accuracy level of 90%. Achieving this accuracy level confirms the compatibility of the LSTM and Word2vec models and their capability to be combined into a single system that identifies fake news with a high level of accuracy.
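The sketch below illustrates how a Word2vec model can feed an LSTM classifier of this kind, using gensim and tf.keras. The toy corpus, vector size, sequence length, and label convention are illustrative assumptions, not the project's actual datasets or settings.

```python
# Hypothetical Word2vec + LSTM fake-news classifier (toy corpus for illustration only).
import numpy as np
import tensorflow as tf
from gensim.models import Word2Vec
from tensorflow.keras import layers, models

# Tokenized articles; real use would draw on the real-news and fake-news datasets.
sentences = [["the", "president", "signed", "the", "bill"],
             ["aliens", "secretly", "control", "the", "stock", "market"]]
labels = np.array([0, 1])          # 0 = real, 1 = fake (assumed convention)

VECTOR_SIZE = 100                  # vector size: one of the independent variables studied
MAX_LEN = 20

w2v = Word2Vec(sentences, vector_size=VECTOR_SIZE, window=5, min_count=1, epochs=10)

# Build an embedding matrix from the trained Word2vec vectors (index 0 reserved for padding).
vocab = {w: i + 1 for i, w in enumerate(w2v.wv.index_to_key)}
emb = np.zeros((len(vocab) + 1, VECTOR_SIZE))
for word, idx in vocab.items():
    emb[idx] = w2v.wv[word]

def encode(tokens):
    ids = [vocab.get(t, 0) for t in tokens][:MAX_LEN]
    return ids + [0] * (MAX_LEN - len(ids))

X = np.array([encode(s) for s in sentences])

model = models.Sequential([
    layers.Embedding(emb.shape[0], VECTOR_SIZE, weights=[emb], trainable=False),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=2, verbose=0)   # "training cycles" correspond to epochs here
```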