Biblio
We present an online framework for learning and updating security policies in dynamic IT environments. It includes three components: a digital twin of the target system, which continuously collects data and evaluates learned policies; a system identification process, which periodically estimates system models based on the collected data; and a policy learning process that is based on reinforcement learning. To evaluate our framework, we apply it to an intrusion prevention use case that involves a dynamic IT infrastructure. Our results demonstrate that the framework automatically adapts security policies to changes in the IT infrastructure and that it outperforms a state-of-the-art method.
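To make the policy-learning component of this abstract concrete, the following is a minimal, illustrative sketch of a reinforcement-learning loop against a gym-style environment standing in for the digital twin; the environment and all hyperparameters are assumptions for illustration, not the authors' actual framework.

```python
# Illustrative Q-learning loop; a discrete gym-style env is assumed to play
# the role of the digital twin that evaluates candidate security policies.
import numpy as np

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    q = np.zeros((env.observation_space.n, env.action_space.n))
    for _ in range(episodes):
        s, _ = env.reset()
        done = False
        while not done:
            # epsilon-greedy action selection
            a = env.action_space.sample() if np.random.rand() < eps else int(q[s].argmax())
            s2, r, terminated, truncated, _ = env.step(a)
            done = terminated or truncated
            # temporal-difference update toward the observed reward
            q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])
            s = s2
    return q  # greedy policy: argmax over actions per state
```

In the framework described above, such a loop would be re-run whenever the system identification process produces an updated model, so the policy tracks changes in the IT infrastructure.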
Social media has both beneficial and detrimental impacts on social life, and the widespread distribution of false information on it has become a worldwide threat. As a result, fake news detection in social networks has risen in popularity and is now considered an emerging research area. Centralized training makes it difficult to build a generalized model that accommodates numerous data sources. In this study, we develop a decentralized deep learning model for fake news detection using Federated Learning (FL). We train the model on the ISOT fake news dataset gathered from "Reuters.com" (N = 44,898). The performance of the decentralized and centralized models is then assessed using accuracy, precision, recall, and F1-score, and is further measured while varying the number of FL clients. The proposed decentralized FL technique achieves high accuracy (99.6%) using fewer communication rounds than previous studies, even without pre-trained word embeddings, and obtains the best results when compared against three earlier works. The FL technique can therefore be used more efficiently than a centralized method for false news detection, and Blockchain-like technologies can further improve the integrity and validity of news sources.
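A minimal sketch of the federated-averaging idea behind this abstract is shown below, assuming PyTorch, a simple binary text classifier with floating-point parameters, and an arbitrary client split; the model, round count, and learning rate are illustrative assumptions, not the paper's exact setup.

```python
# FedAvg-style sketch: each client trains locally on its own data, then the
# server averages the client weights into a new global model.
import copy
import torch

def fed_avg(global_model, client_loaders, rounds=5, lr=0.01):
    for _ in range(rounds):
        client_states = []
        for loader in client_loaders:
            local = copy.deepcopy(global_model)          # send global weights to the client
            opt = torch.optim.SGD(local.parameters(), lr=lr)
            loss_fn = torch.nn.BCEWithLogitsLoss()
            for x, y in loader:                          # one local epoch per round
                opt.zero_grad()
                loss_fn(local(x).squeeze(-1), y.float()).backward()
                opt.step()
            client_states.append(local.state_dict())
        # Server step: average client weights parameter-by-parameter
        avg = {k: torch.stack([s[k].float() for s in client_states]).mean(0)
               for k in client_states[0]}
        global_model.load_state_dict(avg)
    return global_model
```

Each communication round in the abstract corresponds to one outer iteration here, which is why fewer rounds directly translate into less communication between clients and server.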
Fake news is a new phenomenon that promotes misleading information and fraud via internet social media or traditional news sources. Fake news is readily manufactured and transmitted across numerous social media platforms nowadays, and it has a significant influence on the real world. It is therefore vital to create effective algorithms and tools for detecting misleading information on social media platforms. Most modern research approaches for identifying fraudulent information rely on machine learning, deep learning, feature engineering, graph mining, image and video analysis, and newly built datasets and online services, yet there is still a pressing need for a viable approach that readily detects misleading information. In this work, deep learning LSTM and Bi-LSTM models are proposed for detecting fake news. First, the NLTK toolkit is used to remove stop words, punctuation, and special characters from the text, and the same toolkit is used to tokenize and preprocess it. GloVe word embeddings then provide higher-level representations of the preprocessed text, while the RNN-LSTM and Bi-LSTM layers capture long-term relationships between word sequences. The proposed model additionally employs dropout together with dense layers to improve efficiency. The RNN Bi-LSTM-based technique obtains its best accuracies of 94% and 93% using the Adam optimizer and the binary cross-entropy loss function with dropout rates of 0.1 and 0.2, respectively; increasing the dropout rate further reduces accuracy, which drops to 92% at a dropout of 0.3.
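A minimal Keras sketch of a Bi-LSTM classifier along these lines is given below; the vocabulary size, sequence length, embedding dimension, layer widths, and the zero-filled placeholder for the GloVe matrix are illustrative assumptions rather than the paper's exact configuration.

```python
# Bi-LSTM fake-news classifier sketch with frozen (pre-trained) embeddings,
# dropout, Adam, and binary cross-entropy, mirroring the setup described above.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

vocab_size, seq_len, emb_dim = 20000, 300, 100
embedding_matrix = np.zeros((vocab_size, emb_dim))    # stand-in for real GloVe vectors

inputs = keras.Input(shape=(seq_len,))
x = layers.Embedding(vocab_size, emb_dim, trainable=False)(inputs)
x = layers.Bidirectional(layers.LSTM(64))(x)
x = layers.Dropout(0.2)(x)                            # dropout in the 0.1-0.3 range studied
x = layers.Dense(32, activation="relu")(x)
outputs = layers.Dense(1, activation="sigmoid")(x)    # probability of "fake"

model = keras.Model(inputs, outputs)
model.layers[1].set_weights([embedding_matrix])       # load the embedding matrix
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

The dropout rate is the knob the abstract reports on: raising it from 0.1-0.2 toward 0.3 regularizes more aggressively and, in their experiments, costs accuracy.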
Advanced video compression is required due to the rise of online video content, since a strong compression method can convey video data effectively over constrained bandwidth. During the COVID-19 epidemic in particular, increased internet usage for video conferencing, online gaming, and education led to decreased video quality from Netflix, YouTube, and other streaming services in Europe and other regions. Standard video compression algorithms represent video as a succession of reference frames followed by residual frames, and these approaches are limited in their application. The introduction and recent advances of deep learning have the potential to overcome such problems. This study provides a deep learning-based video compression model that meets or exceeds the current H.264 standard.
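As a purely illustrative sketch of the reference-plus-residual idea mentioned above (not the paper's actual network), a small convolutional autoencoder can compress the residual between a frame and its reference:

```python
# Learned residual compression sketch in PyTorch: encode the frame-minus-
# reference residual into a compact latent, then reconstruct the frame.
import torch
import torch.nn as nn

class ResidualAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # downsample the residual frame
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 8, 4, stride=2, padding=1),
        )
        self.decoder = nn.Sequential(            # reconstruct the residual
            nn.ConvTranspose2d(8, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, frame, reference):
        residual = frame - reference
        code = self.encoder(residual)            # compact latent to be entropy-coded
        return reference + self.decoder(code)    # reconstructed frame
```

In a full codec, the latent `code` would be quantized and entropy-coded, and the rate-distortion trade-off would be trained end to end.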
Current adversarial attacks against machine learning models can be divided into white-box attacks and black-box attacks, and black-box attacks can be further subdivided into soft-label and hard-label variants. A hard-label black-box model returns only the class with the highest prediction probability, which makes gradient estimation difficult; nevertheless, because such models are widely deployed, hard-label black-box attacks are of great research significance and application value. This paper proposes an Automatic Selection Attacks Framework (ASAF) for hard-label black-box models, which extends existing attack methods in two respects. First, ASAF applies model equivalence to select substitute models automatically, generates adversarial examples on them, and completes the black-box attack through the transferability of those examples. Second, specified feature selection and a parallel attack method are proposed to shorten the attack time and improve the attack success rate. The experimental results show that ASAF achieves more than a 90% non-targeted attack success rate on common models trained on traditional datasets, ResNet-101 (CIFAR-10) and Inception-V4 (ImageNet). Meanwhile, compared with FGSM and other attack algorithms, the attack time is reduced by at least 89.7% and 87.8% on the two datasets, respectively. ASAF also achieves a 90% attack success rate against an online model, BaiduAI digital recognition. In conclusion, ASAF is the first automatic selection attacks framework for hard-label black-box models, in which specified feature selection and parallel attack methods speed up automatic attacks.
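The transfer step at the core of this abstract can be sketched as follows, assuming PyTorch; the substitute model, the hard-label query function, and the perturbation budget are illustrative assumptions, and the example uses plain FGSM rather than ASAF's full selection machinery.

```python
# Transfer-attack sketch: craft an FGSM example on a white-box substitute and
# check whether it also fools a hard-label black box that returns only the
# top-1 predicted class.
import torch

def fgsm_transfer(substitute, black_box_predict, x, y, eps=8 / 255):
    x = x.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(substitute(x), y)
    loss.backward()
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()   # FGSM step on the substitute
    # Transferability test: the black box exposes only its predicted label
    return black_box_predict(x_adv) != y
```

ASAF's contribution, per the abstract, is to automate the choice of the substitute (via model equivalence) and to run such attacks with specified feature selection and in parallel, which is where the reported time savings come from.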