Biblio
Social media has both beneficial and detrimental effects on social life. The widespread distribution of false information on social media has become a worldwide threat, and fake news detection in social networks has consequently grown into an active research area. Centralized training makes it difficult to build a generalized model that incorporates numerous data sources. In this study, we develop a decentralized deep learning model for fake news detection using Federated Learning (FL). We train the model on the ISOT fake news dataset gathered from "Reuters.com" (N = 44,898). The decentralized and centralized models are then compared using accuracy, precision, recall, and F1-score, and performance is also measured while varying the number of FL clients. Our proposed decentralized FL approach reaches high accuracy (99.6%) with fewer communication rounds than previous studies, even without pre-trained word embeddings, and it outperforms the models reported in three earlier studies. These results indicate that FL can be used more efficiently than a centralized method for false news detection, and that Blockchain-like technologies can further improve the integrity and validity of news sources.
ISSN: 2577-1647
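The entry above describes decentralized training over several FL clients with periodic communication rounds. The following is a minimal sketch of one common aggregation rule, federated averaging (FedAvg); it is an illustration only, not the paper's implementation, and the names train_fn, client_shards, and the round count are assumptions.

# Minimal FedAvg sketch: each client trains locally on its own shard of news
# articles; the server averages the returned weights once per communication round.

def fed_avg(client_weight_lists, client_sizes):
    # Weighted average of client models, proportional to local dataset size.
    total = sum(client_sizes)
    n_params = len(client_weight_lists[0])
    return [
        sum(w[i] * (size / total) for w, size in zip(client_weight_lists, client_sizes))
        for i in range(n_params)
    ]

def federated_training(initial_weights, client_shards, train_fn, rounds=5):
    # train_fn(weights, shard) is assumed to run a few local epochs of the
    # deep fake-news classifier and return the updated weight list.
    global_weights = initial_weights
    for _ in range(rounds):                      # communication rounds
        updates = [train_fn(global_weights, shard) for shard in client_shards]
        sizes = [len(shard) for shard in client_shards]
        global_weights = fed_avg(updates, sizes)
    return global_weights

# Toy usage: two "clients" with unequal shard sizes and a dummy local update rule.
clients = [[("text", 1)] * 30, [("text", 0)] * 10]
dummy_train = lambda w, shard: [p + 0.1 * len(shard) for p in w]
print(federated_training([0.0, 0.0], clients, dummy_train, rounds=2))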
While the existence of many security elements in software can be measured (e.g., vulnerabilities, security controls, or privacy controls), it is challenging to measure their relative security impact. In the physical world we can often measure the impact of individual elements on a system, but in cyber security we often lack ground truth (i.e., the ability to directly measure significance). In this work we propose to address this by leveraging human expert opinion to provide ground truth. Experts are iteratively asked to compare pairs of security elements to determine their relative significance. On the back end, our knowledge-encoding tool performs a form of binary insertion sort on a set of security elements, using each expert as an oracle for the element comparisons. The tool not only sorts the elements (equality may be permitted) but also records the strength, or degree, of each relationship. The output is a directed acyclic ‘constraint’ graph that provides a total ordering among the sets of equivalent elements. Multiple constraint graphs are then unified into a single graph that is used to generate a scoring or prioritization system. For our empirical study, we apply this domain-agnostic measurement approach to generate scoring/prioritization systems in the areas of vulnerability scoring, privacy control prioritization, and cyber security control evaluation.
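The core mechanism in the entry above is a binary insertion sort driven by an expert oracle that may also report equality. The sketch below illustrates that idea under stated assumptions: ask_expert, the element names, and the hidden severity scores are hypothetical, and the strength-of-relationship recording and constraint-graph unification steps are omitted.

# Expert-guided binary insertion sort (illustrative sketch only).
# ask_expert(a, b) stands in for a human judgment and is assumed to return
# -1 (a less significant than b), 0 (equivalent), or 1 (a more significant).

def insert_element(groups, element, ask_expert):
    # groups is a list of equivalence classes, ordered from least to most
    # significant. Binary search locates where the new element belongs,
    # querying the expert only O(log n) times.
    lo, hi = 0, len(groups)
    while lo < hi:
        mid = (lo + hi) // 2
        verdict = ask_expert(element, groups[mid][0])
        if verdict == 0:
            groups[mid].append(element)   # equivalent: join existing class
            return groups
        elif verdict < 0:
            hi = mid
        else:
            lo = mid + 1
    groups.insert(lo, [element])          # new equivalence class
    return groups

def sort_elements(elements, ask_expert):
    groups = []
    for e in elements:
        groups = insert_element(groups, e, ask_expert)
    return groups   # total ordering over sets of equivalent elements

# Example with a scripted "expert" that ranks by a hidden severity score.
severity = {"weak-password": 2, "sql-injection": 5, "open-port": 2, "rce": 9}
oracle = lambda a, b: (severity[a] > severity[b]) - (severity[a] < severity[b])
print(sort_elements(list(severity), oracle))
# [['weak-password', 'open-port'], ['sql-injection'], ['rce']]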
Cutting-edge biometric recognition systems extract distinctive feature vectors from biometric samples using deep neural networks and measure the amount of (dis-)similarity between two samples from those vectors. Studies have shown that personal information (e.g., health condition, ethnicity, etc.) can be inferred and biometric samples can be reconstructed from these feature vectors, making their protection an urgent necessity. State-of-the-art biometric protection solutions are based on homomorphic encryption (HE) to perform recognition over encrypted feature vectors, hiding the features and their processing while releasing only the outcome. However, this comes at a significant efficiency cost, since HE is inefficient when many multiplications are required; for (dis-)similarity measures, the number of multiplications is proportional to the vector's dimension. In this paper, we tackle the HE performance bottleneck by freeing the two common (dis-)similarity measures, the cosine similarity and the squared Euclidean distance, from multiplications. Assuming normalized feature vectors, our approach pre-computes and organizes these (dis-)similarity measures into lookup tables, transforming their computation into simple table lookups and summation only. We study quantization parameters for the values in the lookup tables and evaluate performance on both synthetic and facial feature vectors, achieving recognition performance identical to the non-tabularized baseline systems. We then assess efficiency under HE and record runtimes between 28.95 ms and 59.35 ms across the three security levels, demonstrating the enhanced speed of our approach.
ISSN: 2474-9699
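The entry above replaces multiplications with table lookups over quantized feature values. The following is a minimal plaintext sketch of that idea (no HE); the quantization grid, vector dimension, and function names are assumptions and differ from the paper's construction.

# Table-lookup (dis-)similarity sketch: feature values are quantized to a small
# grid, all pairwise products and squared differences are pre-computed once, and
# the cosine similarity or squared Euclidean distance of two normalized vectors
# becomes lookups plus a summation, with no multiplications at match time.
import numpy as np

def build_tables(levels):
    # levels: 1-D array of representable quantized values, e.g. an 8-bit grid in [-1, 1]
    prod_table = np.outer(levels, levels)                  # for cosine similarity
    diff_table = (levels[:, None] - levels[None, :]) ** 2  # for squared Euclidean distance
    return prod_table, diff_table

def quantize(vec, levels):
    # Map each component to the index of its nearest quantization level.
    return np.abs(vec[:, None] - levels[None, :]).argmin(axis=1)

def cosine_from_table(idx_a, idx_b, prod_table):
    return prod_table[idx_a, idx_b].sum()     # table lookups + summation only

def sq_euclidean_from_table(idx_a, idx_b, diff_table):
    return diff_table[idx_a, idx_b].sum()

# Toy usage with an 8-bit quantization grid and random unit-norm "feature vectors".
levels = np.linspace(-1.0, 1.0, 256)
prod_t, diff_t = build_tables(levels)
a = np.random.randn(128); a /= np.linalg.norm(a)
b = np.random.randn(128); b /= np.linalg.norm(b)
ia, ib = quantize(a, levels), quantize(b, levels)
print(cosine_from_table(ia, ib, prod_t), float(a @ b))  # table-based vs direct value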