Biblio
Social media has both beneficial and detrimental effects on social life. The widespread distribution of false information on social media has become a worldwide threat, and fake news detection in social networks has consequently grown into an emerging research area. A centralized training approach makes it difficult to build a generalized model that incorporates numerous data sources. In this study, we develop a decentralized deep learning model for fake news detection using Federated Learning (FL). We train the model on the ISOT fake news dataset, gathered from "Reuters.com" (N = 44,898). The performance of the decentralized and centralized models is then assessed using accuracy, precision, recall, and F1-score, and we further measure performance while varying the number of FL clients. Our proposed decentralized FL approach achieves high accuracy (99.6%) using fewer communication rounds than previous studies, even without pre-trained word embeddings, and it outperforms three earlier works. The FL approach can thus be used more efficiently than a centralized method for fake news detection, and Blockchain-like technologies could further improve the integrity and validity of news sources.
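The decentralized training described above can be sketched with Federated Averaging (FedAvg), the standard FL aggregation scheme. This is a minimal toy sketch, not the paper's actual model: the "model" is a plain weight vector, and the client counts, gradients, and learning rate are illustrative assumptions.

```python
# Minimal FedAvg sketch: each client trains locally, the server averages the
# resulting models weighted by each client's data volume. The toy linear
# "model" and the client data sizes below are assumptions for illustration.

def local_update(weights, grad, lr=0.1):
    """One simulated local training step on a client (gradient descent)."""
    return [w - lr * g for w, g in zip(weights, grad)]

def fed_avg(client_weights, client_sizes):
    """Server-side aggregation: average client models weighted by data size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(cw[i] * n for cw, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# One communication round with three clients holding different data volumes.
global_w = [0.0, 0.0]
client_sizes = [100, 300, 600]
client_grads = [[1.0, -1.0], [0.5, 0.5], [-0.2, 0.8]]  # assumed local gradients

client_ws = [local_update(global_w, g) for g in client_grads]
global_w = fed_avg(client_ws, client_sizes)
```

In a real system each round would repeat this loop, and only model weights (never raw news articles) would leave the clients, which is what makes the approach decentralized.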
While the existence of many security elements in software can be measured (e.g., vulnerabilities, security controls, or privacy controls), it is challenging to measure their relative security impact. In the physical world we can often measure the impact of individual elements on a system. In cyber security, however, we often lack ground truth (i.e., the ability to directly measure significance). In this work we propose to solve this by leveraging human expert opinion to provide ground truth. Experts are iteratively asked to compare pairs of security elements to determine their relative significance. On the back end, our knowledge encoding tool performs a form of binary insertion sort on a set of security elements using each expert as an oracle for the element comparisons. The tool not only sorts the elements (note that equality may be permitted), but also records the strength or degree of each relationship. The output is a directed acyclic 'constraint' graph that provides a total ordering among the sets of equivalent elements. Multiple constraint graphs are then unified into a single graph that is used to generate a scoring or prioritization system. For our empirical study, we apply this domain-agnostic measurement approach to generate scoring/prioritization systems in the areas of vulnerability scoring, privacy control prioritization, and cyber security control evaluation.
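The oracle-driven sorting idea above can be sketched as a binary insertion sort whose comparator is an expert judgment. The `expert_oracle` function below is a hypothetical stand-in (a real deployment queries a human), the element names and severity values are invented for illustration, and the paper's recording of relationship strength is simplified away to a plain ordering with equivalence sets.

```python
# Sketch of binary insertion sort over security elements with an "expert"
# as the comparison oracle. Ties are permitted and form equivalence sets,
# mirroring the total ordering among sets of equivalent elements.

def expert_oracle(a, b):
    """Hypothetical stand-in for an expert: -1 if a is less significant
    than b, 0 if equivalent, 1 if more significant."""
    severity = {"Info-leak": 1, "XSS": 2, "CSRF": 2, "SQLi": 3, "RCE": 4}
    return (severity[a] > severity[b]) - (severity[a] < severity[b])

def insert_element(ranked, elem, oracle):
    """Binary-search for elem's position, asking the oracle O(log n) times."""
    lo, hi = 0, len(ranked)
    while lo < hi:
        mid = (lo + hi) // 2
        cmp = oracle(elem, ranked[mid][0])
        if cmp == 0:                  # equivalent: join the existing set
            ranked[mid].append(elem)
            return
        elif cmp < 0:
            hi = mid
        else:
            lo = mid + 1
    ranked.insert(lo, [elem])         # new equivalence set at position lo

ranked = []  # list of equivalence sets, least to most significant
for e in ["XSS", "SQLi", "CSRF", "RCE", "Info-leak"]:
    insert_element(ranked, e, expert_oracle)
```

Binary insertion keeps the number of oracle queries per element logarithmic, which matters when each comparison costs an expert's time.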
Web evolution and Web 2.0 social media tools facilitate communication and support the online economy. On the other hand, these tools are actively used by extremist, terrorist and criminal groups. These malicious groups use the new communication channels, such as forums, blogs and social networks, to spread their ideologies, recruit new members, market their malicious goods and raise funds, relying on the anonymous communication methods provided by the new Web. This malicious part of the web is called the "dark web". Dark web analysis has become an active research area in the last few decades, and multiple studies have been conducted to understand the adversary and plan countermeasures. We have conducted a systematic literature review to identify the state of the art and open research areas in dark web analysis. We filtered the available research papers to obtain the most relevant work; this filtration yielded 28 studies out of 370. Our systematic review is based on four main factors: the research trends used to analyze the dark web, the employed analysis techniques, the analyzed artifacts, and the accuracy and confidence of the available work. Our review results show that most dark web research relies on content analysis and that forum threads are the most analyzed artifacts. The most significant observation is that most of the relevant studies do not apply any accuracy metrics or validation techniques. Researchers are therefore advised to use acceptance metrics and validation techniques in their future work to guarantee the confidence of their results. In addition, our review has identified several open research areas in dark web analysis that can be considered for future work.
In construction machinery, connectivity delivers significant advantages: higher productivity, lower costs, and, most importantly, a safer work environment. As the machinery grows more dependent on internet-connected technologies, data security and product cybersecurity become more critical than ever. These machines face more cyber risks than other automotive segments because they have more complex software, a larger after-market, widespread use of the standardized SAE J1939 protocol, and connectivity through long-distance wireless communication channels (e.g., LTE interfaces for fleet management systems). Construction machinery also operates throughout the day, meaning it is connected and monitored continuously. To date, construction machinery manufacturers are still investigating product cybersecurity challenges in threat monitoring, security testing, and establishing security governance and policies, and there are few security testing methodologies for SAE J1939 CAN protocols, although several frameworks for fuzz testing CAN networks have been proposed [1]. This paper proposes security testing methods (fuzzing and penetration testing) for in-vehicle communication protocols in construction machinery.
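The fuzz-testing approach mentioned above can be illustrated with a minimal mutation fuzzer for SAE J1939 frames. The 29-bit extended CAN identifier layout (3-bit priority, 18-bit PGN, 8-bit source address) follows the J1939 specification, but the seed message and the offline frame list are assumptions: real testing would transmit each frame on an actual CAN bus (e.g., via a library such as python-can) and monitor the ECU's response.

```python
# Illustrative sketch of a dumb mutation fuzzer for SAE J1939 frames.
# The seed payload and PGN choice below are assumptions for illustration.
import random

def j1939_id(priority, pgn, source_addr):
    """Pack priority (3 bits), PGN (18 bits: EDP+DP+PF+PS), and source
    address (8 bits) into a 29-bit extended CAN identifier."""
    return (priority << 26) | (pgn << 8) | source_addr

def mutate(payload, n_flips=2, rng=random):
    """Flip a few random bits in an 8-byte payload (classic dumb fuzzing)."""
    data = bytearray(payload)
    for _ in range(n_flips):
        i = rng.randrange(len(data))
        data[i] ^= 1 << rng.randrange(8)
    return bytes(data)

# Seed frame: Electronic Engine Controller 1 (PGN 0xF004), source address 0x00.
seed = bytes(8)
frames = [(j1939_id(3, 0xF004, 0x00), mutate(seed)) for _ in range(100)]
# Each (can_id, payload) pair would then be transmitted on the bus and the
# target ECU observed for crashes, resets, or undefined behavior.
```

Bit-flip mutation is the simplest fuzzing strategy; protocol-aware (smart) fuzzing that mutates PGN-specific signal fields tends to reach deeper ECU logic.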