Biblio
Web evolution and Web 2.0 social media tools facilitate communication and support the online economy. However, these tools are also actively used by extremist, terrorist, and criminal groups, which exploit new communication channels such as forums, blogs, and social networks to spread their ideologies, recruit new members, market illicit goods, and raise funds. These groups rely on the anonymous communication methods that the new Web provides. This malicious part of the web is called the "dark web". Dark web analysis has become an active research area in the last few decades, and multiple studies have been conducted to understand this adversary and plan countermeasures. We have conducted a systematic literature review to identify the state of the art and the open research areas in dark web analysis. We filtered the available research papers to retain the most relevant work, which yielded 28 studies out of 370. Our systematic review is based on four main factors: the research trends used to analyze the dark web, the employed analysis techniques, the analyzed artifacts, and the accuracy and confidence of the available work. Our review shows that most dark web research relies on content analysis and that forum threads are the most frequently analyzed artifacts. The most significant observation is that most of the relevant studies apply no accuracy metrics or validation techniques. Researchers are therefore advised to use acceptance metrics and validation techniques in their future work in order to guarantee the confidence of their results. In addition, our review identifies open research areas in dark web analysis that can be considered for future research.
Java locking is an essential facility in the development of applications and systems, mainly because several modules may run concurrently inside an application and must be coordinated properly for the whole application or system to remain correct and stable. This paper therefore compares various Java locking mechanisms in order to better understand how these locks work and how to apply a proper locking strategy. The locks are compared according to CPU usage, memory consumption, and ease of implementation, with the aim of guiding developers in choosing locks for different scenarios. For example, with pessimistic locks, a thread must acquire the lock before accessing a shared resource; this guarantees a certain level of data safety, but it incurs considerable CPU overhead and reduces efficiency. Different locks also have different memory footprints, and developers are sometimes faced with the need to choose locks rationally under limited memory, or they will cause a series of memory problems. In particular, the comparison of Java locks leads to a systematic classification of these locks and helps improve the understanding of their taxonomy.
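The following is a minimal, self-contained sketch of the kind of comparison the abstract describes, contrasting a pessimistic lock (ReentrantLock, acquired before every update) with an optimistic, CAS-based update (AtomicLong). The counter workload, thread count, and class name are illustrative assumptions, not taken from the paper.

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical demo: pessimistic locking vs. optimistic CAS on a shared counter.
public class LockComparisonDemo {

    private static final int THREADS = 4;
    private static final int INCREMENTS_PER_THREAD = 1_000_000;

    // Pessimistic: every thread must acquire the lock before touching the state.
    private static long counterLocked = 0;
    private static final ReentrantLock LOCK = new ReentrantLock();

    // Optimistic: threads retry a compare-and-swap instead of blocking.
    private static final AtomicLong counterCas = new AtomicLong();

    public static void main(String[] args) throws InterruptedException {
        long lockedNanos = run(() -> {
            LOCK.lock();
            try {
                counterLocked++;
            } finally {
                LOCK.unlock();
            }
        });
        long casNanos = run(counterCas::incrementAndGet);

        System.out.printf("ReentrantLock: %d increments in %d ms%n",
                counterLocked, lockedNanos / 1_000_000);
        System.out.printf("AtomicLong CAS: %d increments in %d ms%n",
                counterCas.get(), casNanos / 1_000_000);
    }

    // Runs the given increment action on THREADS threads; returns elapsed nanoseconds.
    private static long run(Runnable incrementAction) throws InterruptedException {
        Thread[] workers = new Thread[THREADS];
        long start = System.nanoTime();
        for (int i = 0; i < THREADS; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < INCREMENTS_PER_THREAD; j++) {
                    incrementAction.run();
                }
            });
            workers[i].start();
        }
        for (Thread worker : workers) {
            worker.join();
        }
        return System.nanoTime() - start;
    }
}

Timing results of such a micro-benchmark are machine-dependent; the point of the sketch is only to show how the pessimistic and optimistic styles differ in structure, which is what the paper's CPU and memory comparison is concerned with.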
A new framework is presented in this paper for proving coding theorems for linear codes, in which the systematic bits and the corresponding parity-check bits play different roles. Precisely, the noisy systematic bits are used to limit the list size of typical codewords, while the noisy parity-check bits are used to select the maximum-likelihood codeword from that list. This framework allows the systematic bits and the parity-check bits to be transmitted in different ways and over different channels. In particular, it unifies the source coding theorems and the channel coding theorems. With this framework, we prove that Bernoulli generator matrix codes (BGMCs) are capacity-achieving over binary-input output-symmetric (BIOS) channels and entropy-achieving for Bernoulli sources.
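A minimal sketch of the two-stage decoding rule suggested by this description, with notation assumed here rather than taken from the paper: let $\mathbf{y}_s$ and $\mathbf{y}_p$ denote the noisy observations of the systematic and parity-check bits, and let $\mathbf{G}$ be the (Bernoulli) generator matrix producing the parity-check bits from the message $\mathbf{u}$. Then
\[
\mathcal{L}(\mathbf{y}_s) = \bigl\{\mathbf{u} : (\mathbf{u}, \mathbf{y}_s)\ \text{jointly typical}\bigr\},
\qquad
\hat{\mathbf{u}} = \arg\max_{\mathbf{u} \in \mathcal{L}(\mathbf{y}_s)} P\bigl(\mathbf{y}_p \mid \mathbf{u}\mathbf{G}\bigr),
\]
i.e., the systematic observation prunes the candidate list and the parity observation performs the maximum-likelihood selection within it.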
ISSN: 2157-8117