Biblio

Filters: Keyword is Web 2.0
2023-08-24
Sun, Chuang, Cao, Junwei, Huo, Ru, Du, Lei, Cheng, Xiangfeng.  2022.  Metaverse Applications in Energy Internet. 2022 IEEE International Conference on Energy Internet (ICEI). :7–12.
With the increasing number of distributed energy sources and the growing demand for the free exchange of energy, the Energy Internet (EI) is confronted with great challenges of persistent connection, stable transmission, real-time interaction, and security. A new definition of the metaverse in the EI field is proposed as a potential solution to these challenges by establishing a massive and comprehensive fused 3D network, which can be considered an advanced stage of EI. The main characteristics of the metaverse, such as reality-to-virtualization, interaction, persistence, and immersion, are introduced. Specifically, we present the key enabling technologies of the metaverse, including virtual reality, artificial intelligence, blockchain, and digital twins. Meanwhile, potential applications are presented from the perspectives of immersive user experience, virtual power stations, management, energy trading, new business, and device maintenance. Finally, some challenges of the metaverse in EI are summarized.
2023-06-02
Abdellatif, Tamer Mohamed, Said, Raed A., Ghazal, Taher M..  2022.  Understanding Dark Web: A Systematic Literature Review. 2022 International Conference on Cyber Resilience (ICCR). :1–10.

Web evolution and Web 2.0 social media tools facilitate communication and support the online economy. On the other hand, these tools are actively used by extremist, terrorist, and criminal groups. These malicious groups use new communication channels, such as forums, blogs, and social networks, to spread their ideologies, recruit new members, market their malicious goods, and raise funds. They rely on the anonymous communication methods provided by the new Web. This malicious part of the web is called the “dark web”. Dark web analysis has become an active research area in the last few decades, and multiple studies have been conducted to understand adversaries and plan countermeasures. We have conducted a systematic literature review to identify the state of the art and open research areas in dark web analysis. We filtered the available research papers to obtain the most relevant work; this filtration yielded 28 studies out of 370. Our systematic review is based on four main factors: the research trends used to analyze the dark web, the employed analysis techniques, the analyzed artifacts, and the accuracy and confidence of the available work. Our review shows that most dark web research relies on content analysis and that forum threads are the most analyzed artifacts. The most significant observation, however, is that most of the relevant studies do not apply any accuracy metrics or validation techniques. Researchers are therefore advised to use accuracy metrics and validation techniques in their future work in order to guarantee confidence in their results. In addition, our review identifies open research areas in dark web analysis that can be considered for future work.

2020-05-18
Panahandeh, Mahnaz, Ghanbari, Shirin.  2019.  Correction of Spaces in Persian Sentences for Tokenization. 2019 5th Conference on Knowledge Based Engineering and Innovation (KBEI). :670–674.
The exponential growth of the Internet and its users and the emergence of Web 2.0 have caused a large volume of textual data to be created. Automatic analysis of such data can be used in decision making. As online text is created by different producers with different styles of writing, pre-processing is a necessity prior to any natural language processing task. An essential part of textual pre-processing, prior to recognizing the word vocabulary, is normalization, which includes the correction of spaces; in Persian, this covers both full spaces between words and half-spaces. A review of user comments in social media services shows that in many cases users do not adhere to the grammatical rules for inserting either form of space, which increases the complexity of identifying words and hence reduces the accuracy of further processing of the text. In this study, current issues in the normalization and tokenization stages of Persian pre-processing tools are examined, and a method for identifying and correcting the separation of words and the spaces between them is proposed. The results, compared against those of leading pre-processing tools, highlight the significance of the proposed methodology.
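To make the half-space problem concrete, the fragment below repairs two frequent Persian spacing errors by replacing a full space with the half-space character (zero-width non-joiner, U+200C). This is an illustrative sketch, not the paper's method; the two rules and the example sentence are invented for demonstration:

```python
import re

ZWNJ = "\u200c"  # Persian half-space (zero-width non-joiner)

def normalize_spaces(text: str) -> str:
    """Rule-based correction of two common spacing errors in Persian text.

    1. The verbal prefix "می" should be joined to the verb stem with a
       half-space, not a full space ("می رود" -> "می‌رود").
    2. The plural suffix "ها" should be attached to the preceding noun with a
       half-space, not a full space ("کتاب ها" -> "کتاب‌ها").
    """
    # Replace the full space after the prefix "می" with a half-space.
    text = re.sub(r"\bمی (?=\S)", "می" + ZWNJ, text)
    # Replace the full space before the suffix "ها" with a half-space.
    text = re.sub(r"(?<=\S) ها\b", ZWNJ + "ها", text)
    return text

print(normalize_spaces("او کتاب ها را می خواند"))
```

A production normalizer needs a lexicon and many more rules (the prefix "می" can also be an ordinary word), which is precisely the ambiguity the paper's tokenization work addresses.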
2015-05-06
Sayed, B., Traore, I..  2014.  Protection against Web 2.0 Client-Side Web Attacks Using Information Flow Control. Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on. :261-268.

The dynamic nature of Web 2.0 and the heavy obfuscation of web-based attacks complicate the job of traditional protection systems such as firewalls, anti-virus solutions, and IDSs. Using ready-made toolkits, cyber-criminals can launch sophisticated attacks such as cross-site scripting (XSS), cross-site request forgery (CSRF), and botnets, to name a few. In recent years, cyber-criminals have targeted legitimate websites and social networks to inject malicious scripts that compromise the security of the visitors of such websites. This involves performing actions using the victim's browser without their permission, which creates the need for effective mechanisms to protect against Web 2.0 attacks that mainly target the end user. In this paper, we address these challenges from an information flow control perspective by developing a framework that restricts the flow of information on the client side to legitimate channels. The proposed model tracks sensitive information flow and prevents information leakage. When applied in the context of client-side web-based attacks, it is expected to provide a more secure browsing environment for the end user.
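The abstract's core idea, labeling sensitive data at its source and allowing it to flow only to legitimate channels, can be sketched in a few lines. The toy below is not the authors' browser framework; the `Tainted` label, the whitelist, and the `send` gate are invented to illustrate taint propagation and release control:

```python
class Tainted(str):
    """A string that carries a sensitivity label through the program."""
    def __add__(self, other):
        # Concatenation with tainted data yields tainted data.
        return Tainted(str.__add__(self, other))

# Hypothetical policy: only these channels may receive sensitive data.
LEGITIMATE_CHANNELS = {"https://bank.example.com"}

def send(channel: str, payload: str) -> str:
    """Release data only if it is untainted or the channel is whitelisted."""
    if isinstance(payload, Tainted) and channel not in LEGITIMATE_CHANNELS:
        raise PermissionError(f"blocked leak of sensitive data to {channel}")
    return f"sent to {channel}"

cookie = Tainted("session=abc123")  # sensitive source, e.g. a session cookie
send("https://bank.example.com", cookie)             # allowed: trusted channel
try:
    send("https://evil.example.net", cookie + "&x=1")  # taint propagates
except PermissionError as err:
    print(err)
```

A real client-side enforcement layer must also track implicit flows (e.g. branching on a secret) and instrument the DOM and network APIs, which is where the heavy lifting in such frameworks lies.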

Goseva-Popstojanova, K., Dimitrijevikj, A..  2014.  Distinguishing between Web Attacks and Vulnerability Scans Based on Behavioral Characteristics. Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on. :42-48.

The number of vulnerabilities and reported attacks on Web systems is increasing, which clearly illustrates the need for a better understanding of malicious cyber activities. In this paper we use clustering to classify attacker activities aimed at Web systems. The empirical analysis is based on four datasets, each spanning several months, collected by high-interaction honeypots. The results show that behavioral clustering analysis can be used to distinguish between attack sessions and vulnerability scan sessions; however, the performance depends heavily on the dataset. Furthermore, the results show that attacks differ from vulnerability scans in a small number of features (i.e., session characteristics). Specifically, for each dataset, the best feature selection method (in terms of high probability of detection and low probability of false alarm) selects only three features and results in three to four clusters, significantly improving clustering performance compared to using all features. The best subset of features and the extent of the improvement, however, also depend on the dataset.
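As a rough sketch of the behavioral clustering the paper describes (the three features, their values, and the two-cluster setup below are invented; the paper performs feature selection over real honeypot data), plain k-means over a few session-level features already separates scan-like from attack-like sessions:

```python
import math
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm: cluster feature vectors into k groups."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[idx].append(p)
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical session features: (requests per session, fraction of 4xx
# responses, mean inter-request delay in seconds). Vulnerability scans tend
# to issue many requests and hit many missing pages; attack sessions are
# shorter and more targeted.
scans = [(950, 0.62, 0.1), (870, 0.55, 0.2), (1010, 0.70, 0.1)]
attacks = [(12, 0.05, 3.0), (8, 0.02, 5.1), (20, 0.10, 2.4)]
centroids, clusters = kmeans(scans + attacks, k=2)
print(sorted(len(c) for c in clusters))  # -> [3, 3]: the two behaviors separate
```

The paper's point that a well-chosen handful of features outperforms the full feature set corresponds here to the fact that the request-count dimension alone dominates the distance and drives the separation.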
