Biblio

Filters: Keyword is launch attacks
2021-04-08
Yaseen, Q., Panda, B.  2012.  Tackling Insider Threat in Cloud Relational Databases. 2012 IEEE Fifth International Conference on Utility and Cloud Computing. :215–218.
Cloud security is one of the major issues that worry individuals and organizations about cloud computing. Therefore, defending cloud systems against attacks such as insiders' attacks has become a key demand. This paper investigates insider threat in cloud relational database systems (cloud RDMS). It discusses some vulnerabilities in cloud computing structures that may enable insiders to launch attacks, and shows how load balancing across multiple availability zones may facilitate insider threat. To prevent such a threat, the paper suggests three models, namely the Peer-to-Peer model, the Centralized model, and the Mobile-Knowledgebase model, and addresses the conditions under which each works well.
2020-09-18
Ling, Mee Hong, Yau, Kok-Lim Alvin.  2019.  Can Reinforcement Learning Address Security Issues? An Investigation into a Clustering Scheme in Distributed Cognitive Radio Networks. 2019 International Conference on Information Networking (ICOIN). :296–300.

This paper investigates the effectiveness of a reinforcement learning (RL) model in clustering as an approach to achieving higher network scalability in distributed cognitive radio networks. Specifically, it analyzes the effects of the RL parameters, namely the learning rate and the discount factor, in a volatile environment consisting of member nodes (or secondary users, SUs) that launch attacks with various probabilities of attack. The clusterhead, which resides in an operating region (environment) characterized by the probability of attacks, counters the malicious SUs by leveraging an RL model. Simulation results show that, in a volatile operating environment, the RL model with learning rate α = 1 provides the highest network scalability when the probability of attacks ranges between 0.3 and 0.7, while the discount factor γ does not play a significant role in learning in an operating environment that is volatile due to attacks.
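To illustrate where the learning rate α and discount factor γ enter such a scheme, the sketch below shows a generic single-state Q-learning loop for a clusterhead deciding how to respond to an SU that attacks with some probability. The action set ("admit"/"evict"), reward scheme, and attack model are hypothetical simplifications for illustration only, not the authors' exact formulation; with α = 1 the old Q-value estimate is discarded entirely, which helps track a volatile environment, and with mostly immediate rewards γ has little influence, consistent with the abstract's findings.

```python
import random

# Generic Q-learning sketch (assumed setup, not the paper's exact model):
# a clusterhead learns whether to admit or evict a member SU that attacks
# with probability p_attack.

ACTIONS = ["admit", "evict"]   # hypothetical clusterhead responses
ALPHA = 1.0                    # learning rate: 1.0 fully replaces the old estimate
GAMMA = 0.1                    # discount factor: small, feedback is mostly immediate

def simulate(p_attack, steps=10_000, alpha=ALPHA, gamma=GAMMA):
    """Fraction of steps in which the clusterhead's response matched the SU's behaviour."""
    q = {a: 0.0 for a in ACTIONS}   # single-state Q-table
    correct = 0
    for _ in range(steps):
        # epsilon-greedy action selection
        action = random.choice(ACTIONS) if random.random() < 0.1 else max(q, key=q.get)
        attacked = random.random() < p_attack
        # reward +1 when the response matches the SU's behaviour, -1 otherwise
        reward = 1.0 if (action == "evict") == attacked else -1.0
        # Q-learning update; with alpha = 1 the new estimate is driven
        # entirely by the latest observation
        q[action] += alpha * (reward + gamma * max(q.values()) - q[action])
        correct += reward > 0
    return correct / steps

if __name__ == "__main__":
    for p in (0.3, 0.5, 0.7):
        print(f"p_attack={p}: fraction of correct responses = {simulate(p):.2f}")
```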