Biblio

Filters: Keyword is Mitigation strategy
2022-03-23
Chandavarkar, B. R., Shantanu, T. K.  2021.  Sybil Attack Simulation and Mitigation in UnetStack. 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT). :01–07.

Underwater networks have the potential to enable unexplored applications and to enhance our ability to observe and predict the ocean. Underwater acoustic sensor networks (UASNs) are often deployed in uncharted and hostile waters and face many security threats. Applications based on UASNs, such as coastal defense, pollution monitoring, and assisted navigation, require secure communication. A new set of communication protocols and cooperative coordination algorithms has been proposed to enable collaborative monitoring tasks. However, such protocols overlook security as a key performance indicator. Spoofing, altering, or replaying routing information can affect the entire network, making UASNs vulnerable to routing attacks such as selective forwarding, sinkhole attacks, Sybil attacks, acknowledgement spoofing, and HELLO flood attacks. The lack of security against such threats is startling, given that security is an important requirement in many emerging civilian and military applications. In this work, we examine one of the most prevalent attacks in UASNs, the Sybil attack, and discuss mitigation approaches for it. We then implement the attack in UnetStack3 to simulate a real-life scenario.
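The abstract does not describe the simulation itself; as a rough illustration only, the sketch below (plain Python, not UnetStack3 code, with all class and parameter names invented) shows the core idea of a Sybil attacker advertising fabricated identities during neighbor discovery, and a naive per-link identity-count check a receiving node might apply as a first mitigation step.

```python
# Hypothetical, simplified illustration of a Sybil attack during neighbor
# discovery and a naive per-link identity-count check. Not UnetStack code;
# all names (HonestNode, SybilNode, threshold) are invented for illustration.
from collections import defaultdict

class HonestNode:
    def __init__(self, node_id):
        self.node_id = node_id

    def hello_messages(self):
        # An honest node advertises exactly one identity.
        return [("HELLO", self.node_id)]

class SybilNode(HonestNode):
    def __init__(self, node_id, fake_ids):
        super().__init__(node_id)
        self.fake_ids = fake_ids

    def hello_messages(self):
        # A Sybil node floods the channel with fabricated identities.
        return [("HELLO", fid) for fid in [self.node_id, *self.fake_ids]]

def collect_neighbors(frames):
    """frames: list of (physical_link, message) pairs seen by a receiver."""
    ids_per_link = defaultdict(set)
    for link, (_, claimed_id) in frames:
        ids_per_link[link].add(claimed_id)
    return ids_per_link

def flag_suspect_links(ids_per_link, max_ids_per_link=1):
    # Naive mitigation: more than one claimed identity arriving over a single
    # physical/acoustic link is treated as a potential Sybil source.
    return {link for link, ids in ids_per_link.items()
            if len(ids) > max_ids_per_link}

if __name__ == "__main__":
    honest = HonestNode("N1")
    attacker = SybilNode("N2", fake_ids=["S1", "S2", "S3"])
    frames = ([("link-A", m) for m in honest.hello_messages()] +
              [("link-B", m) for m in attacker.hello_messages()])
    print("Suspect links:", flag_suspect_links(collect_neighbors(frames)))  # {'link-B'}
```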

2021-06-28
Wei, Wenqi, Liu, Ling, Loper, Margaret, Chow, Ka-Ho, Gursoy, Mehmet Emre, Truex, Stacey, Wu, Yanzhao.  2020.  Adversarial Deception in Deep Learning: Analysis and Mitigation. 2020 Second IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA). :236–245.
The burgeoning success of deep learning has raised security and privacy concerns as more and more tasks involve sensitive data. Adversarial attacks in deep learning have emerged as one of the dominant security threats to a range of mission-critical deep learning systems and applications. This paper takes a holistic view to characterize adversarial examples in deep learning by studying their adverse effect, and presents an attack-independent countermeasure with three original contributions. First, we provide a general formulation of adversarial examples and elaborate on the basic principles of adversarial attack algorithm design. Then, we evaluate 15 adversarial attacks with a variety of evaluation metrics to study their adverse effects and costs. We further conduct three case studies to analyze the effectiveness of adversarial examples and to demonstrate their divergence across attack instances. We take advantage of the instance-level divergence of adversarial examples and propose a strategic input transformation teaming defense. The proposed defense methodology is attack-independent and capable of auto-repairing and auto-verifying the prediction decision made on the adversarial input. We show that the strategic input transformation teaming defense achieves high defense success rates and is more robust, with high attack prevention success rates and low benign false-positive rates, compared to existing representative defense methods.
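The abstract describes the defense only at a high level; the minimal Python sketch below illustrates one plausible reading of an input-transformation teaming defense, where the transformation team, the majority-vote repair, and the agreement-based verification are illustrative assumptions rather than the paper's exact design.

```python
# Hedged sketch of an input-transformation teaming defense: run the model on
# several independently transformed copies of the input, auto-repair the
# prediction by majority vote over the team, and auto-verify by checking
# agreement with the prediction on the untransformed input. The specific
# transformations and decision rule are assumptions for illustration.
import numpy as np

def quantize(x, levels=8):
    return np.round(x * (levels - 1)) / (levels - 1)

def add_noise(x, sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    return np.clip(x + rng.normal(0.0, sigma, x.shape), 0.0, 1.0)

def smooth(x, k=3):
    # Simple 1-D moving average as a stand-in for spatial smoothing.
    return np.convolve(x, np.ones(k) / k, mode="same")

TRANSFORM_TEAM = [quantize, add_noise, smooth]

def teaming_defense(model, x):
    """model: callable mapping an input vector to a class label."""
    raw_pred = model(x)
    team_preds = [model(t(x)) for t in TRANSFORM_TEAM]
    # Auto-repair: majority vote over the transformation team.
    labels, counts = np.unique(team_preds, return_counts=True)
    repaired = labels[np.argmax(counts)]
    # Auto-verify: disagreement between the raw and repaired predictions
    # flags the input as potentially adversarial.
    flagged = repaired != raw_pred
    return repaired, flagged

if __name__ == "__main__":
    # Toy "model": classify by which half of the vector has more mass.
    model = lambda x: int(x[:len(x) // 2].sum() < x[len(x) // 2:].sum())
    x = np.linspace(0.0, 1.0, 16)
    print(teaming_defense(model, x))
```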
2018-06-11
Zhang, X., Li, R., Zhao, W., Wu, R.  2017.  Detection of malicious nodes in NDN VANET for Interest Packet Popple Broadcast Diffusion Attack. 2017 11th IEEE International Conference on Anti-counterfeiting, Security, and Identification (ASID). :114–118.

As one of the next-generation network architectures, Named Data Networking (NDN), which features location-independent addressing and content caching, is well suited for deployment in Vehicular Ad-hoc Networks (VANETs). However, a new attack pattern arises when NDN and VANET are combined: the Interest Packet Popple Broadcast Diffusion Attack (PBDA). There are currently no strategies to mitigate PBDA. In this paper, a mitigation strategy called RVMS, based on node reputation value (RV), is proposed to detect malicious nodes. Each node calculates a neighbor node's RV through direct and indirect RV evaluation and uses a Markov chain to predict the neighbor's current RV state from its historical RV. The RV state is used to decide whether to discard the Interest packet. Finally, the effectiveness of RVMS is verified through modeling and experiments. The experimental results show that RVMS can mitigate PBDA.
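Based only on the description above, a minimal Python sketch of an RVMS-style forwarding decision might look as follows; the fusion weights, RV state thresholds, and drop rule are hypothetical placeholders, not the parameters used in the paper.

```python
# Hedged sketch of an RVMS-style check: fuse direct and indirect reputation
# values, discretize the RV history into states, estimate a Markov transition
# matrix from that history, and predict the neighbor's next state to decide
# whether to forward or drop an Interest packet. All constants are assumptions.
import numpy as np

STATES = ["malicious", "suspect", "trusted"]          # hypothetical RV states

def combined_rv(direct_rv, indirect_rv, w_direct=0.7):
    # Weighted fusion of direct observation and neighbor recommendations.
    return w_direct * direct_rv + (1.0 - w_direct) * indirect_rv

def to_state(rv):
    # Hypothetical thresholds mapping an RV in [0, 1] to a discrete state.
    return 0 if rv < 0.4 else (1 if rv < 0.7 else 2)

def transition_matrix(state_history, n_states=3):
    counts = np.ones((n_states, n_states))            # Laplace smoothing
    for a, b in zip(state_history, state_history[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def predict_next_state(state_history):
    # Most likely next state given the current state and learned transitions.
    P = transition_matrix(state_history)
    return int(np.argmax(P[state_history[-1]]))

def should_forward_interest(rv_history, direct_rv, indirect_rv):
    history = [to_state(rv) for rv in rv_history]
    history.append(to_state(combined_rv(direct_rv, indirect_rv)))
    predicted = predict_next_state(history)
    # Drop the Interest packet if the neighbor is predicted to be malicious.
    return STATES[predicted] != "malicious"

if __name__ == "__main__":
    rv_history = [0.8, 0.75, 0.5, 0.35, 0.3]          # a degrading neighbor
    print(should_forward_interest(rv_history, direct_rv=0.2, indirect_rv=0.4))
```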