Wu, Xiaohe, Xu, Jianbo, Huang, Weihong, Jian, Wei.
2020.
A new mutual authentication and key agreement protocol in wireless body area network. 2020 IEEE International Conference on Smart Cloud (SmartCloud). :199–203.
Due to the mobility and openness of wireless body area networks (WBANs), their security has come under scrutiny. A patient's physiological information in a WBAN is sensitive and confidential, so key agreement must give full consideration to user anonymity, untraceability, and data privacy protection. Addressing the shortcomings of Li et al.'s protocol in anonymity, session unlinkability, and forward/backward confidentiality, a new anonymous mutual authentication and key agreement protocol is proposed on the basis of that protocol. The scheme uses only XOR and one-way hash operations, which reduces communication overhead while still ensuring security, realizing a truly lightweight anonymous mutual authentication and key agreement protocol.
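The abstract's claim that only XOR and one-way hash operations are needed can be illustrated with a minimal sketch. This is not the paper's protocol; it is a generic hash-and-XOR mutual-authentication pattern, with SHA-256 standing in for the one-way hash and a pre-shared secret `k` assumed between node and hub.

```python
import hashlib
import os

def h(data: bytes) -> bytes:
    """One-way hash (SHA-256 as an illustrative choice)."""
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Pre-shared long-term secret between sensor node and hub (assumed setup)
k = os.urandom(32)

# Node: pick a nonce and mask it with a hash of the secret before sending
n1 = os.urandom(32)
m1 = xor(n1, h(k))                     # message node -> hub

# Hub: recover the nonce, pick its own, and answer with a masked nonce
# plus an authenticator proving knowledge of k
n1_rec = xor(m1, h(k))
n2 = os.urandom(32)
m2 = xor(n2, h(k + n1_rec))            # message hub -> node
auth_hub = h(k + n1_rec + n2)          # hub's authenticator

# Node: recover the hub's nonce, verify the authenticator, derive the key
n2_rec = xor(m2, h(k + n1))
assert auth_hub == h(k + n1 + n2_rec)  # mutual authentication check

sk_node = h(n1 + n2_rec + k)           # session key on the node side
sk_hub = h(n1_rec + n2 + k)            # session key on the hub side
assert sk_node == sk_hub               # both sides agree on the key
```

Because every message is a nonce masked by a fresh hash, an eavesdropper without `k` learns nothing about the nonces or the derived session key, which is the intuition behind such lightweight designs.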
Wu, Xiaohe, Calderon, Juan, Obeng, Morrison.
2021.
Attribution Based Approach for Adversarial Example Generation. SoutheastCon 2021. :1–6.
Neural networks with deep architectures have been used to construct state-of-the-art classifiers that can match human-level accuracy in areas such as image classification. However, many of these classifiers can be fooled by examples slightly modified from their original forms. In this work, we propose a novel approach for generating adversarial examples that uses only the attribution information of the features and perturbs only those features that are highly influential on the output of the classifier. We call this approach Attribution Based Adversarial Generation (ABAG). To demonstrate the effectiveness of this approach, three somewhat arbitrary algorithms are proposed and examined. In the first algorithm, all non-zero attributions are utilized and the associated features perturbed; in the second, only the top-n most positive and top-n most negative attributions are used and the corresponding features perturbed; and in the third, the level of perturbation is increased iteratively until an adversarial example is discovered. All three algorithms are implemented and experiments are performed on the well-known MNIST dataset. Experimental results show that adversarial examples can be generated very efficiently, demonstrating the validity and efficacy of ABAG - utilizing attributions for the generation of adversarial examples. Furthermore, as shown by examples, ABAG can be adapted to provide a systematic search procedure that generates adversarial examples by perturbing a minimal number of features.
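The third algorithm described above (iteratively raising the perturbation level on the most influential features until the prediction flips) can be sketched as follows. This is an illustrative toy, not the paper's implementation: a random linear model stands in for a trained network, and gradient-times-input stands in for the attribution method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for a trained network
# (10 features, 3 classes; weights are illustrative, not from the paper)
W = rng.normal(size=(10, 3))

def predict(x):
    """Predicted class of the linear model."""
    return int(np.argmax(x @ W))

def attributions(x, cls):
    """Gradient-times-input attribution for class `cls` -
    a simple stand-in for the attribution method assumed by ABAG."""
    return W[:, cls] * x

x = rng.normal(size=10)
orig = predict(x)

# Perturb only the top-n most influential features, raising epsilon
# step by step until the classifier's prediction changes
n, eps, step = 3, 0.0, 0.1
adv = x.copy()
for _ in range(200):
    a = attributions(adv, orig)
    top = np.argsort(-np.abs(a))[:n]       # indices of top-n attributions
    eps += step
    adv = x.copy()
    adv[top] -= eps * np.sign(a[top])      # push against the class evidence
    if predict(adv) != orig:
        break                              # adversarial example found
```

Since only `n` feature indices are ever modified, the resulting example differs from the original in at most `n` coordinates, matching the abstract's point about perturbing a minimal number of features.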