Biblio

Filters: Author is Bai, X.
2020-11-02
Ma, Y., Bai, X.  2019.  Comparison of Location Privacy Protection Schemes in VANETs. 2019 12th International Symposium on Computational Intelligence and Design (ISCID). 2:79–83.
Vehicular Ad-hoc Networks (VANETs) are a special class of mobile ad hoc network (MANET) deployed on traffic roads. As the basis of intelligent transportation systems, VANETs can improve driving safety and provide value-added services, thereby improving the safety and efficiency of road traffic. Location services are crucial to the development of VANETs. However, because VANETs rely on open-access wireless communication, malicious node attacks may leak users' private location information, seriously affecting the usability of VANETs. The location privacy issue in VANETs therefore cannot be ignored. This paper classifies the attack methods in VANETs, and summarizes and compares the location privacy protection techniques proposed in existing research.
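The abstract surveys location privacy schemes in general; one widely studied example of such a scheme is pseudonym change inside a "mix zone". The sketch below is purely illustrative (the class and method names are hypothetical, not from the paper) and shows why rotating pseudonyms breaks the linkability of broadcast positions:

```python
import secrets

# Illustrative sketch, not the paper's scheme: pseudonym change in a mix
# zone, a common VANET location-privacy technique. All names hypothetical.

class Vehicle:
    def __init__(self):
        self.pseudonym = secrets.token_hex(4)  # short-lived identifier

    def broadcast(self, position):
        # Safety beacons carry the current pseudonym; a long-lived
        # pseudonym lets an eavesdropper link positions into a trace.
        return {"id": self.pseudonym, "pos": position}

    def enter_mix_zone(self):
        # Inside a mix zone, nearby vehicles change pseudonyms together,
        # breaking the link between the old and new identifiers.
        self.pseudonym = secrets.token_hex(4)

v = Vehicle()
before = v.broadcast((10, 20))["id"]
v.enter_mix_zone()
after = v.broadcast((11, 21))["id"]
print(before != after)  # the two beacons can no longer be linked by id
```

The privacy gain depends on how many vehicles change pseudonyms simultaneously, which is one axis along which surveys like this compare schemes.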
2019-01-16
Bai, X., Niu, W., Liu, J., Gao, X., Xiang, Y., Liu, J.  2018.  Adversarial Examples Construction Towards White-Box Q Table Variation in DQN Pathfinding Training. 2018 IEEE Third International Conference on Data Science in Cyberspace (DSC). :781–787.
As a research hotspot in artificial intelligence, deep reinforcement learning (DRL) has achieved success in fields such as robot control, computer vision, and natural language processing. At the same time, whether its applications can be attacked, and how robust they are against such attacks, has become a hot topic in recent years. We therefore select the representative Deep Q Network (DQN) algorithm in deep reinforcement learning and, for the first time, use robotic automatic pathfinding as the adversarial application scenario, attacking the DQN algorithm through its vulnerability to adversarial examples. In this paper, we first use DQN to find the optimal path and analyze the rules DQN follows during pathfinding. Then, we propose a method that effectively finds vulnerable points towards white-box Q-table variation in DQN pathfinding training. Finally, we build a simulation environment as a basic experimental platform to test our method; across multiple experiments we successfully find adversarial examples, and the results show that the proposed supervised method is effective.
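The white-box attack the abstract describes can be illustrated in miniature. The sketch below is an assumption-laden stand-in, not the paper's method: it trains tabular Q-learning (rather than a DQN) on a small gridworld, then perturbs the Q-table at a single state so the greedy action flips, which is the kind of "vulnerable point" a white-box adversary targets in pathfinding:

```python
import numpy as np

# Illustrative sketch only: tabular Q-learning on a 4x4 gridworld,
# then a white-box Q-table perturbation at one state. Not the paper's
# exact construction, which attacks a trained DQN.

np.random.seed(0)
N = 4                                     # grid is N x N; goal at (N-1, N-1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(s, a):
    r, c = s
    dr, dc = ACTIONS[a]
    ns = (min(max(r + dr, 0), N - 1), min(max(c + dc, 0), N - 1))
    reward = 10.0 if ns == (N - 1, N - 1) else -1.0
    return ns, reward, ns == (N - 1, N - 1)

# Train with epsilon-greedy Q-learning until the greedy path is optimal.
Q = np.zeros((N, N, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.2
for _ in range(2000):
    s, done = (0, 0), False
    while not done:
        a = np.random.randint(4) if np.random.rand() < eps else int(np.argmax(Q[s]))
        ns, rwd, done = step(s, a)
        Q[s][a] += alpha * (rwd + gamma * np.max(Q[ns]) - Q[s][a])
        s = ns

def greedy_path(Q, max_len=50):
    s, path = (0, 0), [(0, 0)]
    for _ in range(max_len):
        s, _, done = step(s, int(np.argmax(Q[s])))
        path.append(s)
        if done:
            break
    return path

clean = greedy_path(Q)

# White-box perturbation: at the start state, raise the worst action's
# Q-value just above the greedy one and observe how the path degrades.
Q_adv = Q.copy()
s0 = (0, 0)
best, worst = int(np.argmax(Q_adv[s0])), int(np.argmin(Q_adv[s0]))
Q_adv[s0][worst] = Q_adv[s0][best] + 0.01
perturbed = greedy_path(Q_adv)
print(len(clean) - 1, len(perturbed) - 1)  # steps taken before vs. after the attack
```

Finding which states admit such a small, path-breaking perturbation is, in spirit, the vulnerable-point search the abstract refers to.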