Biblio

Filters: Keyword is DQN
2022-08-26
Rajamalli Keerthana, R., Fathima, G., Florence, Lilly.  2021.  Evaluating the Performance of Various Deep Reinforcement Learning Algorithms for a Conversational Chatbot. 2021 2nd International Conference for Emerging Technology (INCET). :1–8.
Conversational agents are among the most popular AI technologies in current IT trends, and domain-specific chatbots are now used across almost every industry to upgrade customer service. This paper presents the modelling and performance of one such conversational agent created using deep learning. The proposed model utilizes NMT (Neural Machine Translation) from the TensorFlow software libraries. A BiRNN (Bidirectional Recurrent Neural Network) is used to process input sentences that contain a large number of tokens (20-40 words), and an attention model is used alongside the BiRNN to capture the context of the input sentence. Conversational models usually have one drawback: they sometimes give irrelevant answers to the input. This happens quite often in conversational chatbots because the chatbot does not realize it is answering without context. The proposed system addresses this drawback with deep reinforcement learning, which follows a reward system that enables the bot to differentiate between right and wrong answers and allows the chatbot to understand the sentiment of the query and reply accordingly. The deep reinforcement learning algorithms used in the proposed system are Q-Learning, Deep Q-Network (DQN), and Distributional Reinforcement Learning with Quantile Regression (QR-DQN). The performance of each algorithm is evaluated and compared in this paper in order to find the best DRL algorithm. The datasets used in the proposed system are the Cornell Movie-Dialogs Corpus and CoQA (A Conversational Question Answering Challenge), a large dataset containing data collected from 8000+ conversations in the form of questions and answers. The main goal of the proposed work is to increase the relevancy of the chatbot's responses and to reduce the perplexity of the conversational chatbot.
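The reward-driven Q-learning idea in this abstract can be made concrete with a minimal sketch. The Python below is illustrative only, not the authors' implementation: a tabular Q-learning update in which states are dialogue contexts, actions are candidate responses, and a hand-assigned reward distinguishes relevant from irrelevant replies. The hyperparameters ALPHA, GAMMA, and EPSILON and the helper names are assumptions.

```python
import random
from collections import defaultdict

# Assumed hyperparameters for this sketch (not from the cited paper).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

q_table = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_response(state, candidates):
    """Epsilon-greedy selection over candidate responses."""
    if random.random() < EPSILON:
        return random.choice(candidates)
    return max(candidates, key=lambda a: q_table[(state, a)])

def q_update(state, action, reward, next_state, next_candidates):
    """One-step Q-learning backup: Q <- Q + alpha * (TD target - Q).
    reward is assumed to be +1 for a relevant reply, -1 for an irrelevant one."""
    best_next = max((q_table[(next_state, a)] for a in next_candidates), default=0.0)
    td_target = reward + GAMMA * best_next
    q_table[(state, action)] += ALPHA * (td_target - q_table[(state, action)])
```

DQN and QR-DQN replace the table with a neural network (QR-DQN predicting quantiles of the return distribution rather than its mean), but the reward-driven update cycle is the same.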
2020-04-13
Kim, Dongchil, Kim, Kyoungman, Park, Sungjoo.  2019.  Automatic PTZ Camera Control Based on Deep-Q Network in Video Surveillance System. 2019 International Conference on Electronics, Information, and Communication (ICEIC). :1–3.
Recently, Pan/Tilt/Zoom (PTZ) cameras have been widely used in video surveillance systems. However, it is difficult to control PTZ cameras automatically in response to objects moving through the surveillance area. This paper proposes an automatic camera control method based on a Deep Q-Network (DQN) for improving the recognition accuracy of anomalous actions in the video surveillance system. To generate PTZ camera control values, the proposed method uses the position and size information of the object received from the video analysis system. Implementation results show that the proposed method can automatically control the PTZ camera according to moving objects.
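As a rough illustration of how a Q-network can map the object information reported by a video analysis system to camera commands, consider the sketch below. Everything here is an assumption rather than the cited paper's design: the discrete ACTIONS set, the 4-dimensional state (normalized object center and size), and the PyTorch network shape; training, replay buffer, and reward design are omitted.

```python
import torch
import torch.nn as nn

# Hypothetical discrete PTZ command set for this sketch.
ACTIONS = ["pan_left", "pan_right", "tilt_up", "tilt_down",
           "zoom_in", "zoom_out", "hold"]

class PTZQNet(nn.Module):
    """Small Q-network: object state -> Q-value per PTZ command."""
    def __init__(self, state_dim=4, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state):
        return self.net(state)

def select_command(qnet, cx, cy, w, h):
    """Greedy PTZ command for an object with center (cx, cy) and size (w, h),
    all normalized to [0, 1] as assumed to come from the video analysis system."""
    state = torch.tensor([[cx, cy, w, h]], dtype=torch.float32)
    with torch.no_grad():
        return ACTIONS[qnet(state).argmax(dim=1).item()]

# Example: an (untrained) network deciding on an object near the left edge.
qnet = PTZQNet()
print(select_command(qnet, cx=0.15, cy=0.5, w=0.1, h=0.2))
```

A trained network of this shape would learn rewards that favor keeping the object centered and at a target size, which is one natural reading of the control objective described in the abstract.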
2019-01-16
Bai, X., Niu, W., Liu, J., Gao, X., Xiang, Y., Liu, J.  2018.  Adversarial Examples Construction Towards White-Box Q Table Variation in DQN Pathfinding Training. 2018 IEEE Third International Conference on Data Science in Cyberspace (DSC). :781–787.
As a new research hotspot in the field of artificial intelligence, deep reinforcement learning (DRL) has achieved success in fields such as robot control, computer vision, and natural language processing. At the same time, whether its applications can be attacked, and how strongly they resist such attacks, has become a hot topic in recent years. We therefore select the representative Deep Q-Network (DQN) algorithm and, for the first time, use robotic automatic pathfinding as the countermeasure application scenario, attacking the DQN algorithm through its vulnerability to adversarial samples. In this paper, we first use DQN to find the optimal path and analyze the rules of DQN pathfinding. Then, we propose a method that can effectively find vulnerable points towards white-box Q-table variation in DQN pathfinding training. Finally, we build a simulation environment as a basic experimental platform to test our method; over multiple experiments we successfully find adversarial examples, and the results show that the proposed supervised method is effective.
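One plausible way to read "vulnerable points towards white-box Q-table variation" is as states where a small change to the Q-values flips the greedy action and thereby diverts the learned path. The sketch below is a hedged illustration under that assumption, not the paper's algorithm: with white-box access to a trained Q-table, it flags states whose margin between the best and second-best action falls within a perturbation budget epsilon.

```python
import numpy as np

def vulnerable_states(q_table, epsilon=0.05):
    """q_table: array of shape (n_states, n_actions) of learned Q-values.
    Returns states where a perturbation of size < epsilon could flip
    the greedy action -- candidate points for adversarial interference."""
    flagged = []
    for s, q in enumerate(q_table):
        best, runner_up = np.sort(q)[-2:][::-1]
        # A narrow best-vs-runner-up margin means the chosen move at this
        # state is unstable under small Q-value variations.
        if best - runner_up < epsilon:
            flagged.append(s)
    return flagged

# Example on a toy 4-state, 4-action Q-table with fixed random values.
rng = np.random.default_rng(0)
q = rng.random((4, 4))
print(vulnerable_states(q, epsilon=0.1))
```

In a pathfinding grid, perturbing the Q-values at such low-margin states is where an adversarial example would most cheaply reroute the agent, which matches the attack surface the abstract describes.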