Title | Artificial Conversational Agent using Robust Adversarial Reinforcement Learning |
Publication Type | Conference Paper |
Year of Publication | 2021 |
Authors | Wadekar, Isha |
Conference Name | 2021 International Conference on Computer Communication and Informatics (ICCCI) |
Date Published | January 2021
Keywords | Computational modeling, conversational agent, conversational agents, encoding, Human Behavior, Long Short Term Memory (LSTM), Manuals, Metrics, Neural networks, pubcrawl, reinforcement learning, Scalability, Seq2Seq Model, Stochastic processes, Tools |
Abstract | Reinforcement learning (R.L.) is an effective and practical means of solving problems in which the agent has no prior information or knowledge about the environment. The agent acquires knowledge through two mechanisms: trial-and-error and rewards. An R.L. agent learns an effective policy by interacting directly with the environment and gathering information about its circumstances. However, many modern R.L.-based strategies fail to account for the large gap between simulation and the physical world, so policies learned in simulation often do not transfer to physical settings. Even when policy learning is carried out in the physical world directly, the scarcity of experience prevents the learned policies from generalizing to test conditions. The idea behind robust adversarial reinforcement learning (RARL) is to train an agent in the presence of a destabilizing opponent (an adversary agent) that applies disturbances to the system. The jointly trained adversary is rewarded for hindering the main agent, i.e. the protagonist, so that the protagonist is trained rigorously and learns a robust policy (see the sketch following this record). |
DOI | 10.1109/ICCCI50826.2021.9402336 |
Citation Key | wadekar_artificial_2021 |
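
The abstract describes RARL's alternating protagonist/adversary training at a high level. As a reading aid, the following is a minimal, self-contained sketch of that zero-sum training loop. The toy 1-D point-mass environment, linear policies, bounded disturbance, and random-search update rule are illustrative assumptions, not the paper's actual setup or implementation.

```python
# Minimal sketch of an alternating RARL-style training loop (illustrative only).
# Assumed toy setting: a 1-D point mass that the protagonist pushes toward the
# origin while the adversary injects a bounded disturbance force.
import numpy as np

rng = np.random.default_rng(0)

def rollout(protagonist_w, adversary_w, steps=200):
    """Run one episode; return the protagonist's return (the adversary gets its negative)."""
    pos, vel, total_reward = 1.0, 0.0, 0.0
    for _ in range(steps):
        state = np.array([pos, vel])
        action = float(protagonist_w @ state)                          # protagonist control force
        disturbance = float(np.clip(adversary_w @ state, -0.5, 0.5))   # bounded adversarial force
        vel += 0.05 * (action + disturbance)
        pos += 0.05 * vel
        total_reward -= pos ** 2 + 0.01 * action ** 2                  # penalize deviation and effort
    return total_reward

def improve(weights, objective, step=0.1, trials=20):
    """Simple random-search hill climbing: keep the best perturbation of the weights."""
    best_w, best_val = weights, objective(weights)
    for _ in range(trials):
        candidate = weights + step * rng.standard_normal(weights.shape)
        val = objective(candidate)
        if val > best_val:
            best_w, best_val = candidate, val
    return best_w

protagonist = np.zeros(2)
adversary = np.zeros(2)

# Alternate optimization: the protagonist maximizes its return against the current
# adversary, then the adversary maximizes the negated return (a zero-sum game).
for iteration in range(30):
    protagonist = improve(protagonist, lambda w: rollout(w, adversary))
    adversary = improve(adversary, lambda w: -rollout(protagonist, w))

print("protagonist return vs. trained adversary:", rollout(protagonist, adversary))
print("protagonist return with no adversary:    ", rollout(protagonist, np.zeros(2)))
```

Training against the worst-case disturbances the adversary discovers is what pushes the protagonist toward a policy that remains effective under model mismatch, which is the robustness argument the abstract makes for bridging the simulation-to-physical-world gap.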