Artificial Conversational Agent using Robust Adversarial Reinforcement Learning

Title: Artificial Conversational Agent using Robust Adversarial Reinforcement Learning
Publication Type: Conference Paper
Year of Publication: 2021
Authors: Wadekar, Isha
Conference Name: 2021 International Conference on Computer Communication and Informatics (ICCCI)
Date Published: January
Keywords: Computational modeling, conversational agent, conversational agents, encoding, Human Behavior, Long Short Term Memory (LSTM), Manuals, Metrics, Neural networks, pubcrawl, reinforcement learning, Scalability, Seq2Seq Model, Stochastic processes, Tools
Abstract: Reinforcement learning (RL) is an effective and practical means of solving problems in which the agent has no prior knowledge of its environment. The agent learns through two components: trial and error, and rewards. An RL agent discovers an effective policy by interacting directly with the environment and gathering information about its circumstances. However, many modern RL-based strategies fail to account for the enormous gap between simulation and the physical world, so policies learned in simulation fail to transfer to physical systems. Even when policy learning is performed in the physical world, data scarcity prevents the learned policies from generalizing to test conditions. The idea of robust adversarial reinforcement learning (RARL) is to train an agent in the presence of a destabilizing opponent (an adversary agent) that applies disturbances to the system. The jointly trained adversary is reinforced so that the main agent, the protagonist, is trained robustly.
DOI: 10.1109/ICCCI50826.2021.9402336
Citation Key: wadekar_artificial_2021
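
The RARL training scheme described in the abstract — a protagonist and a bounded adversary updated in alternation on a zero-sum objective — can be illustrated with a minimal sketch. This is not the paper's implementation: the 1-D point-mass environment, the scalar-gain policies, and the coarse local search are illustrative assumptions chosen to keep the example self-contained.

```python
import random

def rollout(theta_p, theta_a, steps=20, seed=0):
    """One episode of a toy 1-D point mass: the protagonist's action
    -theta_p * x pushes the state toward the origin, while the adversary's
    action +theta_a * x pushes it away. Returns the protagonist's return
    (negative cumulative distance from the origin)."""
    x = random.Random(seed).uniform(0.5, 1.0)  # fixed seed: deterministic episode
    ret = 0.0
    for _ in range(steps):
        x = x * (1.0 - theta_p + theta_a)  # combined closed-loop dynamics
        ret -= abs(x)
    return ret

def rarl_train(iters=30, adv_bound=0.5):
    """Alternating zero-sum training, RARL-style: the protagonist maximises
    its return against the current adversary, then the adversary minimises
    that return; here each update is a coarse one-step local search over a
    single scalar gain."""
    theta_p, theta_a = 0.0, 0.0
    for _ in range(iters):
        # Protagonist step (adversary held fixed): pick the best nearby gain.
        theta_p = max([theta_p - 0.1, theta_p, theta_p + 0.1],
                      key=lambda t: rollout(t, theta_a))
        # Adversary step (protagonist held fixed): the disturbance gain is
        # clamped, so the adversary cannot make the task impossible.
        cands = [max(-adv_bound, min(adv_bound, theta_a + d))
                 for d in (-0.05, 0.0, 0.05)]
        theta_a = min(cands, key=lambda t: rollout(theta_p, t))
    return theta_p, theta_a
```

After training, the protagonist's gain roughly cancels the worst-case disturbance the bounded adversary can apply, so its policy remains stable even when the adversary is present — the robustness property RARL is designed to deliver.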