Biblio
Filters: Keyword is evolution strategies
Distributed Black-Box Optimization via Error Correcting Codes. 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton). :246–252.
2019. We introduce a novel distributed derivative-free optimization framework that is resilient to stragglers. The proposed method evaluates the objective function at coded search directions and applies a decoding step to find the next iterate. The framework can be seen as an extension of evolution strategies and structured exploration methods, in which structured search directions are utilized. As an application, we consider black-box adversarial attacks on deep convolutional neural networks. Our numerical experiments demonstrate a significant improvement in computation time.
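For orientation only, the sketch below shows a generic evolution-strategies-style derivative-free step with structured (orthogonalized) search directions, the setting the abstract describes. It does not reproduce the paper's error-correcting coding and decoding scheme; the names objective and es_step, and all parameter values, are illustrative assumptions.

# Minimal Python sketch (assumed names and parameters, not the paper's scheme).
import numpy as np

def es_step(objective, x, sigma=0.1, lr=0.05, num_directions=8, rng=None):
    """One antithetic ES update of x using orthogonalized Gaussian directions."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    # Structured exploration: orthogonalize the Gaussian perturbations (QR trick).
    g = rng.standard_normal((num_directions, d))
    q, _ = np.linalg.qr(g.T)                      # columns are orthonormal directions
    dirs = q.T[:num_directions] * np.sqrt(d)
    # Evaluate the objective at the perturbed points (in a distributed setting,
    # each evaluation could be farmed out to a worker).
    f_plus = np.array([objective(x + sigma * u) for u in dirs])
    f_minus = np.array([objective(x - sigma * u) for u in dirs])
    # Combine the evaluations into a descent direction (plain antithetic estimator here,
    # standing in for the paper's decoding step).
    grad_est = ((f_plus - f_minus)[:, None] * dirs).sum(axis=0) / (2 * sigma * num_directions)
    return x - lr * grad_est

# Usage: minimize a toy quadratic.
x = np.ones(20)
for _ in range(200):
    x = es_step(lambda z: float(z @ z), x)
print(round(float(x @ x), 4))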
Learning How to Flock: Deriving Individual Behaviour from Collective Behaviour with Multi-agent Reinforcement Learning and Natural Evolution Strategies. Proceedings of the Genetic and Evolutionary Computation Conference Companion. :169–170.
2018. This work proposes a method for predicting the internal mechanisms of individual agents from observed collective behaviours using multi-agent reinforcement learning (MARL). Since the emergence of group behaviour among many agents can undergo phase transitions, and the action space is not in general smooth, natural evolution strategies were adopted for updating the policy function. We tested the approach using a well-known flocking algorithm as the target model for our system to learn. The MARL model was trained on data obtained from this rule-based model, and its acquired behaviour was compared with the original. In the process, we discovered that agents trained by MARL can self-organize flow patterns using only local information. The expressed pattern is robust to changes in the agents' initial positions, whilst being sensitive to the training conditions used.
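As a point of reference, the sketch below shows a minimal natural-evolution-strategies (NES) style parameter update of the kind the abstract says was adopted for the policy update. The flocking environment, the episode_return callable, and all hyperparameters are placeholders, not taken from the paper.

# Minimal Python sketch (assumed names and parameters, not the authors' code).
import numpy as np

def nes_update(theta, episode_return, sigma=0.05, lr=0.01, pop_size=16, rng=None):
    """One NES step: perturb theta, score each sample, move along score-weighted noise."""
    rng = np.random.default_rng() if rng is None else rng
    eps = rng.standard_normal((pop_size, theta.size))
    returns = np.array([episode_return(theta + sigma * e) for e in eps])
    # Rank-based fitness shaping keeps the update robust to sharp, non-smooth returns.
    ranks = returns.argsort().argsort().astype(float)
    weights = ranks / (pop_size - 1) - 0.5
    grad_est = (weights[:, None] * eps).sum(axis=0) / (pop_size * sigma)
    return theta + lr * grad_est

# Usage with a stand-in "return" (higher is better); a real run would roll out the
# multi-agent flocking policy and average the agents' episode rewards.
theta = np.zeros(10)
for _ in range(300):
    theta = nes_update(theta, lambda w: -float(np.sum((w - 1.0) ** 2)))
print(np.round(theta, 2))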