Bibliography
This paper presents a secure reinforcement learning (RL) based control method for unknown linear time-invariant cyber-physical systems (CPSs) subjected to compositional attacks such as eavesdropping and covert attacks. We consider an attack scenario in which the attacker learns the dynamic model during the exploration phase of the learning conducted by the designer to obtain a linear quadratic regulator (LQR), and thereafter uses this information to mount a covert attack on the system; we refer to this as the doubly learning-based control and attack (DLCA) framework. We propose a dynamic camouflaging based attack-resilient reinforcement learning (ARRL) algorithm that learns the desired optimal controller for the system while injecting sufficient misinformation into the attacker's estimate of the system dynamics. The algorithm is accompanied by theoretical guarantees and extensive numerical experiments on a consensus multi-agent system and on a benchmark power grid model.
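For context on the controller the designer is trying to learn: a minimal sketch of solving a discrete-time LQR problem by iterating the Riccati recursion. This is not the paper's ARRL algorithm; the double-integrator system and cost weights below are illustrative assumptions.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Iterate the discrete-time Riccati recursion to obtain the LQR gain K,
    so that u = -K x minimizes the infinite-horizon cost sum(x'Qx + u'Ru)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Illustrative system (assumed, not from the paper): a double integrator
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)           # state cost
R = np.array([[1.0]])   # input cost
K = dlqr(A, B, Q, R)
# Spectral radius of the closed loop A - B K; < 1 means the loop is stable
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

In the DLCA setting described above, the attacker would try to estimate (A, B) from the exploration data that produces such a gain, which is exactly what the camouflaging strategy is designed to corrupt.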
Swarm intelligence, a nature-inspired concept encompassing multiplicity, stochasticity, randomness, and messiness, is emergent in most real-life problem-solving. The concept of swarming can be integrated with herding predators in an ecological system. This paper presents the development of stabilizing velocity-based controllers for a Lagrangian swarm of $n \in \mathbb{N}$ individuals tasked with capturing a moving target (intruder). The controllers are derived from a Lyapunov function (total potentials) designed via the Lyapunov-based control scheme (LbCS), which falls under the classical artificial potential fields method. The interplay of the three central pillars of LbCS (safety, shortness, and smoothest course for motion planning) yields cost- and time-effective, efficient velocity controllers. Computer simulations illustrate the effectiveness of the control laws.
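To make the potential-field idea concrete: a minimal sketch of a velocity controller built as the negative gradient of a total potential, with an attractive term toward the target and pairwise repulsion for collision avoidance. The gains `k_att`, `k_rep` and the safety distance `d_safe` are illustrative assumptions, not the paper's tuned parameters, and the specific Lyapunov construction of the LbCS is omitted.

```python
import numpy as np

def velocity_controller(positions, target, d_safe=1.0, k_att=1.0, k_rep=0.5):
    """Velocities for n agents from a total potential: attraction to the
    (possibly moving) target plus repulsion between agents closer than d_safe."""
    n = positions.shape[0]
    v = np.zeros_like(positions)
    for i in range(n):
        # Attractive term: negative gradient of k_att/2 * ||x_i - target||^2
        v[i] = -k_att * (positions[i] - target)
        for j in range(n):
            if j == i:
                continue
            diff = positions[i] - positions[j]
            dist = np.linalg.norm(diff)
            if dist < d_safe:
                # Repulsive term: pushes agent i away from agent j
                v[i] += k_rep * diff / (dist**2 + 1e-9)
    return v
```

Far from each other, agents simply descend the attractive potential toward the target; inside the safety distance, the repulsive term dominates and the agents separate, which is the mechanism behind the "safety" pillar above.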