Bibliography
In this paper, we propose a convex-programming-based method to address the long-standing problem of inner-approximating backward reachable sets of state-constrained polynomial systems subject to time-varying uncertainties. The backward reachable set is the set of states from which every trajectory is guaranteed to enter a target region at the end of a given time horizon without violating a set of state constraints, in spite of the actions of the uncertainties. It equals the zero sublevel set of the unique Lipschitz-continuous viscosity solution to a Hamilton-Jacobi partial differential equation (HJE). We show that inner approximations of the backward reachable set can be formed by zero sublevel sets of viscosity supersolutions to this HJE. Consequently, we reduce the inner-approximation problem to the problem of synthesizing polynomial viscosity supersolutions to the HJE. In our method, such a polynomial supersolution is synthesized by solving a single semidefinite program. We also prove that, under appropriate assumptions, polynomial solutions to the formulated semidefinite program exist and yield a sequence of inner approximations that converges in measure to the interior of the backward reachable set; this is the main contribution of the paper. Several illustrative examples demonstrate the merits of our approach.
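For orientation, a schematic, state-constraint-free instance of the Hamilton-Jacobi characterization alluded to above can be sketched as follows; the notation is illustrative, and the precise equation as well as the sign conventions defining sub- and supersolutions are those fixed in the paper itself. The sum-of-squares step mentioned in the comments is the standard route for turning polynomial inequalities into a semidefinite program and is stated here as an assumption.

```latex
% Schematic HJ characterization (state constraints omitted; notation is illustrative).
% Target region \{x : g(x) \le 0\}, uncertain dynamics \dot{x} = f(x,d) with d(t) \in \mathcal{D}.
\[
  \frac{\partial V}{\partial t}(t,x) \;+\; \max_{d \in \mathcal{D}} \, \nabla_x V(t,x) \cdot f(x,d) \;=\; 0
  \quad \text{on } (0,T) \times \mathbb{R}^n,
  \qquad V(T,x) = g(x).
\]
% The backward reachable set is then \{x : V(0,x) \le 0\}.  Replacing the equation by the
% corresponding supersolution inequality for a polynomial template \phi, and relaxing that
% inequality via sum-of-squares constraints, is the kind of step that leads to a single
% semidefinite program producing the inner approximation \{x : \phi(0,x) \le 0\}.
```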
The possible interactions between a controller and its environment can naturally be modelled as the arena of a two-player game, and adding an appropriate winning condition permits the specification of desirable behavior. The classical model here is the positional game, where both players can (fully or partially) observe the current position in the game graph, which in turn reflects their respective current states. In practice, neither sensing and actuating the environment through physical devices, nor forwarding data to and from the controller, nor processing signals within the controller is instantaneous. The resultant delays force the controller to make decisions before being aware of the recent history of a play and to submit these decisions well before they take effect asynchronously. It is known that the existence of a winning strategy for the controller in games with such delays is decidable over finite game graphs and with respect to ω-regular objectives. The underlying reduction, however, is impractical for non-trivial delays, as it incurs a blow-up of the game graph that is exponential in the magnitude of the delay. For safety objectives, we propose a more practical incremental algorithm that successively synthesizes a series of controllers handling increasing delays while reducing the game-graph size in between. We demonstrate on benchmark examples that even a simplistic explicit-state implementation of this algorithm outperforms state-of-the-art symbolic synthesis algorithms as soon as non-trivial delays have to be handled. We furthermore address the practically relevant cases of non-order-preserving delays and bounded message loss, as arising in actual networked control, thereby considerably extending the scope of regular game theory under delay.
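The incremental algorithm itself is not reproduced here; as a delay-free baseline, the following minimal sketch shows the standard greatest-fixpoint computation for a safety game on an explicit finite game graph. The data structures (vertex set, successor map, controlled-vertex set, safe set) are hypothetical inputs, not the paper's representation.

```python
def solve_safety_game(vertices, edges, controlled, safe):
    """Delay-free safety game on an explicit graph (illustrative sketch).

    vertices:   iterable of vertices
    edges:      dict mapping each vertex to a list of successors
    controlled: set of vertices where the controller picks the successor
    safe:       set of vertices the play must never leave

    Returns the controller's winning region: the largest subset W of `safe` such that
    from controlled vertices some successor stays in W and from environment vertices
    every successor stays in W (a greatest fixpoint, computed by iterated removal).
    """
    win = set(safe)
    changed = True
    while changed:
        changed = False
        for v in list(win):
            succs = edges.get(v, [])
            if v in controlled:
                ok = any(s in win for s in succs)
            else:
                ok = bool(succs) and all(s in win for s in succs)
            if not ok:
                win.discard(v)
                changed = True
    return win


if __name__ == "__main__":
    # Tiny hypothetical 4-vertex graph: vertex 3 is unsafe, vertex 0 is controlled.
    edges = {0: [1, 2], 1: [0], 2: [3], 3: [3]}
    print(solve_safety_game({0, 1, 2, 3}, edges, controlled={0}, safe={0, 1, 2}))
    # Prints {0, 1}: from 0 the controller can choose successor 1 and avoid vertex 3 forever.
```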
In this paper we study the problem of computing robust invariant sets for state-constrained perturbed polynomial systems within the Hamilton-Jacobi reachability framework. A robust invariant set is a set of states from which every possible trajectory never violates the given state constraint, irrespective of the actual perturbation. The main contribution of this work is to characterize the maximal robust invariant set as the zero level set of the unique Lipschitz-continuous viscosity solution to a Hamilton-Jacobi-Bellman (HJB) equation. The continuity and uniqueness of the viscosity solution facilitate the use of existing numerical methods to solve the HJB equation for systems with a moderate number of state variables, yielding an approximation of the maximal robust invariant set. We furthermore propose a method based on semidefinite programming to synthesize robust invariant sets. Some illustrative examples demonstrate the performance of our methods.
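To convey the flavour of such synthesis conditions, a generic sufficient condition for robust invariance (not necessarily the exact constraint system of the paper's semidefinite program) is sketched below for uncertain dynamics $\dot{x}=f(x,d)$ with $d(t)\in\mathcal{D}$ and state constraint set $\mathcal{X}$.

```latex
% Generic sufficient condition for robust invariance of the sublevel set \{x : V(x) \le 0\}
% (illustrative; regularity assumptions such as \nabla V \neq 0 on \{V = 0\} are glossed over).
\[
  \{x : V(x) \le 0\} \subseteq \mathcal{X},
  \qquad
  \nabla V(x) \cdot f(x,d) \le 0 \;\; \text{for all } d \in \mathcal{D} \text{ and all } x \text{ with } V(x) = 0 .
\]
% Encoding such set inclusions and sign conditions for polynomial templates, e.g. as
% sum-of-squares constraints, is what turns the synthesis into a semidefinite program.
```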
In this paper we present a method based on linear programming that facilitates reliable safety verification of hybrid dynamical systems subject to perturbation inputs over the infinite time horizon. The verification algorithm applies the probably approximately correct (PAC) learning framework and can consequently be regarded as statistically formal verification, in the sense that it provides formal safety guarantees expressed in terms of error probabilities and confidences. Safety in this framework is verified via so-called PAC barrier certificates, which are computed by solving a linear program. Following the scenario approach, the linear program is constructed from a family of independent and identically distributed state samples. In this way we can verify hybrid dynamical systems that existing methods are not capable of dealing with. Some preliminary experiments demonstrate the performance of our approach.
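As a rough illustration of the scenario-based construction, the sketch below fits a linear barrier-like template for a hypothetical two-dimensional vector field by imposing generic barrier-certificate sign conditions at i.i.d. sampled states and solving the resulting linear program. The dynamics, regions, sample sizes, and template are invented for the example, and the constraint system is a textbook barrier one rather than the paper's exact formulation; in the PAC/scenario setting the number of samples would be dictated by the desired error probability and confidence.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

def f(x):
    # Hypothetical 2-D polynomial vector field: x1' = 1 + x2^2, x2' = -x2.
    return np.array([1.0 + x[1] ** 2, -x[1]])

# i.i.d. state samples from hypothetical initial, unsafe, and domain boxes.
init_samples   = rng.uniform([ 1.0, -1.0], [ 2.0, 1.0], size=(50, 2))
unsafe_samples = rng.uniform([-2.0, -1.0], [-1.0, 1.0], size=(50, 2))
domain_samples = rng.uniform([-2.0, -2.0], [ 2.0, 2.0], size=(200, 2))

# Template B(x) = c0 + c1*x1 + c2*x2; all three conditions are linear in c = (c0, c1, c2):
#   B <= 0 on initial samples, B >= 1 on unsafe samples, grad(B).f <= 0 on domain samples.
A_ub, b_ub = [], []
for x in init_samples:
    A_ub.append([1.0, x[0], x[1]])
    b_ub.append(0.0)
for x in unsafe_samples:
    A_ub.append([-1.0, -x[0], -x[1]])
    b_ub.append(-1.0)
for x in domain_samples:
    fx = f(x)
    A_ub.append([0.0, fx[0], fx[1]])
    b_ub.append(0.0)

# Pure feasibility problem: zero objective, bounded template coefficients.
res = linprog(c=np.zeros(3), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(-10.0, 10.0)] * 3)
print("feasible:", res.success, "coefficients:", res.x)
```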
Delayed coupling between state variables occurs regularly in technical dynamical systems, especially in embedded control. As it is consequently omnipresent in safety-critical domains, there is increasing interest in the safety verification of systems modeled by Delay Differential Equations (DDEs). In this paper, we leverage the qualitative guarantee, established in classical stability theory, that an exponentially decreasing estimate on the solutions to a DDE exists, and present a quantitative method for constructing such delay-dependent estimates, thereby facilitating a reduction of the verification problem over an unbounded temporal horizon to a bounded one. Our technique builds on linearization of the non-linear dynamics and spectral analysis of the linearized counterpart. We show experimentally on a set of representative benchmarks from the literature that our technique indeed extends the scope of bounded verification techniques to unbounded verification tasks. Moreover, our technique is easy to implement and can be combined with any automatic tool dedicated to bounded verification of DDEs.
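For concreteness, the estimate in question has the classical shape known from stability theory for a linearized DDE $\dot{x}(t) = A\,x(t) + B\,x(t-\tau)$: whenever every root $\lambda$ of the characteristic equation $\det(\lambda I - A - B\,e^{-\lambda\tau}) = 0$ has negative real part, there exist constants $M \ge 1$ and $\kappa > 0$ such that the bound below holds; the quantitative, delay-dependent construction of such constants is what the paper contributes, and the sketch here only records the general shape.

```latex
% Classical exponential estimate for the linearized DDE \dot{x}(t) = A x(t) + B x(t - \tau).
\[
  \|x(t)\| \;\le\; M\, e^{-\kappa t} \sup_{s \in [-\tau,\,0]} \|x(s)\|
  \qquad \text{for all } t \ge 0 .
\]
% Once such a bound is available, any safety violation must occur before the bounded time at
% which the estimate confines all trajectories to a provably safe neighbourhood, which is what
% reduces the unbounded-horizon verification task to a bounded one.
```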
The possible interactions between a controller and its environment can naturally be modelled as the arena of a two-player game, and adding an appropriate winning condition permits the specification of desirable behavior. The classical model here is the positional game, where both players can (fully or partially) observe the current position in the game graph, which in turn reflects their respective current states. In practice, neither sensing nor actuating the environment through physical devices, nor data forwarding to and signal processing in the controller, is instantaneous. The resultant delays force the controller to make decisions before being aware of the recent history of a play. It is known that the existence of a winning strategy for the controller in games with such delays is decidable over finite game graphs and with respect to ω-regular objectives. The underlying reduction, however, is impractical for non-trivial delays, as it incurs a blow-up of the game graph that is exponential in the magnitude of the delay. For safety objectives, we propose a more practical incremental algorithm that synthesizes a series of controllers handling increasing delays while reducing the game-graph size in between. We demonstrate on benchmark examples that even a simplistic explicit-state implementation of this algorithm outperforms state-of-the-art symbolic synthesis algorithms as soon as non-trivial delays have to be handled. We furthermore shed some light on the practically relevant case of non-order-preserving delays, as arising in actual networked control, thereby considerably extending the scope of regular game theory under delay pioneered by Klein and Zimmermann.
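To make the exponential blow-up referenced above concrete, the following minimal sketch (hypothetical data structures, not the construction from the paper or from Klein and Zimmermann's work) builds a delay-free product graph in which each vertex additionally carries the queue of the controller's delta pending, not-yet-effective actions; in the worst case the state space grows by the factor |actions|^delta. Environment moves are omitted for brevity.

```python
from collections import deque

def delayed_product(actions, step, delta, init, init_queue):
    """Illustrative shift construction for a game under delay `delta`:
    each product state pairs a graph vertex with the queue of the delta actions
    the controller has committed to but that have not yet taken effect."""
    start = (init, tuple(init_queue))
    seen, frontier = {start}, deque([start])
    while frontier:
        v, queue = frontier.popleft()
        for a in actions:                                  # controller commits a new action
            nxt = (step(v, queue[0]), queue[1:] + (a,))    # the oldest pending action fires now
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen


def step(v, a):
    # Hypothetical 4-vertex ring: "right" moves clockwise, "left" counter-clockwise.
    return (v + 1) % 4 if a == "right" else (v - 1) % 4


if __name__ == "__main__":
    actions = ("left", "right")
    for delta in (1, 2, 3, 4):
        states = delayed_product(actions, step, delta,
                                 init=0, init_queue=["right"] * delta)
        print(delta, len(states), 4 * len(actions) ** delta)  # reachable states vs. worst-case bound
```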