
Stochastic Differential Dynamic Logic for Stochastic Hybrid Programs
André Platzer
Carnegie Mellon University, Computer Science Department, Pittsburgh, PA, USA aplatzer@cs.cmu.edu

Abstract. Logic is a powerful tool for analyzing and verifying systems, including programs, discrete systems, real-time systems, hybrid systems, and distributed systems. Some applications also have a stochastic behavior, however, either because of fundamental properties of nature, uncertain environments, or simplifications to overcome complexity. Discrete probabilistic systems have been studied using logic. But logic has been chronically underdeveloped in the context of stochastic hybrid systems, i.e., systems with interacting discrete, continuous, and stochastic dynamics. We aim at overcoming this deficiency and introduce a dynamic logic for stochastic hybrid systems. Our results indicate that logic is a promising tool for understanding stochastic hybrid systems and can help tame some of their complexity. We introduce a compositional model for stochastic hybrid systems. We prove adaptivity, càdlàg, and Markov time properties, and prove that the semantics of our logic is measurable. We present compositional proof rules, including rules for stochastic differential equations, and prove soundness.

1 Introduction

Logic has been used very successfully for verifying several classes of system models, including programs [24], discrete systems, real-time systems [4], hybrid systems [20], distributed systems, and distributed hybrid systems [21]. This gives us confidence in the power of logic. Not all aspects of real systems can be represented faithfully by these models, however. Some systems are inherently uncertain, either because of fundamental properties of nature, because they operate in an uncertain environment, or because deterministic models are simply too complex. Such systems have a stochastic dynamics. Nondeterministic overapproximations may be too inaccurate for a meaningful analysis, e.g., because a worst-case analysis would let bad environment actions happen always, which is very unlikely. Discrete probabilistic systems have been studied using logic. Yet, complex systems are driven by joint discrete, continuous, and stochastic dynamics. Logic has been chronically underdeveloped in the context of these stochastic hybrid systems.
This material is based upon work supported by the National Science Foundation under NSF CAREER Award CNS-1054246, NSF EXPEDITION CNS-0926181, CNS-0931985, CNS-1035800, by ONR N00014-10-1-0188, and by DARPA FA8650-10C-7077.



Classical logic is about boolean truth and yes/no answers. That is why it is tricky to use logic for systems with stochastic effects. Logic has reached out into probabilistic extensions at least for discrete programs [13, 14, 6] and for first-order logic over a finite domain [25]. Logic has been used for the purpose of specifying system properties in model checking finite Markov chains [27] and probabilistic timed automata [17]. Stochastic hybrid systems, instead, are a domain where logic and especially proof calculi have so far been more conspicuous by their absence. Given how successful logic has been elsewhere, we want to change that. Stochastic hybrid systems [2, 3, 9] are systems with interacting discrete, continuous, and stochastic dynamics. There is not just one canonical way to add stochastic behavior to a system model. Stochasticity might be restricted to the discrete dynamics, as in piecewise deterministic Markov decision processes, restricted to the continuous and switching behavior as in switching diffusion processes [8], or allowed in many parts as in so-called General Stochastic Hybrid Systems; see [2, 3] for an overview. Several different forms of combinations of probabilities with hybrid systems and continuous systems have been considered, both for model checking [7, 12, 3] and for simulation-based validation [18, 28]. We develop a very different approach. We consider logic and theorem proving for stochastic hybrid systems (see Footnote 1) to transfer the success that logic has had in other domains. Our approach is partially inspired by probabilistic PDL [14] and by barrier certificates for continuous dynamics [23]. We follow the arithmetical view that Kozen identified as suitable for probabilistic logic [14]. Classical analysis is provably inadequate [11] for analyzing even simple continuous stochastic processes. We heavily draw on both stochastic calculus and logic. It is not possible to present all mathematical background exhaustively here. But we provide basic definitions and intuition and refer to the literature for details and proofs of the main results of stochastic calculus [10, 19, 11]. Our most interesting contributions are:

1. We present the new model of stochastic hybrid programs (SHPs) and define a compositional semantics of SHP executions in terms of stochastic processes.
2. We prove that the semantic processes are adapted, have almost surely càdlàg paths, and that their natural stopping times are Markov times.
3. We introduce a new logic called stochastic differential dynamic logic (SdL) for specifying and verifying properties of SHPs.
4. We define a semantics and prove that it is measurable such that probabilities are well-defined and probabilistic questions become meaningful.
5. We present proof rules for SdL and prove their soundness.
6. We identify the requirements for using Dynkin's formula for proving properties using the infinitesimal generator of stochastic differential equations.

SdL makes the rich semantical complexity and deep theory of stochastic hybrid systems accessible in a simple syntactic language. This makes the verification of stochastic hybrid systems possible with elementary syntactic proof principles.
Footnote 1: Note that there is a model called Stochastic Hybrid Systems [9]. We do not mean this specific model in the narrow sense but refer to stochastic hybrid systems as the broader class of systems that share discrete, continuous, and stochastic dynamics.


2 Preliminaries: Stochastic Processes

We fix a dimension d ∈ N for the Euclidean state space Rd equipped with its Borel σ-algebra B, i.e., the σ-algebra generated by all open subsets. A σ-algebra on a set Ω is a nonempty set F ⊆ 2^Ω that is closed under complement and countable union. We axiomatically fix a probability space (Ω, F, P) with a σ-algebra F ⊆ 2^Ω of events on space Ω and a probability measure P on F (i.e., P : F → [0, 1] is countably additive with P ≥ 0, P(Ω) = 1). We assume the probability space has been completed, i.e., every subset of a null set (i.e., of a set A with P(A) = 0) is measurable. A property holds P-almost surely (a.s.) if it holds with probability 1. A filtration is a family (Ft)t≥0 of σ-algebras that is increasing, i.e., Fs ⊆ Ft for all s < t. Intuitively, Ft are the events that can be discriminated at time t. We always assume a filtration (Ft)t≥0 that has been completed to include all null sets and that is right-continuous, i.e., Ft = ⋂_{u>t} Fu for all t. We generally assume the compatibility condition that F coincides with the σ-algebra F∞ := σ(⋃_{t≥0} Ft), i.e., the σ-algebra generated by all Ft. For a σ-algebra Σ on a set D and the Borel σ-algebra B on Rd, a function f : D → Rd is measurable iff f^{-1}(B) ∈ Σ for all B ∈ B (or, equivalently, for all open B ⊆ Rd). An Rd-valued random variable is an F-measurable function X : Ω → Rd. All sets and functions definable in first-order logic over real arithmetic are Borel-measurable. A stochastic process X is a collection {Xt}_{t∈T} of Rd-valued random variables Xt indexed by some set T for time. That is, X : T × Ω → Rd is a function such that for all t ∈ T, Xt = X(t, ·) : Ω → Rd is a random variable. Process X is adapted to filtration (Ft)t≥0 if Xt is Ft-measurable for each t. That is, the process does not depend on future events. We consider only adapted processes (e.g., using the completion of the natural filtration of a process or the completion of the optional σ-algebra for F [10]). A process X is càdlàg iff its paths t ↦ Xt(ω) (for each ω ∈ Ω) are càdlàg a.s., i.e., right-continuous (lim_{s↘t} Xs(ω) = Xt(ω)) with existing left limits (lim_{s↗t} Xs(ω)). We further need an e-dimensional Brownian motion W (i.e., W is a stochastic process starting at 0 that is almost surely continuous and has independent increments that are normally distributed with mean 0 and variance equal to the time difference). Brownian motion is mathematically extremely complex. Its paths are almost surely continuous everywhere but differentiable nowhere and of unbounded variation. Intuitively, W can be understood as the limit of a random walk. We denote the Euclidean vector norm by |x| and use the Frobenius norm |σ| := (Σ_{i,j} σ_{ij}²)^{1/2} for matrices σ ∈ R^{d×e}.
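To make the random-walk intuition above concrete, the following sketch (not part of the paper; the function name and step counts are illustrative choices) approximates a one-dimensional Brownian motion on [0, 1] by a ±1 random walk scaled by √h, so that increments have mean 0 and variance equal to the time difference.

    import numpy as np

    def random_walk_brownian(T=1.0, n_steps=1000, rng=None):
        """Approximate standard Brownian motion on [0, T] by a scaled +/-1 random walk."""
        rng = np.random.default_rng() if rng is None else rng
        h = T / n_steps
        # each step is +/- sqrt(h), so a step has mean 0 and variance h
        steps = rng.choice([-1.0, 1.0], size=n_steps) * np.sqrt(h)
        return np.concatenate(([0.0], np.cumsum(steps)))

    w = random_walk_brownian()
    print(w[-1])   # roughly N(0, 1)-distributed for large n_steps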

3 Stochastic Differential Equations

We consider stochastic differential equations [19, 11] to describe stochastic continuous system dynamics. They are like ordinary differential equations but have an additional diffusion term that varies the state stochastically. Stochastic differential equations are of the form dXt = b(Xt)dt + σ(Xt)dWt. We consider Itō stochastic differential equations, whose solutions are defined by the stochastic


Itō integral [19, 11], which is again a stochastic process. Like in an ordinary differential equation, the drift coefficient b(Xt) determines the deterministic part of how Xt changes over time as a function of its current value. As a function of Xt, the diffusion coefficient σ(Xt) determines the stochastic influence by integration with respect to the Brownian motion process Wt. See Fig. 1 for two sample paths (Fig. 1: sample paths with b = 1 (top) and b = 0 (bottom), σ = 1). Ordinary differential equations are retained for σ = 0. We focus on the time-homogeneous case, where b and σ are time-independent, because time could be added as an extra state variable.

Definition 1 (Stochastic differential equation). A stochastic process X : [0, ∞) × Ω → Rd solves the (Itō) stochastic differential equation

  dXt = b(Xt)dt + σ(Xt)dWt  with X0 = Z,    (1)

if Xt = Z + ∫₀ᵗ b(Xs)ds + ∫₀ᵗ σ(Xs)dWs, where ∫₀ᵗ σ(Xs)dWs is an Itō integral process [19, 11].

For simplicity, we always assume b : Rd → Rd and σ : Rd → R^{d×e} to be measurable and locally Lipschitz-continuous:

  ∀N ∃C ∀x, y : |x|, |y| ≤ N ⇒ |b(x) − b(y)| ≤ C|x − y|, |σ(x) − σ(y)| ≤ C|x − y|

As an integral of an a.s. continuous process, the solution X has almost surely continuous paths [19]. The a.s. continuous solution X is pathwise unique [11, Ch 4.5]. Process X is a strong Markov process for each initial value x [19, Theorem 7.2.4].
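For intuition only, sample paths of such an equation can be generated with the Euler–Maruyama scheme. The sketch below is not part of the paper's development; the drift and diffusion choices merely mimic the two paths of Fig. 1 (b = 1 vs. b = 0, σ = 1).

    import numpy as np

    def euler_maruyama(b, sigma, x0, T=1.0, n_steps=1000, rng=None):
        """Simulate dX_t = b(X_t) dt + sigma(X_t) dW_t on [0, T] (scalar case)."""
        rng = np.random.default_rng() if rng is None else rng
        h = T / n_steps
        xs = np.empty(n_steps + 1)
        xs[0] = x0
        for k in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(h))        # Brownian increment over [t, t+h]
            xs[k + 1] = xs[k] + b(xs[k]) * h + sigma(xs[k]) * dW
        return xs

    # two illustrative paths as in Fig. 1: drift 1 (top) vs. drift 0 (bottom), diffusion 1
    top = euler_maruyama(b=lambda x: 1.0, sigma=lambda x: 1.0, x0=0.0)
    bottom = euler_maruyama(b=lambda x: 0.0, sigma=lambda x: 1.0, x0=0.0)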

4 Stochastic Hybrid Programs

As a system model for stochastic hybrid systems, we introduce stochastic hybrid programs (SHPs). SHPs combine stochastic differential equations for describing the stochastic continuous system dynamics with program operations to describe the discrete switching, jumps, and discrete stochastic choices. These primitive dynamics can be combined programmatically in flexible ways. All basic terms in stochastic hybrid programs and stochastic differential dynamic logic are polynomial terms built over real-valued variables and rational constants. Our approach is sound for more general settings, but first-order real arithmetic is decidable [26].

Syntax. Stochastic hybrid programs (SHPs) α, β are formed by the following grammar (where xi is a variable, x a vector of variables, θ a term, b a vector of terms, σ a matrix of terms, H is a quantifier-free first-order real arithmetic formula, and λ, ν ≥ 0 are rational numbers):

  α ::= xi := θ | xi := ∗ | ?H | dx = bdt + σdW & H | λα ⊕ νβ | α; β | α∗
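One possible way to make this grammar concrete is as a small abstract syntax tree. The Python dataclasses below are a hypothetical illustration (none of these class names come from the paper) that mirrors the seven constructors.

    from dataclasses import dataclass
    from typing import List, Union

    @dataclass
    class Assign:        # x_i := theta
        var: str
        term: str

    @dataclass
    class RandomAssign:  # x_i := *
        var: str

    @dataclass
    class Test:          # ?H
        formula: str

    @dataclass
    class SDE:           # dx = b dt + sigma dW & H
        b: List[str]
        sigma: List[List[str]]
        domain: str

    @dataclass
    class Choice:        # lambda alpha (+) nu beta, with lambda + nu = 1
        lam: float
        alpha: 'SHP'
        nu: float
        beta: 'SHP'

    @dataclass
    class Seq:           # alpha; beta
        alpha: 'SHP'
        beta: 'SHP'

    @dataclass
    class Repeat:        # alpha*
        alpha: 'SHP'

    SHP = Union[Assign, RandomAssign, Test, SDE, Choice, Seq, Repeat]

    # e.g., the program  ?x1 >= 0; dx1 = -x1 dt + dW & x1 < 10  could be written as
    example = Seq(Test("x1 >= 0"), SDE(b=["-x1"], sigma=[["1"]], domain="x1 < 10"))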


Assignment xi := θ deterministically assigns term θ to variable xi instantaneously. Random assignment xi := ∗ randomly updates variable xi, but unlike in classical dynamic logic [24], we assume a probability distribution for xi. As one example for a probability distribution, we consider the uniform distribution on the interval [0,1], but other distributions can be used as long as they are computationally tractable, e.g., definable in first-order real arithmetic. Most importantly, dx = bdt + σdW & H represents a stochastic continuous evolution along a stochastic differential equation, restricted to the evolution domain region H, i.e., the stochastic process will not continue when it leaves H. We assume that dx = bdt + σdW satisfies the assumptions of stochastic differential equations according to Def. 1. In particular, the dimensions of the vectors x, b, matrix σ, and (vectorial) Brownian motion W fit together and b, σ are globally Lipschitz-continuous (which is first-order definable for polynomial terms and, thus, decidable by quantifier elimination [26]). Test ?H represents a stochastic process that fails (disappears into an absorbing state) if H is not satisfied but continues unmodified otherwise. Linear combination λα ⊕ νβ evolves like α with probability λ and like β otherwise. We simply assume λ + ν = 1. Sequential composition α; β and repetition α∗ work similarly to dynamic logic [24], except that they combine SHPs.

Stochastic Process Semantics. The semantics of an SHP is the stochastic process that it generates. The semantics [[α]] of an SHP α consists of a function [[α]] : (Ω → Rd) → ([0, ∞) × Ω → Rd) that maps any Rd-valued random variable Z describing the initial state to a stochastic process [[α]]^Z, together with a function (|α|) : (Ω → Rd) → (Ω → R) that maps any Rd-valued random variable Z describing the initial state to a stopping time (|α|)^Z indicating when to stop [[α]]^Z. Often, an F0-measurable random variable Z or a deterministic state is used to describe the initial state. We assume independence of Z from subsequent stochastic processes like the Brownian motions occurring in the definition of [[α]]^Z.

For an Rd-valued random variable Z, we denote by Ẑ the stochastic process Ẑ : {0} × Ω → Rd; (0, ω) ↦ Ẑ0(ω) := Z(ω) that is stuck at Z. We write x̂ for the random variable Ẑ that is a deterministic state Z(ω) := x for all ω ∈ Ω. We then write [[α]]^x and (|α|)^x for [[α]]^Z and (|α|)^Z. In order to simplify notation, we assume that all variables are uniquely identified by an index, i.e., the only occurring variables are x1, x2, . . . , xd. We write Z(ω) |= H if state Z(ω) satisfies the first-order real arithmetic formula H and Z(ω) ⊭ H otherwise.

In the semantics we will use a family of random variables {Ui}_{i∈I} that are distributed uniformly in [0, 1] and independent of the other Uj and of all other random variables and stochastic processes in the semantics. Hence, U satisfies P({ω ∈ Ω : U(ω) ≤ s}) = ∫_{−∞}^{s} I_{[0,1]}(t) dt, with the usual extensions to other Borel subsets. To describe this situation, we just say that "U ∼ U(0, 1) is i.i.d. (independent and identically distributed)", meaning that U is furthermore independent of all other random variables and stochastic processes in the semantics. We denote the characteristic function of a set S by IS, which is defined by IS(x) := 1 if x ∈ S and IS(x) := 0 otherwise.


Definition 2 (Stochastic hybrid program semantics). The semantics of SHP α is defined by [[α]] : (Ω → Rd) → ([0, ∞) × Ω → Rd); Z ↦ [[α]]^Z = ([[α]]^Z_t)_{t≥0} and (|α|) : (Ω → Rd) → (Ω → R); Z ↦ (|α|)^Z. These functions are inductively defined for random variable Z by:

1. [[xi := θ]]^Z = Ŷ where Y(ω)_i = [[θ]]^{Z(ω)} and Y_j = Z_j for all j ≠ i. Further, (|xi := θ|)^Z = 0.
2. [[xi := ∗]]^Z = Û where U_j = Z_j for all j ≠ i and U_i ∼ U(0,1) is i.i.d. and F0-measurable. Further, (|xi := ∗|)^Z = 0.
3. [[?H]]^Z = Ẑ on the event {Z |= H} and (|?H|)^Z = 0 (on all events ω ∈ Ω). Note that [[?H]]^Z is not defined on the event {Z ⊭ H}.
4. [[dx = bdt + σdW & H]]^Z is the process X : [0, ∞) × Ω → Rd that solves the (Itō) stochastic differential equation dXt = [[b]]^{Xt}dt + [[σ]]^{Xt}dBt with X0 = Z on the event {Z |= H}, where Bt is a fresh e-dimensional Brownian motion if σ has e columns. We assume that Z is independent of the σ-algebra generated by (Bt)_{t≥0}. Further, (|dx = bdt + σdW & H|)^Z = inf{t ≥ 0 : Xt ∉ H}. Note that X is not defined on the event {Z ⊭ H}.
5. [[λα ⊕ νβ]]^Z = I_{U≤λ}[[α]]^Z + I_{U>λ}[[β]]^Z, i.e., [[α]]^Z on the event {U ≤ λ} and [[β]]^Z on the event {U > λ}. Further, (|λα ⊕ νβ|)^Z = I_{U≤λ}(|α|)^Z + I_{U>λ}(|β|)^Z, where U ∼ U(0,1) is i.i.d. and F0-measurable.
6. [[α; β]]^Z_t = [[α]]^Z_t on {t < (|α|)^Z} and [[α; β]]^Z_t = [[β]]^Y_{t−(|α|)^Z} on {t ≥ (|α|)^Z}, where Y := [[α]]^Z_{(|α|)^Z}. Further, (|α; β|)^Z = (|α|)^Z + (|β|)^Y.
7. [[α∗]]^Z_t = [[αn]]^Z_t on the event {(|αn|)^Z > t} and (|α∗|)^Z = lim_{n→∞} (|αn|)^Z, where α0 ≡ ?true, α1 ≡ α, and αn+1 ≡ α; αn.

For Case 7, note that (|αn|)^Z is monotone in n, hence the limit (|α∗|)^Z exists and is finite if the sequence is bounded. The limit is ∞ otherwise. Note that [[α∗]]^Z_t is independent of the choice of n on the event {(|αn|)^Z > t} (but not necessarily independent of n on the event {(|αn|)^Z ≥ t}, because α might start with a jump after αn). Observe that [[α∗]]^Z_t is not defined on the event {∀n (|αn|)^Z ≤ t}, which happens, e.g., for Zeno executions violating divergence of time. It would still be possible to give a semantics in this case, e.g., at t = (|αn|)^Z, but we do not gain much from introducing those technicalities.

In the semantics of [[α]]^Z, time is allowed to end. We explicitly consider [[α]]^Z_t as not defined for a realization ω if a part of this process is not defined, because of failed tests in α. The process may be explicitly not defined when t > (|α|)^Z. Explicitly being not defined can be viewed as being in a special absorbing state that can never be left again, as in killed processes. The stochastic process [[α]]^Z is only intended to be used until time (|α|)^Z. We stop using [[α]]^Z after time (|α|)^Z.
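The measure-theoretic semantics above can be approximated by a naive Monte Carlo path sampler. The sketch below is an illustration, not the semantics of Def. 2: it discretizes SDEs with Euler–Maruyama, uses an arbitrary finite horizon T, and omits repetition α∗. Each program is represented as a function from an initial state to either a sampled path together with its final state, or None for the absorbing 'not defined' outcome of a failed test.

    import numpy as np

    KILLED = None   # stands for the absorbing 'not defined' state after a failed test

    def assign(i, term):
        """x_i := term(state); takes zero time."""
        def run(x, rng):
            y = x.copy(); y[i] = term(x)
            return [y.copy()], y
        return run

    def rand_assign(i):
        """x_i := * with a uniform distribution on [0, 1]; takes zero time."""
        def run(x, rng):
            y = x.copy(); y[i] = rng.uniform(0.0, 1.0)
            return [y.copy()], y
        return run

    def test(H):
        """?H: continue unchanged if H holds, otherwise the run is killed."""
        def run(x, rng):
            return ([x.copy()], x) if H(x) else KILLED
        return run

    def sde(b, sigma, H, T=1.0, h=1e-3):
        """dx = b dt + sigma dW & H, by Euler-Maruyama, stopped on leaving H."""
        def run(x, rng):
            if not H(x):
                return KILLED                       # undefined if started outside H
            y, path = x.copy(), [x.copy()]
            for _ in range(int(T / h)):
                dW = rng.normal(0.0, np.sqrt(h), size=sigma(y).shape[1])
                y = y + b(y) * h + sigma(y) @ dW
                path.append(y.copy())
                if not H(y):
                    break                           # approximates inf{t : X_t not in H}
            return path, y
        return run

    def choice(lam, alpha, beta):
        """lam alpha (+) nu beta with lam + nu = 1: a fresh uniform coin picks the branch."""
        def run(x, rng):
            return alpha(x, rng) if rng.uniform() <= lam else beta(x, rng)
        return run

    def seq(alpha, beta):
        """alpha; beta: continue beta from the state where alpha stopped."""
        def run(x, rng):
            res_a = alpha(x, rng)
            if res_a is KILLED:
                return KILLED
            path_a, x_a = res_a
            res_b = beta(x_a, rng)
            if res_b is KILLED:
                return KILLED
            path_b, x_b = res_b
            return path_a + path_b, x_b
        return run

    # one sampled path of  ?x1 >= 0; dx1 = -x1/2 dt + 1 dW & x1 < 10
    prog = seq(test(lambda x: x[0] >= 0),
               sde(lambda x: np.array([-x[0] / 2]), lambda x: np.array([[1.0]]),
                   lambda x: x[0] < 10))
    path, final = prog(np.array([1.0]), np.random.default_rng(0))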


A Markov time (a.k.a. stopping time) is a non-negative random variable τ such that {τ ≤ t} ∈ Ft for all t. For a Markov time τ and a stochastic process Xt, the following process is called the stopped process X^τ:

  X^τ_t := X_{t ∧ τ} = { Xt if t < τ;  Xτ if t ≥ τ },   where t ∧ τ := min{t, τ}.

A class C of processes is stable under stopping if X ∈ C implies X^τ ∈ C for every Markov time τ. Right-continuous adapted processes and processes satisfying the strong Markov property are stable under stopping [5, Theorem 10.2]. Most importantly, we show that the semantics is well-defined. We prove that the natural stopping times (|α|)^Z are actually Markov times, so that it is meaningful to stop process [[α]]^Z at (|α|)^Z, and that useful properties of [[α]]^Z are inherited by the stopped process [[α]]^Z_{t ∧ (|α|)^Z}. Furthermore, we show that the process [[α]]^Z is adapted (does not look into the future) and càdlàg, which will be important to define a semantics for formulas. We give a proof of the following theorem in [22].

Theorem 3 (Adaptive càdlàg process with Markov times). For each SHP α and any Rd-valued random variable Z, [[α]]^Z is an a.s. càdlàg process and adapted (to the completed filtration (Ft)t≥0 generated by Z and the constituent Brownian motion (Bs)_{s≤t} and uniform U processes), and (|α|)^Z is a Markov time (for (Ft)t≥0). In particular, the end value [[α]]^Z_{(|α|)^Z} is again F_{(|α|)^Z}-measurable.

Note in particular that the event {(|αn|)^Z ≥ t} is Ft-measurable; thus, by [10, Prop 1.2.3], the event {(|αn|)^Z > t} in Case 7 of Def. 2 is Ft-measurable. As a corollary to Theorem 3, [[α]]^Z is progressively measurable [10, Prop 1.1.13].

5 Stochastic Differential Dynamic Logic
For specifying and analyzing properties of SHPs, we introduce stochastic differential dynamic logic SdL. Fig. 2 lists common SdL and SHP abbreviations:

  0 ≡ I∅    1 ≡ I_{Rd}    ¬f ≡ 1 − f    A ∧ B ≡ AB    A ∨ B ≡ A + B − AB    A → B ≡ 1 − A + AB
  if(H) {α} else {β} ≡ 1/2 (?H; α) ⊕ 1/2 (?¬H; β)    while(H) {α} ≡ (?H; α)∗; ?¬H    [α]f ≡ ¬⟨α⟩¬f

Syntax. Function terms of stochastic differential dynamic logic SdL are formed by the following grammar (F is a primitive measurable function definable in first-order real arithmetic, e.g., the characteristic function IS of a measurable set S definable in first-order real arithmetic, B is


a boolean combination of such characteristic functions using the operators ∧, ∨, ¬, → from Fig. 2, and λ, ν are rational numbers):

  f, g ::= F | λf + νg | Bf | ⟨α⟩f

These are for linear (λf + νg) or boolean product (Bf) combinations of terms. Term ⟨α⟩f represents the supremal value of f along the process belonging to α. The syntactic abbreviations in Fig. 2 can be useful. Formulas of SdL are simple, because SdL function terms are powerful. SdL formulas express equational and inequality relations between SdL function terms f, g. They are of the form:

  φ ::= f ≤ g | f = g

Measurable Semantics. The semantics of classical logics maps an interpretation to a truth-value. This does not work for stochastic logic, because the state evolution of SHPs contained in SdL formulas is stochastic, not deterministic. Instead, we define the semantics of an SdL function term as a random variable.

Definition 4 (SdL semantics). The semantics [[f]] of a function term f is a function [[f]] : (Ω → Rd) → (Ω → R) that maps any Rd-valued random variable Z describing the current state to a random variable [[f]]^Z. It is defined by

1. [[F]]^Z = F(Z), i.e., [[F]]^Z(ω) = F(Z(ω)), where, on the right, F denotes the function that the primitive symbol F stands for
2. [[λf + νg]]^Z = λ[[f]]^Z + ν[[g]]^Z
3. [[Bf]]^Z = [[B]]^Z ∗ [[f]]^Z, i.e., the multiplication [[Bf]]^Z(ω) = [[B]]^Z(ω) ∗ [[f]]^Z(ω)
4. [[⟨α⟩f]]^Z = sup{[[f]]^{[[α]]^Z_t} : 0 ≤ t ≤ (|α|)^Z}

When Z is not defined (results from a failed test), then [[f]]^Z is not defined. To avoid partiality, we assume the convention [[f]]^Z := 0 when Z is not defined. If f is a characteristic function of a measurable set, then [[⟨α⟩f]]^Z corresponds to a random variable that reflects the supremal f value that α can reach at least once during its evolution until stopping time (|α|)^Z when starting in a state corresponding to random variable Z. Then P([[⟨α⟩f]]^Z = 1) is the probability with which α reaches f at least once and E([[⟨α⟩f]]^Z) is the expected value, given Z. This includes the special case where Z is a deterministic state Z(ω) := x for all ω ∈ Ω. But first, we prove that these quantities are well-defined probabilities at all. Note that well-definedness of the definition in case 4 uses Theorem 3. Cases 1–3 of Def. 4 are as in [14] with the notable exception of case 4, which we define as a supremum, not an integral. The reason is that we are interested in probabilistic worst-case verification, not in average-case verification. For discrete programs, it is often sufficient to consider the input-output behavior, so that Kozen did not need to consider the temporal evolution of the program over time, only its final (probabilistic) outcome [14]. In stochastic hybrid systems, the temporal evolution is highly relevant, in addition to the stochastic behavior. When averaging over time, the system state may be very probably good (the integral of the error is small). But, still, it could be very likely that the system


exhibits a bug at some state during a run. In this case, we would still want to declare such a system as broken, because, when using it, it will very likely get us into trouble. Stochastic average-case analysis is interesting for performance analysis. But for safety verification, supremal stochastic analysis is more relevant, because a system that is very probably broken at some time is still too broken to be used safely. We thus consider stochastic dynamics with worst-case temporal behavior, i.e., our semantics performs stochastic averaging (in the sense of probability) among different behaviors, but considers the supremal worst-case probability over time. The logic SdL is intended to be used (among other things) to prove bounds on the probability that a system fails at some point at all. A car that, on average over all times of its use, has a low crash rate, but still has a high probability of crashing at least once during the first ride would not be safe. This is one example where stochastic hybrid systems exhibit new interesting characteristics that we do not see in discrete systems.

We show that the semantics is well-defined. We prove that [[f]]^Z is, indeed, a random variable, i.e., measurable. Without this, probabilistic questions about the value of formulas would not be well-defined, because they would not be measurable with respect to the probability space (Ω, F, P) and the Borel σ-algebra on R.

Theorem 5 (Measurability). For any Rd-valued random variable Z, the semantics [[f]]^Z of function term f is a random variable (i.e., F-measurable).

We give a proof of this theorem in [22].

Corollary 6 (Pushforward measure). For any Rd-valued random variable Z and function term f, the probability measure P induces the pushforward measure

  S ↦ P(([[f]]^Z)^{-1}(S)) = P({ω ∈ Ω : [[f]]^Z(ω) ∈ S}) = P([[f]]^Z ∈ S),

which defines a probability measure on R. Hence, for each Borel-measurable set S, the probability P([[f]]^Z ∈ S) is well-defined.

We say that f ≤ g is valid if it holds for all Rd-valued random variables Z:

  f ≤ g  iff  for all Z, [[f]]^Z ≤ [[g]]^Z, i.e., ([[f]]^Z)(ω) ≤ ([[g]]^Z)(ω) for all ω ∈ Ω

Validity of f = g is defined accordingly; hence, f = g is valid iff f ≤ g and g ≤ f are. As consequence relation on formulas, we use the (global) consequence relation that we define as follows (similarly when some of the formulas are fi = gi): the consequence from f1 ≤ g1, . . . , fn ≤ gn to f ≤ g holds iff validity of f1 ≤ g1, . . . , fn ≤ gn implies validity of f ≤ g. Also, the consequence from f1 ≤ g1, . . . , fn ≤ gn to f ≤ g holds pathwise if it holds for each ω ∈ Ω.
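The difference between the supremal semantics of case 4 and a time-averaged semantics can also be seen numerically. The sketch below is illustrative only; the dynamics dX = dW and the 'bad region' |x| ≥ 1 are arbitrary choices. It estimates both the probability of ever entering the bad region on [0, 1] and the expected fraction of time spent there; the former is typically much larger, which is what the worst-case reading of ⟨α⟩f captures.

    import numpy as np

    rng = np.random.default_rng(0)
    T, h, n_paths = 1.0, 1e-3, 2000
    bad = lambda x: float(abs(x) >= 1.0)      # indicator of the unsafe region

    sups, avgs = [], []
    for _ in range(n_paths):
        xs = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(h), int(T / h)))))
        vals = np.array([bad(x) for x in xs])
        sups.append(vals.max())               # supremal value over time, as in SdL
        avgs.append(vals.mean())              # time average, not what SdL measures
    print("P(sup = 1)      ~", np.mean(sups))
    print("E[time average] ~", np.mean(avgs))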

6 Stochastic Calculus

In this section, we review important results from stochastic calculus [10, 19, 11] that we use in our proof calculus. To indicate the probability law of process X


starting at X0 = x a.s., we write P^x instead of P. By E^x we denote the expectation operator for probability law P^x. That is, E^x(f(Xt)) := ∫_Ω f(Xt(ω)) dP^x(ω) for each Borel-measurable function f : Rd → R. A very important concept is the infinitesimal generator that captures the average rate of change of a process.

Definition 7 (Infinitesimal generator). The (infinitesimal) generator of an a.s. right continuous strong Markov process (e.g., a solution from Def. 1) is the operator A that maps a function f : Rd → R to the function Af : Rd → R defined as

  Af(x) := lim_{t ↘ 0} (E^x f(Xt) − f(x)) / t
We say that Af is defined if this limit exists for all x ∈ Rd. The generator can be used to compute the expected value of a function when following the process until a Markov time without solving the SDE.

Theorem 8 (Dynkin's formula [19, Theorem 7.4.1], [5, p. 133]). Let Xt be an a.s. right continuous strong Markov process (e.g., a solution from Def. 1). If f ∈ C²(Rd, R) has compact support and τ is a Markov time with E^x τ < ∞, then

  E^x f(Xτ) = f(x) + E^x ∫₀^τ Af(Xs) ds
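As a numerical sanity check (not from the paper), Dynkin's formula can be tested by Monte Carlo for Brownian motion Xt = x + Wt, the deterministic Markov time τ = T, and f(x) = x³ with generator Af(x) = (1/2) f''(x) = 3x. This f is not compactly supported as the theorem requires, but the identity E^x f(X_T) = f(x) + E^x ∫₀^T 3 X_s ds = x³ + 3xT can be verified directly in this simple case.

    import numpy as np

    rng = np.random.default_rng(1)
    x0, T, h = 0.5, 1.0, 1e-3
    n_paths, n_steps = 20000, int(T / h)

    f = lambda x: x ** 3
    Af = lambda x: 3.0 * x           # (1/2) f'' for Brownian motion

    lhs, rhs = [], []
    for _ in range(n_paths):
        w = x0 + np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(h), n_steps))))
        lhs.append(f(w[-1]))                       # f(X_T)
        rhs.append(np.sum(Af(w[:-1])) * h)         # Riemann sum of int_0^T Af(X_s) ds

    print("E^x f(X_T)                ~", np.mean(lhs))            # exact: x0^3 + 3*x0*T
    print("f(x0) + E^x int Af(X_s)ds ~", f(x0) + np.mean(rhs))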

Dynkin's formula is very useful, but only if we can compute the generator and its integral. The generator A gives a stochastic expression. It has been shown, however, that it is equal to a deterministic expression called the differential generator under fairly mild assumptions:

Theorem 9 (Differential generator [19, Theorem 7.3.3]). For a solution Xt from Def. 1, if f ∈ C²(Rd, R) is compactly supported, then Af is defined and

  Af(x) = Lf(x) := Σ_i b_i(x) ∂f/∂x_i(x) + (1/2) Σ_{i,j} (σ(x)σ(x)*)_{i,j} ∂²f/(∂x_i ∂x_j)(x)
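For polynomial drift, diffusion, and postcondition terms, Lf can be computed symbolically. The following sketch (using sympy; it is an illustration, not part of the paper's calculus) implements the formula of Theorem 9 and evaluates it for the two-dimensional example used at the end of Sect. 7.

    import sympy as sp

    x, y = sp.symbols('x y', real=True)

    def differential_generator(f, b, sigma, vars_):
        """Lf = sum_i b_i df/dx_i + 1/2 sum_{i,j} (sigma sigma^T)_{ij} d^2 f/(dx_i dx_j)."""
        a = sigma * sigma.T
        drift = sum(b[i] * sp.diff(f, v) for i, v in enumerate(vars_))
        diffusion = sp.Rational(1, 2) * sum(
            a[i, j] * sp.diff(f, vi, vj)
            for i, vi in enumerate(vars_) for j, vj in enumerate(vars_))
        return sp.simplify(drift + diffusion)

    # dx = -x/2 dt - y dW, dy = -y/2 dt + x dW and f = x^2 + y^2 give Lf = 0
    b = sp.Matrix([-x / 2, -y / 2])
    sigma = sp.Matrix([[-y], [x]])
    print(differential_generator(x**2 + y**2, b, sigma, (x, y)))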

A stochastic process Y that is adapted to a filtration (Ft)t≥0 is a supermartingale iff E|Yt| < ∞ for all t ≥ 0 and E(Yt | Fs) ≤ Ys for all t ≥ s ≥ 0.

Proposition 10 (Doob's maximal martingale inequality [10, Theorem I.3.8]). If f(Xt) is a càdlàg supermartingale with respect to the filtration generated by (Xt)t≥0 and f ≥ 0 on the evolution domain of Xt, then for all λ > 0:

  P( sup_{t≥0} f(Xt) ≥ λ | F0 ) ≤ E f(X0) / λ
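A quick numerical illustration of Proposition 10 (again not from the paper): M_t = exp(W_t − t/2) is a nonnegative martingale with E M_0 = 1, so Doob's inequality bounds P(sup_{t≤1} M_t ≥ 4) by 1/4; the Monte Carlo estimate below should stay below that bound.

    import numpy as np

    rng = np.random.default_rng(2)
    T, h, lam = 1.0, 1e-3, 4.0
    n_paths, n_steps = 10000, int(T / h)
    t = np.linspace(0.0, T, n_steps + 1)

    exceed = 0
    for _ in range(n_paths):
        w = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(h), n_steps))))
        m = np.exp(w - t / 2.0)          # exponential martingale, M_0 = 1, M >= 0
        exceed += m.max() >= lam
    print("estimated P(sup M_t >= 4):", exceed / n_paths, " Doob bound:", 1.0 / lam)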

Fig. 3 (Pathwise proof rules for SdL):

  (⟨:=⟩)  ⟨x := θ⟩f = f^θ_x             if the substitution replacing x with θ is admissible
  (⟨?⟩)   ⟨?H⟩f = Hf
  (⟨;⟩)   ⟨α; β⟩f ≤ ⟨α⟩(f ⊔ ⟨β⟩f)       which is ≤ ⟨α⟩(f + ⟨β⟩f) if 0 ≤ f
  (⟨;⟩′)  ⟨α; β⟩f ≤ ⟨α⟩⟨β⟩f             if f ≤ ⟨β⟩f or β continuous at 0 a.s.
  (⟨λ⟩)   ⟨α⟩(λf) = λ⟨α⟩f               (λ ∈ Q)
  (⟨+⟩)   ⟨α⟩(λf + νg) ≤ λ⟨α⟩f + ν⟨α⟩g
  (I)     0 ≤ B = BB ≤ 1                if B is boolean, built from characteristic functions
  (pos)   from 0 ≤ f, infer 0 ≤ ⟨α⟩f
  (mon)   from f ≤ g, infer ⟨α⟩f ≤ ⟨α⟩g
  (mon′)  from H̄ → f ≤ λ, infer ⟨dx = bdt + σdW & H⟩f ≤ λ
  (ind′)  from ⟨α⟩g ≤ g, infer ⟨α∗⟩g ≤ g

7 Proof Calculus

Now that we have a model, logic, and semantics for stochastic hybrid systems, we investigate reasoning principles that can be used to prove logical properties of stochastic hybrid systems. First we present proof rules that are sound pathwise, i.e., satisfy the global consequence relation pathwise for each ω ∈ Ω. By ⊔ we denote the binary maximum operator. It can either be added into the language or approximated conservatively by + as in rule ⟨;⟩. Operator ⊔ coincides with ∨ for values in {0,1}, e.g., for terms built using the operators ∧, ∨, ¬, ⟨α⟩ from characteristic functions. As a supremum, ⟨α⟩B only takes on values in {0,1} if B does.

Theorem 11 (Pathwise sound). The proof rules in Fig. 3 are globally sound pathwise.

For a proof see [22]. For ⟨;⟩′, β is a.s. continuous at 0 if, on all paths, the first primitive operation that is not a test is a stochastic differential equation, not a (random) assignment. Our rules generalize to the case of probabilistic assumptions. Note that the formula H̄ → f ≤ λ in mon′ is equivalent to H̄f ≤ H̄λ but easier to read. If f is continuous, rule mon′ remains sound when the topological closure H̄ (which is computable by quantifier elimination) is replaced by H, because the inequality is weak. Next we show proof rules that do not hold pathwise, but still in distribution.

Theorem 12 (Sound in distribution). Rule ⊕ is sound in distribution:

  P(⟨λα ⊕ νβ⟩f ∈ S) = λ P(⟨α⟩f ∈ S) + ν P(⟨β⟩f ∈ S)    (⊕)

For a proof see [22]. How to prove properties about random assignment xi := ∗ depends on the distribution for the random assignment. For a uniform distribution in [0,1], e.g., we obtain the following proof rule that is sound in distribution:
  P(⟨xi := ∗⟩f ∈ S) = ∫₀¹ I_{⟨xi := r⟩f ∈ S} dr    (∗)


The integrand is measurable for measurable S by Corollary 6. The rule is applicable when f has been simplified enough using other proof rules such that the integral can be computed after using ⟨:=⟩ to simplify the integrand.

Theorem 13 (Soundness for stochastic differential equations). If function f ∈ C²(Rd, R) has compact support on H (which holds for all f ∈ C²(Rd, R) if H represents a bounded set), then the following proof rule is sound for λ > 0, p ≥ 0:

  ⟨α⟩(H → f) ≤ λp     H → f ≥ 0     H → Lf ≤ 0
  ---------------------------------------------
  P(⟨α⟩⟨dx = bdt + σdW & H⟩f ≥ λ) ≤ p

Proof. Since f has compact support on H, it has a C²(Rd, R) modification with compact support on Rd that still satisfies the premises of the rule, because all properties of f in the premises assume H. To simplify notation, we write f(x) for [[f]]^x. Let Xt be the stochastic process [[dx = bdt + σdW & H]]^Z. Let X̌t be Xt restricted to H, i.e., the stopped process X̌t := X_{t ∧ (|dx = bdt + σdW & H|)^Z}, which is stopped at a Markov time by Theorem 3. The stopped process X̌t, thus, inherits the càdlàg and strong Markov properties from Xt; see, e.g., [5, Theorem 10.2]. If Af is defined and continuous and bounded on H [5, Ch 11.3][15, Ch I.3, I.4], then the infinitesimal generator of X̌t agrees with the generator of Xt on H (and is zero otherwise). This is the case, since f ∈ C²(Rd, R) has compact support (thus bounded as continuous), because Af is then defined and Af = Lf by Theorem 9; hence, Lf is continuous, because b, σ are continuous by Def. 1. All premises of the rule still hold when assuming the topological closure H̄ instead of H, because the functions f and Lf are continuous and the conditions are weak inequalities, thus closed. Consider any x ∈ Rd and any time s ≥ 0. The deterministic time s is a (very simple) Markov time with E^x s = s < ∞. Since f is compactly supported, Theorem 8 is applicable and implies that

  E^x f(X̌s) = f(x) + E^x ∫₀ˢ Af(X̌r) dr    (2)

Now Lf ≤ 0 on H by the third premise. Hence, Af ≤ 0 on H, because Lf = Af (on H) by Theorem 9, as f ∈ C²(Rd, R) has compact support. Because X and X̌ have a.s. continuous paths and are not defined on the event {Z ⊭ H}, we know that X̌s stays in the closure H̄ a.s. Thus, Af(X̌s) ≤ 0 a.s.; hence, ∫₀ˢ Af(X̌r) dr ≤ 0 a.s.; thus, E^x ∫₀ˢ Af(X̌r) dr ≤ 0. Then (2) implies E^x f(X̌s) ≤ f(x) for all x. Because the filtration is right-continuous and f ∈ C²(Rd, R) is compactly supported (hence bounded), the strong Markov property [10, Prop 2.6.7] for X̌t implies for all t ≥ s ≥ 0 that, P^x-a.s., E^x(f(X̌t) | Fs) = E^{X̌s} f(X̌_{t−s}) ≤ f(X̌s). The inequality holds, since E^x f(X̌s) ≤ f(x) for all x, s. Thus, f(X̌t) is a supermartingale with respect to X̌t, because it is adapted to the filtration of X̌t (as f ∈ C²(Rd, R)) and E^x|f(X̌t)| < ∞ for all t since f ∈ C²(Rd, R) has compact support. Further, f(X̌t) inherits continuity from X̌t (which follows from Xt), since f is continuous. Thus, by the second premise, Proposition 10 is applicable. Consider any initial state Y := [[α]]^Z_t for X̌. Thus, P(sup_{t≥0} f(X̌t) ≥ λ | F0) ≤ E f(Y)/λ by Proposition 10 (the filtration at X̌0 is F0). On the event {Y ⊭ H}, X̌ is not defined and there is nothing


to show. On {Y |= H}, f(Y) ≤ λp is valid where relevant by the first premise. This implies the conclusion, as [[⟨dx = bdt + σdW & H⟩f]]^Y = sup_{t≥0} f(X̌t).

The implications in the premises can be understood like that in mon′. Let H be given by first-order real arithmetic formulas. If f is polynomial and, thus, f ∈ C²(Rd, R), then the second and third premise of the rule are in first-order real arithmetic, hence decidable. Note that our proof rules can be generalized to probabilistic assumptions by the rule of partition and then combined. The proof shows that it is enough to assume that the first premise holds only a.s. From the proof we see that it would be sufficient to replace the third premise with ∫₀ˢ Lf(Xr) dr ≤ 0. This is a weaker condition, because it does not require Lf ≤ 0 always, but only "on average". But this condition is computationally more involved, because the integral needs to be computed first. For polynomial expressions, this is not too difficult, but it still increases the polynomial degree. A simple two-dimensional example is the following for H ≡ x² + y² < 10:

  P(⟨?x² + y² ≤ 1/3; x := x/2; dx = −x/2 dt − y dW, dy = −y/2 dt + x dW & H⟩ (x² + y² ≥ 1)) ≤ 1/3

which can be proven easily using ⟨;⟩′, ⟨;⟩, ⟨?⟩, ⟨:=⟩, and the rule of Theorem 13, since f ≡ x² + y² ≥ 0 and

  Lf = (1/2)(−x ∂f/∂x − y ∂f/∂y + y² ∂²f/∂x² − 2xy ∂²f/∂x∂y + x² ∂²f/∂y²) ≤ 0
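As an independent sanity check of this example (outside the paper's calculus), the left-hand side can be estimated by simulation; the horizon, step size, and the choice of a worst-case initial state on x² + y² = 1/3 below are ad hoc. For this particular dynamics, x² + y² is in fact conserved along solutions (by Itô's formula, df(Xt) = Lf dt + (2x(−y) + 2y·x) dW = 0), so the estimate should come out near 0, well below the bound 1/3.

    import numpy as np

    rng = np.random.default_rng(3)
    H = lambda x, y: x**2 + y**2 < 10.0
    T, h, n_paths = 1.0, 1e-3, 2000

    hits = 0
    for _ in range(n_paths):
        # worst case allowed by the test: x^2 + y^2 = 1/3, then the assignment x := x/2
        phi = rng.uniform(0.0, 2.0 * np.pi)
        x, y = np.sqrt(1.0 / 3.0) * np.cos(phi), np.sqrt(1.0 / 3.0) * np.sin(phi)
        x = x / 2.0
        reached = x**2 + y**2 >= 1.0
        for _ in range(int(T / h)):
            if not H(x, y):
                break                                 # evolution domain left: stop
            dW = rng.normal(0.0, np.sqrt(h))
            x, y = x - x / 2.0 * h - y * dW, y - y / 2.0 * h + x * dW
            reached = reached or x**2 + y**2 >= 1.0
        hits += reached
    print("estimated P(<...> x^2 + y^2 >= 1):", hits / n_paths, " bound: 1/3")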

8 Related Work

Our approach is partially inspired by the work of Kozen, who studied the semantics of programs with random number generators [13] and probabilistic PDL [14]. We generalize from discrete systems to stochastic hybrid systems. To reflect the new challenges, we have departed from probabilistic PDL. Kozen uses a measure semantics. We choose a semantics that is based on stochastic processes, because the temporal behavior of SHPs is more crucial than that of abstract discrete programs. SdL further uses a supremal semantics that is more interesting for stochastic worst-case verification than the integral semantics assumed in [14]. The comparison to a first-order dynamic logic for deterministic programs with random number generators [6] is similar. They axiomatize relative to first-order analysis with arithmetic, enriched with frequencies and random number generators. They do not show how this logic could be handled (incompletely). Our approach for stochastic differential equations is inspired by barrier certificates [23]. We extend this work by identifying the assumptions that are required for soundness of using Dynkin-type arguments for stochastic differential equations. They propose to use global generators for switching diffusion processes (which cannot reset variables). We use logic and compositional proofs for SHPs. Probabilities and logic have also been used in AI, e.g., [25]. Markov logic networks are a combination of Markov networks and first-order logic and resemble logic programming with weights for probabilities. They are restricted to finite domains, which is not the case in stochastic hybrid systems.


Model checking has been used for discrete probabilistic systems like finite Markov chains, e.g., [27], and probabilistic timed automata [17]. Assume-guarantee model checking is a challenge for discrete probabilistic automata, with recent successes for finite automata assumptions [16]. We use a compositional proof approach based on logic and consider stochastic hybrid systems. Statistical model checking has been suggested for validating stochastic hybrid systems [18] and later refined for discrete-time hybrid systems with a probabilistic simulation function [28] based on corresponding discrete probabilistic techniques [27]. They did not show measurability and do not support stochastic differential equations [28]. Validation by simulation is generally unsound, but the probability of giving a wrong answer can sometimes be bounded [27, 28]. Fränzle et al. [7] show first pieces of continuous-time bounded model checking of probabilistic hybrid automata (no stochastic differential equations). Bujorianu and Lygeros [2] show strong Markov and càdlàg properties for a class of systems known as General Stochastic Hybrid Systems. They also study an interesting concatenation operator. For an overview of model checking techniques for various classes of stochastic hybrid systems, we refer to [3]. Most verification techniques for stochastic hybrid systems use discretizations, approximations, or assume discrete time or a bounded horizon [12, 3, 1, 9]. We consider the continuous-time behavior and develop compositional logic and theorem proving.

9 Conclusions

We introduce the first verification logic for stochastic hybrid systems along with a compositional model of stochastic hybrid programs. We prove theoretical properties that are important for well-definedness and measurability and we develop a compositional proof calculus. Our logic makes the complexity of stochastic hybrid systems accessible in logic with simple syntactic proof principles. Our results indicate that SdL is a promising starting point for the study of logic for stochastic hybrid systems. Extensions include nondeterminism.

Acknowledgements. I thank the anonymous referees for their good comments and Steve Marcus and Sergio Pulido Niño for helpful discussions.

References
1. Abate, A., Prandini, M., Lygeros, J., Sastry, S.: Probabilistic reachability and safety for controlled discrete time stochastic hybrid systems. Automatica 44(11), 2724–2734 (2008)
2. Bujorianu, M.L., Lygeros, J.: Towards a general theory of stochastic hybrid systems. In: Blom, H.A.P., Lygeros, J. (eds.) Stochastic Hybrid Systems: Theory and Safety Critical Applications, Lecture Notes Contr. Inf., vol. 337, pp. 3–30. Springer (2006)
3. Cassandras, C.G., Lygeros, J. (eds.): Stochastic Hybrid Systems. CRC (2006)
4. Dutertre, B.: Complete proof systems for first order interval temporal logic. In: LICS, pp. 36–43. IEEE Computer Society (1995)


5. Dynkin, E.B.: Markov Processes. Springer (1965)
6. Feldman, Y.A., Harel, D.: A probabilistic dynamic logic. J. Comput. Syst. Sci. 28(2), 193–215 (1984)
7. Fränzle, M., Teige, T., Eggers, A.: Engineering constraint solvers for automatic analysis of probabilistic hybrid automata. J. Log. Algebr. Program. 79(7), 436–466 (2010)
8. Ghosh, M.K., Arapostathis, A., Marcus, S.I.: Ergodic control of switching diffusions. SIAM J. Control Optim. 35(6), 1952–1988 (1997)
9. Hu, J., Lygeros, J., Sastry, S.: Towards a theory of stochastic hybrid systems. In: Lynch, N.A., Krogh, B.H. (eds.) HSCC. LNCS, vol. 1790, pp. 160–173. Springer (2000)
10. Karatzas, I., Shreve, S.: Brownian Motion and Stochastic Calculus. Springer (1991)
11. Kloeden, P.E., Platen, E.: Numerical Solution of Stochastic Differential Equations. Springer, New York (2010)
12. Koutsoukos, X.D., Riley, D.: Computational methods for verification of stochastic hybrid systems. IEEE T. Syst. Man, Cy. A 38(2), 385–396 (2008)
13. Kozen, D.: Semantics of probabilistic programs. J. Comput. Syst. Sci. 22(3), 328–350 (1981)
14. Kozen, D.: A probabilistic PDL. J. Comput. Syst. Sci. 30(2), 162–178 (1985)
15. Kushner, H.J.: Stochastic Stability and Control. Academic Press (1967)
16. Kwiatkowska, M.Z., Norman, G., Parker, D., Qu, H.: Assume-guarantee verification for probabilistic systems. In: Esparza, J., Majumdar, R. (eds.) TACAS. LNCS, vol. 6015, pp. 23–37. Springer (2010)
17. Kwiatkowska, M.Z., Norman, G., Sproston, J., Wang, F.: Symbolic model checking for probabilistic timed automata. Inf. Comput. 205(7), 1027–1077 (2007)
18. Meseguer, J., Sharykin, R.: Specification and analysis of distributed object-based stochastic hybrid systems. In: Hespanha, J.P., Tiwari, A. (eds.) HSCC. LNCS, vol. 3927, pp. 460–475. Springer (2006)
19. Øksendal, B.: Stochastic Differential Equations: An Introduction with Applications. Springer (2007)
20. Platzer, A.: Differential-algebraic dynamic logic for differential-algebraic programs. J. Log. Comput. 20(1), 309–352 (2010)
21. Platzer, A.: Quantified differential dynamic logic for distributed hybrid systems. In: Dawar, A., Veith, H. (eds.) CSL. LNCS, vol. 6247, pp. 469–483. Springer (2010)
22. Platzer, A.: Stochastic differential dynamic logic for stochastic hybrid systems. Tech. Rep. CMU-CS-11-111, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA (2011)
23. Prajna, S., Jadbabaie, A., Pappas, G.J.: A framework for worst-case and stochastic safety verification using barrier certificates. IEEE T. Automat. Contr. 52(8), 1415–1429 (2007)
24. Pratt, V.R.: Semantical considerations on Floyd-Hoare logic. In: FOCS, pp. 109–121. IEEE (1976)
25. Richardson, M., Domingos, P.: Markov logic networks. Machine Learning 62(1-2), 107–136 (2006)
26. Tarski, A.: A Decision Method for Elementary Algebra and Geometry. University of California Press, Berkeley, 2nd edn. (1951)
27. Younes, H.L.S., Kwiatkowska, M.Z., Norman, G., Parker, D.: Numerical vs. statistical probabilistic model checking. STTT 8(3), 216–228 (2006)
28. Zuliani, P., Platzer, A., Clarke, E.M.: Bayesian statistical model checking with application to Simulink/Stateflow verification. In: Johansson, K.H., Yi, W. (eds.) HSCC, pp. 243–252. ACM (2010)