Belief-Aware Cyber-Physical Systems

During the development of CPS, an analysis of whether the system operates safely in its target environment is of utmost importance. For many applications of CPS research, such as those in the transportation industry, this implies connecting research on formal verification of CPS with research on knowledge representation and reasoning in multi-agent systems. The need for such research has become tragically clear in transportation accidents, one notorious case being the crash of Air France Flight 447.

The analysis of the crash released by the French aviation authorities makes it clear that the crash occurred only partially because of mechanical failure. Indeed, the only mechanical problem was a temporary malfunction of the speed sensors; neutral control inputs would have kept the plane safe.

However, the pilots, taken by surprise, interpreted the data available to them differently. In CPS terms, their knowledge states differed and were, in fact, inconsistent: one pilot appears to have believed the plane was overspeeding, while the other believed it was stalling, a condition often caused by insufficient speed.

Each pilot then performed a sequence of actions that would have been rational, and would have kept the plane safe, had his own perception of the world been the true one. In conjunction, their actions negated each other, leaving the plane in a stall until impact. In CPS terms, their differing goals in the process of keeping the plane safe resulted in a failure of the system, not of the individual pilot.

So far, this project has focused on developing the theory that will enable the practice of verifying systems such as the one described above. The extension to knowledge-awareness will be the next step in transforming theoretical promises of CPS safety into practical reality. As a crucial step in this process, we must prove that the logic can be reasoned with purely symbolically, such that humans, with or without computer aid, can reasonably be expected to tackle the challenge of verifying such complex systems.

Over the past few years, we developed a formal understanding of most operators of changing knowledge: we could reason symbolically about almost all components of knowledge-aware controllers. One major exception, until recently, had been knowledge-assignment, whereby an agent's perception of the world can effectively change. The challenge in addressing knowledge-assignment stems from perceptions being multi-valued, in that many worlds are considered possible at the same time, whereas conventional assignment is single-valued: there is only one true physical world at a time. To tackle knowledge-assignment, we proved the principle of "assignment as substitution" in a multi-valued domain. This principle is already known to be challenging in single-valued domains, and the multi-valued setting only compounds the difficulty. As a stepping stone, we first proved the principle for a limited but still useful form of assignment, and were then able to extend the result to assignments in their full generality.
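To illustrate (schematically, not in the project's actual notation) why the multi-valued case is harder, recall the classical assignment-as-substitution axiom of dynamic logic:

```latex
% The classical "assignment as substitution" axiom of dynamic logic,
% valid in single-valued (one-world) domains: executing x := e and
% then checking \varphi(x) is the same as checking \varphi(e) now.
\[
  [x := e]\,\varphi(x) \;\leftrightarrow\; \varphi(e)
\]
% In a multi-valued (epistemic) domain, the naive analogue under a
% knowledge modality K is unsound: the term e may denote a different
% value in each world the agent considers possible, so the
% substitution must be carried out world-by-world across the entire
% belief set rather than once in the single actual world.
```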

It is now possible to write belief-triggered controllers that closely follow the behavior of agents, human or otherwise, in CPS. This includes the crucial steps of noise-prone and potentially erroneous information gathering through sensor usage, explicit reasoning about knowledge, and, finally, safe, knowledge-based decision making.
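A minimal sketch of this sense–reason–act loop, in Python, may help fix ideas. All names, bounds, and the interval representation of belief here are illustrative assumptions, not the project's formalism: the belief state is the set of speeds consistent with a noisy reading, and the controller acts safely only when the entire belief set is known to be safe.

```python
import random

SPEED_LIMIT = 10.0   # hypothetical safe upper bound on speed
NOISE = 0.5          # hypothetical bound on the sensor's error

def sense(true_speed, rng):
    """Noisy sensor: the reading lies within NOISE of the true speed."""
    return true_speed + rng.uniform(-NOISE, NOISE)

def belief_from_reading(reading):
    """Belief state: the interval of speeds consistent with the reading,
    i.e. the set of worlds the agent considers possible."""
    return (reading - NOISE, reading + NOISE)

def control(belief):
    """Belief-triggered decision: coast only if *every* possible world
    in the belief set is safe (worst-case reasoning), else brake."""
    lo, hi = belief
    return "coast" if hi <= SPEED_LIMIT else "brake"

# One pass through the sense -> reason -> act loop.
rng = random.Random(0)
belief = belief_from_reading(sense(true_speed=9.0, rng=rng))
action = control(belief)
```

The design point is that the controller never branches on the raw reading itself, only on what the belief set entails, which is what makes its safety argument robust to bounded sensor error.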

The current focus of the project has switched to verification of such full-loop systems.

License: 
Creative Commons 2.5
