Abstract:
How much control should a person be allowed to exert over a controlled machine? If that machine is easily destabilized, and if the controller operating it is essential to its operation, the answer may be that the person should not be granted any control authority at all. Combining techniques from machine learning, optimal control, and formal verification, the proposed work develops a computable notion of trust that allows the embedded system to assess the safety of an operator's instructions.