A Trustable Autonomous Systems Lifecycle

The military requires flexible unmanned cyber-physical systems capable of autonomous decision making that both obey rules of engagement and operate within a verifiable behavioral safety envelope. We currently lack methods to provide assurance that such systems will operate reliably and with integrity in their operating environment as they continue to learn and adapt to new situations. We have developed an architecture and an autonomous-systems verification and validation approach based, in part, on the new discipline of software intent specifications.

This poster addresses the Assurance for AI theme, i.e., assurance for systems that learn.

--

Dr. Howard Reubenstein is a Section Leader and Principal Investigator at BAE Systems Technology Solutions. His research focuses on AI technologies applied to high-assurance software engineering problems, including the application of software reverse engineering tools to understanding software architectures. He is currently the PI for the RINGS (Regenerative INtent Guided Systems) project under DARPA's BRASS (Building Resource Adaptive Software Systems) effort. He was the (successor) PI and the software engineering and demonstration lead for the SAFE secure host computing project under DARPA's CRASH program. As software lead, he was responsible for combining and deploying the project's security mechanisms in application demonstrations illustrating the overall security provided by the SAFE platform.

License: Creative Commons 2.5
