Cyber-Physical Systems Virtual Organization
Read-only archive of site from September 29, 2023.
CPS: Synergy: Collaborative Research: Autonomy Protocols: From Human Behavioral Modeling to Correct-By-Construction, Scalable Control
Submitted by bacikmese on Fri, 07/21/2017 - 2:03pm
Project Details
Lead PI: Behcet Acikmese
Performance Period: 01/01/16 - 09/30/18
Institution(s): University of Washington
Sponsor(s): National Science Foundation
Award Number: 1624328
787 Reads. Placed 478 out of 804 NSF CPS Projects based on total reads on all related artifacts.
Abstract:
Computer systems are increasingly relied upon to augment or replace human operators in controlling mechanical devices in contexts such as transportation systems, chemical plants, and medical devices, where safety and correctness are critical. A central problem is how to verify that such partially automated or fully autonomous cyber-physical systems (CPS) are worthy of our trust. One promising approach is the synthesis of computer implementation code from formal specifications by software tools. This project contributes to this "correct-by-construction" approach by developing scalable, automated methods for the synthesis of control protocols with provable correctness guarantees, based on insights from models of human behavior. It targets: (i) the gap between the capabilities of today's barely autonomous unmanned systems and the levels of capability at which they can make an impact on our use of monetary, labor, and time resources; and (ii) the lack of computational, automated, scalable tools suitable for the specification, synthesis, and verification of such autonomous systems. The research is based on a study of modular reinforcement learning-based models of human behavior, derived through experiments designed to elicit information on how humans control complex interactive systems in dynamic environments, including automobile driving. Architectural insights and stochastic models from this study are combined with a specification language based on linear temporal logic to guide the synthesis of adaptive autonomous controllers. Motion planning and other dynamic decision-making are performed by algorithms built on computational engines that represent the underlying physics, with provision for run-time adaptation to account for changing operational and environmental conditions.
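To illustrate the modular reinforcement learning idea referenced above (and in the related publication "Modular Reinforcement Learning with Discounting"), the following is a minimal sketch of one common formulation: each behavioral module maintains its own Q-table, reward channel, and discount factor, and actions are chosen by summing Q-values across modules ("greatest-mass" arbitration). The module names, state/action sizes, and rewards here are illustrative assumptions, not taken from the project's code.

```python
import numpy as np

N_STATES, N_ACTIONS = 16, 4
ALPHA = 0.1  # learning rate shared by all modules (assumed)

# Each behavioral module keeps its own Q-table and discount factor.
# "progress" and "collision" are hypothetical driving sub-goals.
modules = {
    "progress":  {"Q": np.zeros((N_STATES, N_ACTIONS)), "gamma": 0.95},
    "collision": {"Q": np.zeros((N_STATES, N_ACTIONS)), "gamma": 0.80},
}

def select_action(state):
    """Greatest-mass arbitration: sum the modules' Q-values and
    pick the action that maximizes the total."""
    total = sum(m["Q"][state] for m in modules.values())
    return int(np.argmax(total))

def update(state, action, next_state, rewards):
    """Standard Q-learning update, applied per module with that
    module's own reward channel and discount factor."""
    for name, m in modules.items():
        td_target = rewards[name] + m["gamma"] * m["Q"][next_state].max()
        m["Q"][state, action] += ALPHA * (td_target - m["Q"][state, action])

# One illustrative transition: the "progress" module is rewarded,
# the "collision" module penalized, for the same state-action pair.
update(state=0, action=2, next_state=1,
       rewards={"progress": 1.0, "collision": -0.5})
```

Keeping per-module discount factors is what distinguishes this from ordinary Q-learning with a summed reward: modules with smaller gamma discount future consequences more sharply, which is one way such models capture differing time horizons in human sub-goals.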
Tools implementing this methodology are validated through experimentation in a virtual testing facility in the context of autonomous driving in urban environments and multi-vehicle autonomous navigation of micro-air vehicles in dynamic environments. Education and outreach activities include involvement of undergraduate and graduate students in the research, integration of the research into courses, demonstrations for K-12 students, and recruitment of research participants from under-represented demographic groups. Data, code, and teaching materials developed by the project are disseminated publicly on the Web.
Related Artifacts
Presentations
Autonomy Protocols: From Human Behavioral Modeling to Correct-by-Construction Scalable Control
|
Download
CPS: Synergy: Collaborative Research: Autonomy Protocols: From Human Behavioral Modeling to Correct-By-Construction, Scalable Control
|
Download
Posters
Autonomy Protocols: From Human Behavioral Modeling to Correct-by-Construction, Scalable Control
|
Download
Autonomy Protocols- From Human Behavioral Modeling to Correct-by-Construction, Scalable Control Poster.pdf
|
Download
Publications
Online Learning for Markov Decision Processes Applied to Multi-Agent Systems
Markov Decision Processes with Sequentially-Observed Transitions
Robust Metropolis-Hastings Algorithm for Safe Reversible Markov Chain Synthesis
Modular Reinforcement Learning with Discounting
Visual Attention Guided Deep Imitation Learning
Probabilistic Model Checking of Partially Controlled Multi-agent Systems
Modeling Multi-Objective Behavior through Modular Inverse Reinforcement Learning with Discount Factors
Attention Guided Deep Imitation Learning
Controlled Markov Processes with Safety State Constraints
Safe Markov Chains for Density Control of ON/OFF Agents with Observed Transitions
Distributed Averaging with Quantized Communication over Dynamic Graphs
The Discrete-Time Altafini Model of Opinion Dynamics with Communication Delays and Quantization
Necessary and sufficient conditions for distributed averaging with state constraints
Velocity Field Generation for Density Control of Swarms using Heat Equation and Smooth Kernels
Architectures
Control
Modeling
CPS Technologies
Education
Foundations