CPS: Synergy: Collaborative Research: Learning control sharing strategies for assistive cyber-physical systems

Project Details
Lead PI: Siddhartha Srinivasa
Performance Period: 06/01/17 - 09/30/19
Institution(s): University of Washington
Sponsor(s): National Science Foundation
Award Number: 1745561
Abstract: Assistive machines - like powered wheelchairs, myoelectric prostheses and robotic arms - promote independence and ability in those with severe motor impairments. As the state-of-the-art in these assistive Cyber-Physical Systems (CPSs) advances, more dexterous and capable machines hold the promise to revolutionize the ways in which those with motor impairments can interact within society and with their loved ones, and care for themselves with independence. However, as these machines become more capable, they often also become more complex, which raises the question: how do we control this added complexity?

A new paradigm is proposed for controlling complex assistive CPSs, like robotic arms mounted on wheelchairs, via simple low-dimensional control interfaces that are accessible to persons with severe motor impairments, like 2-D joysticks or 1-D Sip-N-Puff interfaces. Traditional interfaces cover only a portion of the control space, and during teleoperation it is necessary to switch between different control modes to access the full control space. Robotics automation may be leveraged to anticipate when to switch between control modes. This approach is a departure from the majority of control sharing approaches within assistive domains, which either partition the control space and allocate different portions to the robot and human, or augment the human's control signals to bridge the dimensionality gap. How best to share control within assistive domains remains an open question, and an appealing characteristic of this approach is that the user is kept maximally in control, since their signals are not altered or augmented. The public health impact is significant: increasing the independence of those with severe motor impairments and/or paralysis. Multiple efforts will facilitate large-scale deployment of the results, including a collaboration with Kinova, a manufacturer of assistive robotic arms, and a partnership with the Rehabilitation Institute of Chicago.

The proposal introduces a formalism for assistive mode-switching that is grounded in hybrid dynamical systems theory and aims to ease the burden of teleoperating high-dimensional assistive robots. By modeling this CPS as a hybrid dynamical system, assistance can be cast as optimization of a desired cost function, and the system's uncertainty over the user's goals can be modeled via a Partially Observable Markov Decision Process (POMDP). This model provides the natural scaffolding for learning user preferences. Through user studies, the project aims to address the following research questions: (Q1) Expense: How expensive is mode-switching? (Q2) Customization Need: Do we need to learn mode-switching from specific users? (Q3) Learning Assistance: How can we learn mode-switching paradigms from a user? (Q4) Goal Uncertainty: How should the assistance act under goal uncertainty, and how will users respond?

The proposal leverages the team's shared expertise in manipulation, algorithm development, and deploying real-world robotic systems. It also leverages the team's complementary strengths in deploying advanced manipulation platforms, robotic motion planning and manipulation, human-robot co-manipulation, robot learning from human demonstration, control policy adaptation, and human rehabilitation. The proposed work targets easier operation of robotic arms by severely paralyzed users.
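To make the assistance idea concrete, the following is a minimal illustrative sketch (not the project's implementation) of the kind of computation the abstract describes: maintaining a belief over candidate user goals from low-dimensional control input, then suggesting the control mode whose axes best serve the most likely goal. The goal set, observation likelihood, and mode definitions here are hypothetical placeholders, not values from the proposal.

```python
import numpy as np

# Hypothetical candidate goal positions (x, y, z) and a hypothetical partition
# of Cartesian axes into control modes; both are placeholders for illustration.
GOALS = np.array([[0.6, 0.2, 0.3],
                  [0.4, -0.3, 0.3]])
MODES = {"xy": [0, 1], "z": [2]}


def update_belief(belief, ee_pos, user_vel, beta=5.0):
    """Bayesian update: user input that points toward a goal raises its probability."""
    likelihoods = []
    for g in GOALS:
        to_goal = g - ee_pos
        to_goal /= np.linalg.norm(to_goal) + 1e-9
        v = user_vel / (np.linalg.norm(user_vel) + 1e-9)
        likelihoods.append(np.exp(beta * float(to_goal @ v)))  # soft alignment score
    posterior = belief * np.array(likelihoods)
    return posterior / posterior.sum()


def suggest_mode(belief, ee_pos):
    """Suggest the mode whose controllable axes cover most of the remaining
    motion toward the currently most likely goal."""
    g = GOALS[np.argmax(belief)]
    residual = np.abs(g - ee_pos)
    scores = {mode: residual[axes].sum() for mode, axes in MODES.items()}
    return max(scores, key=scores.get)


belief = np.ones(len(GOALS)) / len(GOALS)   # uniform prior over goals
ee_pos = np.array([0.5, 0.0, 0.1])          # current end-effector position
user_vel = np.array([0.1, 0.2, 0.0])        # 2-D joystick deflection mapped to x-y

belief = update_belief(belief, ee_pos, user_vel)
print("belief over goals:", belief)
print("suggested mode:", suggest_mode(belief, ee_pos))
```

In a full POMDP treatment the mode suggestion would come from optimizing a cost over the belief rather than from the single most likely goal; the sketch only shows the belief-tracking and mode-scoring steps.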
The need to control many degrees of freedom (DoF) gives rise to mode-switching during teleoperation. The switching itself can be cumbersome even with 2- and 3-axis joysticks, and becomes prohibitively so with more limited (1-D) interfaces. Easing the burden of switching not only helps those already able to operate robotic arms, but may make assistive robotic arms accessible to populations who currently cannot use them. This work is clearly synergistic, sitting at the intersection of robotic manipulation, human rehabilitation, control theory, machine learning, human-robot interaction and clinical studies. The project addresses the science of CPS by developing new models of the interaction dynamics between the system and the user, the technology of CPS by developing new interfaces and interaction modalities with strong theoretical foundations, and the engineering of CPS by deploying the algorithms on real robot hardware and conducting extensive studies with able-bodied users and users with spinal cord injuries.
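The dimensionality gap that forces mode-switching can be illustrated with a short sketch, assuming a 2-axis joystick teleoperating a 6-DoF Cartesian end-effector command. The mode partition below is a hypothetical example, not the mapping used by any particular assistive arm.

```python
import numpy as np

# Hypothetical partition of a 6-D Cartesian velocity command into modes that
# a 2-axis joystick can drive one at a time.
MODE_PARTITION = [
    (0, 1),  # mode 0: translate x, y
    (2,),    # mode 1: translate z
    (3, 4),  # mode 2: rotate roll, pitch
    (5,),    # mode 3: rotate yaw
]


def joystick_to_twist(joy_xy, mode):
    """Embed the 2-D joystick deflection into a 6-D velocity command,
    filling only the axes owned by the current mode."""
    twist = np.zeros(6)
    for axis, value in zip(MODE_PARTITION[mode], joy_xy):
        twist[axis] = value
    return twist


# Reaching a goal that requires x, y, and z motion forces at least one switch:
print(joystick_to_twist([0.3, -0.1], mode=0))  # drives x and y only
print(joystick_to_twist([0.2, 0.0], mode=1))   # user must switch modes to move in z
```

Counting such forced switches over a task is one simple way to quantify the burden that research question Q1 asks about.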