Learning Control Sharing Strategies for Assistive Cyber-Physical Systems
Assistive machines such as robotic arms and powered wheelchairs promote independence and ability in people with severe motor impairments. As the field of assistive robotics progresses, these devices are becoming more capable and dexterous and, as a result, higher dimensional and harder to control. The dimensionality mismatch between high-dimensional robots and low-dimensional control interfaces requires the control space to be partitioned into control modes; to fully control the robot, the user switches between these partitions, a process known as mode switching. Mode switching adds to the cognitive workload and degrades task performance. Shared autonomy helps to alleviate some of this burden by letting the robot take partial responsibility for task execution. In our work we (a) identified control modes that elicit more informative control commands from the human, enabling the robot to perform more accurate intent inference; (b) conducted an eight-person subject study to evaluate the efficacy of the disambiguation algorithm; (c) developed a novel intent inference scheme inspired by dynamic neural fields; and (d) explored information-theoretic ideas based on entropy and KL-divergence for intent disambiguation. Our results suggest that (a) the disambiguation system has greater utility as the control interface becomes more limited and the task more complex, and (b) subjects demonstrated a diverse range of disambiguation request behavior, concentrated in the earlier parts of each trial. A qualitative comparison of the dynamic neural field based intent inference approach with Bayesian approaches showed similar inference accuracy across goal configurations.
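To illustrate the flavor of the entropy-based disambiguation idea mentioned in the abstract, the sketch below selects the control mode whose anticipated user commands would most reduce uncertainty (Shannon entropy) over a belief about the user's goal. This is a hedged, minimal illustration, not the authors' actual algorithm: the `belief` vector, the per-mode command likelihood tables, and the `best_mode` helper are all hypothetical constructions for this example.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (ignoring zero entries)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def best_mode(belief, likelihoods):
    """Pick the control mode whose expected user command most reduces
    posterior entropy over the goals.

    belief       -- prior probability over goals, shape (n_goals,)
    likelihoods  -- dict: mode -> list of per-command likelihood vectors
                    P(command | goal), each shape (n_goals,)
    """
    best, best_h = None, np.inf
    for mode, cmd_liks in likelihoods.items():
        exp_h = 0.0
        for lik in cmd_liks:
            post = belief * lik          # unnormalized Bayesian update
            z = post.sum()               # marginal probability of this command
            if z > 0:
                exp_h += z * entropy(post / z)  # entropy weighted by P(command)
        if exp_h < best_h:
            best, best_h = mode, exp_h
    return best

# Two goals, uniform prior; commands in mode "x" discriminate the goals,
# commands in mode "y" are uninformative, so "x" should be chosen.
belief = np.array([0.5, 0.5])
likelihoods = {
    "x": [np.array([0.9, 0.1]), np.array([0.1, 0.9])],
    "y": [np.array([0.5, 0.5]), np.array([0.5, 0.5])],
}
print(best_mode(belief, likelihoods))  # → x
```

In this toy example, commands issued in mode "x" sharply separate the two goals, so placing the user in that mode lets the robot infer intent from fewer, more informative commands.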