Biblio

Nam, C., Li, H., Li, S., Lewis, M., Sycara, K.  2018.  Trust of Humans in Supervisory Control of Swarm Robots with Varied Levels of Autonomy. 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :825–830.

In this paper, we study trust-related human factors in supervisory control of swarm robots with varied levels of autonomy (LOA) in a target foraging task. We compare three LOAs: manual, mixed-initiative (MI), and fully autonomous. In the manual LOA, the human operator chooses headings for a flocking swarm, issuing new headings as needed. In the fully autonomous LOA, the swarm is redirected automatically, with new headings chosen by a search algorithm. In the mixed-initiative LOA, control is switched from human to swarm, or from swarm to human, whenever performance declines. The results of this work extend current knowledge on human factors in swarm supervisory control. Specifically, the finding that the relationship between trust and performance improved for passively monitoring operators (i.e., improved situation awareness at higher LOAs) is particularly novel in its contradiction of earlier work. We also find that operators switch the degree of autonomy when their trust in the swarm system is low. Last, our analysis confirms operators' preference for a lower LOA in a new domain, swarm control.
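The mixed-initiative switching the abstract describes (handing control between human and swarm when performance declines) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; the function name, window size, and threshold are all hypothetical.

```python
# Hypothetical sketch of mixed-initiative (MI) level-of-autonomy switching:
# control passes between human and swarm when task performance declines.
# All names and thresholds are illustrative, not taken from the paper.

def mi_switch(controller, performance_history, window=3, decline_threshold=0.1):
    """Return the controller for the next step: 'human' or 'swarm'.

    Switches control whenever mean performance over the last `window`
    steps drops by more than `decline_threshold` relative to the
    mean over the preceding window.
    """
    if len(performance_history) < 2 * window:
        return controller  # not enough data yet to judge a decline
    recent = sum(performance_history[-window:]) / window
    previous = sum(performance_history[-2 * window : -window]) / window
    if previous - recent > decline_threshold:
        # performance declined: hand control to the other party
        return "swarm" if controller == "human" else "human"
    return controller
```

Under this sketch, a sustained drop in foraging performance while the human is in control would hand headings over to the autonomous search algorithm, and vice versa.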

Nam, C., Walker, P., Lewis, M., Sycara, K.  2017.  Predicting trust in human control of swarms via inverse reinforcement learning. 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :528–533.
In this paper, we study a model of human trust in which an operator controls a robotic swarm remotely for a search mission. Existing trust models in human-in-the-loop systems are based on the task performance of the robots. However, we find that humans tend to make their decisions based on the physical characteristics of the swarm rather than its performance, since the task performance of a swarm is not clearly perceivable by humans. We formulate trust as a Markov decision process whose state space includes physical parameters of the swarm. We employ an inverse reinforcement learning algorithm to learn the operator's behavior from a single demonstration. The learned behavior is used to predict the operator's trust level from the features of the swarm.
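The abstract's core idea, trust driven by perceivable physical features of the swarm rather than task performance, can be sketched with a feature map and a reward that is linear in those features, as is standard in feature-based inverse reinforcement learning. This is a hedged illustration only: the feature names and weights below are hypothetical, not the paper's.

```python
# Illustrative sketch (not the paper's implementation): trust predicted from
# physical swarm features via a reward linear in those features, as in
# standard feature-based IRL. Feature names and weights are hypothetical.

def swarm_features(state):
    """Map a swarm state to physical features a human operator can perceive."""
    return [state["density"], state["heading_variance"], state["speed"]]

def predicted_trust(state, weights):
    """Trust score as a dot product of learned IRL weights and features."""
    return sum(w * f for w, f in zip(weights, swarm_features(state)))
```

In an IRL pipeline, the weights would be recovered from the operator's demonstration (e.g., by matching feature expectations); here they are simply given, and the predictor scores how much a swarm configuration would be trusted.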