Biblio

Guo, Y., Wang, B., Hughes, D., Lewis, M., Sycara, K.  2020.  Designing Context-Sensitive Norm Inverse Reinforcement Learning Framework for Norm-Compliant Autonomous Agents. 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN). :618–625.

Human behaviors are often prohibited or permitted by social norms. Therefore, if autonomous agents interact with humans, they also need to reason about various legal rules and social and ethical norms, so that they will be trusted and accepted by humans. Inverse Reinforcement Learning (IRL) can be used by autonomous agents to learn norm-compliant behavior from expert demonstrations. However, norms are context-sensitive, i.e., different norms are activated in different contexts. For example, the privacy norm is activated for a domestic robot entering a bathroom where a person may be present, whereas it is not activated when the robot enters the kitchen. Representing the various contexts in the robot's state space, and obtaining expert demonstrations under all possible tasks and contexts, is extremely challenging. Inspired by recent work on Modularized Normative MDPs (MNMDPs) and early work on context-sensitive RL, we propose a new IRL framework, Context-Sensitive Norm IRL (CNIRL). CNIRL treats states and contexts separately and assumes that the expert determines the priority of every possible norm in the environment, where each norm is associated with a distinct reward function. The agent chooses actions to maximize its cumulative reward. We present the CNIRL model and show that its computational complexity scales well in the number of norms. We also show, via two experimental scenarios, that CNIRL can handle problems with changing context spaces.
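
The core idea above, norm rewards gated by context and combined with a task reward inside an MDP, can be illustrated in a few lines. The following is a minimal Python sketch assuming a small tabular MDP; the norms, priorities, contexts, and all names are hypothetical and do not reproduce the authors' CNIRL implementation.

```python
# Illustrative sketch: context-gated norm rewards in a tabular MDP.
# All norms, priorities, and contexts here are hypothetical.
import numpy as np

N_STATES, N_ACTIONS, GAMMA = 5, 2, 0.9
rng = np.random.default_rng(0)

# Random transition model P[s, a, s'] just for the sketch.
P = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))

# Each norm has its own reward function; the task has one too.
norm_rewards = {
    "privacy": -1.0 * (np.arange(N_STATES) == 3),  # penalize entering state 3
    "safety":  -2.0 * (np.arange(N_STATES) == 4),  # penalize entering state 4
}
task_reward = 1.0 * (np.arange(N_STATES) == 0)     # goal at state 0

# Expert-determined norm priorities (weights).
priority = {"privacy": 0.8, "safety": 1.0}

def active_norms(context):
    """Context gates which norms fire (cf. bathroom vs. kitchen)."""
    return {"bathroom": ["privacy", "safety"], "kitchen": ["safety"]}[context]

def effective_reward(context):
    """Task reward plus priority-weighted rewards of the active norms."""
    r = task_reward.copy()
    for n in active_norms(context):
        r = r + priority[n] * norm_rewards[n]
    return r

def greedy_policy(r, iters=200):
    """Plain value iteration; the agent maximizes cumulative reward."""
    v = np.zeros(N_STATES)
    for _ in range(iters):
        v = (r[:, None] + GAMMA * P @ v).max(axis=1)
    return (r[:, None] + GAMMA * P @ v).argmax(axis=1)

for ctx in ("bathroom", "kitchen"):
    print(ctx, greedy_policy(effective_reward(ctx)))
```

Because the effective reward is just a sum over the currently active norms, adding a norm adds one reward table rather than enlarging the state space, which is consistent with the scalability in the number of norms that the abstract emphasizes.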

Nam, C., Li, H., Li, S., Lewis, M., Sycara, K.  2018.  Trust of Humans in Supervisory Control of Swarm Robots with Varied Levels of Autonomy. 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :825–830.

In this paper, we study trust-related human factors in supervisory control of swarm robots with varied levels of autonomy (LOA) in a target foraging task. We compare three LOAs: manual, mixed-initiative (MI), and fully autonomous. In the manual LOA, the human operator chooses headings for a flocking swarm, issuing new headings as needed. In the fully autonomous LOA, the swarm is redirected automatically, changing headings according to a search algorithm. In the mixed-initiative LOA, control is switched from human to swarm, or from swarm to human, when performance declines; a toy sketch of such a switching rule follows this abstract. The results of this work extend current knowledge of human factors in swarm supervisory control. In particular, the finding that the relationship between trust and performance improved for passively monitoring operators (i.e., improved situation awareness at higher LOAs) is novel in that it contradicts earlier work. We also find that operators switch the degree of autonomy when their trust in the swarm system is low. Finally, our analysis confirms operators' preference for a lower LOA in a new domain, swarm control.
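
The mixed-initiative switching rule can be sketched with a simple threshold monitor. The sketch below is hypothetical: the abstract does not specify the switching criterion, so the sliding window, threshold, and class names are all illustrative assumptions.

```python
# Toy sketch of mixed-initiative (MI) control switching on performance
# decline. Window size, threshold, and names are illustrative only.
from collections import deque

class MixedInitiativeSupervisor:
    def __init__(self, window=5, decline_threshold=-0.1):
        self.history = deque(maxlen=window)
        self.controller = "human"          # start under manual control
        self.decline_threshold = decline_threshold

    def update(self, performance):
        """Record performance; hand control to the other party on decline."""
        self.history.append(performance)
        if len(self.history) == self.history.maxlen:
            trend = self.history[-1] - self.history[0]
            if trend < self.decline_threshold:
                self.controller = "swarm" if self.controller == "human" else "human"
                self.history.clear()       # reset the window after a switch
        return self.controller

sup = MixedInitiativeSupervisor()
for perf in [0.9, 0.85, 0.8, 0.7, 0.6, 0.55, 0.5]:
    print(perf, sup.update(perf))
```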

Nam, C., Walker, P., Lewis, M., Sycara, K.  2017.  Predicting trust in human control of swarms via inverse reinforcement learning. 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). :528–533.
In this paper, we study a model of human trust in which an operator remotely controls a robotic swarm for a search mission. Existing trust models in human-in-the-loop systems are based on the task performance of robots. However, we find that humans tend to base their decisions on the physical characteristics of the swarm rather than on its performance, since the task performance of a swarm is not clearly perceivable by humans. We formulate trust as a Markov decision process whose state space includes the physical parameters of the swarm. We employ an inverse reinforcement learning algorithm to learn the operator's behavior from a single demonstration. The learned behavior is used to predict the operator's trust level based on the features of the swarm.
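
As a rough illustration of the pipeline the abstract describes, learning a reward over physical swarm features from a single demonstration and using it to score swarm states, here is a Python sketch using a feature-expectation-matching update in the spirit of apprenticeship learning. It is not necessarily the exact IRL algorithm used in the paper, and the features, trajectory, and numbers are invented.

```python
# Minimal sketch: learn a linear reward w . phi(state) over physical swarm
# features from one demonstration via feature-expectation matching.
# Features, data, and update rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def phi(state):
    """Physical swarm features, e.g. [density, connectivity, heading variance]."""
    return np.asarray(state, dtype=float)

# One expert demonstration: a short trajectory of feature states.
demo = [(0.9, 0.8, 0.1), (0.8, 0.9, 0.2), (0.95, 0.85, 0.1)]
mu_expert = np.mean([phi(s) for s in demo], axis=0)

# Candidate states the current policy visits (random here, for the sketch).
visited = rng.uniform(0, 1, size=(50, 3))

w = np.zeros(3)
for _ in range(100):
    # Greedy "policy": pick the visited states the current reward prefers.
    scores = visited @ w
    mu_policy = visited[np.argsort(scores)[-len(demo):]].mean(axis=0)
    w += 0.1 * (mu_expert - mu_policy)   # push reward toward expert features

# A state's reward under w can then serve as one input when predicting the
# operator's trust level from the swarm's physical features.
print("learned weights:", np.round(w, 2))
print("reward of demo-like state:", phi((0.9, 0.85, 0.12)) @ w)
```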