Biblio

Filters: Keyword is robot navigation
2021-05-26
Boursinos, Dimitrios, Koutsoukos, Xenofon.  2020.  Trusted Confidence Bounds for Learning Enabled Cyber-Physical Systems. 2020 IEEE Security and Privacy Workshops (SPW). :228—233.

Cyber-physical systems (CPS) can benefit from the use of learning enabled components (LECs) such as deep neural networks (DNNs) for perception and decision-making tasks. However, DNNs are typically non-transparent, making reasoning about their predictions very difficult, and hence their application to safety-critical systems is very challenging. LECs could be integrated into CPS more easily if their predictions could be complemented with a confidence measure that quantifies how much we trust their output. The paper presents an approach for computing confidence bounds based on Inductive Conformal Prediction (ICP). We train a Triplet Network architecture to learn representations of the input data that can be used to estimate the similarity between test examples and examples in the training data set. These representations are then used to estimate the confidence of set predictions from a classifier based on the neural network architecture used in the triplet. The approach is evaluated using a robotic navigation benchmark, and the results show that trusted confidence bounds can be computed efficiently in real time.
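To make the ICP idea in this abstract concrete, the following is a minimal sketch of conformal set prediction with a nearest-neighbor nonconformity score. The synthetic 2-D Gaussian clusters and the distance-based score are illustrative stand-ins for the paper's learned triplet-network embeddings, and all names here are hypothetical; the calibration/p-value mechanics are standard ICP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for triplet-network embeddings: one 2-D cluster per class.
train_emb = {0: rng.normal(0.0, 0.3, (50, 2)),
             1: rng.normal(3.0, 0.3, (50, 2))}

def nonconformity(x, label):
    """Distance from x to its nearest training embedding of `label`."""
    return np.min(np.linalg.norm(train_emb[label] - x, axis=1))

# Calibration scores: one score per held-out calibration example,
# computed with the example's true label.
cal = [(rng.normal(0.0, 0.3, 2), 0) for _ in range(100)] + \
      [(rng.normal(3.0, 0.3, 2), 1) for _ in range(100)]
cal_scores = np.array([nonconformity(x, y) for x, y in cal])

def set_prediction(x, epsilon=0.1):
    """Return every label whose ICP p-value exceeds the significance epsilon."""
    labels = []
    for label in train_emb:
        a = nonconformity(x, label)
        # p-value: fraction of calibration scores at least as nonconforming.
        p = (np.sum(cal_scores >= a) + 1) / (len(cal_scores) + 1)
        if p > epsilon:
            labels.append(label)
    return labels

print(set_prediction(np.array([0.0, 0.0])))  # near class 0's cluster
print(set_prediction(np.array([3.0, 3.0])))  # near class 1's cluster
```

A point far from both clusters yields an empty prediction set, which is how ICP signals low confidence: under the chosen significance level epsilon, the true label is guaranteed (with probability 1 - epsilon) to fall inside the predicted set, so set size acts as the trusted confidence measure the abstract describes.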

2019-03-04
Moolchandani, Pooja, Hayes, Cory J., Marge, Matthew.  2018.  Evaluating Robot Behavior in Response to Natural Language. Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. :197–198.
Human-robot teaming can be improved if a robot's actions meet human users' expectations. The goal of this research is to determine what variations of robot actions in response to natural language match human judges' expectations in a series of tasks. We conducted a study with 21 volunteers that analyzed how a virtual robot behaved when executing eight navigation instructions from a corpus of human-robot dialogue. Initial findings suggest that movement more accurately meets human expectation when the robot (1) navigates with an awareness of its environment and (2) demonstrates a sense of self-safety.