Bibliography
A significant amount of work has been invested in human-machine teaming (HMT) across multiple fields. Accurately and effectively measuring the system performance of an HMT is crucial to advancing the design of these systems. Metrics are the enabling tools for devising a benchmark in any system and serve as an evaluation platform for assessing a system's performance, verification, and validation. Currently, there is no agreed-upon set of benchmark metrics for developing HMT systems; identifying and classifying common metrics is therefore imperative for creating a benchmark in the HMT field. The key focus of this review is a detailed survey that identifies the metrics employed in different segments of HMT and determines the common metrics that can be used in the future to benchmark HMTs. The review is organized as follows: identification of the metrics used in HMTs to date, and classification based on functionality and measurement technique. We also analyze all the identified metrics in detail, classifying them as theoretical, applied, real-time, non-real-time, measurable, or observable. We conclude with a detailed analysis of the identified common metrics and their use in benchmarking HMTs.
As designers conceive and implement what are commonly (but mistakenly) called autonomous systems, they adhere to certain myths of autonomy that are damaging not only in their own right but also through their continued propagation. This article debunks these myths and gives reasons why each should be called out and cast aside.
Alex Endert's dissertation "Semantic Interaction for Visual Analytics: Inferring Analytical Reasoning for Model Steering" described semantic interaction, a user interaction methodology for visual analytics (VA). It showed that user interaction embodies users' analytic process and can thus be mapped to model-steering functionality for "human-in-the-loop" system design. The dissertation contributed a framework (or pipeline) that describes such a process, a prototype VA system to test semantic interaction, and a user evaluation to demonstrate semantic interaction's impact on the analytic process. This work is influencing current VA research and has implications for its future.
We propose 10 challenges for making automation components into effective "team players" when they interact with people in significant ways. Our analysis is based on some of the principles of human-centered computing that we have developed individually and jointly over the years, and is adapted from a more comprehensive examination of common ground and coordination.