Biblio
Trust is an important characteristic of successful interactions between humans and agents in many scenarios. Self-driving scenarios are of particular relevance when discussing trust because of the high risk associated with erroneous decisions. The present study investigates decision-making and aspects of trust in a realistic driving scenario in which an autonomous agent provides guidance to humans. To this end, a simulated driving environment based on a college campus was developed and presented. An online and an in-person experiment were conducted to examine the impact of mistakes made by the self-driving AI agent on participants' decisions and trust. During the experiments, participants were asked to complete a series of driving tasks and make a sequence of decisions under time pressure. Behavior analysis indicated a similar relative trend in decisions across the two experiments. Survey results revealed that a mistake made by the self-driving AI agent at the beginning had a significant impact on participants' trust. In addition, participants reported similar overall experiences and feelings across the two experimental conditions. The findings add to our understanding of trust in human-robot interaction scenarios and provide valuable insights for future research on human-robot trust.
Trust is a key element of successful human-robot interaction. One challenging problem in this domain is how to construct a formulation that optimally models this trust phenomenon. This paper presents a framework for modeling human-robot trust that represents the human decision-making process in terms of trust states. Using this formulation, we then discuss a generalized model of human-robot trust based on Hidden Markov Models and Logistic Regression. The proposed approach is validated on datasets collected from two different human subject studies in which the human is given the ability to take advice from a robot. Both experimental scenarios were time-sensitive, in that the human had to make a decision within a limited time period, but each scenario featured a different level of cognitive load. The experimental results demonstrate that the proposed formulation can be used to model trust, in that the system can predict whether the human will decide to take advice from the robot or not. We also found that prediction performance degrades after the robot makes a mistake. The validation of this approach on two scenarios suggests that the model can be applied to other interactive scenarios as long as the interaction dynamics fit the proposed formulation. Directions for future improvements are discussed.
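To make the modeling idea concrete, the following is a minimal sketch (in Python) of a trust formulation in this spirit: a hidden trust state evolves via a Markov transition matrix, a per-state logistic model gives the probability that the human takes the robot's advice, and a forward-filtering step updates the belief over the trust state after each decision. The feature names, weights, and transition probabilities below are illustrative assumptions, not the parameters learned in the paper.

# Minimal sketch (not the paper's implementation): trust is a hidden state with
# two values, and a per-state logistic model gives the probability that the
# human takes the robot's advice. A forward-filtering step keeps a belief over
# the trust state after each decision. All weights and feature names below are
# illustrative assumptions.
import numpy as np

STATES = ["low_trust", "high_trust"]
T = np.array([[0.8, 0.2],          # P(next state | low_trust)
              [0.3, 0.7]])         # P(next state | high_trust)

# Per-state logistic-regression weights over features
# [bias, robot_correct_last_time, time_pressure] (assumed for illustration).
W = np.array([[-1.0, 0.5, -0.5],   # low trust: advice rarely taken
              [ 1.0, 1.0, -0.2]])  # high trust: advice usually taken

def p_take_advice(state_idx, features):
    """P(human takes advice | trust state, interaction features)."""
    z = W[state_idx] @ np.concatenate(([1.0], features))
    return 1.0 / (1.0 + np.exp(-z))

def forward_step(belief, features, advice_taken):
    """Predict with the transition matrix, then reweight by the observed decision."""
    belief = T.T @ belief
    like = np.array([p_take_advice(s, features) if advice_taken
                     else 1.0 - p_take_advice(s, features)
                     for s in range(len(STATES))])
    belief = belief * like
    return belief / belief.sum()

belief = np.array([0.5, 0.5])
for features, took in [(np.array([1.0, 0.0]), True),    # robot was right, advice taken
                       (np.array([0.0, 1.0]), False)]:  # robot erred, advice rejected
    belief = forward_step(belief, features, took)
    print(dict(zip(STATES, belief.round(3))))

In the paper's setting, the transition and emission parameters would instead be estimated from the human-subject datasets rather than fixed by hand as they are here.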
Resource scheduling in a computing system addresses the problem of packing tasks with multi-dimensional resource requirements and non-functional constraints. The heterogeneity of workload and server characteristics in Cloud-scale or Internet-scale systems adds further complexity and new challenges to the problem. Compared with existing solutions based on ad-hoc heuristics, Machine Learning (ML) has the potential to further improve the efficiency of resource management in large-scale systems. In this paper, we describe and discuss how ML can be used to automatically understand both workloads and environments, and to help cope with scheduling-related challenges such as consolidating co-located workloads, handling resource requests, guaranteeing applications' QoS, and mitigating stragglers in the latency tail. We introduce a generalized ML-based solution to large-scale resource scheduling and demonstrate its effectiveness through a case study of performance-centric node classification and straggler mitigation. We believe that an ML-based method will help achieve architectural optimization and efficiency improvements.
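As a rough illustration of performance-centric node classification, the sketch below (Python, synthetic data) trains a classifier to flag nodes that are likely to straggle and skips them when placing latency-sensitive tasks. The metric names, labels, and classifier choice are assumptions for illustration only and do not reproduce the case study in the paper.

# Minimal sketch of performance-centric node classification, under assumed
# metrics (cpu_util, mem_pressure, io_wait) and synthetic labels; not the
# system described in the paper. Nodes predicted as likely stragglers are
# skipped when placing latency-sensitive tasks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic training data: nodes with high utilisation / io_wait tend to straggle.
X = rng.uniform(0, 1, size=(500, 3))              # [cpu_util, mem_pressure, io_wait]
y = (0.5 * X[:, 0] + 0.2 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.05, 500)) > 0.55

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def pick_nodes(candidate_metrics):
    """Return indices of candidate nodes classified as unlikely to straggle."""
    weak = clf.predict(candidate_metrics).astype(bool)
    return [i for i, w in enumerate(weak) if not w]

candidates = np.array([[0.9, 0.8, 0.7],   # heavily loaded node
                       [0.2, 0.1, 0.1]])  # lightly loaded node
print(pick_nodes(candidates))             # expected: [1] under the synthetic model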
With recent advances in computing, artificial intelligence (AI) is quickly becoming a key component in the future of advanced applications. In one application in particular, AI has played a major role: revolutionizing traditional healthcare assistance. Using embodied interactive agents, or interactive robots, in healthcare scenarios has emerged as an innovative way to interact with patients. As an essential factor for interpersonal interaction, trust plays a crucial role in establishing and maintaining a patient-agent relationship. In this paper, we discuss a healthcare study in which we examine aspects of trust between humans and interactive robots during a therapy intervention in which the agent provides corrective feedback. A total of twenty participants were randomly assigned to receive corrective feedback from either a robotic agent or a human agent. Survey results indicate that trust in a therapy intervention coupled with a robotic agent is comparable to trust in an intervention coupled with a human agent. Results also show a trend toward a medium-sized effect of agent condition on trust. In addition, we found that participants in the robot therapist condition were 3.5 times more likely to have trust involved in their decision than participants in the human therapist condition. These results indicate that deploying interactive robot agents in healthcare scenarios has the potential to maintain quality of health for future generations.
With recent advances in robotics, it is expected that robots will become increasingly common in human environments, such as homes and workplaces, where they will assist and collaborate with humans on a variety of tasks. During these collaborations, disagreements between human and robot decisions are inevitable. Among the factors that determine whose decision a human ultimately follows, their own or the robot's, trust is critical. This study investigates individuals' behaviors and aspects of trust in a problem-solving situation in which a decision must be made within a bounded amount of time. A between-subject experiment was conducted with 100 participants. With the assistance of a humanoid robot, participants were asked to complete a cognitive task within a given time frame. Each participant was randomly assigned to one of two initial conditions: 1) a working robot that provided a correct answer or 2) a faulty robot that provided an incorrect answer. The impact of the faulty robot behavior on participants' decisions to follow the robot's suggested answer was analyzed. Survey responses about trust were collected after participants interacted with the robot. Results indicated that the first impression has a significant impact on participants' willingness to trust a robot's advice during a disagreement. In addition, this study found evidence that individuals still place trust in a malfunctioning robot even after they have observed its faulty behavior.
This Innovative Practice Full Paper describes our experience teaching cybersecurity topics using guided inquiry collaborative learning. The goal is to develop not only the students' in-depth technical knowledge but also "soft skills" such as communication, attitude, teamwork, networking, problem-solving, and critical thinking. This paper reports our experience developing and using Guided Inquiry Collaborative Learning materials on the topics of firewalls and IPsec. Pre- and post-surveys were conducted to assess the effectiveness of the developed materials and teaching methods in terms of learning outcomes, attitudes, learning experience, and motivation. Analysis of the survey data shows that with Guided Inquiry Collaborative Learning, students achieved better learning outcomes, participated more in class, and showed greater interest.
Although Stylometry has been used effectively for Authorship Attribution, a growing number of methods are being developed that allow authors to mask their identity [2, 13]. In this paper, we investigate the use of non-traditional feature sets for Authorship Attribution. By using non-traditional feature sets, one may be able to reveal the identity of adversarial authors who are attempting to evade detection by Authorship Attribution systems based on more traditional feature sets. In addition, we demonstrate how GEFeS (Genetic & Evolutionary Feature Selection) can be used to evolve high-performance hybrid feature sets composed of two non-traditional feature sets for Authorship Attribution: LIWC (Linguistic Inquiry & Word Count) and Sentiment Analysis. These hybrids reduced the Adversarial Effectiveness on a test set presented in [2] by approximately 33.4%.
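The sketch below (Python, synthetic data) illustrates the general evolutionary-feature-selection idea behind GEFeS: a population of binary masks over a hybrid feature vector (LIWC-style counts plus sentiment scores) is evolved by selection and mutation, with cross-validated attribution accuracy as the fitness. The data, fitness classifier, and evolutionary parameters are illustrative assumptions rather than the configuration used in the paper.

# Minimal sketch of evolutionary feature selection in the spirit of GEFeS:
# binary masks over a hybrid feature vector are evolved (selection + mutation)
# to maximise cross-validated attribution accuracy. Everything here is a
# simplified, assumed setup, not the paper's experiment.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_authors, docs_per_author, n_features = 5, 20, 40   # e.g. 30 LIWC + 10 sentiment dims
X = np.vstack([rng.normal(a, 1.0, size=(docs_per_author, n_features))
               for a in range(n_authors)])
y = np.repeat(np.arange(n_authors), docs_per_author)

def fitness(mask):
    """Attribution accuracy of a k-NN classifier restricted to the masked features."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(3), X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, n_features))       # population of feature masks
for _ in range(30):                                    # a few generations
    fits = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fits)[-10:]]              # keep the best half
    children = parents[rng.integers(0, 10, 10)].copy() # clone random parents
    flips = rng.random(children.shape) < 0.05          # mutate: flip ~5% of bits
    children[flips] ^= 1
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", int(best.sum()), "accuracy:", round(fitness(best), 3))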
We are currently witnessing the development of increasingly effective author identification systems (AISs) that have the potential to track users across the internet based on their writing style. In this paper, we discuss two methods for providing user anonymity with respect to writing style: Adversarial Stylometry and Adversarial Authorship. With Adversarial Stylometry, a user attempts to obfuscate their writing style by consciously altering it. With Adversarial Authorship, a user selects an author cluster target (ACT) and writes toward this target with the intention of subverting an AIS so that the user's writing sample will be misclassified. Our results show that Adversarial Authorship via interactive evolutionary hill-climbing outperforms Adversarial Stylometry.
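The following sketch captures the hill-climbing component of Adversarial Authorship in simplified form: a document's stylometric feature vector is nudged, one feature at a time, toward a chosen author-cluster-target centroid, keeping only edits that reduce the distance. In the actual approach the changes are made interactively by a human writer; the feature space, centroids, and step size here are illustrative assumptions.

# Minimal sketch of the hill-climbing idea behind Adversarial Authorship: a
# document's stylometric feature vector is moved toward a chosen author-cluster
# target (ACT) centroid by accepting only distance-reducing tweaks. The feature
# space and values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
doc = rng.uniform(0, 1, size=10)          # current writing-style features
act_centroid = rng.uniform(0, 1, size=10) # chosen author-cluster target

def distance(v):
    return np.linalg.norm(v - act_centroid)

for _ in range(200):                       # greedy hill-climbing
    i = rng.integers(len(doc))
    candidate = doc.copy()
    candidate[i] += rng.normal(0, 0.05)    # small stylistic tweak on one feature
    if distance(candidate) < distance(doc):
        doc = candidate                    # keep edits that move toward the ACT

print("final distance to target cluster:", round(distance(doc), 3))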
The spatial information network is an important part of the integrated space-terrestrial information network; its bearer services are becoming increasingly complex, and real-time requirements are rising. The structural vulnerability and dynamics of the spatial information network pose a serious challenge to ensuring reliable and stable data transmission. Software-Defined Networking (SDN), as a new network architecture, not only can quickly adapt to new services but also makes network reconfiguration more intelligent. In this paper, SDN is used to design the spatial information network architecture, and an SDN-based optimization algorithm for network self-healing is proposed to handle switching-node failures. While guaranteeing Quality of Service (QoS) requirements, the algorithm updates as few links as possible to achieve fast network reconfiguration and recovery. Simulation results show that the proposed algorithm can effectively reduce the delay caused by fault recovery.
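As a simplified illustration of the self-healing idea, the sketch below (Python with networkx) removes a failed switching node, enumerates the surviving paths for an affected flow, filters out those violating a delay (QoS) bound, and picks the path that reuses the most links of the original route, i.e. the one requiring the fewest link updates. The topology, link delays, and delay bound are illustrative assumptions, not the simulation setup of the paper.

# Minimal sketch of QoS-aware rerouting with minimal link updates after a
# switching-node failure. Topology and parameters are assumed for illustration.
import networkx as nx

G = nx.Graph()
for u, v, d in [("s1", "s2", 5), ("s2", "s4", 5), ("s1", "s3", 7),
                ("s3", "s4", 7), ("s2", "s3", 3)]:
    G.add_edge(u, v, delay=d)

def links(path):
    return {frozenset(e) for e in zip(path, path[1:])}

def reroute(g, old_path, failed_node, src, dst, delay_bound):
    g = g.copy()
    g.remove_node(failed_node)                        # drop the failed switch
    candidates = []
    for p in nx.all_simple_paths(g, src, dst):
        delay = sum(g[u][v]["delay"] for u, v in zip(p, p[1:]))
        if delay <= delay_bound:                      # QoS (delay) check
            changed = len(links(p) - links(old_path)) # new links to be installed
            candidates.append((changed, delay, p))
    if not candidates:
        return None
    return min(candidates, key=lambda c: (c[0], c[1]))[2]

old = ["s1", "s2", "s4"]
print(reroute(G, old, failed_node="s2", src="s1", dst="s4", delay_bound=20))
# -> ['s1', 's3', 's4']

Preferring the candidate with the fewest new links is one simple proxy for the "update the least links" objective described above; the paper's algorithm operates within the SDN controller and handles the full set of affected flows.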