Biblio
Cyber attacks and their associated costs have made cybersecurity a vital part of any system, and user behavior and decisions remain a major factor in coping with these risks. We developed a model of optimal investment and human decision making with security measures, given that the effectiveness of each measure depends partly on the performance of the others. In an online experiment, participants classified events as malicious or non-malicious based on the value of an observed variable. Prior to making these decisions, they had invested in three security measures: a firewall, an IDS, and cyber insurance. In three experimental conditions, maximal investment in only one of the measures was optimal, while in a fourth condition, participants should not have invested in any of the measures. A previous paper presents the analysis of the investment decisions; this paper reports users' classifications of events when interacting with these systems. The use of security mechanisms helped participants gain higher scores, and participants benefited in particular from purchasing the IDS and/or cyber insurance. Participants also showed higher sensitivity and greater compliance with the alerting system when they could benefit from investing in the IDS. However, participants did not adjust their behavior optimally to the security settings they had chosen. The results demonstrate the complex nature of risk-related behaviors and the need to consider human abilities and biases when designing cybersecurity systems.
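The sensitivity reported in this abstract is conventionally quantified in signal detection terms. As a rough illustration only (standard SDT formulas with placeholder counts, not data from the paper), d' and the response criterion can be computed as follows:

```python
# Illustrative only: sensitivity (d') and response criterion (c) from hit
# and false-alarm rates, as commonly used to analyze malicious/benign
# classification decisions. All counts are hypothetical placeholders.
from scipy.stats import norm

hits, misses = 42, 8                  # malicious events: detected / missed
false_alarms, correct_rej = 12, 38    # benign events: flagged / passed

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rej)
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)             # sensitivity
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # response bias
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```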
This research used an Autonomous Security Robot (ASR) scenario to examine public reactions to a robot that possesses the authority and capability to inflict harm on a human. Individual differences in personality and the Perfect Automation Schema (PAS) were examined as predictors of trust in the ASR. Participants (N = 316) from Amazon Mechanical Turk (MTurk) rated their trust in the ASR and their desire to use ASRs in public and military contexts after watching a 2-minute video of the robot interacting with three research confederates, in which the robot used force against one of the confederates with a non-lethal device. Results demonstrated that individual-difference factors were related to trust in and desired use of the ASR. In multiple regression analyses, agreeableness and both facets of the PAS (high expectations and all-or-none beliefs) showed unique associations with trust, while agreeableness, intellect, and high expectations were uniquely related to desired use in both public and military domains. This study showed that individual differences influence trust in and desired use of ASRs, demonstrating that societal reactions to ASRs may vary across individuals.
With recent advances in computing, artificial intelligence (AI) is quickly becoming a key component of advanced future applications. In one application in particular, AI has played a major role: revolutionizing traditional healthcare assistance. Using embodied interactive agents, or interactive robots, in healthcare scenarios has emerged as an innovative way to interact with patients. As an essential factor in interpersonal interaction, trust plays a crucial role in establishing and maintaining a patient-agent relationship. In this paper, we discuss a healthcare-related study in which we examine aspects of trust between humans and interactive robots during a therapy intervention in which the agent provides corrective feedback. Twenty participants were randomly assigned to receive corrective feedback from either a robotic agent or a human agent. Survey results indicate that trust in a therapy intervention with a robotic agent is comparable to trust in an intervention with a human agent. Results also show a trend in which the agent condition has a medium-sized effect on trust. In addition, we found that participants in the robot-therapist condition were 3.5 times more likely to have trust involved in their decisions than participants in the human-therapist condition. These results indicate that deploying interactive robot agents in healthcare scenarios has the potential to maintain quality of health for future generations.
Trust is an important topic in medical human-robot interaction, since patients may be more fragile than other groups of people. This paper investigates users' trust when interacting with a rehabilitation robot. In the study, we examined participants' heart rate and perception of safety in a scenario in which their arm was led by the rehabilitation robot in two types of exercises at three different velocities. Participants' heart rates were measured during each exercise, and participants were asked how safe they felt after each exercise. The results showed that velocity and type of exercise had no significant influence on participants' heart rate, but both significantly influenced how safe participants felt: higher velocities and longer exercises negatively influenced participants' perception of safety.
Human-robot trust is crucial to successful human-robot interaction. We conducted a study with 798 participants distributed across 32 conditions using the four dimensions of human-robot trust (reliable, capable, ethical, sincere) identified by the Multi-Dimensional Measure of Trust (MDMT). We tested whether these dimensions can differentially capture gains and losses in human-robot trust across robot roles and contexts. Using a 4 scenario × 4 trust dimension × 2 change direction between-subjects design, we found the behavior-change manipulation effective for each of the four subscales. However, the pattern of results best supported a two-dimensional conception of trust, with reliable-capable and ethical-sincere as the major constituents.
Insider threats pose a challenge to all companies and organizations. Identifying the culprit after an attack often comes too late and results in detrimental consequences for the organization. The majority of past research on insider threats has focused on post-hoc personality analysis of known insider threats to identify personality vulnerabilities, and it has been proposed that certain personality vulnerabilities place individuals at risk of perpetrating insider threats should the environment and opportunity arise. To that end, this study uses a game-based approach to simulate a scenario of intellectual property theft and to investigate behavioral and personality differences of individuals who exhibit insider-threat-related behavior. Features were extracted from the games, from text collected through implicit and explicit measures, from simultaneous facial-expression recordings, and from personality variables (HEXACO, Dark Triad, and Entitlement Attitudes) calculated from questionnaires. We applied ensemble machine learning algorithms and show that they produce an acceptable balance of precision and recall. Our results showcase the possibility of harnessing personality variables, facial expressions, and linguistic features in the modeling and prediction of insider threats.
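The abstract does not name the ensemble models or the exact feature set; as a minimal sketch only, assuming a soft-voting ensemble over a hypothetical feature matrix X (game, linguistic, facial-expression, and personality features) with binary insider-threat labels y, the precision/recall evaluation could look like this:

```python
# Minimal sketch, not the paper's pipeline: ensemble classification of
# insider-threat-related behavior with cross-validated precision/recall.
import numpy as np
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))    # placeholder feature matrix
y = rng.integers(0, 2, size=120)  # placeholder insider-threat labels

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    voting="soft")
y_pred = cross_val_predict(ensemble, X, y, cv=5)
print(f"precision={precision_score(y, y_pred):.2f}",
      f"recall={recall_score(y, y_pred):.2f}")
```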
As recently demonstrated, Wireless Physical Layer Security (WPLS) has the potential to offer substantial advantages for key management on small, resource-constrained, and therefore low-cost IoT devices, e.g., the widely used 8-bit 8051 MCU. In this paper, we present a WPLS testbed implementation for independent performance and security evaluations. The testbed is based on off-the-shelf hardware and uses the IEEE 802.15.4 communication standard for key extraction and secret-key-rate estimation in real time. The testbed can generically include multiple transceivers to emulate legitimate parties or eavesdroppers. We believe the testbed provides a first step toward making experiment-based WPLS research results comparable. As an example, we present evaluation results of several test cases we performed; for further information we refer to https://pls.rub.de.
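The testbed's exact key-extraction pipeline is not described in the abstract; the sketch below shows only the generic WPLS idea it builds on, i.e., quantizing reciprocal channel measurements (here simulated RSSI values, not testbed data) into key bits and estimating the key disagreement rate:

```python
# Generic WPLS key-extraction sketch with simulated data (not the testbed's
# implementation): Alice and Bob quantize correlated RSSI observations of a
# reciprocal channel; mismatched bits are later removed by reconciliation.
import numpy as np

rng = np.random.default_rng(1)
channel = rng.normal(0.0, 1.0, 256)              # shared reciprocal channel gain
rssi_alice = channel + rng.normal(0, 0.1, 256)   # Alice's noisy observation
rssi_bob = channel + rng.normal(0, 0.1, 256)     # Bob's noisy observation

def quantize(samples):
    """Single-bit quantization around the median, a common WPLS scheme."""
    return (samples > np.median(samples)).astype(int)

bits_a, bits_b = quantize(rssi_alice), quantize(rssi_bob)
kdr = np.mean(bits_a != bits_b)                  # key disagreement rate
print(f"key disagreement rate: {kdr:.3f}")
```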
High-accuracy localization is a prerequisite for many wireless applications. Obtaining accurate location information often requires sharing users' positional knowledge, which brings the risk of leaking location information to adversaries during the localization process. This paper develops a theory and algorithms for protecting location secrecy. In particular, we first introduce a location secrecy metric (LSM) for a general measurement model of an eavesdropper. Compared to previous work, this measurement model accounts for parameters such as channel conditions and time offsets in addition to the positions of users. We determine the expression of the LSM for typical scenarios and show how the LSM depends on the capability of the eavesdropper and the quality of the eavesdropper's measurements. Based on the insights gained from the analysis, we consider a case study in a wireless localization network and develop an algorithm that diminishes the eavesdropper's capabilities by exploiting the reciprocity of channels. Numerical results show that the proposed algorithm can effectively increase the LSM and protect location secrecy.
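The abstract does not spell out the LSM's definition; one plausible estimation-theoretic formalization (an assumption for illustration, not the authors' formula) treats the eavesdropper's observation as a function of the user position plus nuisance parameters and noise, and measures secrecy by the eavesdropper's minimum achievable localization error:

```latex
% Illustrative formalization only; the paper's actual LSM definition is
% not given in the abstract. The eavesdropper observes r = h(p, theta) + n,
% where p is the user's position, theta collects nuisance parameters
% (channel conditions, time offsets), and n is measurement noise.
\[
  \mathbf{r} = h(\mathbf{p}, \boldsymbol{\theta}) + \mathbf{n}
\]
% One natural metric is the eavesdropper's minimum achievable mean squared
% localization error, lower-bounded via the Fisher information J:
\[
  \mathrm{LSM} \triangleq \inf_{\hat{\mathbf{p}}}
  \mathbb{E}\!\left[ \lVert \hat{\mathbf{p}}(\mathbf{r}) - \mathbf{p} \rVert^{2} \right]
  \;\ge\; \operatorname{tr}\!\left( \mathbf{J}(\mathbf{p}, \boldsymbol{\theta})^{-1} \right)
\]
```

Under such a definition, degrading the eavesdropper's channel measurements (e.g., by exploiting channel reciprocity, as the algorithm above does) shrinks J and thereby raises the achievable-error floor, increasing the LSM.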
Cooperative spectrum sensing is often necessary in cognitive radio systems to localize a transmitter by fusing the measurements from multiple sensing radios. However, revealing spectrum sensing information generally also leaks information about the location of the radio that made those measurements. We propose a protocol for performing cooperative spectrum sensing while preserving the privacy of the sensing radios. In this protocol, radios fuse sensing information through a distributed particle filter based on a tree structure. All sensing information is encrypted using public-key cryptography, and one of the radios serves as an anonymizer, whose role is to break the connection between the sensing radios and the public keys they use. We consider a semi-honest (honest-but-curious) adversary model in which there is at most a single adversary that is internal to the sensing network and complies with the specified protocol but wishes to determine information about the other participants. Under this scenario, an adversary may learn the sensing information of some of the radios, but it has no way to tie that information to a particular radio's identity. We test the performance of the proposed distributed, tree-based particle filter using physical measurements of FM broadcast stations.
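Setting the cryptographic and tree-routing layers aside, the core fusion step is a particle filter over candidate transmitter locations. A minimal sketch, assuming a log-distance path-loss RSS model and illustrative placeholder parameters (encryption and the anonymizer role are omitted):

```python
# Minimal fusion-step sketch (encryption, tree routing, and anonymizer
# omitted): a particle filter localizes a transmitter from cooperative
# RSS reports under a log-distance path-loss model. All values are
# illustrative placeholders, not the FM-broadcast measurements above.
import numpy as np

rng = np.random.default_rng(2)
N = 5000
particles = rng.uniform(0, 100, size=(N, 2))  # candidate transmitter positions
weights = np.full(N, 1.0 / N)

def expected_rss(tx, rx, p0=-30.0, alpha=3.0):
    """Expected RSS (dBm) at receiver rx from transmitter tx."""
    d = np.linalg.norm(tx - rx, axis=-1) + 1e-9
    return p0 - 10.0 * alpha * np.log10(d)

sensors = np.array([[10.0, 10.0], [90.0, 20.0], [50.0, 95.0]])
true_tx = np.array([40.0, 60.0])
for rx in sensors:                            # one (decrypted) report per radio
    z = expected_rss(true_tx, rx) + rng.normal(0, 2.0)
    weights *= np.exp(-0.5 * ((z - expected_rss(particles, rx)) / 2.0) ** 2)
    weights /= weights.sum()                  # normalize after each update

print("estimated position:", np.round(weights @ particles, 1))
```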
Data provenance provides a way for scientists to observe how experimental data originates, to convey process history, and to explain influential factors such as experimental rationale and associated environmental factors derived from system metrics measured at runtime. The US Department of Energy Office of Science Integrated end-to-end Performance Prediction and Diagnosis for Extreme Scientific Workflows (IPPD) project has developed a provenance harvester capable of collecting observations from file-based evidence typically produced by distributed applications. To achieve this, file-based evidence is extracted and transformed into an intermediate data format, inspired in part by the W3C CSV on the Web recommendations, called the Harvester Provenance Application Interface (HAPI) syntax. This syntax provides a general means to pre-stage provenance into messages that are both human readable and capable of being written to a provenance store, the Provenance Environment (ProvEn). HAPI is being applied to harvest provenance from climate ensemble runs for the Accelerated Climate Modeling for Energy (ACME) project, funded under the U.S. Department of Energy's Office of Biological and Environmental Research (BER) Earth System Modeling (ESM) program. ACME informally provides provenance in a native form through configuration files, directory structures, and log files that contain success/failure indicators, code traces, and performance measurements. Because of its generic format, HAPI is also being applied to harvest tabular job-management provenance from Belle II DIRAC scheduler relational database tables, as well as from other scientific applications that log provenance-related information.
An important topic in cybersecurity is validating Active Indicators (AI), which are stimuli that can be implemented in systems to trigger responses from individuals who might or might not be Insider Threats (ITs). The way in which a person responds to an AI is being validated for distinguishing a potential threat from a non-threat. To execute this validation process, it is important to create a paradigm that allows manipulation of AIs while measuring responses. The scenarios are posed in a manner that requires participants to be situationally aware that they are being monitored and to act deceptively. In particular, manipulations of the environment should produce no differences between conditions in immersion and ease of use; instead, the narrative should be the driving force behind non-deceptive and IT responses. The success of the narrative and the simulation environment in inducing such behaviors is determined via immersion, usability, and stress-response questionnaires, as well as performance measures. Initial results on the feasibility of using a narrative reliant upon situational awareness of monitoring and evasion are discussed.
Standard classification procedures in both data mining and multivariate statistics are sensitive to the presence of outlying values. In this paper, we propose new algorithms for computing regularized versions of linear discriminant analysis for data with small sample sizes in each group. Further, we propose a highly robust version of regularized linear discriminant analysis. The new method, denoted MWCD-L2-LDA, is based on the idea of implicit weights assigned to individual observations, inspired by the minimum weighted covariance determinant (MWCD) estimator. Classification performance of the new method is illustrated in a detailed analysis of our pilot study of computer authentication methods based on individual typing characteristics, i.e., keystroke dynamics.
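The MWCD-based implicit weighting is too involved to reproduce from the abstract alone; the sketch below conveys only the general shape of a weighted, L2-regularized LDA, with simple distance-based weights standing in for the paper's MWCD weights and an illustrative regularization parameter:

```python
# Sketch only: weighted, regularized LDA in the spirit of MWCD-L2-LDA.
# The distance-based weights below are a stand-in for the paper's implicit
# MWCD weights; lam shrinks the pooled covariance toward the identity.
import numpy as np

def weighted_reg_lda_fit(X, y, lam=0.1):
    classes = np.unique(y)
    d = X.shape[1]
    means, priors = {}, {}
    S, n = np.zeros((d, d)), 0.0
    for c in classes:
        Xc = X[y == c]
        dist = np.linalg.norm(Xc - Xc.mean(axis=0), axis=1)
        w = 1.0 / (1.0 + dist / (np.median(dist) + 1e-12))  # downweight outliers
        mu = (w[:, None] * Xc).sum(axis=0) / w.sum()        # weighted class mean
        Xc_c = Xc - mu
        S += (w[:, None] * Xc_c).T @ Xc_c                   # weighted scatter
        n += w.sum()
        means[c], priors[c] = mu, len(Xc) / len(X)
    S_reg = (1 - lam) * (S / n) + lam * np.eye(d)           # L2 regularization
    return means, priors, np.linalg.inv(S_reg)

def weighted_reg_lda_predict(X, means, priors, S_inv):
    classes = list(means)
    scores = np.column_stack(
        [X @ S_inv @ means[c] - 0.5 * means[c] @ S_inv @ means[c]
         + np.log(priors[c]) for c in classes])
    return np.array(classes)[np.argmax(scores, axis=1)]
```

With small per-group sample sizes, the regularized pooled covariance stays invertible even when the raw scatter matrix is singular, which is the motivation for the L2 term.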