Biblio
Automation reliability is an important factor that may affect human trust in automation, which has been shown to strongly influence the way the human operator interacts with the automated system. If the trust level is too low, the human operator may not utilize the automated system as expected; if the trust level is too high, the over-trust may lead to automation biases. In either case, the overall system performance will be undermined --- after all, the ultimate goal of human-automation collaboration is to improve performance beyond what would be achieved by either alone. Most past research has manipulated automation reliability through “experience”. That is, participants perform a certain task with an automated system that has a certain level of reliability (e.g., an automated warning system providing valid warnings 75% of the time). During or after the task, participants’ trust in and reliance on the automated system are measured, as well as their performance. However, research has shown that participants’ perceived reliability usually differs from the actual reliability. In a real-world situation, it is very likely that the exact reliability can be described to the human operator (i.e., through “description”). A description-experience gap has been found robustly in human decision-making studies, according to which there are systematic differences between decisions made from description and decisions made from experience. The current study examines the possible description-experience gap in the effect of automation reliability on human trust, reliance, and performance in the context of phishing. Specifically, the research investigates how the reliability of phishing warnings influences people's decisions about whether to proceed upon receiving a warning. The effect of the reliability of an automated phishing warning system is manipulated through experience with the system or through description of it.
These two types of manipulations are directly compared, and the measures of interest are human trust in the warning (a subjective rating of how trustworthy the warning system is), human reliance on the automated system (an objective measure of whether the participants comply with the system’s warnings), and performance (the overall quality of the decisions made).
The successful operation of modern power grids is highly dependent on a reliable and efficient underlying communication network. Researchers and utilities have started to explore the opportunities and challenges of applying the emerging software-defined networking (SDN) technology to enhance the efficiency and resilience of the Smart Grid. This trend calls for a simulation-based platform that provides sufficient flexibility and controllability for evaluating network application designs and facilitating the transition from in-house research ideas to real products. In this paper, we present DSSnet, a hybrid testing platform that combines a power distribution system simulator with an SDN emulator to support high-fidelity analysis of communication network applications and their impacts on the power systems. Our contributions lie in the design of a virtual time system with tight controllability over the execution of the emulation system, i.e., pausing and resuming any specified container processes in the perception of their own virtual clocks, with little overhead (scaling to 500 emulated hosts with an average of 70 ms overhead), and in the efficient synchronization of the two sub-systems based on the virtual time. We evaluate the system performance of DSSnet, and also demonstrate its usability through a case study evaluating a load-shifting algorithm.
Best Poster Award, Illinois Institute of Technology Research Day, April 11, 2016.
Humans can easily find themselves in high cost situations where they must choose between suggestions made by an automated decision aid and a conflicting human decision aid. Previous research indicates that humans often rely on automation or other humans, but not both simultaneously. Expanding on previous work conducted by Lyons and Stokes (2012), the current experiment measures how trust in automated or human decision aids differs along with perceived risk and workload. The simulated task required 126 participants to choose the safest route for a military convoy; they were presented with conflicting information from an automated tool and a human. Results demonstrated that as workload increased, trust in automation decreased. As the perceived risk increased, trust in the human decision aid increased. Individual differences in dispositional trust correlated with an increased trust in both decision aids. These findings can be used to inform training programs for operators who may receive information from human and automated sources. Examples of this context include: air traffic control, aviation, and signals intelligence.
As our ground transportation infrastructure modernizes, the large amount of data being measured, transmitted, and stored motivates an analysis of the privacy aspect of these emerging cyber-physical technologies. In this paper, we consider privacy in the routing game, where the origins and destinations of drivers are considered private. This is motivated by the fact that this spatiotemporal information can easily be used as the basis for inferences about a person's activities. More specifically, we consider the differential privacy of the mapping from the amount of flow for each origin-destination pair to the traffic flow measurements on each link of a traffic network. We use a stochastic online learning framework for the population dynamics, which is known to converge to the Nash equilibrium of the routing game. We analyze the sensitivity of this process and provide theoretical guarantees on the convergence rates as well as differential privacy values for these models. We confirm these with simulations on a small example.
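Differential-privacy guarantees of the kind analyzed above are typically obtained by perturbing each published quantity with Laplace noise calibrated to the mapping's sensitivity. A minimal stdlib-only sketch of that standard mechanism applied to link-flow measurements (function and parameter names are illustrative, not from the paper):

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def privatize_flows(link_flows, sensitivity, epsilon):
    """Standard Laplace mechanism: adding Laplace(sensitivity / epsilon)
    noise to each measurement yields epsilon-differential privacy with
    respect to changes bounded by `sensitivity`."""
    scale = sensitivity / epsilon
    return [f + laplace_noise(scale) for f in link_flows]
```

Smaller epsilon means stronger privacy but noisier published flows, which is exactly the privacy-accuracy tradeoff the paper quantifies for the learning dynamics.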
Intrusion detection systems (IDSs) have assumed increasing importance in recent decades as information systems become ubiquitous. Despite the abundance of intrusion detection algorithms developed so far, there is still no single detection algorithm or procedure that can catch all possible intrusions; moreover, simultaneously running all these algorithms may not be feasible for practical IDSs due to resource limitations. For these reasons, effective IDS configuration becomes crucial for real-time intrusion detection. However, the uncertainty in the intruder’s type and the (often unknown) dynamics of the target system pose challenges to IDS configuration. Considering these challenges, the IDS configuration problem is formulated as an incomplete-information stochastic game in this work, and a new algorithm, Bayesian Nash-Q learning, that combines conventional reinforcement learning with a Bayesian type-identification procedure is proposed. Numerical results show that the proposed algorithm can identify the intruder’s type with high fidelity and provide effective configuration.
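The Bayesian type-identification step in an algorithm like this can be illustrated with a plain posterior update over candidate intruder types. A minimal sketch in Python; the type names, observations, and likelihood values are hypothetical, not taken from the paper:

```python
def bayes_update(belief, likelihoods, observation):
    """One step of Bayesian intruder-type identification.

    belief[t] is the prior P(type = t); likelihoods[t][observation] is
    P(observation | type = t). Returns the normalized posterior.
    """
    posterior = {t: belief[t] * likelihoods[t][observation] for t in belief}
    total = sum(posterior.values())
    return {t: p / total for t, p in posterior.items()}

# Hypothetical example: two intruder types and one observed alert.
belief = {"scanner": 0.5, "worm": 0.5}
likelihoods = {"scanner": {"port_probe": 0.8, "payload": 0.2},
               "worm": {"port_probe": 0.3, "payload": 0.7}}
belief = bayes_update(belief, likelihoods, "port_probe")
# The posterior now favors "scanner", since it better explains the probe.
```

Repeating this update as alerts arrive is what lets the defender's configuration policy converge on the intruder's type.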
We conducted an MTurk online user study to assess whether directing users to attend to the address bar and the use of domain highlighting lead to better performance at detecting fraudulent webpages. 320 participants were recruited to evaluate the trustworthiness of webpages (half authentic and half fraudulent) in two blocks. In the first block, participants were instructed to judge the webpage’s legitimacy by any information on the page. In the second block, they were directed specifically to look at the address bar. Webpages with domain highlighting were presented in the first block for half of the participants and in the second block for the remaining participants. Results showed that the participants could differentiate the legitimate and fraudulent webpages to a significant extent. When participants were directed to look at the address bar, correct decisions increased for fraudulent webpages (“unsafe”) but did not change for authentic webpages (“safe”). The percentage of correct judgments showed no influence of domain highlighting regardless of whether the decisions were based on any information on the webpage or participants were directed to the address bar. The results suggest that directing users’ attention to the address bar is slightly beneficial at helping users detect phishing webpages, whereas domain highlighting gives almost no additional protection.
We study the problem of an aggregator’s mechanism design for controlling the amount of active, or reactive, power provided, or consumed, by a group of distributed energy resources (DERs). The aggregator interacts with the wholesale electricity market and through some market-clearing mechanism is incentivized to provide (or consume) a certain amount of active (or reactive) power over some period of time, for which it will be compensated. The objective is for the aggregator to design a pricing strategy for incentivizing DERs to modify their active (or reactive) power consumptions (or productions) so that they collectively provide the amount that the aggregator has agreed to provide. The aggregator and DERs’ strategic decision-making process can be cast as a Stackelberg game, in which the aggregator acts as the leader and the DERs are the followers. In previous work [Gharesifard et al., 2013b,a], we have introduced a framework in which each DER uses the pricing information provided by the aggregator and some estimate of the average energy that neighboring DERs can provide to compute a Nash equilibrium solution in a distributed manner. Here, we focus on the interplay between the aggregator’s decision-making process and the DERs’ decision-making process. In particular, we propose a simple feedback-based privacy-preserving pricing control strategy that allows the aggregator to coordinate the DERs so that they collectively provide the amount of active (or reactive) power agreed upon, provided that there is enough capacity available among the DERs. We provide a formal analysis of the stability of the resulting closed-loop system. We also discuss the shortcomings of the proposed pricing strategy, and propose some avenues of future work. We illustrate the proposed strategy via numerical simulations.
In the distributed optimization and iterative consensus literature, a standard problem is for N agents to minimize a function f over a subset of R^n, where the cost function is expressed as f = Σ_i f_i. In this paper, we study the private distributed optimization (PDOP) problem with the additional requirement that the cost functions of the individual agents remain differentially private. The adversary attempts to infer information about the private cost functions from the messages that the agents exchange. Achieving differential privacy requires that any change of an individual’s cost function only results in unsubstantial changes in the statistics of the messages. We propose a class of iterative algorithms for solving PDOP, which achieves differential privacy and convergence to the optimal value. Our analysis reveals the dependence of the achieved accuracy and the privacy levels on the parameters of the algorithm.
The iterative consensus problem requires a set of processes or agents with different initial values to interact and update their states to eventually converge to a common value. Protocols solving iterative consensus serve as building blocks in a variety of systems where distributed coordination is required for load balancing, data aggregation, sensor fusion, filtering, and synchronization. In this paper, we introduce the private iterative consensus problem, where agents are required to converge while protecting the privacy of their initial values from honest-but-curious adversaries. Protecting the initial states, in many applications, suffices to protect all subsequent states of the individual participants.
We adapt the notion of differential privacy to this setting of iterative computation. Next, we present (i) a server-based and (ii) a completely distributed randomized mechanism for solving differentially private iterative consensus with adversaries who can observe the messages as well as the internal states of the server and a subset of the clients. Our analysis establishes the tradeoff between privacy and accuracy.
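The core idea of randomized mechanisms for private consensus — broadcast noisy states rather than true ones, with noise that decays over rounds — can be sketched in a few lines of Python. This is only an illustration of the general pattern under assumed parameters (geometric noise decay, averaging toward the global mean), not the paper's exact mechanism:

```python
import math
import random

def laplace(scale):
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def private_consensus(values, rounds=50, noise0=1.0, decay=0.8):
    """Noisy averaging consensus (illustrative sketch).

    Each round, every agent broadcasts a Laplace-perturbed copy of its
    state and moves halfway toward the mean of the broadcasts. The noise
    scale decays geometrically, so early (most revealing) messages are
    the noisiest while convergence is still obtained.
    """
    x = list(values)
    for k in range(rounds):
        scale = noise0 * decay ** k
        shared = [xi + laplace(scale) for xi in x]   # privatized messages
        avg = sum(shared) / len(shared)
        x = [xi + 0.5 * (avg - xi) for xi in x]      # move toward noisy mean
    return x
```

The agents agree on a common value, but because the average is computed from noisy broadcasts, the consensus value deviates slightly from the true mean; that deviation is the accuracy cost of privacy that the analysis quantifies.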
Many current VM monitoring approaches require guest OS modifications and are also unable to perform application level monitoring, reducing their value in a cloud setting. This paper introduces hprobes, a framework that allows one to dynamically monitor applications and operating systems inside a VM. The hprobe framework does not require any changes to the guest OS, which avoids the tight coupling of monitoring with its target. Furthermore, the monitors can be customized and enabled/disabled while the VM is running. To demonstrate the usefulness of this framework, we present three sample detectors: an emergency detector for a security vulnerability, an application watchdog, and an infinite-loop detector. We test our detectors on real applications and demonstrate that those detectors achieve an acceptable level of performance overhead with a high degree of flexibility.
Deadlock freedom is a key challenge in the design of communication networks. Wormhole switching is a popular switching technique, which is also prone to deadlocks. Deadlock analysis of routing functions is a manual and complex task. We propose an algorithm that automatically proves routing functions deadlock-free or outputs a minimal counter-example explaining the source of the deadlock. Our algorithm is the first to automatically check a necessary and sufficient condition for deadlock-free routing. We illustrate its efficiency on a complex adaptive routing function for torus topologies. Results are encouraging. Deciding deadlock freedom is co-NP-complete for wormhole networks. Nevertheless, our tool proves a 13 × 13 torus deadlock-free within seconds. Finding minimal deadlocks is more difficult. Our tool needs four minutes to find a minimal deadlock in an 11 × 11 torus, while it needs nine hours for a 12 × 12 network.
Memristors are an attractive option for use in future memory architectures due to their non-volatility, high density and low power operation. Notwithstanding these advantages, memristors and memristor-based memories are prone to high defect densities due to the non-deterministic nature of nanoscale fabrication. The typical approach to fault detection and diagnosis in memories entails testing one memory cell at a time. This is time consuming and does not scale for the dense, memristor-based memories. In this paper, we integrate solutions for detecting and locating faults in memristors, and ensure post-silicon recovery from memristor failures. We propose a hybrid diagnosis scheme that exploits sneak-paths inherent in crossbar memories, and uses March testing to test and diagnose multiple memory cells simultaneously, thereby reducing test time. We also provide a repair mechanism that prevents faults in the memory from being activated. The proposed schemes enable and leverage sneak paths during fault detection and diagnosis modes, while still maintaining a sneak-path free crossbar during normal operation. The proposed hybrid scheme reduces fault detection and diagnosis time by ~44%, compared to traditional March tests, and repairs the faulty cell with minimal overhead.
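For contrast with the sneak-path-based scheme described above, the conventional one-cell-at-a-time baseline it improves on is a March test: a fixed sequence of read/write elements swept up and down the address space. A minimal sketch of the classic March C- sequence in Python (the `memory_read`/`memory_write` callbacks are hypothetical stand-ins for the device under test):

```python
def march_test(memory_read, memory_write, n):
    """Classic March C- element sequence over n cells.

    Returns the sorted list of cell addresses whose read-back value
    disagrees with the expected value at any element.
    """
    faults = set()

    def element(addresses, expect, write_val):
        for a in addresses:
            if memory_read(a) != expect:
                faults.add(a)
            if write_val is not None:
                memory_write(a, write_val)

    up, down = range(n), range(n - 1, -1, -1)
    for a in up:                  # M0: write 0 everywhere
        memory_write(a, 0)
    element(up, 0, 1)             # M1: ascending (r0, w1)
    element(up, 1, 0)             # M2: ascending (r1, w0)
    element(down, 0, 1)           # M3: descending (r0, w1)
    element(down, 1, 0)           # M4: descending (r1, w0)
    element(up, 0, None)          # M5: final (r0)
    return sorted(faults)
```

Because every element visits every cell individually, test time grows linearly with memory size, which is exactly the cost the paper's parallel, sneak-path-enabled diagnosis reduces by roughly 44%.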
Defects cluster, and the probability of a multiple fault is significantly higher than just the product of the single fault probabilities. While this observation is beneficial for high yield, it complicates fault diagnosis. Multiple faults will occur especially often during process learning, yield ramp-up and field return analysis. In this paper, a logic diagnosis algorithm is presented which is robust against multiple faults and which is able to diagnose multiple faults with high accuracy even on compressed test responses as they are produced in embedded test and built-in self-test. The developed solution takes advantage of the linear properties of a MISR compactor to identify a set of faults likely to produce the observed faulty signatures. Experimental results show an improvement in accuracy of up to 22 % over traditional logic diagnosis solutions suitable for comparable compaction ratios.
Mobile apps running on smartphones and tablet PCs offer a new possibility to enhance the work of engineers because they provide easy-to-use, touchscreen-based handling and can be used anytime and anywhere. Introducing mobile apps in the engineering domain is difficult because the IT environment is heterogeneous and engineering-specific challenges arise in app development, e.g., large amounts of data and high security requirements. There is a need for an engineering-specific middleware to facilitate and standardize app development. However, neither such a middleware nor a holistic set of requirements for its development yet exists. Therefore, we propose a design method which offers a systematic procedure to develop Mobile Engineering-Application Middleware.
Data mining is the process of finding correlations in relational databases, and it is very helpful to end users for extracting useful business information from large databases. There are different techniques for identifying malicious database transactions. While many existing approaches profile SQL query structures and database user activities to detect intrusions, the log-mining approach automatically discovers anomalous database transactions. Multi-level and multi-dimensional data mining are employed to discover data item dependency rules, data sequence rules, domain dependency rules, and domain sequence rules from the database log containing legitimate transactions. Database transactions that do not comply with the rules are identified as malicious transactions. The log-mining approach can achieve the desired true and false positive rates when the confidence and support thresholds are set appropriately. The implemented system incrementally maintains the data dependency rule sets and optimizes the performance of the intrusion detection process.
This paper proposes a new network-based cyber intrusion detection system (NIDS) using multicast messages in substation automation systems (SASs). The proposed NIDS monitors anomalies and malicious activities of multicast messages based on IEC 61850, e.g., Generic Object Oriented Substation Event (GOOSE) and Sampled Value (SV). It detects anomalies and intrusions that violate predefined security rules using a specification-based algorithm. The performance test has been conducted for different cyber intrusion scenarios (e.g., packet modification, replay, and denial-of-service attacks) using a cyber security testbed. The IEEE 39-bus system model has been used for testing the proposed intrusion detection method against simultaneous cyber attacks. The false negative ratio (FNR) is the number of misclassified abnormal packets divided by the total number of abnormal packets. The results demonstrate that the proposed NIDS achieves a low false negative ratio.
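The false negative ratio defined in the abstract is straightforward to compute. A small helper, with illustrative label values (the string labels are assumptions for the sketch, not the paper's data format):

```python
def false_negative_ratio(labels, predictions):
    """FNR as defined in the abstract: the number of misclassified
    abnormal packets divided by the total number of abnormal packets."""
    abnormal = [i for i, lab in enumerate(labels) if lab == "abnormal"]
    if not abnormal:
        return 0.0
    missed = sum(1 for i in abnormal if predictions[i] != "abnormal")
    return missed / len(abnormal)
```

Note that FNR only measures missed attacks; a complete evaluation would pair it with the false positive ratio over normal packets.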
The number of vulnerabilities and reported attacks on Web systems are showing increasing trends, which clearly illustrates the need for better understanding of malicious cyber activities. In this paper we use clustering to classify attacker activities aimed at Web systems. The empirical analysis is based on four datasets, each several months in duration, collected by high-interaction honeypots. The results show that behavioral clustering analysis can be used to distinguish between attack sessions and vulnerability scan sessions. However, the performance heavily depends on the dataset. Furthermore, the results show that attacks differ from vulnerability scans in a small number of features (i.e., session characteristics). Specifically, for each dataset, the best feature selection method (in terms of the high probability of detection and low probability of false alarm) selects only three features and results in three to four clusters, significantly improving the performance of clustering compared to the case when all features are used. The best subset of features and the extent of the improvement, however, also depend on the dataset.
The EIIM model for ER allows for the creation and maintenance of persistent entity identity structures. It accomplishes this through a collection of batch configurations that allow updates and asserted fixes to be made to the identity knowledge base (IKB). The model also provides a batch IR configuration that performs no maintenance activity but instead allows access to the identity information. This batch IR configuration is limited in a few ways: it is driven by the same rules used for maintaining the IKB, has no inherent method to identify "close" matches, and can only identify and return positive matches. Through the decoupling of this configuration and its movement into an interactive role under the umbrella of an Identity Management Service, a more robust access method can be provided for the use of identity information. This more robust access improves the quality of the information along multiple Information Quality dimensions.
Nearest neighbor search is a fundamental problem in various research fields such as machine learning, data mining, and pattern recognition. Recently, hashing-based approaches, for example locality sensitive hashing (LSH), have proved effective for scalable high-dimensional nearest neighbor search. Many hashing algorithms find their theoretical roots in random projection. Since these algorithms generate the hash tables (projections) randomly, a large number of hash tables (i.e., long codewords) are required in order to achieve both high precision and recall. To address this limitation, we propose a novel hashing algorithm called density sensitive hashing (DSH) in this paper. DSH can be regarded as an extension of LSH. By exploring the geometric structure of the data, DSH avoids purely random projection selection and uses those projective functions which best agree with the distribution of the data. Extensive experimental results on real-world data sets show that the proposed method achieves better performance than state-of-the-art hashing approaches.
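Standard random-projection LSH, the purely random baseline that DSH refines with data-dependent projections, can be sketched in a few lines of Python: each hash bit is the sign of the dot product with a random hyperplane, so nearby points tend to share most bits. All names and parameters below are illustrative:

```python
import random

def make_hash_function(dim, n_bits, seed=0):
    """Random-projection LSH: draw n_bits random Gaussian hyperplanes;
    hash a vector to the tuple of sign bits of its projections."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0.0, 1.0) for _ in range(dim)]
              for _ in range(n_bits)]
    def hash_vec(v):
        return tuple(int(sum(p * x for p, x in zip(plane, v)) >= 0)
                     for plane in planes)
    return hash_vec

h = make_hash_function(dim=3, n_bits=8)
code = h([1.0, 2.0, 3.0])   # an 8-bit binary code for the vector
```

Because the hyperplanes are chosen independently of the data, many such tables (long codewords) are needed for good precision and recall, which is precisely the limitation DSH targets by picking projections that agree with the data distribution.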
Descriptors such as local binary patterns perform well for face recognition. Searching large databases using such descriptors has been problematic due to the cost of linear search and the inadequate performance of existing indexing methods. We present Discrete Cosine Transform (DCT) hashing for creating index structures for face descriptors. Hashes play the role of keywords: an index is created and queried to find the images most similar to the query image. Common hash suppression is used to improve retrieval efficiency and accuracy. Results are shown on a combination of six publicly available face databases (LFW, FERET, FEI, BioID, Multi-PIE, and RaFD). It is shown that DCT hashing has significantly better retrieval accuracy and is more efficient than other popular state-of-the-art hash algorithms.
Detecting hardware Trojans is a difficult task in general. The context is that of a fabless design house that sells IP blocks as GDSII hard macros and wants to check that final products have not been infected with Trojans during the foundry stage. In this paper we analyze hardware Trojan horse insertion and detection in the Scalable Encryption Algorithm (SEA) crypto core. We inserted Trojans at different levels in the ASIC design flow of the SEA crypto core, focusing in particular on gate-level and layout-level Trojan insertions. We chose path delays to detect Trojans at both levels in the design phase, because path-delay detection is a cost-effective and efficient method: the comparison of path delays makes even small Trojan circuits significant from a delay point of view. We used typical, fast, and slow 90nm libraries to estimate the efficiency of the path-delay technique under different operating conditions. The experimental results show that the detection rate on payload Trojans is 100%.