Bibliography
With limited battery supply, power is a scarce commodity in wireless sensor networks. Thus, to prolong the lifetime of the network, it is imperative that sensor resources are managed effectively. This task is particularly challenging in heterogeneous sensor networks, in which decisions and compromises regarding sensing strategies must be made under time and resource constraints. In such networks, a sensor has to reason about its current state to take actions that are appropriate with respect to its mission, its energy reserve, and the survivability of the overall network. Sensor management controls and coordinates the use of the sensory suites in a manner that maximizes the success rate of the system in achieving its missions. This article focuses on formulating and developing an autonomous, energy-aware sensor management system that strives to achieve network objectives while maximizing network lifetime. A team-theoretic formulation based on the Belief-Desire-Intention (BDI) model and the Joint Intention theory is proposed as a mechanism for effective and energy-aware collaborative decision-making. The proposed system models the collective behavior of the sensor nodes using the Joint Intention theory to enhance the sensors' collaboration and success rate. Moreover, the BDI modeling of sensor operation and reasoning allows a sensor node to adapt to the environment dynamics, the situation-criticality level, and the availability of its own resources. The simulation scenario selected in this work is the surveillance of the Waterloo International Airport. Various experiments are conducted to investigate the effect of varying the network size, the number of threats, threat agility, environment dynamism, as well as tracking quality and energy consumption, on the performance of the proposed system. The experimental results demonstrate the merits of the proposed approach compared to the state-of-the-art centralized approach adapted from Atia et al. [2011] and the localized approach in Hilal and Basir [2015] in terms of energy consumption, adaptability, and network lifetime. In particular, the proposed approach consumes 12× less energy than the popular centralized approach.
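To make the BDI terminology above concrete, the following minimal Python sketch shows what an energy-aware belief-desire-intention cycle for a single sensor node could look like; all class names, thresholds, and energy costs are illustrative assumptions, not taken from the cited work.

```python
# Hypothetical sketch of a BDI-style reasoning cycle for an energy-aware
# sensor node; names, thresholds, and costs are illustrative only.
from dataclasses import dataclass, field


@dataclass
class SensorNode:
    energy_reserve: float                          # remaining energy (assumed units)
    beliefs: dict = field(default_factory=dict)    # perceived world state
    desires: list = field(default_factory=list)    # candidate goals
    intention: str = "idle"                        # currently committed goal

    def perceive(self, observation: dict) -> None:
        """Update beliefs from the latest sensor readings."""
        self.beliefs.update(observation)

    def deliberate(self) -> None:
        """Filter desires by energy, then commit to the top-ranked one."""
        criticality = self.beliefs.get("threat_criticality", 0.0)
        self.desires = ["track_target"] if criticality > 0.7 else ["low_rate_sensing"]
        if self.energy_reserve < 5.0:
            self.desires = ["sleep"]               # too little energy: preserve lifetime
        self.intention = self.desires[0]

    def act(self) -> str:
        """Execute the committed intention and pay its (assumed) energy cost."""
        cost = {"idle": 0.0, "sleep": 0.01, "low_rate_sensing": 0.1, "track_target": 1.0}
        self.energy_reserve -= cost[self.intention]
        return self.intention


node = SensorNode(energy_reserve=50.0)
node.perceive({"threat_criticality": 0.9})
node.deliberate()
print(node.act())   # -> "track_target"
```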
This paper proposes a taxonomy of autonomous vehicle handover situations, with a particular emphasis on situational awareness. It focuses on a number of research challenges, such as legal responsibility, the situational awareness level of the driver and the vehicle, and the knowledge the vehicle must have of the driver's driving skills as well as the in-vehicle context. The taxonomy acts as a starting point for researchers and practitioners to frame the discussion of this complex problem.
We propose PADA, a new power evaluation tool to measure and optimize the power use of mobile sensing applications. Our motivational study with 53 professional developers shows that they face huge challenges in meeting power requirements. The key challenges stem from the significant time and effort required for repetitive power measurements, since the power use of sensing applications needs to be evaluated under various real-world usage scenarios and sensing parameters. PADA enables developers to obtain enriched power information under diverse usage scenarios in development environments, without deploying and testing applications on real phones in real-life situations. We conducted two user studies with 19 developers to evaluate the usability of PADA. We show that developers benefit from using PADA in the implementation and power tuning of mobile sensing applications.
In international military coalitions, situation awareness is achieved by gathering critical intel from different authorities. Authorities want to retain control over their data, which are sensitive by nature, and thus usually employ their own authorization solutions to regulate access to them. In this paper, we highlight that harmonizing authorization solutions at the coalition level raises many challenges. We demonstrate how we address these authorization challenges in the context of a scenario defined by military experts, using a prototype implementation of SAFAX, an XACML-based architectural framework tailored to the development of authorization services for distributed systems.
We propose a new security paradigm that makes cross-layer personalization a premier component in the design of security solutions for computer infrastructure and situational awareness. This paradigm is based on the observation that computer systems have a personalized usage profile that depends on the user and his activities. Further, this profile spans the various layers of abstraction that make up a computer system, as if the user embedded his own DNA into the computer system. To realize such a paradigm, we discuss the design of a comprehensive cross-layer profiling approach, which can be adopted to boost the effectiveness of various security solutions, e.g., malware detection, insider attack prevention, and continuous authentication. The current state of the art in computer infrastructure defense focuses on one layer of operation, with deployments coming in a "one size fits all" format that does not take into account the unique way people use their computers. The key novelty of our proposal is cross-layer personalization, where we derive distinguishable behaviors from the intelligence of three layers of abstraction. First, we combine intelligence from (a) the user layer (e.g., mouse click patterns), (b) the operating system layer, and (c) the network layer. Second, we develop cross-layer personalized profiles for system usage. We limit our scope to companies and organizations, where computers are used in a more routine and one-on-one style, before expanding our research to personally owned computers. Our preliminary results show that the access times in user web logs alone are already sufficient to distinguish users from each other, with users of the same demographics showing similarities in their profiles. Our goal is to challenge today's paradigm for anomaly detection, which seems to follow a monoculture and treat each layer in isolation. We also discuss deployment, performance overhead, and privacy issues raised by our paradigm.
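As a toy illustration of the claim that access times in web logs can distinguish users, the sketch below builds a normalized hourly access histogram per user and compares profiles with cosine similarity; the data and function names are hypothetical, not the authors' profiling pipeline.

```python
# Illustrative sketch only: build a 24-bin hourly access profile per user
# from web-log timestamps and compare profiles with cosine similarity.
from datetime import datetime
from math import sqrt

def hourly_profile(timestamps):
    """timestamps: iterable of datetime objects for one user's accesses."""
    hist = [0.0] * 24
    for t in timestamps:
        hist[t.hour] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]          # normalized time-of-day profile

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a)) or 1.0
    nb = sqrt(sum(y * y for y in b)) or 1.0
    return dot / (na * nb)

# Hypothetical users: one active in the morning, one late at night.
alice = hourly_profile([datetime(2023, 1, d, 9) for d in range(1, 6)])
bob   = hourly_profile([datetime(2023, 1, d, 22) for d in range(1, 6)])
print(cosine(alice, bob))   # near 0: clearly distinguishable profiles
```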
Recent advances in vehicle automation have led to excitement and discourse in academia, industry, the media, and the public. Human factors such as trust and user experience are critical in terms of safety and customer acceptance. One of the main challenges in partial and conditional automation is related to drivers' situational awareness, or a lack thereof. In this paper, we critically analyse state-of-the-art implementations in this arena and present a proactive approach to increasing situational awareness. We propose to make use of augmented reality to carefully design applications aimed at constructs such as amplification and voluntary attention. Finally, we showcase an example application, Pokémon DRIVE, that illustrates the utility of our proposed approach.
Research towards my dissertation has involved a series of perceptual and accessibility-focused studies concerned with the use of tactile cues for spatial and situational awareness, displayed through head-mounted wearables. These studies were informed by an initial participatory design study of mobile technology multitasking and tactile interaction habits. This research has yielded a number of actionable conclusions regarding the development of tactile interfaces for the head, and endeavors to provide greater insight into the design of advanced tactile alerting for contextual and spatial understanding in assistive applications (e.g., for individuals who are blind or those encountering situational impairments), as well as guidance for developers on assessing the interaction between under-utilized sensory modalities and the underlying perceptual and cognitive processes.
This paper describes the challenges of converting the classic Pac-Man arcade game into a virtual reality game. Arcaid provides players with the tools to maintain sufficient situation awareness in an environment where, unlike the classic game, they do not have full view of the game state. We also illustrate methods that can be used to reduce a player's simulation sickness by providing visual focal points for players and designing user interface elements that do not disrupt immersion.
Opportunistic Situation Identification (OSI) is a new paradigm for situation-aware systems, in which the contexts used for situation identification are sensed through whatever sensors happen to be available rather than pre-deployed, application-specific ones. OSI extends the scale of application usage and reduces system costs. However, designing and implementing the OSI module of situation-aware systems encounters several challenges, including the uncertainty of context availability, vulnerable network connectivity, and privacy threats. This paper proposes a novel middleware framework to tackle these challenges; the intuition is to perform situation reasoning locally on a smartphone without relying on the cloud, thus reducing the dependency on the network and better preserving privacy. To realize this, we propose a hybrid learning approach that maximizes reasoning accuracy under the phone's limited storage space by combining two state-of-the-art techniques. Specifically, the paper provides a genetic-algorithm-based optimization approach to determine which pre-computed models should be selected for storage under the storage constraints. Validation of the approach on an open dataset indicates that it achieves higher accuracy with a comparatively small storage cost. Further, the proposed utility function for model selection performs better than three baseline utility functions.
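The genetic-algorithm-based model selection can be viewed as a 0/1 knapsack-style problem; the following sketch shows one possible GA formulation under an assumed storage budget, with illustrative model sizes, utilities, and GA parameters rather than those used in the paper.

```python
# Hypothetical GA for selecting which pre-computed models to store under a
# storage budget; sizes, utilities, and parameters are illustrative.
import random

sizes   = [12, 7, 30, 9, 15, 22]                  # model sizes in MB (assumed)
utility = [0.30, 0.18, 0.55, 0.20, 0.33, 0.40]    # expected accuracy gain (assumed)
BUDGET  = 40                                      # storage budget in MB (assumed)

def fitness(bits):
    size = sum(s for s, b in zip(sizes, bits) if b)
    util = sum(u for u, b in zip(utility, bits) if b)
    return util if size <= BUDGET else 0.0        # infeasible selections score 0

def evolve(pop_size=30, generations=100, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in sizes] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(sizes))  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]               # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))   # selected models and their total utility
```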
Distributed Denial of Service attacks against high-profile targets have become more frequent in recent years. In response to such massive attacks, several architectures have adopted proxies to introduce layers of indirection between end users and target services and to reduce the impact of a DDoS attack by migrating users to new proxies and shuffling clients across proxies so as to isolate malicious clients. However, the reactive nature of these solutions presents weaknesses that we leveraged to develop a new attack - the proxy harvesting attack - which enables malicious clients to collect information about a large number of proxies before launching a DDoS attack. We show that current solutions are vulnerable to this attack, and propose a moving target defense technique that consists of periodically and proactively replacing one or more proxies and remapping clients to proxies. Our primary goal is to disrupt the attacker's reconnaissance effort. Additionally, to mitigate ongoing attacks, we propose a new client-to-proxy assignment strategy that isolates compromised clients, thereby reducing the impact of attacks. We validate our approach both theoretically and through simulation, and show that the proposed solution can effectively limit the number of proxies an attacker can discover and isolate malicious clients.
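A minimal sketch of one possible shuffle-and-remap step is given below; it is not the authors' assignment strategy, but illustrates how clients behind an attacked proxy can be spread across fresh proxies to narrow down the malicious ones (all identifiers are hypothetical).

```python
# Illustrative shuffle-based remapping step, not the cited assignment
# strategy: clients behind an attacked proxy are split across new proxies.
import itertools
import random

_fresh = itertools.count(100)          # generator of new proxy IDs (assumed)

def remap_on_attack(assignment, attacked_proxy, splits=2):
    """assignment: dict client -> proxy. Returns the updated assignment."""
    suspects = [c for c, p in assignment.items() if p == attacked_proxy]
    random.shuffle(suspects)
    new_proxies = [next(_fresh) for _ in range(splits)]
    for i, client in enumerate(suspects):
        assignment[client] = new_proxies[i % splits]   # spread suspects out
    return assignment

assignment = {f"client{i}": 1 for i in range(8)}
print(remap_on_attack(assignment, attacked_proxy=1))
```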
Compression is desirable for network applications as it saves bandwidth. However, when data is compressed before being encrypted, the amount of compression leaks information about the amount of redundancy in the plaintext. This side channel has led to the “Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext” (BREACH) attack on web traffic protected by the TLS protocol. The general guidance for preventing this attack is to disable HTTP compression, preserving confidentiality but sacrificing bandwidth. As a more sophisticated countermeasure, fixed-dictionary compression was introduced in 2015, enabling compression while protecting high-value secrets, such as cookies, from attacks. The fixed-dictionary compression method is a cryptographically sound countermeasure against the BREACH attack, since it is proven secure in a suitable security model. In this project, we integrate the fixed-dictionary compression method as a countermeasure against the BREACH attack in a real-world client-server setting. Further, we measure the performance of the fixed-dictionary compression algorithm against the DEFLATE compression algorithm. The results show that it is possible to save some amount of bandwidth, with reasonable compression/decompression time compared to DEFLATE operations. The countermeasure is easy to implement and deploy; hence, it is a promising direction for mitigating the BREACH attack efficiently, rather than stripping off HTTP compression entirely.
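For a rough feel of the idea, the sketch below uses zlib's preset-dictionary mode; this is only an approximation of the cited fixed-dictionary scheme (which additionally forbids matches against the message itself), and the header strings are hypothetical.

```python
# Rough illustration only: zlib's preset dictionary is NOT the cited
# fixed-dictionary countermeasure, but it shows how a shared dictionary
# improves compression of short messages such as HTTP responses.
import zlib

dictionary = b"Content-Type: text/html; charset=utf-8\r\nSet-Cookie: session="
message    = b"Content-Type: text/html; charset=utf-8\r\nSet-Cookie: session=abc123"

plain = zlib.compress(message)                     # ordinary DEFLATE

co = zlib.compressobj(zdict=dictionary)            # compress with preset dictionary
with_dict = co.compress(message) + co.flush()

do = zlib.decompressobj(zdict=dictionary)          # receiver shares the dictionary
assert do.decompress(with_dict) == message

print(len(message), len(plain), len(with_dict))    # dictionary version is smallest
```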
An Industrial Control System (ICS) consists of a large number of electronic devices connected to field devices that execute physical processes. The communication network of an ICS supports a wide range of packet-based applications. Growing network security issues and their impact on ICS have highlighted some fundamental risks to critical infrastructure. To address network security issues for ICS, a clear understanding of security-specific defensive countermeasures is required. Reconnaissance of an ICS network by deep packet inspection (DPI) involves analyzing the contents of captured packets in order to obtain accurate measures of the underlying processes and to build an aggregated security posture from specific countermeasures. In this paper, we present a novel technique that works on captured network traffic. The technique is able to identify protocols and extract features for classifying traffic based on the network protocol, header information, and payload, in order to understand the architecture of the complex system. We also categorize the possible types of attacks on ICS.
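A minimal sketch of per-packet feature extraction for such traffic classification, using the scapy library, is shown below; the capture file name and the chosen feature set are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of per-packet feature extraction with scapy; the capture
# file and feature set are illustrative assumptions.
from scapy.all import rdpcap, IP, TCP, UDP

def extract_features(pcap_path):
    features = []
    for pkt in rdpcap(pcap_path):
        if IP not in pkt:
            continue
        proto = "TCP" if TCP in pkt else "UDP" if UDP in pkt else "OTHER"
        sport = pkt[TCP].sport if TCP in pkt else (pkt[UDP].sport if UDP in pkt else 0)
        dport = pkt[TCP].dport if TCP in pkt else (pkt[UDP].dport if UDP in pkt else 0)
        features.append({
            "src": pkt[IP].src, "dst": pkt[IP].dst,
            "proto": proto, "sport": sport, "dport": dport,
            "len": len(bytes(pkt[IP].payload)),   # transport header + payload
        })
    return features

# rows = extract_features("ics_capture.pcap")   # hypothetical capture file
```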
The anonymizing network Tor is examined as one method of anonymizing port scanning tools and avoiding identification and retaliation. Performing anonymized port scans through Tor is possible using Nmap, but parallelization of the scanning processes is required to accelerate the scan rate.
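The sketch below illustrates the parallelization idea with plain TCP connect probes routed through Tor's local SOCKS5 proxy via the PySocks package, rather than Nmap itself; it assumes Tor is listening on 127.0.0.1:9050, and the target and port range are hypothetical.

```python
# Sketch of parallelized TCP connect probes through Tor's SOCKS5 proxy
# (assumes a local Tor client on 127.0.0.1:9050 and PySocks installed).
from concurrent.futures import ThreadPoolExecutor
import socks  # PySocks

def probe(target, port, timeout=10):
    s = socks.socksocket()
    s.set_proxy(socks.SOCKS5, "127.0.0.1", 9050)   # route through Tor
    s.settimeout(timeout)
    try:
        s.connect((target, port))
        return port, True                          # port accepted the connection
    except OSError:
        return port, False
    finally:
        s.close()

def scan(target, ports, workers=20):
    """Run probes in parallel to offset Tor's per-connection latency."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(lambda p: probe(target, p), ports))

# open_ports = scan("example.onion", range(20, 1025))   # hypothetical target
```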
The American National Standards Institute (ANSI) has standardized an access control approach, Next Generation Access Control (NGAC), that enables simultaneous instantiation of multiple access control policies. For large, complex enterprises, this is critical to limiting the authorized access of insiders. However, the specifications describe the required access control capabilities but not the related algorithms. While appropriate, this leaves open the important question of whether or not NGAC is scalable. Existing cubic-time reference implementations suggest that it is not. For example, the primary NGAC reference implementation took several minutes simply to display the set of files accessible to a user on a moderately sized system. To solve this problem, we provide an efficient access control decision algorithm, reducing the overall complexity from cubic to linear. Our other major contribution is a novel mechanism for administrators and users to review allowed access rights. We provide an interface that appears to be a simple file directory hierarchy but is in reality an automatically generated structure, abstracted from the underlying access control graph, that works with any set of simultaneously instantiated access control policies. Our work thus provides the first efficient implementation of NGAC while enabling user privilege review through a novel visualization approach. These capabilities help limit insider access to information (and thereby limit information leakage) by enabling the efficient simultaneous instantiation of multiple access control policies.
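Conceptually, an NGAC access decision can be framed as reachability over the policy graph; the simplified sketch below illustrates that framing with a tiny hypothetical policy and is not the authors' linear-time algorithm.

```python
# Simplified illustration of an NGAC-style decision as graph reachability;
# the policy graph below is hypothetical, and this is not the cited algorithm.
from collections import deque

def reachable(start, edges):
    """All nodes reachable from start via assignment edges (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# user/object assignment edges and associations (hypothetical policy)
user_edges   = {"alice": ["doctors"], "doctors": ["staff"]}
object_edges = {"record42": ["med-records"]}
associations = [("staff", "read", "med-records")]   # (user-attr, op, obj-attr)

def allowed(user, op, obj):
    user_side = reachable(user, user_edges)
    obj_side  = reachable(obj, object_edges)
    return any(ua in user_side and o == op and oa in obj_side
               for ua, o, oa in associations)

print(allowed("alice", "read", "record42"))   # True under the toy policy
```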
Threat classification is extremely important for individuals and organizations, as it is an important step towards the realization of information security. In fact, with the progress of information technologies (IT), security becomes a major challenge for organizations, which are vulnerable to many types of insider and outsider security threats. This paper deals with threat classification models in order to help managers define threat characteristics and then protect their assets from those threats. Existing threat classification models are incomplete and present non-orthogonal threat classes. The aim of this paper is to suggest a scalable and complete approach that classifies security threats in an orthogonal way.
In this paper, we analyze manipulation methods for the MAC address and the consequent security threats. The Ethernet MAC address is commonly assumed to be unchangeable and is therefore widely treated as platform-unique information. For this reason, various services have been developed that rely on the MAC address. These services use the MAC address as a platform identifier or a password, and a diverse range of security threats arises when the MAC address is manipulated. Therefore, we investigate manipulation methods for the MAC address at different levels of a computing platform and highlight the security threats that result from modification of the MAC address. In particular, we introduce manipulation methods for the original MAC address stored in the EEPROM of the NIC (Network Interface Card) as a hardware-based MAC spoofing attack, which are not generally known approaches. This means that the related services must strive to detect such falsification, and the results of this paper are highly significant for most MAC address-based services.
The threat that malicious insiders pose towards organisations is a significant problem. In this paper, we investigate the task of detecting such insiders through a novel method of modelling a user's normal behaviour in order to detect anomalies in that behaviour which may be indicative of an attack. Specifically, we make use of Hidden Markov Models to learn what constitutes normal behaviour, and then use them to detect significant deviations from that behaviour. Our results show that this approach is indeed successful at detecting insider threats, and in particular is able to accurately learn a user's behaviour. These initial tests improve on existing research and may provide a useful approach in addressing this part of the insider-threat challenge.
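A hedged sketch of the general technique, using the hmmlearn package rather than the authors' implementation, is shown below: a Gaussian HMM is fitted to features of a user's normal activity, and sequences whose average log-likelihood falls well below the baseline are flagged; the features and threshold are illustrative assumptions.

```python
# Sketch of HMM-based anomaly scoring with hmmlearn (not the cited system):
# fit on normal activity features, flag sequences with low log-likelihood.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

# Hypothetical per-event features of normal behaviour
# (e.g. hour of day, MB transferred), one row per logged event.
normal = rng.normal(loc=[9.0, 1.0], scale=[1.0, 0.3], size=(500, 2))

model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(normal)

baseline = model.score(normal) / len(normal)     # avg log-likelihood per event

def is_anomalous(sequence, margin=5.0):
    score = model.score(sequence) / len(sequence)
    return score < baseline - margin             # far below normal likelihood

night_activity = rng.normal(loc=[3.0, 10.0], scale=[0.5, 1.0], size=(50, 2))
print(is_anomalous(night_activity))              # likely True for this data
```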
Since the number of cyber attacks by insider threats and the damage caused by them have been increasing in recent years, organizations are in need of specific security solutions to counter these threats. To limit the damage caused by insider threats, the timely detection of erratic system behavior and malicious activities is of primary importance. We have observed a major paradigm shift towards anomaly-focused detection mechanisms, which try to establish a baseline of system behavior – based on system logging data – and report any deviations from this baseline. While these approaches are promising, they usually have to cope with scalability issues. As the amount of log data generated during IT operations grows exponentially, high-performance security solutions are required that can handle this huge amount of data in real time. In this paper, we demonstrate how high-performance bioinformatics tools can be leveraged to tackle this issue, and we demonstrate their application to log data for outlier detection, in order to timely detect anomalous system behavior that points to insider attacks.
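As a conceptual illustration of treating log events like biological sequences (the paper itself relies on high-performance bioinformatics tooling), the sketch below aligns a session's event-type sequence against a baseline and flags sessions with low similarity; event names are hypothetical.

```python
# Conceptual illustration only: align log-event sequences against a
# baseline and treat poorly matching sessions as outliers.
from difflib import SequenceMatcher

baseline = "LOGIN READ READ WRITE LOGOUT".split()

def similarity(session):
    return SequenceMatcher(None, baseline, session).ratio()

normal     = "LOGIN READ WRITE LOGOUT".split()
suspicious = "LOGIN EXPORT EXPORT EXPORT DELETE".split()

print(similarity(normal))      # high ratio: consistent with baseline
print(similarity(suspicious))  # low ratio: candidate insider anomaly
```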
Advanced targeted cyber attacks often rely on reconnaissance missions to gather information about potential targets and their location in a networked environment, in order to identify vulnerabilities that can be exploited for further attack maneuvers. Advanced network scanning techniques are often used for this purpose and are automatically executed by malware-infected hosts. In this paper, we formally define network deception as a defense against reconnaissance and develop RDS (Reconnaissance Deception System), which is based on SDN (Software Defined Networking), to achieve deception by simulating virtual network topologies. Our system thwarts network reconnaissance by delaying the scanning techniques of adversaries and invalidating their collected information, while minimizing the performance impact on benign network traffic. We introduce approaches to defend against malicious network discovery and reconnaissance in computer networks, which are required for targeted cyber attacks such as Advanced Persistent Threats (APTs). We show that our system is able to invalidate an attacker's information, delay the process of finding vulnerable hosts, and identify the source of adversarial reconnaissance within a network, while causing only a minuscule performance overhead of 0.2 milliseconds per packet flow on average.
Embedded systems are becoming increasingly complex as designers integrate different functionalities into a single application for execution on heterogeneous hardware platforms. In this work we propose a system-level security approach in order to provide isolation of tasks without the need to trust a central authority at run-time. We discuss security requirements that can be found in complex embedded systems that use heterogeneous execution platforms, and by regulating memory access we create mechanisms that allow safe use of shared IP with direct memory access, as well as shared libraries. We also present a prototype Isolation Unit that checks memory transactions and allows for dynamic configuration of permissions.
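The sketch below gives a purely conceptual, software-level analogue of such an Isolation Unit: each bus master is granted address windows with read/write permissions that can be reconfigured dynamically, and every memory transaction is checked against that table; all identifiers and addresses are hypothetical, and this is not the authors' hardware design.

```python
# Conceptual software analogue of an isolation-unit permission check;
# masters, address windows, and permissions are hypothetical.
class IsolationUnit:
    def __init__(self):
        self.rules = []   # (master_id, start, end, perms), perms in {"r", "w", "rw"}

    def grant(self, master_id, start, end, perms):
        """Dynamically (re)configure a permission window for a master."""
        self.rules.append((master_id, start, end, perms))

    def check(self, master_id, addr, write=False):
        """Validate a single memory transaction against the rule table."""
        need = "w" if write else "r"
        return any(m == master_id and start <= addr < end and need in perms
                   for m, start, end, perms in self.rules)

iu = IsolationUnit()
iu.grant("dma0", 0x1000, 0x2000, "rw")          # shared IP with DMA access
iu.grant("cpu",  0x0000, 0x8000, "r")
print(iu.check("dma0", 0x1800, write=True))     # True: within granted window
print(iu.check("cpu",  0x1800, write=True))     # False: write not permitted
```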