Bibliography
With limited battery supply, power is a scarce commodity in wireless sensor networks. Thus, to prolong the lifetime of the network, it is imperative that sensor resources are managed effectively. This task is particularly challenging in heterogeneous sensor networks, in which decisions and compromises regarding sensing strategies must be made under time and resource constraints. In such networks, a sensor has to reason about its current state to take actions that are appropriate with respect to its mission, its energy reserve, and the survivability of the overall network. Sensor management controls and coordinates the use of the sensory suites in a manner that maximizes the success rate of the system in achieving its missions. This article focuses on formulating and developing an autonomous energy-aware sensor management system that strives to achieve network objectives while maximizing network lifetime. A team-theoretic formulation based on the Belief-Desire-Intention (BDI) model and the Joint Intention theory is proposed as a mechanism for effective and energy-aware collaborative decision-making. The proposed system models the collective behavior of the sensor nodes using the Joint Intention theory to enhance the sensors' collaboration and success rate. Moreover, the BDI modeling of sensor operation and reasoning allows a sensor node to adapt to the environment dynamics, the situation-criticality level, and the availability of its own resources. The simulation scenario selected in this work is the surveillance of the Waterloo International Airport. Various experiments are conducted to investigate the effect of varying the network size, number of threats, threat agility, environment dynamism, tracking quality, and energy consumption on the performance of the proposed system. The experimental results demonstrate the merits of the proposed approach over the state-of-the-art centralized approach adapted from Atia et al. [2011] and the localized approach of Hilal and Basir [2015] in terms of energy consumption, adaptability, and network lifetime. The results show that the proposed approach consumes 12× less energy than the popular centralized approach.
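To make the BDI mechanics concrete, the following is a minimal sketch (not the authors' implementation) of a belief-revise/deliberate/act loop for an energy-aware sensor node; the class name, thresholds, and energy costs are illustrative assumptions.

```python
# Minimal BDI-style reasoning loop for an energy-aware sensor node.
# All names, thresholds, and energy costs are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class SensorNode:
    energy: float = 100.0                    # remaining energy budget
    beliefs: dict = field(default_factory=dict)
    desires: list = field(default_factory=lambda: ["track_threats", "conserve_energy"])
    intention: str = "idle"                  # currently committed course of action

    def revise_beliefs(self, observation: dict) -> None:
        """Update beliefs from new sensor readings."""
        self.beliefs.update(observation)

    def deliberate(self) -> None:
        """Pick an intention consistent with desires and the energy reserve."""
        threat = self.beliefs.get("threat_detected", False)
        if threat and self.energy > 20.0:
            self.intention = "track_threats"     # mission takes priority
        elif self.energy <= 20.0:
            self.intention = "conserve_energy"   # survivability takes priority
        else:
            self.intention = "idle"

    def act(self) -> None:
        """Execute the committed intention and account for its energy cost."""
        cost = {"track_threats": 5.0, "conserve_energy": 0.1, "idle": 0.5}[self.intention]
        self.energy -= cost

node = SensorNode()
node.revise_beliefs({"threat_detected": True})
node.deliberate()
node.act()
print(node.intention, round(node.energy, 1))   # track_threats 95.0
```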
Cloud computing has become prominent, offering users seemingly unlimited amounts of storage and computation. Yet security is a major issue that hampers the growth of the cloud. In this research, we investigate a collaborative Intrusion Detection System (IDS) based on ensemble learning. It uses weak classifiers and exploits untapped cloud resources to detect various types of attacks on the cloud system. In the proposed system, tasks are distributed among the available virtual machines (VMs), and the individual results are then merged for the final adaptation of the learning model. Performance is evaluated using decision trees and fuzzy classifiers on KDD99, one of the largest datasets for IDS. The dataset is segmented to mimic the behavior of real-time data traffic as it occurs in a real cloud environment. The experimental results show that the proposed approach reduces execution time with improved accuracy and is fault-tolerant when handling VM failures. The system is a proof-of-concept model for a scalable, cloud-based distributed IDS that is able to exploit untapped resources and may serve as a base model for a real-time hierarchical IDS.
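The partition-train-merge pattern described above can be sketched as follows; scikit-learn decision trees stand in for the weak classifiers, the per-VM training is simulated as a loop, and the data is synthetic rather than KDD99.

```python
# Sketch of the partition-train-merge ensemble pattern: each "VM" trains a
# weak decision tree on its own data segment, and the per-VM predictions are
# merged by majority vote. Synthetic data stands in for KDD99 slices.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # stand-in for attack/normal labels

n_vms = 5
segments = np.array_split(np.arange(len(X)), n_vms)

# Each virtual machine trains independently on its segment (weak classifier).
models = [DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[idx], y[idx])
          for idx in segments]

# Merge step: majority vote over the per-VM predictions.
X_test = rng.normal(size=(200, 10))
votes = np.stack([m.predict(X_test) for m in models])
merged = (votes.mean(axis=0) >= 0.5).astype(int)
print(merged[:10])
```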
In public clouds, an adversary can co-locate his or her virtual machines (VMs) with others on the same physical servers to mount attacks against integrity, confidentiality, or availability. One important factor in decreasing the likelihood of such co-location attacks is the VM placement strategy. However, a co-location-resistant strategy compromises the resource optimization of the cloud providers. This tradeoff between security and resource optimization is one of the most crucial challenges in cloud security. In this work, we propose a placement strategy that decreases the co-location rate by trading off VM startup time instead of resource optimization. We give a mathematical analysis to quantify the co-location resistance. The proposed strategy is evaluated against attacks that abuse placement locality, where the attack and target VMs are launched simultaneously or within a short time window. Compared with the EC2 placement strategy, the most co-location-resistant strategy among existing public cloud providers, our strategy reduces co-location attacks enormously at the cost of a slight VM startup delay (relative to the actual VM startup delay at public cloud providers).
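As a hedged illustration of the security/startup-time tradeoff (not the authors' algorithm), one could hold each placement request for a random delay and then assign it to a uniformly random host with spare capacity, breaking the temporal locality that launch-time co-location attacks exploit:

```python
# Illustrative sketch only: trade VM startup time for co-location resistance
# by delaying each request randomly before placing it on a random host with
# spare capacity. All parameters are assumptions; full-pool handling omitted.

import random

def place_requests(requests, hosts, capacity=4, max_delay=30.0):
    """requests: list of (vm_id, arrival_time); returns vm_id -> (host, start_time)."""
    load = {h: 0 for h in hosts}
    placement = {}
    for vm_id, t_arrival in sorted(requests, key=lambda r: r[1]):
        start = t_arrival + random.uniform(0.0, max_delay)   # deliberate startup delay
        candidates = [h for h in hosts if load[h] < capacity]
        host = random.choice(candidates)                     # uniform random host
        load[host] += 1
        placement[vm_id] = (host, start)
    return placement

hosts = [f"host{i}" for i in range(10)]
reqs = [("attacker", 0.0), ("target", 0.1)]                  # launched within a short window
print(place_requests(reqs, hosts))
```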
This article deals with color image steganalysis based on machine learning. The proposed approach enriches the features from the Color Rich Model by adding new features obtained by applying steerable Gaussian filters and then computing the co-occurrence of pixel pairs. Adding these new features to those obtained from the Color Rich Model allows us to increase the detectability of hidden messages in color images. The Gaussian filters are angled in different directions to precisely compute the tangent of the gradient vector. Then, the gradient magnitude and the derivative of this tangent direction are estimated. This refined estimation method enables us to unearth the minor changes that occur in an image when a message is embedded. The efficiency of the proposed framework is demonstrated on three steganographic algorithms designed to hide messages in images: S-UNIWARD, WOW, and Synch-HILL. Each algorithm is tested with different payload sizes. The proposed approach is compared to three color image steganalysis methods based on feature computation and Ensemble Classifier classification: the Spatial Color Rich Model, the CFA-aware Rich Model, and the RGB Geometric Color Rich Model.
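A minimal sketch of the feature-extraction idea, under assumed parameters (filter scale, bin count): estimate the gradient with Gaussian derivative filters, take its tangent direction and magnitude, then build a co-occurrence matrix over quantized values of adjacent pixels.

```python
# Sketch of the feature idea: Gaussian derivative filters give the gradient,
# whose tangent direction is quantized and summarized by a co-occurrence
# matrix over horizontal pixel pairs. Bounds and bin counts are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def cooccurrence_features(channel, sigma=1.0, bins=5):
    # First-order Gaussian derivatives along x and y.
    gx = gaussian_filter(channel, sigma, order=(0, 1))
    gy = gaussian_filter(channel, sigma, order=(1, 0))
    theta = np.arctan2(gy, gx)                 # tangent of the gradient vector
    mag = np.hypot(gx, gy)                     # gradient magnitude
    # Quantize the direction field, then count horizontally adjacent pairs.
    q = np.clip(np.round(theta / np.pi * (bins // 2)), -(bins // 2), bins // 2)
    q = (q + bins // 2).astype(int)
    cooc = np.zeros((bins, bins))
    np.add.at(cooc, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return cooc / cooc.sum(), mag

img = np.random.default_rng(1).uniform(0, 255, size=(64, 64))
features, _ = cooccurrence_features(img)
print(features.shape)   # (5, 5) co-occurrence histogram used as features
```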
In this paper, a dual-field elliptic curve cryptographic processor is proposed that supports arbitrary curves of up to 576 bits in both fields. In addition, two heterogeneous function units are coupled with the processor for parallel finite-field operations, based on an analysis of the characteristics of elliptic curve cryptographic algorithms. To simplify the hardware complexity, clustering technology is adopted in the processor. Finally, a fast Montgomery modular division algorithm and its implementation are proposed, based on Kaliski's Montgomery modular inversion. Using UMC 90-nm CMOS 1P9M technology, the proposed processor occupies 0.86 mm² and performs a scalar multiplication in 0.34 ms over a 160-bit GF(p) and 0.22 ms over GF(2^160). Compared to other elliptic curve cryptographic processors, our design is advantageous in hardware efficiency while maintaining moderate speed.
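For reference, a software sketch of the two-phase Kaliski-style Montgomery inversion that such a modular division unit builds on; Python integers stand in for the hardware datapath, and the 127-bit Mersenne prime is chosen purely for illustration.

```python
# Kaliski-style Montgomery inversion. Phase 1 returns the "almost Montgomery
# inverse" a^(-1) * 2^k mod p; phase 2 halves k times mod p to recover the
# plain modular inverse. Modulus choice below is purely illustrative.

def almost_montgomery_inverse(a: int, p: int):
    """Phase 1: returns (r, k) with r = a^(-1) * 2^k mod p, for odd prime p."""
    u, v, r, s, k = p, a, 0, 1, 0
    while v > 0:
        if u % 2 == 0:
            u, s = u // 2, s * 2
        elif v % 2 == 0:
            v, r = v // 2, r * 2
        elif u > v:
            u, r, s = (u - v) // 2, r + s, s * 2
        else:
            v, s, r = (v - u) // 2, s + r, r * 2
        k += 1
    if r >= p:
        r -= p
    return p - r, k

def mod_inverse(a: int, p: int) -> int:
    """Phase 2: strip the 2^k factor by repeated halving modulo p."""
    r, k = almost_montgomery_inverse(a, p)
    for _ in range(k):
        r = r // 2 if r % 2 == 0 else (r + p) // 2
    return r

p = (1 << 127) - 1          # Mersenne prime, for illustration only
a = 0x123456789ABCDEF
assert mod_inverse(a, p) == pow(a, -1, p)
print(hex(mod_inverse(a, p)))
```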
Existing compact routing schemes, e.g., Thorup and Zwick [SPAA 2001] and Chechik [PODC 2013], often have no means to tolerate failures once the system has been set up and started. This paper presents, to our knowledge, the first self-healing compact routing scheme. Moreover, our schemes are developed for low-memory nodes, i.e., nodes need only O(log² n) memory, and are thus compact schemes. We introduce two algorithms of independent interest: the first is CompactFT, a novel compact version (using only O(log n) local memory) of the self-healing algorithm Forgiving Tree of Hayes et al. [PODC 2008]. The second algorithm (CompactFTZ) combines CompactFT with Thorup-Zwick's tree-based compact routing scheme [SPAA 2001] to produce a fully compact self-healing routing scheme. In the self-healing model, the adversary deletes nodes one at a time, and the affected nodes self-heal locally by adding a few edges. CompactFT recovers from each attack in only O(1) time and Δ messages, with only a +3 degree increase and an O(log Δ) graph diameter increase, over any sequence of deletions (Δ is the initial maximum degree). Additionally, CompactFTZ guarantees delivery of a packet sent from sender s as long as the receiver t has not been deleted, with only an additional O(y log Δ) latency, where y is the number of nodes that have been deleted on the path between s and t. If t has been deleted, s is informed and the packet is removed from the network.
Reactive synthesis, with the ambitious goal of automatically synthesizing correct-by-construction controllers from high-level specifications, has recently attracted significant attention in system design and control. In practice, complex systems are often not constructed from scratch but from a set of existing building blocks. For example, in robot motion planning, a robot usually has a number of predefined motion primitives that can be selected and composed to enforce a high-level objective. In this paper, we propose a novel framework for synthesis from a library of parametric and reactive controllers. Parameters allow us to take advantage of the symmetry in many synthesis problems. Reactivity of the controllers takes into account that the environment may be dynamic and potentially adversarial. We first show how these controllers can be automatically constructed from parametric objectives specified by the user to form a library of parametric and reactive controllers. We then give a synthesis algorithm that selects and instantiates controllers from the library in order to satisfy a given linear temporal logic objective. We implement our algorithms symbolically and illustrate the potential of our method by applying it to an autonomous vehicle case study.
Learning to rank (L2R) algorithms use a labeled training set to generate a ranking model that can later be used to rank new query results. These training sets are very costly and laborious to produce, requiring human annotators to assess the relevance or order of the documents in relation to a query. Active learning (AL) algorithms are able to reduce the labeling effort by actively sampling an unlabeled set and choosing data instances that maximize the effectiveness of a learning function. However, AL methods require constant supervision, as documents have to be labeled at each round of the process. In this paper, we propose that certain characteristics of unlabeled L2R datasets allow for an unsupervised, compression-based selection process to be used to create small yet highly informative and effective initial sets that can later be labeled and used to bootstrap an L2R system. We implement our ideas through a novel unsupervised selective sampling method, which we call Cover, that has several advantages over AL methods tailored to L2R. First, it does not need an initial labeled seed set and can select documents from scratch. Second, selected documents do not need to be labeled as the iterations of the method progress, since it is unsupervised (i.e., no learning model needs to be updated). Thus, an arbitrarily sized training set can be selected without human intervention, depending on the available budget. Third, the method is efficient and can be run on unlabeled collections containing millions of query-document instances. We run various experiments with two important L2R benchmarking collections to show that the proposed method allows for the creation of small yet very effective training sets. It achieves full-training-like performance with less than 10% of the original sets selected, outperforming the baselines in both effectiveness and scalability.
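A minimal illustration of the compression-based selection idea (not the authors' Cover algorithm): greedily pick the instance whose addition increases the compressed size of the selected set the most, i.e., the one least redundant with what has already been chosen.

```python
# Compression-based selection sketch: the gain of a candidate is how much
# new (incompressible) information it adds to the already-selected set.

import zlib

def compressed_size(texts):
    return len(zlib.compress(" ".join(texts).encode()))

def select(pool, budget):
    selected = []
    remaining = list(pool)
    while remaining and len(selected) < budget:
        base = compressed_size(selected) if selected else 0
        # Gain = increase in compressed size when the candidate is added.
        gains = [(compressed_size(selected + [c]) - base, c) for c in remaining]
        _, best = max(gains)
        selected.append(best)
        remaining.remove(best)
    return selected

docs = ["cheap flights to paris", "cheap flights to rome",
        "python list comprehension", "best pizza near me"]
print(select(docs, 2))   # picks two maximally dissimilar documents
```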
Compressive sensing is a new technique by which sparse signals are sampled and recovered from only a few measurements. To address the disadvantages of traditional space image compression methods, a completely new compression scheme under the compressive sensing framework is developed in this paper. First, in the coding stage, a simple binary measurement matrix is constructed to obtain signal measurements. Second, the input image is divided into small blocks, and the image blocks are used as training sets to learn a dictionary basis for sparse representation. Finally, a sparse reconstruction algorithm is used to recover the original input image. Experimental results show that both the compression rate and the image recovery quality of the proposed method are high. Moreover, as the computational cost of the sampling stage is very low, the method is suitable for on-board applications in astronomy.
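A sketch of the sensing/recovery pipeline under simplifying assumptions: a {0,1} measurement matrix compresses a sparse signal, and orthogonal matching pursuit recovers it; the dictionary-learning step is replaced by the canonical sparse basis purely for illustration.

```python
# Compressive sensing sketch: y = Phi @ x with a binary measurement matrix,
# recovered with orthogonal matching pursuit. Columns are normalized here
# for stable recovery; hardware would fold that scaling into reconstruction.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 96, 8                       # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse signal

Phi = rng.integers(0, 2, size=(m, n)).astype(float)       # simple binary matrix
Phi /= np.linalg.norm(Phi, axis=0)                        # column normalization
y = Phi @ x                                               # coding stage

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False).fit(Phi, y)
x_hat = omp.coef_
print("relative recovery error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```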
The rapid growth in the size and scope of datasets in science and technology has created a need for novel foundational perspectives on data analysis that blend the inferential and computational sciences. That classical perspectives from these fields are not adequate to address emerging problems in "Big Data" is apparent from their sharply divergent nature at an elementary level: in computer science, the growth of the number of data points is a source of "complexity" that must be tamed via algorithms or hardware, whereas in statistics, the growth of the number of data points is a source of "simplicity" in that inferences are generally stronger and asymptotic results can be invoked. On a formal level, the gap is made evident by the lack of a role for computational concepts such as "runtime" in core statistical theory and the lack of a role for statistical concepts such as "risk" in core computational theory. I present several research vignettes aimed at bridging computation and statistics, including the problem of inference under privacy and communication constraints, and ways to exploit parallelism so as to trade off the speed and accuracy of inference.
Cryptographic systems are vulnerable to random errors and injected faults. Soft errors can inadvertently occur in critical cryptographic modules, and attackers can inject faults into systems to retrieve the embedded secret. Different schemes have been developed to improve the security and reliability of cryptographic systems. As the new SHA-3 standard, the Keccak algorithm will be widely used in various cryptographic applications, and its implementation should be protected against random errors and injected faults. In this paper, we devise different parity checking methods to protect the operations of Keccak. Results show that our schemes can be easily implemented and can effectively protect the Keccak system against random errors and fault attacks.
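As one concrete instance of parity checking on a linear Keccak step (a sketch, not necessarily the authors' exact scheme): for θ, the output column parities are predictable from the input parities alone, so comparing the prediction with recomputed parities detects injected faults.

```python
# Parity check over the Keccak theta step (Keccak-f[1600]: 5x5 lanes of 64
# bits). Since theta is linear, output column parities satisfy
# C'[x] = C[x] ^ D[x], which can be predicted before the computation.

import random

MASK = (1 << 64) - 1

def rotl(v, r):
    return ((v << r) | (v >> (64 - r))) & MASK

def column_parities(A):
    return [A[x][0] ^ A[x][1] ^ A[x][2] ^ A[x][3] ^ A[x][4] for x in range(5)]

def theta(A):
    """Keccak theta step on a 5x5 array of 64-bit lanes A[x][y]."""
    C = column_parities(A)
    D = [C[(x - 1) % 5] ^ rotl(C[(x + 1) % 5], 1) for x in range(5)]
    return [[A[x][y] ^ D[x] for y in range(5)] for x in range(5)]

A = [[random.getrandbits(64) for _ in range(5)] for _ in range(5)]

C = column_parities(A)
D = [C[(x - 1) % 5] ^ rotl(C[(x + 1) % 5], 1) for x in range(5)]
predicted = [C[x] ^ D[x] for x in range(5)]      # parity predicted in advance

B = theta(A)
B[2][3] ^= 1 << 17                               # inject a single-bit fault
print("fault detected:", predicted != column_parities(B))   # True
```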
In this paper, we present a new kind of near-optimal double-layered syndrome-trellis code (STC) for spatial-domain steganography. The code can hide a longer message, or improve security for a message of the same length, compared with previous double-layered STCs. In our scheme, guided by theoretical deduction, we can more precisely divide the secret payload into two parts, which are embedded in the first and second layers of the cover, respectively, using binary STCs. When embedding the message, we favor realizing the double-layered embedding through ±1 modifications; however, to further decrease the number of modifications and improve time efficiency, we allow a few pixels to be modified by ±2. Experimental results demonstrate that when this double-layered STC is applied to adaptive steganographic algorithms, the embedding modifications become more concentrated and fewer in number; consequently, the security of the steganography is improved.
In this work, we constructively combine adaptive wormholes with channel-reciprocity based key establishment (CRKE), which has been proposed as a lightweight security solution for IoT devices and may be even more important for the 5G Tactile Internet and its embedded low-end devices. We present a new secret key generation protocol in which two parties compute shared cryptographic keys under narrow-band multipath fading models over a delayed digital channel. The proposed approach furthermore enables distance-bounding the key establishment process via the coherence-time dependencies of the wireless channel. Our scheme is thoroughly evaluated both theoretically and practically. For the latter, we used a testbed based on the IEEE 802.15.4 standard and performed extensive experiments in a real-world manufacturing environment. Additionally, we demonstrate adaptive wormhole attacks (AWOAs) and their consequences for several physical-layer security schemes, and we propose a countermeasure that minimizes the risk of AWOAs.
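A minimal sketch of the CRKE core (omitting the wormhole and distance-bounding aspects): both parties quantize their noisy, reciprocal channel measurements into bits using mean-plus-guard-band thresholds and drop ambiguous samples; real protocols add information reconciliation and privacy amplification on top. All parameters below are assumptions.

```python
# Channel-reciprocity key generation sketch: guard-band quantization of
# reciprocal channel measurements. Index sets are exchanged publicly, as is
# standard; reconciliation and privacy amplification are omitted.

import numpy as np

def quantize(samples, alpha=0.5):
    """Return (bits, kept_indices) using upper/lower guard-band thresholds."""
    mu, sigma = samples.mean(), samples.std()
    upper, lower = mu + alpha * sigma, mu - alpha * sigma
    bits, kept = [], []
    for i, s in enumerate(samples):
        if s > upper:
            bits.append(1); kept.append(i)
        elif s < lower:
            bits.append(0); kept.append(i)
    return bits, kept

rng = np.random.default_rng(7)
channel = rng.normal(0, 1, 200)                   # shared multipath fading profile
alice = channel + rng.normal(0, 0.05, 200)        # each side observes it with noise
bob = channel + rng.normal(0, 0.05, 200)

bits_a, idx_a = quantize(alice)
bits_b, idx_b = quantize(bob)
common = sorted(set(idx_a) & set(idx_b))          # indices both sides kept
key_a = [bits_a[idx_a.index(i)] for i in common]
key_b = [bits_b[idx_b.index(i)] for i in common]
print("bit agreement:", np.mean(np.array(key_a) == np.array(key_b)))
```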
In most adaptive video streaming systems, adaptation decisions rely solely on the available network resources. As the content of a video has a large influence on the perception of quality, we believe this is not sufficient. Thus, we have proposed a support service for content-aware video adaptation on mobile devices: the Video Adaptation Service (VAS). Based on the content of a streamed video, the adaptation process is improved by setting a target quality level for a session according to an objective video quality metric. In this work, we demonstrate VAS and its ability to reduce data traffic by streaming only the lowest video representation necessary to reach the desired quality. By leveraging the content properties of a video stream, the system is able to keep the video quality stable and at the same time reduce the network load.
The recent advancement of smart devices and wearable technologies greatly enlarges the variety of personal data people can track. Applications and services can leverage such data to provide better life support, but they also impose privacy and security threats. Obfuscation schemes, consequently, have been developed to retain data access while mitigating risks. Compared to offering only the choices of releasing raw data or not releasing at all, we examine the effect of adding a data obfuscation option on users' disclosure decisions when configuring applications' access, and how that effect varies with data types and application contexts. Our online user experiment shows that users are less likely to block data access when the obfuscation option is available, except for location data. This effect differs significantly between applications for domain-specific dynamic tracking data, but not for generic personal traits. We further unpack the role of context and discuss the design opportunities.
Usage patterns of mobile devices depend on a variety of factors such as time, location, and previous actions. Hence, context-awareness can be the key to making mobile systems personalized and situation-dependent in managing their resources. We first reveal new findings from our own Android user experiment: (i) the launching probabilities of applications follow Zipf's law, and (ii) the inter-running and running times of applications conform to log-normal distributions. We also find context-dependency in application usage patterns, for which we classify contexts in a personalized manner with unsupervised learning methods. Using the knowledge acquired, we develop a novel context-aware application scheduling framework, CAS, which adaptively unloads and preloads background applications in a timely manner. Our trace-driven simulations with 96 user traces demonstrate the benefits of CAS over existing algorithms. We also verify the practicality of CAS by implementing it on the Android platform.
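Finding (i) can be checked with a simple rank-frequency regression: under Zipf's law, frequency versus rank is a straight line in log-log space, and the slope estimates the exponent. The sketch below uses synthetic counts in place of the experiment's usage logs.

```python
# Rank-frequency check for Zipf's law: fit a line to log(count) vs log(rank).
# The launch counts here are synthetic stand-ins for real usage logs.

import numpy as np

launch_counts = np.array(sorted(
    np.random.default_rng(3).zipf(a=2.0, size=2000))[::-1])   # synthetic data

ranks = np.arange(1, len(launch_counts) + 1)
slope, intercept = np.polyfit(np.log(ranks), np.log(launch_counts), 1)
print(f"estimated rank-frequency exponent: {-slope:.2f}")

# Finding (ii) can be checked analogously: take the log of the inter-running
# times and test the result for normality (e.g., scipy.stats.normaltest).
```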
In this paper, we present an architecture and implementation of a secure, automated, proximity-based access control system that we refer to as Context-Aware System to Secure Enterprise Content (CASSEC). Using pervasive WiFi and Bluetooth wireless devices as components of our underlying positioning infrastructure, CASSEC addresses two proximity-based scenarios often encountered in enterprise environments: Separation of Duty and Absence of Other Users. The first scenario is handled by using the Bluetooth MAC addresses of nearby occupants as authentication tokens. The second scenario exploits the interference in WiFi received signal strength when an occupant crosses the line of sight (LOS). Regardless of the scenario, information about the occupancy of a particular location is periodically extracted to support continuous authentication. To the best of our knowledge, our approach is the first to incorporate WiFi signal interference caused by occupants as part of a proximity-based access control system. Our results demonstrate that it is feasible to achieve high accuracy in the localization of occupants in a monitored room.
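A hedged sketch of the second scenario's signal processing: an occupant crossing the LOS appears as a short spike in received-signal-strength variance, detectable with a sliding window; the window size and threshold are illustrative assumptions.

```python
# LOS-crossing detection sketch: flag sliding windows whose RSS variance
# exceeds a threshold. Window length, threshold, and noise levels are
# illustrative assumptions, not CASSEC's calibrated values.

import numpy as np

def crossing_detected(rss, window=10, threshold=4.0):
    """Return per-window flags where RSS variance exceeds `threshold` (dB^2)."""
    rss = np.asarray(rss, dtype=float)
    flags = [rss[i:i + window].var() > threshold
             for i in range(len(rss) - window)]
    return np.array(flags)

rng = np.random.default_rng(5)
baseline = -55 + rng.normal(0, 0.8, 100)        # steady LOS signal
baseline[40:50] += rng.normal(0, 6.0, 10)       # an occupant crosses the LOS
print("crossing windows:", np.flatnonzero(crossing_detected(baseline)))
```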
Mobile devices such as smartphones have become a common tool in our daily routine, and mobile applications (a.k.a. apps) increasingly demand access to contextual information. For instance, apps require data about users' environments as well as their profiles in order to adapt themselves (interfaces, services, content) to this context. Mobile apps with this behavior are known as context-aware applications (CAS). Several software infrastructures have been created to help the development of CAS. However, most of them do not store the contextual data, since mobile devices are resource-constrained. Nor are they built with the privacy of contextual data in mind, and apps may therefore expose contextual data without user consent. This paper addresses these topics by extending an existing middleware platform that helps the development of mobile context-aware applications. Our extension aims to store and process the contextual data generated by several mobile devices using the computational power of the cloud, and to support the definition of privacy policies that prevent the dissemination of unauthorized contextual data.
We present a controller synthesis algorithm for a reach-avoid problem in the presence of adversaries. Our model of the adversary abstractly captures typical malicious attacks envisioned on cyber-physical systems, such as sensor spoofing, controller corruption, and actuator intrusion. After formulating the problem in a general setting, we present a sound and complete algorithm for the case with linear dynamics and an adversary with a budget on the total L2-norm of its actions. The algorithm relies on a result from linear control theory that enables us to decompose and compute the reachable states of the system in terms of a symbolic simulation of the adversary-free dynamics and the total uncertainty induced by the adversary. With this decomposition, the synthesis problem eliminates the universal quantifier over the adversary's choices, and the symbolic controller actions can be effectively solved for using an SMT solver. The constraints induced by the adversary are computed by solving second-order cone programs. The algorithm is later extended to synthesize state-dependent controllers and to generate attacks for the adversary. We present preliminary experimental results that show the effectiveness of this approach on several example problems.
Wireless sensor networks (WSNs) are deployed in various Internet-of-Things applications, such as energy management systems. As these applications may involve personal information, they must be protected from attackers attempting to read information or control network devices, making research on WSN security essential. Studies in this domain propose solutions against such attacks; however, they focus mainly on the security measures themselves rather than on the ease of implementing them in WSNs. In this paper, we propose a coordination middleware that provides an environment for constructing updatable WSNs for security. The middleware is based on LINC, a rule-based coordination middleware. The proposed approach allows the development of WSNs in which security modules can be attached or detached as required. As case studies, we implemented three security modules on LINC and on a real network, and we evaluated the implementation costs across the case studies.
Critical infrastructure such as the power grid has become increasingly complex. The addition of computing elements to traditional physical components increases complexity and hampers insight into how elements in the system interact with each other. The result is an infrastructure where operational mistakes, some of which cannot be distinguished from attacks, are more difficult to prevent and have greater potential impact, such as leaking sensitive information to the operator or attacker. In this paper, we present CPAC, a cyber-physical access control solution to manage complexity and mitigate threats in cyber-physical environments, with a focus on the electrical smart grid. CPAC uses information flow analysis based on mathematical models of the physical grid to generate policies enforced through verifiable logic. At the device side, CPAC combines symbolic execution with lightweight dynamic execution monitoring to allow non-intrusive taint analysis on programmable logic controllers in real time. These components work together to provide a real-time view of all system elements and allow for more robust and finer-grained protections than any previous solution for securing the grid. We implement a prototype of CPAC using Bachmann PLCs and evaluate several real-world incidents that demonstrate its scalability and effectiveness. Policy checking for a nationwide grid takes less than 150 ms, faster than existing solutions. We additionally show that CPAC can analyze potential failures of arbitrary components, far beyond the capabilities of currently deployed systems. CPAC thus provides a solution to secure the modern smart grid from operator mistakes or insider attacks, maintain operational privacy, and support N - x contingencies.
In recent decades, the world has seen many more public health crises, such as the H1N1, H7N9, and Ebola outbreaks. At the same time, the number of public crisis incidents has been growing rapidly. Crisis response to these public emergencies often involves a complex system spanning the cyber, physical, and social domains (CPS model). To collect and analyze such incidents more efficiently, new tools and models are needed. In this paper, we present a CPS-model-based online opinion governance system, built on a cellphone app for data collection with decision making in the back end. Based on the online opinion data collected, we also propose a graded risk classification. Using this risk classification method, we built an efficient CPS-model-based system for simulating the handling of and response to emergency incidents, which has proved useful in several real incidents in China in recent years.
Cloud providers are in a position to greatly improve the trust clients have in network services: IaaS platforms can isolate services so they cannot leak data and can help verify that they are securely deployed. We describe a new system called CQSTR that allows clients to verify a service's security properties. CQSTR provides a new cloud container abstraction, similar to Linux containers but for VM clusters within IaaS clouds. Cloud containers enforce constraints on what software can run and control where and how much data can be communicated across service boundaries. With CQSTR, IaaS providers can make assertions about the security properties of a service running in the cloud. We investigate implementations of CQSTR on both Amazon AWS and OpenStack. With AWS, we build on virtual private clouds to limit network access and on authorization mechanisms to limit storage access; however, certain security properties can be checked only by monitoring audit logs for violations after the fact. We modified OpenStack to implement the full CQSTR model with only modest code changes. We show how to use CQSTR to build more secure deployments of the data analytics frameworks PredictionIO, PacketPig, and SpamAssassin. In experiments on CloudLab, we found that the performance impact of CQSTR on applications is near zero.
The anonymous password authentication scheme proposed at ACSAC'10, under an unorthodox approach of password-wrapped credentials, advanced anonymous password authentication to a practically ready primitive, and it is being standardized. In this paper, we improve on that scheme by proposing a new method of "public key suppression" for achieving server-designated credential verifiability, a core technicality in materializing the concept of the password-wrapped credential. Besides better performance, our new method simplifies the configuration of the authentication server, rendering the resulting scheme even more practical. Further, we extend the idea of password-wrapped credentials to biometric-wrapped credentials, to achieve anonymous biometric authentication. As expected, biometric-wrapped credentials help break the linear server-side computation barrier intrinsic to the standard setting of biometric authentication. Experimental results validate the feasibility of realizing efficient anonymous biometric authentication.
Mobile applications, or apps, are one of the main reasons for the unprecedented success smartphones and tablets have experienced over the last decade. Apps are the main interfaces that users deal with when engaging in online banking, checking travel itineraries, or browsing their social network profiles while on the go. Previous research has studied various aspects of mobile application security, including data leakage and privilege escalation through confused deputy attacks. However, the vast majority of mobile application research targets Google's Android platform. Few research papers analyze iOS applications, and those that focus on the Apple environment perform their analysis on comparatively small datasets (i.e., thousands in iOS vs. hundreds of thousands in Android). As these smaller datasets call into question how representative the gained results are, we propose, implement, and evaluate CRiOS, a fully automated system that allows us to amass comprehensive datasets of iOS applications, which we subject to large-scale analysis. To advance academic research into the iOS platform and its apps, we plan on releasing CRiOS as an open source project. We also use CRiOS to aggregate a dataset of 43,404 iOS applications. Equipped with this dataset, we analyze the collected apps to identify third-party libraries that are common among many applications. We also investigate the network communication endpoints referenced by the applications with respect to their correct use of TLS/SSL certificates. In summary, we find that the average iOS application consists of 60.2% library classes and only 39.8% developer-authored content. Furthermore, we find that 9.32% of referenced network connection endpoints either entirely omit cryptographic protection of network communications or present untrustworthy SSL certificates.
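The endpoint check reported above can be approximated with the standard library alone (a sketch, not the CRiOS implementation): attempt a TLS handshake with certificate validation enabled, and classify endpoints by whether verification succeeds.

```python
# TLS endpoint check sketch: endpoints with untrustworthy certificates raise
# ssl.SSLCertVerificationError; endpoints that omit TLS entirely would have
# to be identified separately (e.g., http:// URLs in the app binary).

import socket
import ssl

def check_tls_endpoint(host: str, port: int = 443, timeout: float = 5.0) -> str:
    context = ssl.create_default_context()       # verifies chain and hostname
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return f"ok: {tls.version()}"
    except ssl.SSLCertVerificationError as e:
        return f"untrustworthy certificate: {e.verify_message}"
    except (ssl.SSLError, OSError) as e:
        return f"connection/TLS failure: {e}"

print(check_tls_endpoint("example.com"))
```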