Bibliography
Cyber-physical system integrity requires both hardware and software security. Many cyber attacks succeed because they are designed to selectively target a specific hardware or software component in an embedded system and trigger its failure. Existing security measures likewise use attack-vector models and isolate the malicious component as a countermeasure. Isolated security primitives, however, do not provide the overall trust required in an embedded system. Trust enhancements are proposed for a hardware security platform, where the trust specifications are implemented in both software and hardware. This distribution of trust makes it difficult for a hardware-only or software-only attack to cripple the system. The proposed approach is applied to a smart grid application built from third-party soft IP cores, where an attack on such a module can result in a blackout. System integrity is preserved in the event of an attack, and the anomalous behavior of the IP core is recorded by a supervisory module. The IP core also provides a snapshot of its trust metric, which is logged for further diagnostics.
The Internet of Things (IoT) has become a popular technology, and various middleware platforms have been proposed and developed for IoT systems. However, there have been few studies on the data management of IoT systems. In this paper, we consider graph database models for the data management of IoT systems because these models can specify, in a straightforward manner, relationships among entities such as the devices, users, and information that constitute IoT systems. However, applying a graph database to the data management of IoT systems raises issues regarding distribution and security. For the former issue, we propose graph database operations integrated with REST APIs. For the latter, we extend the graph edge property by adding access-protocol permissions and check permissions using the APIs with authentication. We present the requirements for a use case scenario in addition to the features of a distributed graph database for IoT data management to solve the aforementioned issues, and implement a prototype of the graph database.
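As a rough illustration of the permission-extended edge model described above, the following Python sketch attaches REST-method permissions to graph edges and checks them before serving a request. The data model, identifiers, and the authorize function are hypothetical stand-ins, not the paper's prototype API.

```python
# Minimal sketch of permission-checked access over a property graph,
# assuming edges carry an "allowed" set of REST methods (hypothetical model).

graph = {
    # (source, target) -> edge properties, including protocol permissions
    ("alice", "sensor42"): {"rel": "OWNS", "allowed": {"GET", "PUT"}},
    ("bob", "sensor42"): {"rel": "READS", "allowed": {"GET"}},
}

def authorize(user: str, device: str, method: str) -> bool:
    """Return True if an edge from user to device permits the REST method."""
    edge = graph.get((user, device))
    return edge is not None and method in edge["allowed"]

# Example: bob may read the sensor but not update it.
assert authorize("bob", "sensor42", "GET")
assert not authorize("bob", "sensor42", "PUT")
```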
In this paper, the design of an event-driven middleware for general-purpose services in the smart grid (SG) is presented. The main purpose is to provide a peer-to-peer distributed software infrastructure that allows multiple new, authorized actors to access SG information in order to provide new services. To achieve this, the proposed middleware has been designed to be: 1) event-based; 2) reliable; 3) secure against malicious information and communication technology attacks; and 4) capable of hardware-independent interoperability between heterogeneous technologies. To demonstrate practical deployment, a numerical case study applied to the whole U.K. distribution network is presented, and the capabilities of the proposed infrastructure are discussed.
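A minimal sketch of the event-based pattern such a middleware builds on: publishers emit events on topics and subscribers receive callbacks. This toy broker is single-process and omits the reliability, security, and peer-to-peer layers the paper adds; all names are illustrative.

```python
from collections import defaultdict
from typing import Callable

class EventBroker:
    """Toy topic-based broker illustrating event-driven dispatch."""
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> list of subscriber callbacks

    def subscribe(self, topic: str, callback: Callable[[dict], None]) -> None:
        self._subs[topic].append(callback)

    def publish(self, topic: str, event: dict) -> None:
        for cb in self._subs[topic]:
            cb(event)

broker = EventBroker()
broker.subscribe("grid/voltage", lambda e: print("alarm:", e))
broker.publish("grid/voltage", {"feeder": "UK-031", "level_pu": 1.08})
```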
Ransomware attacks are increasing; this form of malware bypasses many technical solutions by leveraging social engineering methods. This means established methods of perimeter defence need to be supplemented with additional systems. Honeypots are bogus computer resources deployed by network administrators to act as decoy computers and detect any illicit access. This study investigated whether a honeypot folder could be created and monitored for changes, and determined a suitable method to detect changes to this area. The research investigated methods to implement a honeypot to detect ransomware activity, and selected two options: the File Screening service of the Microsoft File Server Resource Manager feature, and EventSentry to manipulate the Windows Security logs. The research developed a staged response to attacks on the system, along with thresholds at which responses were triggered. The research ascertained that witness tripwire files offer limited value, as there is no way to influence the malware to access the area containing the monitored files.
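To make the tripwire idea concrete, here is a stdlib-only Python sketch that polls a decoy folder and reports any file whose content hash changes. It stands in for the File Screening and EventSentry tooling used in the study; the folder path and polling interval are placeholders.

```python
import hashlib, os, time

HONEYPOT = "/srv/decoy"  # placeholder path for the bait folder

def snapshot(root: str) -> dict:
    """Map each file path under root to a SHA-256 digest of its content."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

baseline = snapshot(HONEYPOT)
while True:
    time.sleep(5)  # polling interval; a real deployment would use FS events
    current = snapshot(HONEYPOT)
    for path in set(baseline) | set(current):
        if baseline.get(path) != current.get(path):
            print(f"ALERT: honeypot file touched: {path}")  # staged-response hook
    baseline = current
```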
A hardware Trojan (HT) detection method is presented that is based on measuring and detecting small systematic changes in path delays introduced by capacitive loading effects or series inserted gates of HTs. The path delays are measured using a high resolution on-chip embedded test structure called a time-to-digital converter (TDC) that provides approx. 25 ps of timing resolution. A calibration method for the TDC as well as a chip-averaging technique are demonstrated to nearly eliminate chip-to-chip and within-die process variation effects on the measured path delays across chips. This approach significantly improves the correlation between Trojan-free chips and a simulation-based golden model. Path delay tests are applied to multiple copies of a 90 nm custom ASIC chip having two copies of an AES macro. The AES macros are exact replicas except for the insertion of several additional gates in the second hardware copy, which are designed to model HTs. Simple statistical detection methods are used to isolate and detect systematic changes introduced by these additional gates. We present hardware results which demonstrate that our proposed chip-averaging and calibration techniques in combination with a single nominal simulation model can be used to detect small delay anomalies introduced by the inserted gates of hardware Trojans.
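The chip-averaging and statistical detection steps can be sketched as follows: average each path delay across chips to suppress process variation, then flag paths whose averaged delay deviates from the golden simulation model by more than a threshold. The synthetic data, array shapes, and 3-sigma threshold are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

# delays[c, p]: measured delay of path p on chip c (e.g., from the TDC);
# golden[p]: nominal simulated delay for path p. Synthetic data for illustration.
rng = np.random.default_rng(0)
golden = rng.uniform(1.0, 3.0, size=50)             # ns, hypothetical values
delays = golden + rng.normal(0, 0.02, size=(20, 50))
delays[:, 7] += 0.12                                # an HT-like systematic shift

chip_avg = delays.mean(axis=0)      # averaging suppresses chip-to-chip variation
residual = chip_avg - golden
suspects = np.flatnonzero(np.abs(residual) > 3 * residual.std())
print("suspect paths:", suspects)   # should single out path 7
```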
The present work deals with predicting the damage location in a composite cantilever beam using the signal from an optical fiber sensor coupled with a neural network trained by back propagation. The experimental study uses a glass/epoxy composite cantilever beam. A notch perpendicular to the axis of the beam and spanning the full width of the beam is introduced at three different locations: at mid-span, towards the free end of the beam, and towards the fixed end of the beam. A plastic optical fiber of 6 cm gage length is mounted on the top surface of the beam, along its axis, exactly at mid-span. A He-Ne laser is used as the light source for the optical fiber, and the light emerging from the other end of the fiber is converted to an electrical signal through a converter. A three-layer feed-forward neural network architecture is adopted, with one input layer, one hidden layer, and one output layer. Three features are extracted from the signal: resonance frequency, normalized amplitude, and normalized area under the resonance frequency. These three features act as inputs to the neural network input layer. The outputs qualitatively identify the location of the notch.
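A numpy sketch of such a three-feature, three-layer network is given below (sigmoid activations, one-hot outputs over the three notch locations, plain backpropagation). The hidden-layer size, learning rate, and synthetic training data are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Inputs: [resonance frequency, normalized amplitude, normalized area];
# one-hot targets: [mid-span, free end, fixed end]. Synthetic stand-in data.
X = rng.random((30, 3))
y = np.eye(3)[rng.integers(0, 3, 30)]

W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)   # hidden layer, 8 units
W2 = rng.normal(0, 0.5, (8, 3)); b2 = np.zeros(3)   # output layer
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):                # plain backpropagation, learning rate 0.5
    h = sig(X @ W1 + b1)
    out = sig(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

print("predicted location of sample 0:", out[0].argmax())
```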
In a discrete-time linear multi-agent control system, where the agents are coupled via an environmental state, knowledge of the environmental state is desirable for controlling the agents locally. However, since the environmental state depends on the behavior of the agents, sharing it directly among these agents jeopardizes the privacy of the agents' profiles, defined as the combination of the agents' initial states and the sequence of local control inputs over time. A commonly used solution is to randomize the environmental state before sharing; this leads to a natural trade-off between the privacy of the agents' profiles and the variance of estimating the environmental state. By treating the multi-agent system as a probabilistic model of the environmental state parametrized by the agents' profiles, we show that when the agents' profiles are ε-differentially private, there is a lower bound on the ℓ1-induced norm of the covariance matrix of the minimum-variance unbiased estimator of the environmental state. This lower bound is achieved by a randomized mechanism that uses Laplace noise.
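The Laplace mechanism that achieves this bound can be stated directly in code: the shared environmental state is perturbed with zero-mean Laplace noise whose scale grows with the query sensitivity and shrinks with the privacy budget ε. The sensitivity and state values below are placeholders.

```python
import numpy as np

def laplace_mechanism(state: np.ndarray, sensitivity: float, eps: float,
                      rng=np.random.default_rng()) -> np.ndarray:
    """Release state with Laplace(sensitivity/eps) noise per coordinate,
    the standard mechanism for eps-differential privacy."""
    return state + rng.laplace(0.0, sensitivity / eps, size=state.shape)

env_state = np.array([0.7, 1.2, -0.3])          # placeholder environmental state
private_view = laplace_mechanism(env_state, sensitivity=1.0, eps=0.5)
print(private_view)  # larger eps (weaker privacy) -> lower estimator variance
```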
In this work, we study the problem of keeping the objective functions of individual agents ε-differentially private in cloud-based distributed optimization, where agents are subject to global constraints and seek to minimize local objective functions. The communication architecture between agents is cloud-based: instead of communicating directly with each other, they coordinate by sharing states through a trusted cloud computer. In this problem, the difficulty is twofold: the objective functions are used repeatedly in every iteration, and the influence of perturbing them extends to other agents and lasts over time. To solve the problem, we analyze the propagation of perturbations on objective functions over time, and derive an upper bound on them. With the upper bound, we design a noise-adding mechanism that randomizes the cloud-based distributed optimization algorithm to keep the individual objective functions ε-differentially private. In addition, we study the trade-off between the privacy of objective functions and the performance of the new cloud-based distributed optimization algorithm with noise. We present simulation results that numerically verify the theoretical results.
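A toy version of such a randomized algorithm is sketched below: each agent takes a gradient step on a private local quadratic, and the cloud broadcasts a Laplace-perturbed aggregate that weakly couples the agents. The objectives, step sizes, and noise scale are invented for illustration; the paper derives the noise scale from its perturbation bound.

```python
import numpy as np

rng = np.random.default_rng(2)
targets = np.array([1.0, 3.0, -2.0])   # agent i privately minimizes (x_i - t_i)^2
x = np.zeros(3)
step, eps, sensitivity = 0.2, 1.0, 0.1

for k in range(100):
    grads = 2 * (x - targets)                  # local gradients, never shared
    x = x - step * grads
    noisy_avg = x.mean() + rng.laplace(0, sensitivity / eps)  # cloud's noisy state
    x = x + 0.05 * (noisy_avg - x)             # weak coupling via the cloud

print(np.round(x, 2))  # near the targets, perturbed by the privacy noise
```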
Visible Light Communication (VLC) emerges as a new wireless communication technology with appealing benefits not present in radio communication. However, current VLC designs commonly require LED lights to emit shining light beams, which greatly limits the applicable scenarios of VLC (e.g., on a sunny day when indoor lighting is not needed). It also entails high energy overhead and unpleasant visual experiences for mobile devices to transmit data using VLC. We design and develop DarkLight, a new VLC primitive that allows light-based communication to be sustained even when LEDs emit extremely low luminance. The key idea is to encode data into ultra-short, imperceptible light pulses. We tackle challenges in circuit design, data encoding/decoding schemes, and DarkLight networking to efficiently generate and reliably detect ultra-short light pulses using off-the-shelf, low-cost LEDs and photodiodes. Our DarkLight prototype supports a 1.3-m distance with a 1.6-Kbps data rate. By loosening VLC's reliance on visible light beams, DarkLight presents an unconventional direction of VLC design and fundamentally broadens VLC's application scenarios.
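The encoding idea, hiding bits in the timing of imperceptible pulses, can be illustrated with a simple binary pulse-position scheme: each symbol slot contains one ultra-short pulse, and the bit value determines which half of the slot the pulse lands in. The slot and pulse durations are made-up numbers, not DarkLight's actual parameters.

```python
# Binary pulse-position modulation sketch: one short pulse per slot;
# a pulse in the first half encodes 0, in the second half encodes 1.
SLOT_US, PULSE_US = 500, 10   # placeholder durations in microseconds

def encode(bits):
    """Return (pulse_start, pulse_end) times in microseconds for each bit."""
    schedule = []
    for i, bit in enumerate(bits):
        offset = i * SLOT_US + (SLOT_US // 2 if bit else 0)
        schedule.append((offset, offset + PULSE_US))
    return schedule

def decode(schedule):
    """Recover bits from pulse start times within each slot."""
    return [1 if start % SLOT_US >= SLOT_US // 2 else 0 for start, _ in schedule]

bits = [1, 0, 1, 1, 0]
assert decode(encode(bits)) == bits
```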
In this study, we use a multi-party recording as a template for building a parametric speech synthesiser that is able to express different levels of attentiveness in backchannel tokens. This allows us to investigate i) whether it is possible to express the same perceived level of attentiveness in synthesised backchannels as in natural ones; and ii) whether it is possible to increase and decrease the perceived level of attentiveness of backchannels beyond the range observed in the original corpus.
Different applications concurrently running on modern MPSoCs can interfere with each other when they use shared resources. This interference can cause side channels, i.e., sources of unintended information flow between applications. To prevent such side channels, we propose a hybrid mapping methodology that attempts to ensure spatial isolation, i.e., a mutually-exclusive allocation of resources to applications in the MPSoC. At design time and as a first step, we compute compact and connected application mappings (called shapes). In a second step, run-time management uses this information to map multiple spatially segregated shapes to the architecture. We present and evaluate a (fast) heuristic and an (exact) SAT-based mapper, demonstrating the viability of the approach.
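The spatial-isolation requirement amounts to mapping applications onto mutually exclusive, connected tile regions. The sketch below models shapes as rectangles on a 2D mesh and has the run-time manager accept a candidate placement only if it overlaps no already-placed shape; real shapes in the paper need not be rectangular, so this is only a schematic check.

```python
# Hypothetical run-time check: place rectangular "shapes" (x, y, w, h)
# on a W x H tile mesh so that no two applications share a tile.
W, H = 8, 8

def tiles(shape):
    x, y, w, h = shape
    return {(x + i, y + j) for i in range(w) for j in range(h)}

def place(shape, placed):
    """Accept shape only if it fits the mesh and overlaps no placed shape."""
    t = tiles(shape)
    if any(tx >= W or ty >= H for tx, ty in t):
        return False
    if any(t & tiles(p) for p in placed):
        return False
    placed.append(shape)
    return True

placed = []
assert place((0, 0, 3, 2), placed)      # app A mapped
assert not place((2, 1, 2, 2), placed)  # app B rejected: would share tile (2, 1)
assert place((4, 0, 2, 2), placed)      # app B re-mapped to a disjoint region
```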
We present PrivInfer, an expressive framework for writing and verifying differentially private Bayesian machine learning algorithms. Programs in PrivInfer are written in a rich functional probabilistic programming language with constructs for performing Bayesian inference. Then, differential privacy of programs is established using a relational refinement type system, in which refinements on probability types are indexed by a metric on distributions. Our framework leverages recent developments in Bayesian inference, probabilistic programming languages, and relational refinement types. We demonstrate the expressiveness of PrivInfer by verifying privacy for several examples of private Bayesian inference.
Balanced partitioning is often a crucial first step in solving large-scale graph optimization problems: in some cases, a big graph is chopped into pieces that fit on one machine to be processed independently before stitching the results together, leading to certain suboptimality from the interaction among different pieces. In other cases, links between different parts may show up in the running time and/or network communications cost, hence the desire for small cut size. We study a distributed balanced partitioning problem where the goal is to partition the vertices of a given graph into k pieces, minimizing the total cut size. Our algorithm is composed of a few steps that are easily implementable in distributed computation frameworks, e.g., MapReduce. The algorithm first embeds nodes of the graph onto a line, and then processes nodes in a distributed manner guided by the linear embedding order. We examine various ways to find the initial embedding, e.g., via hierarchical clustering or Hilbert curves. We then apply four techniques: local swaps, minimum cuts on partition boundaries, contraction, and dynamic programming. Our empirical study compares these techniques with each other and with previous work in distributed algorithms, e.g., a label propagation method, FENNEL, and Spinner. We report our results both on a private map graph and several public social networks, and show that our results beat previous distributed algorithms: we observe, e.g., a 15-25% reduction in cut size over [UB13]. We also observe that our algorithms allow for scalable distributed implementation for any number of partitions. Finally, we apply our techniques to Google Maps Driving Directions to minimize the number of multi-shard queries with the goal of saving CPU usage. During live experiments, we observe an ≈ 40% drop in the number of multi-shard queries when comparing our method with a standard geography-based method.
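The pipeline's core steps can be sketched as: order the vertices by a linear embedding, cut the order into k balanced contiguous pieces, and greedily swap vertices across pieces whenever a swap reduces the cut. The BFS order below is a stand-in for the hierarchical-clustering or Hilbert-curve embeddings the paper evaluates, and the swap loop is a deliberately naive version of the local-search step.

```python
from collections import deque

def bfs_order(adj, start):
    """Linear-embedding stand-in: BFS visiting order from an arbitrary root."""
    order, seen, q = [start], {start}, deque([start])
    while q:
        for v in adj[q.popleft()]:
            if v not in seen:
                seen.add(v); q.append(v); order.append(v)
    return order

def cut_size(adj, part):
    return sum(1 for u in adj for v in adj[u] if u < v and part[u] != part[v])

def partition(adj, k):
    order = bfs_order(adj, next(iter(adj)))
    size = -(-len(order) // k)              # ceil: balanced contiguous pieces
    part = {u: i // size for i, u in enumerate(order)}
    best, improved = cut_size(adj, part), True
    while improved:                         # balance-preserving local swaps
        improved = False
        for u in adj:
            for v in adj:
                if part[u] == part[v]:
                    continue
                part[u], part[v] = part[v], part[u]
                c = cut_size(adj, part)
                if c < best:
                    best, improved = c, True
                else:
                    part[u], part[v] = part[v], part[u]  # undo bad swap
    return part

ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
part = partition(ring, 2)
print(part, "cut =", cut_size(ring, part))
```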
In light of the prevalent trend towards dense HetNets, the conventional coupled user association, where a mobile device uses the same base station (BS) for both uplink and downlink traffic, is being questioned, and the alternative and more general downlink/uplink decoupling paradigm is emerging. We focus on designing an effective user association mechanism for HetNets with downlink/uplink decoupling, a topic that has started to receive more attention. We use a combination of matching theory and stochastic geometry. We model the problem as a matching with contracts game by drawing an analogy with the hospital-doctor matching problem. In our model, we use stochastic geometry to derive a closed-form expression for the matching utility function. Our model captures the different objectives of users in the uplink/downlink directions and also the perspective of BSs. Based on this game model, we present a matching algorithm for decoupled uplink/downlink user association that results in a stable allocation. Simulation results demonstrate that our approach provides close-to-optimal performance and significant gains over alternative approaches for user association in the decoupled context as well as over traditional coupled user association; these gains are a result of the holistic nature of our approach, which accounts for the additional cost associated with decoupling and the inter-dependence between uplink and downlink associations. Our work is also the first in the wireless communications domain to employ the matching with contracts approach.
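The hospital-doctor analogy maps naturally onto a deferred-acceptance sketch: users propose to base stations in order of preference, and each BS tentatively keeps its best proposals up to a quota. The random utilities below are placeholders for the stochastic-geometry-derived matching utilities, and contract terms such as decoupled uplink/downlink associations are omitted.

```python
import random

random.seed(3)
users, bss, quota = [f"u{i}" for i in range(6)], ["bs0", "bs1", "bs2"], 2
util = {(u, b): random.random() for u in users for b in bss}  # placeholder utilities

prefs = {u: sorted(bss, key=lambda b: -util[(u, b)]) for u in users}
held = {b: [] for b in bss}          # tentative matches per BS
free = list(users)

while free:                          # deferred acceptance with capacities
    u = free.pop()
    if not prefs[u]:
        continue                     # u has exhausted its preference list
    b = prefs[u].pop(0)              # u proposes to its next-best BS
    held[b].append(u)
    held[b].sort(key=lambda x: -util[(x, b)])
    if len(held[b]) > quota:
        free.append(held[b].pop())   # BS keeps only its best `quota` users

print(held)                          # a stable user-BS allocation
```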
Secure Data Sharing (SDS) enables users to share data in the cloud in a confidential and integrity-preserving manner. Many recent SDS approaches are based on Attribute-Based Encryption (ABE), leveraging the advantage that ABE can address a multitude of users with only one ciphertext. However, ABE approaches often come with the downside that they require a central fully-trusted entity that is able to decrypt any ciphertext in the system. In this paper, we investigate whether ABE could be used to efficiently implement Decentralized Secure Data Sharing (D-SDS), which explicitly demands that authorization and access control enforcement be carried out solely by the owner of the data, without the help of a fully-trusted third party. For this purpose, we conducted a comprehensive analysis of recent ABE approaches with regard to D-SDS requirements. We found one ABE approach to be suitable, and we show different alternatives for employing this ABE approach in a group-based D-SDS scenario. For a realistic estimation of the resource consumption, we give concrete resource-consumption values for workloads taken from real-world system traces and exemplary up-to-date mobile devices. Our results indicate that for most D-SDS operations, the resulting computation times and outgoing network traffic will be acceptable in many use cases. However, the computation times and outgoing traffic for the management of large groups might prevent the use of mobile devices.
We consider wireless networks in which the effects of interference are determined by the SINR model. We address the question of structuring distributed communication when stations have very limited individual capabilities. In particular, nodes do not know their geographic coordinates, neighborhoods, or even the size n of the network, nor can they sense collisions. Each node is equipped only with its unique name from a range {1, ..., N}. We study the following three settings and distributed algorithms for communication problems in each of them. In the uncoordinated-start case, when one node starts an execution and other nodes are awoken by receiving messages from already awoken nodes, we present a randomized broadcast algorithm which wakes up all the nodes in O(n log^2 N) rounds with high probability. In the synchronized-start case, when all the nodes simultaneously start an execution, we give a randomized algorithm that computes a backbone of the network in O(Δ log^7 N) rounds with high probability. Finally, in the partly-coordinated-start case, when a number of nodes start an execution together and other nodes are awoken by receiving messages from the already awoken nodes, we develop an algorithm that creates a backbone network in time O(n log^2 N + Δ log^7 N) with high probability.
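To give a flavour of the uncoordinated-start broadcast, the toy simulation below has awake nodes retransmit with a fixed probability, and a sleeping node wakes when exactly one of its neighbours transmits in a round. This uses a simplified collision model rather than the paper's SINR model, and the topology and transmission probability are arbitrary choices.

```python
import random

random.seed(4)
# Toy network: path graph of 10 nodes; node 0 starts the broadcast awake.
n, p = 10, 0.3
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}
awake = {0}

rounds = 0
while len(awake) < n:
    rounds += 1
    transmitting = {u for u in awake if random.random() < p}
    for v in range(n):
        if v not in awake:
            # Simplified reception rule: exactly one transmitting neighbour.
            if sum(u in transmitting for u in adj[v]) == 1:
                awake.add(v)

print(f"all nodes awake after {rounds} rounds")
```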
Trust in SSL-based communication on the Internet is provided by Certificate Authorities (CAs) in the form of signed certificates. Checking the validity of a certificate involves three steps: (i) checking its expiration date, (ii) verifying its signature, and (iii) making sure that it is not revoked. Currently, Certificate Revocation checks (i.e. step (iii) above) are done either via Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) servers. Unfortunately, both current approaches tend to incur such a high overhead that several browsers (including almost all mobile ones) choose not to check certificate revocation status, thereby exposing their users to significant security risks. To address this issue, we propose DCSP: a new low-latency approach that provides up-to-date and accurate certificate revocation information. DCSP capitalizes on the existing scalable and high-performance infrastructure of DNS. DCSP minimizes end user latency while, at the same time, requiring only a small number of cryptographic signatures by the CAs. Our design and initial performance results show that DCSP has the potential to perform an order of magnitude faster than the current state-of-the-art alternatives.
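DCSP's use of the DNS could look roughly like the lookup below, which fetches a TXT record keyed by a certificate serial number under a CA-operated zone. The record naming scheme and response format are invented for illustration and are not DCSP's actual design; the sketch uses the third-party dnspython package.

```python
import dns.resolver  # third-party package: dnspython

def revocation_status(serial_hex: str,
                      ca_zone: str = "revocation.example-ca.com") -> str:
    """Query a (hypothetical) per-certificate TXT record, e.g.
    <serial>.revocation.example-ca.com -> "good" | "revoked"."""
    name = f"{serial_hex}.{ca_zone}"
    try:
        answers = dns.resolver.resolve(name, "TXT")
        return answers[0].strings[0].decode()
    except dns.resolver.NXDOMAIN:
        return "unknown"

# Example (would require the CA to actually publish such records):
# print(revocation_status("03a1f2"))
```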