Bibliography
Blockchain, the underlying technology of cryptocurrency networks such as Bitcoin, can be essential to realizing the vision of a decentralized, secure, and open Internet of Things (IoT). There is growing interest among research groups in leveraging blockchains to provide IoT data privacy without a centralized data-access model. This paper proposes a decentralized access model for IoT data, using a network architecture that we call a modular consortium architecture for IoT and blockchains. The proposed architecture facilitates IoT communications on top of a software stack of blockchains and peer-to-peer data storage mechanisms. The architecture is designed with privacy built in and is adaptable to various IoT use cases. To understand the feasibility and deployment considerations of the proposed architecture, we conduct a performance analysis of existing blockchain development platforms, Ethereum and Monax.
This work concerns distributed consensus algorithms and their application to a network intrusion detection system (NIDS) [21]. We consider the problem of defending the system against multiple data falsification (Byzantine) attacks, a vulnerability of distributed peer-to-peer consensus algorithms that has not been widely addressed in practice. We consider both naive (independent) and colluding attackers. We test three defense-strategy implementations: two outlier detection methods and one reputation-based method. We restrict our attention to outlier and reputation-based methods because they are computationally lightweight; we leave out control-theoretic methods, which are likely the most effective, because their computational cost increases rapidly with the number of attackers. We compare the three implementations in terms of computational cost, detection performance, convergence behavior, and possible impact on the intrusion detection accuracy of the NIDS. Tests are performed on simulations of distributed denial-of-service attacks using the NSL-KDD data set.
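As a rough illustration of the outlier-based defenses discussed above (not the paper's exact implementation), the sketch below shows a trimmed-mean consensus update in which each node discards the most extreme neighbor reports before averaging; the topology, trim fraction, and attacker model are illustrative assumptions.

```python
import numpy as np

def trimmed_consensus_step(values, neighbors, trim=1):
    """One average-consensus iteration in which each node drops the `trim`
    highest and `trim` lowest reports it sees (a simple outlier defense)."""
    new_values = np.empty_like(values)
    for i, nbrs in neighbors.items():
        reports = np.sort(np.array([values[j] for j in nbrs] + [values[i]]))
        kept = reports[trim:len(reports) - trim] if len(reports) > 2 * trim else reports
        new_values[i] = kept.mean()
    return new_values

# Toy example: node 3 is a naive falsifying attacker injecting a huge value.
values = np.array([1.0, 1.2, 0.9, 50.0, 1.1])
neighbors = {0: [1, 4], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [0, 3]}
for _ in range(20):
    values = trimmed_consensus_step(values, neighbors, trim=1)
    values[3] = 50.0  # the attacker keeps re-injecting its falsified value
print(values)  # honest estimates stay within the range of the honest initial values
```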
In this paper, we examine a method for sending and receiving covert messages over the Network Time Protocol (NTP). Such a channel is not easily detected, since NTP is present in most environments to synchronize clocks between clients and servers using at least one time server. We also present a proof of concept and investigate the throughput and robustness of this covert channel. The channel uses the 32-bit fraction-of-seconds part of the timestamp to carry the covert message, and the "Peer Clock Precision" field to track messages between sender and receiver.
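The abstract does not give the exact encoding, but the basic idea of hiding payload bits in the 32-bit fraction-of-seconds part of an NTP timestamp can be sketched as follows; the 4-byte chunking, the choice of the transmit timestamp, and using the precision byte as a sequence counter are illustrative assumptions, not the paper's protocol.

```python
import struct, time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and Unix epoch (1970)

def covert_ntp_packets(message: bytes):
    """Split a covert message into 4-byte chunks and hide each chunk in the
    32-bit fraction-of-seconds part of the transmit timestamp of an NTP
    client packet. A sequence number is placed in the precision field
    ("Peer Clock Precision") so the receiver can track and reorder chunks."""
    packets = []
    chunks = [message[i:i + 4].ljust(4, b'\x00') for i in range(0, len(message), 4)]
    for seq, chunk in enumerate(chunks):          # seq must stay <= 127 here
        seconds = int(time.time()) + NTP_EPOCH_OFFSET
        fraction = struct.unpack('!I', chunk)[0]          # covert payload
        header = struct.pack('!BBBb', 0x23, 0, 4, seq)    # LI/VN/Mode, stratum, poll, precision=seq
        body = struct.pack('!II4s', 0, 0, b'\x00' * 4)    # root delay, root dispersion, ref id
        timestamps = struct.pack('!QQQ', 0, 0, 0)         # reference, originate, receive
        transmit = struct.pack('!II', seconds, fraction)  # transmit timestamp carries the chunk
        packets.append(header + body + timestamps + transmit)
    return packets

pkts = covert_ntp_packets(b"secret message")
print(len(pkts), "packets,", len(pkts[0]), "bytes each")  # 4 packets, 48 bytes each
```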
Location-based Services (LBSs) provide valuable features but can also reveal sensitive user information. Decentralized privacy protection removes the need for a so-called anonymizer, but relying on peers is a double-edged sword: adversaries could mislead with fictitious responses or even collude to compromise their peers' privacy. We address exactly this problem: we strengthen the decentralized LBS privacy approach by securing peer-to-peer (P2P) interactions. Our scheme provides precise and timely P2P responses by passing proactively cached Point of Interest (POI) information, reducing exposure to both honest-but-curious LBS servers and peer nodes. It allows P2P responses to be validated with only a very low fraction of queries affected, even if a significant fraction of nodes are compromised. Exposure can be kept very low even if the LBS server or a large set of curious nodes collude with curious identity management entities.
Vehicular Ad-Hoc Networks (VANETs) are formed by vehicles communicating with one another to create a network capable of communication and data exchange. One of the most promising methods for security and trust in vehicular networks is Public Key Infrastructure (PKI). However, current implementations of PKI as a security solution for determining the validity and authenticity of vehicles in a VANET are inefficient due to high delay and computational overhead. In this paper, we investigate the potential of PKI when certificates are predictively and preemptively passed along to roadside units (RSUs) in an effort to lower delay and computational overhead in a dynamic environment. We accomplish this by utilizing fog computing and propose a new protocol to pass certificates along the projected path.
Bitcoin appears to be the most successful cryptocurrency so far, given its growing real-life deployment and popularity. While Bitcoin requires clients to be online to perform transactions, and a certain amount of time to verify them, many real-life scenarios demand offline and immediate payments (e.g., mobile ticketing, vending machines). However, offline payments in Bitcoin raise non-trivial security challenges, as the payee has no means to verify the received coins without access to the Bitcoin network. Moreover, even online immediate payments have been shown to be vulnerable to double-spending attacks. In this paper, we propose the first solution for Bitcoin payments that enables secure payments in offline settings and in scenarios where payments must be accepted immediately. Our approach relies on an offline wallet and deploys several novel security mechanisms to prevent double-spending and to verify coin validity in offline settings. These mechanisms provide probabilistic guarantees that the attack probability stays below a desired threshold. We provide a security and risk analysis, and model security parameters for various adversaries. We further eliminate remaining risks through detection of misbehaving wallets and their revocation. We implemented our solution for mobile Android clients and instantiated the offline wallet using a microSD security card. Our implementation demonstrates that smooth integration on a very prevalent platform (Android) is possible, and that offline and online payments can practically co-exist. We also discuss an alternative deployment approach for the offline wallet that does not leverage secure hardware, but instead relies on a deposit system managed by the Bitcoin network.
In this paper, we propose a new blockchain-based message and revocation accountability system called Blackchain. Combining a distributed ledger with existing mechanisms for security in V2X communication systems, we design a distributed event data recorder (EDR) that satisfies traditional accountability requirements by providing a compressed global state. Unlike previous approaches, our distributed ledger provides an accountable revocation mechanism without requiring trust in a single misbehavior authority, instead allowing a collaborative and transparent decision-making process through Blackchain. This makes Blackchain an attractive alternative to existing revocation solutions in a Security Credential Management System (SCMS), which suffer from the traditional disadvantages of PKIs, notably centralized trust. Our proposal scales through hierarchical consensus: individual vehicles dynamically create clusters, which provide their consensus decisions as input to road-side units (RSUs), which in turn publish their results to the misbehavior authorities. This authority, traditionally a single entity in the SCMS responsible for the integrity of the entire V2X network, becomes a set of authorities that transparently perform revocation, whose result is then published in the global Blackchain state. This state can be used to prevent the issuance of certificates to previously malicious users, and the transparency implied by a global system state also prevents the authorities themselves from misbehaving.
Delay-Tolerant Networks exhibit highly asynchronous connections, often routed over many mobile hops before reaching their intended destination. The Bundle Security Protocol has been standardized, providing properties such as authenticity, integrity, and confidentiality of bundles using traditional public-key cryptography. Other protocols based on identity-based cryptography have been proposed to reduce the key-distribution overhead. However, in both schemes, secret keys are usually valid for several months; thus, a secret key extracted from a compromised node allows decryption of all past communications since its creation. We solve this problem and propose the first forward-secure protocol for Delay-Tolerant Networking. To this end, we apply the puncturable encryption construction designed by Green and Miers, integrate it into the Bundle Security Protocol, and adapt its parameters to different highly asynchronous scenarios. Finally, we provide performance measurements and discuss their impact.
In this paper, we address the problem of peer-grouping employees in an organization to identify security risks. Our motivation for studying peer grouping is its importance for user and entity behavior analytics (UEBA), which is the primary tool for identifying insider threats by detecting anomalies in network traffic. We show that, using the Louvain method of community detection, it is possible to automate peer-group creation with feature-based weight assignments. Depending on the number of employees and their features, we show that it is also possible to give each group a meaningful description. We present three new algorithms: one that allows new employees to be added to already generated peer groups, one that incorporates user feedback, and one that recommends nodes to be reassigned. We use Niara's data to validate our claims. The novelty of our method lies in its robustness, simplicity, scalability, and ease of deployment in a production environment.
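A minimal sketch of the feature-weighted peer-group construction described above, assuming employees become nodes of a graph whose edge weights count shared features; the employee names and features are invented, and the Louvain step uses networkx's built-in implementation (networkx >= 2.8) rather than the paper's pipeline.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

# Hypothetical employee -> feature sets (department, role, site, ...).
employees = {
    "alice": {"dept:finance", "role:analyst", "site:london"},
    "bob":   {"dept:finance", "role:analyst", "site:paris"},
    "carol": {"dept:finance", "role:manager", "site:london"},
    "dave":  {"dept:it", "role:admin", "site:london"},
    "erin":  {"dept:it", "role:admin", "site:paris"},
}

# Feature-based weight assignment: edge weight = number of shared features.
G = nx.Graph()
names = list(employees)
for i, u in enumerate(names):
    for v in names[i + 1:]:
        shared = len(employees[u] & employees[v])
        if shared:
            G.add_edge(u, v, weight=shared)

# Louvain community detection produces the peer groups.
peer_groups = louvain_communities(G, weight="weight", seed=42)
print(peer_groups)  # e.g. [{'alice', 'bob', 'carol'}, {'dave', 'erin'}]
```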
Social awareness and social ties are becoming increasingly prominent with emerging mobile and handheld devices. The social trust degree, which describes the strength of social ties, has drawn much research interest in many fields, including secure cooperative communications. Such a trust degree reflects users' willingness to cooperate, which affects the selection of cooperative users in practical networks. In this paper, we propose a cooperative relay and jamming selection scheme that secures communication based on the social trust degree under a stochastic geometry framework. We analyze the secrecy outage probability (SOP) of the system. To this end, we propose a double Gamma ratio (DGR) approach based on Gamma approximation, from which the SOP is tractably obtained in closed form. Simulation results verify our theoretical findings and validate that the social trust degree has a dramatic influence on the network's secrecy performance.
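For reference, the secrecy outage probability analyzed in such relay-and-jamming schemes is conventionally defined as the probability that the instantaneous secrecy rate falls below a target rate R_s; the closed-form double-Gamma-ratio expression of the paper is not reproduced here.

```latex
% Standard SOP definition for destination SNR \gamma_D, eavesdropper SNR
% \gamma_E, and target secrecy rate R_s.  The DGR approach approximates the
% relevant signal-to-interference ratios by Gamma random variables, so the
% SOP reduces to the tail probability of a ratio of two Gamma variables.
\mathrm{SOP}
  = \Pr\!\left\{ \log_2\!\left(1+\gamma_D\right) - \log_2\!\left(1+\gamma_E\right) < R_s \right\}
  = \Pr\!\left\{ \frac{1+\gamma_D}{1+\gamma_E} < 2^{R_s} \right\}.
```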
Node compromise remains one of the hardest attacks to counter in Wireless Sensor Networks (WSNs). It affects key distribution, which is a building block for securing communications in any network. The weak point of several proposed key distribution schemes in WSNs is their lack of resilience to node compromise: when a node is compromised, all of its key material is revealed, leading to insecure communication links throughout the network. This drawback is more harmful for long-lived WSNs that are deployed in multiple phases, i.e., multi-phase WSNs (MPWSNs). In recent years, many key management schemes have been proposed to ensure security in WSNs. However, these schemes are conceived for single-phase WSNs, and their security degrades over time as an attacker captures nodes. To address this drawback and enhance resilience to node compromise over the whole lifetime of the network, we propose a new key pre-distribution scheme adapted to MPWSNs. Our scheme takes advantage of the resilience improvement of the Q-composite key scheme and adds self-healing, i.e., the ability to decrease the effect of node compromise over time. Self-healing is achieved by pre-distributing fresh keys to each generation. Our evaluation shows that the scheme achieves good key connectivity and high resilience to node compromise compared to existing key management schemes.
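A minimal sketch of the q-composite building block the scheme relies on: each node draws a random key ring from the generation's pool, and two neighbors derive a link key only if they share at least q pre-distributed keys. The per-generation pool refresh, which provides the self-healing, is only indicated by the `generation` argument; pool and ring sizes and the hash choice are illustrative assumptions.

```python
import hashlib, random

def key_pool(generation, pool_size=1000):
    """Fresh key pool for each deployment generation (this refresh is what
    limits the damage of node compromise over time)."""
    return {k: hashlib.sha256(f"gen{generation}-key{k}".encode()).digest()
            for k in range(pool_size)}

def assign_ring(pool, ring_size=50, rng=random):
    """Pre-distribution: each node stores a random subset of key identifiers."""
    return set(rng.sample(sorted(pool), ring_size))

def link_key(pool, ring_a, ring_b, q=2):
    """q-composite key establishment: a pairwise key exists only if the nodes
    share at least q keys; it hashes all shared keys, so an attacker must
    capture every one of them to break the link."""
    shared = sorted(ring_a & ring_b)
    if len(shared) < q:
        return None
    h = hashlib.sha256()
    for k in shared:
        h.update(pool[k])
    return h.hexdigest()

pool = key_pool(generation=3)
a, b = assign_ring(pool), assign_ring(pool)
print(link_key(pool, a, b, q=2))  # None unless the rings overlap in at least 2 keys
```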
Root cause analysis (RCA) is a common and recurring task performed by operators of cellular networks. It is done mainly to keep customers satisfied with the quality of offered services and to maximize return on investment (ROI) by minimizing and, where possible, eliminating the root causes of faults in cellular networks. Currently, the detection and diagnosis of faults or potential faults is still a manual and slow process, often carried out by network experts who manually analyze and correlate various pieces of network data, such as alarms, call traces, configuration management (CM) data, and key performance indicator (KPI) data, in order to arrive at the most probable root cause of a given network fault. In this paper, we propose an automated fault detection and diagnosis solution called adaptive root cause analysis (ARCA). The solution uses measurements and other network data together with Bayesian network theory to perform automated, evidence-based RCA. Compared to current common practice, our solution is faster due to automation of the entire RCA process, and cheaper because it requires fewer or no personnel to operate; it also improves efficiency through domain-knowledge reuse during adaptive learning. Because it uses a probabilistic Bayesian classifier, it can work with incomplete data and handle large datasets with complex probability combinations. Experimental results on stratified synthesized data validate the feasibility of using such a solution as a key part of self-healing (SH), especially in emerging self-organizing network (SON) based solutions in LTE-Advanced (LTE-A) and 5G.
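A minimal sketch of the evidence-based inference step, using a hand-coded naive Bayes classifier over binary KPI/alarm symptoms; the fault classes, symptoms, and probabilities are invented for illustration and are not ARCA's model.

```python
# Hypothetical priors and per-fault symptom likelihoods P(symptom=True | fault).
priors = {"sleeping_cell": 0.2, "interference": 0.3, "congestion": 0.5}
likelihoods = {
    "sleeping_cell": {"drop_rate_high": 0.9, "throughput_low": 0.8, "alarm_raised": 0.7},
    "interference":  {"drop_rate_high": 0.6, "throughput_low": 0.7, "alarm_raised": 0.1},
    "congestion":    {"drop_rate_high": 0.3, "throughput_low": 0.9, "alarm_raised": 0.05},
}

def diagnose(evidence):
    """Posterior over root causes given (possibly incomplete) binary evidence.
    Missing symptoms are simply skipped, which is what lets a Bayesian
    classifier work with incomplete data."""
    posterior = {}
    for fault, prior in priors.items():
        p = prior
        for symptom, observed in evidence.items():
            p_sym = likelihoods[fault][symptom]
            p *= p_sym if observed else (1 - p_sym)
        posterior[fault] = p
    total = sum(posterior.values())
    return {f: p / total for f, p in posterior.items()}

# Evidence from KPIs and alarms; 'alarm_raised' is unknown and therefore omitted.
print(diagnose({"drop_rate_high": True, "throughput_low": True}))
```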
The spatial information network is an important part of the integrated space-terrestrial information network; its bearer services are becoming increasingly complex, and real-time requirements are rising. The structural vulnerability of the spatial information network and its dynamics pose a serious challenge to ensuring reliable and stable data transmission. Software Defined Networking (SDN), as a new network architecture, not only adapts quickly to new services but also makes network reconfiguration more intelligent. In this paper, SDN is used to design the spatial information network architecture, and an SDN-based optimization algorithm for network self-healing is proposed to handle the failure of a switching node. While guaranteeing Quality of Service (QoS) requirements, the algorithm updates the fewest possible links to achieve fast network reconfiguration and recovery. Simulation results show that the proposed algorithm effectively reduces the delay caused by fault recovery.
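A minimal sketch of the controller-side self-healing idea: when a switching node fails, remove it from the topology graph and recompute the affected route, accepting it only if it still meets a delay (QoS) bound. The topology, link delays, and the single-metric QoS check are illustrative assumptions, not the paper's algorithm.

```python
import networkx as nx

# Hypothetical space/ground topology with per-link delay (ms).
G = nx.Graph()
G.add_weighted_edges_from([
    ("sat1", "sat2", 10), ("sat2", "sat3", 12), ("sat1", "sat4", 25),
    ("sat4", "sat3", 15), ("sat2", "gw", 8), ("sat3", "gw", 9),
], weight="delay")

def heal_path(graph, failed_node, src, dst, delay_budget):
    """Recompute a route that avoids the failed switching node and still
    meets the end-to-end delay (QoS) requirement."""
    healed = graph.copy()
    healed.remove_node(failed_node)
    path = nx.shortest_path(healed, src, dst, weight="delay")
    delay = nx.path_weight(healed, path, weight="delay")
    if delay > delay_budget:
        raise ValueError(f"no QoS-compliant path: {delay} ms > {delay_budget} ms")
    return path, delay

# The route sat1 -> sat2 -> gw breaks when sat2 fails; the controller reroutes.
print(heal_path(G, failed_node="sat2", src="sat1", dst="gw", delay_budget=60))
# (['sat1', 'sat4', 'sat3', 'gw'], 49)
```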
A novel optical fiber sensing network is proposed to eliminate the effect of multiple fiber failures. Simulation results show that if the number of breakpoints in each subnet is less than four, the optical routing paths can be reset to avoid those breakpoints by changing the status of the optical switches in the remote nodes.
In multi-robot applications, the maintained and desired network may be destroyed by failed robots. Existing self-healing algorithms only handle the case of a single robot failure; multiple robot failures, however, cause additional challenges such as a disconnected network and conflicts among repair paths. This paper presents a distributed self-healing algorithm based on 2-hop neighbor information to resolve the problems caused by multiple robot failures. Simulations and experiments show that the proposed algorithm restores connectivity of the mobile robot network and improves the synchronization of the network globally, validating its effectiveness in handling multiple robot failures.
NoCs are a well-established research topic, and several implementations have been proposed for self-healing, i.e., the ability of a system to detect faults or failures and fix them through healing or repair. The main problems with current self-healing approaches are area overhead and poor scalability for complex structures, since they are based on redundancy and spare blocks. In addition, a faulty router can isolate its processing element (PE) from the other router nodes, which reduces the overall performance of the system. This paper presents a self-healing router design that prevents the PE's function from being denied and the PE from being isolated from other nodes. In the proposed design, the neighbor routers receive a signal from the faulty router so that they only forward to it data packets whose destination is the faulty router itself. A control unit turns on switches to connect the four input ports to the local port successively, so that incoming packets still reach the PE. The reliability of the proposed technique is studied and compared to a conventional system under different failure rates. The approach is capable of healing 50% of the router, with an area overhead of 14%, which is much lower than other approaches based on redundancy.
Wireless sensor networks (WSNs) are one of the most rapidly developing information technologies and promise to have a variety of applications in Next Generation Networks (NGNs), including the IoT. In this paper, the focus is on developing new methods for efficiently managing such large-scale networks composed of homogeneous wireless sensors/devices in urban environments such as homes, hospitals, stores, and industrial compounds; heterogeneous networks are also considered for comparison with the homogeneous case. The efficiency of these networks depends on several optimization parameters, such as redundancy and the percentages of coverage and energy saved. We tested the algorithm using different sensor densities and different values of the tuning parameters for these optimization objectives. The results show that our proposed algorithm performs better than the greedy algorithm used for comparison. Moreover, networks with more sensors maintain more redundancy and a better percentage of coverage, but waste more energy. The same method can be applied to heterogeneous wireless sensor networks, where devices have different characteristics and the network operates more efficiently.
We present and explore a model of stateless and self-stabilizing distributed computation, inspired by real-world applications such as routing on today's Internet. Processors in our model do not have an internal state, but rather interact by repeatedly mapping incoming messages ("labels") to outgoing messages and output values. While seemingly too restrictive to be of interest, stateless computation encompasses both classical game-theoretic notions of strategic interaction and a broad range of practical applications (e.g., Internet protocols, circuits, diffusion of technologies in social networks). Our main technical contribution is a general impossibility result for stateless self-stabilization in our model, showing that even modest asynchrony (with wait times that are linear in the number of processors) can prevent a stateless protocol from reaching a stable global configuration. Furthermore, we present hardness results for verifying stateless self-stabilization. We also address several aspects of the computational power of stateless protocols. Most significantly, we show that short messages (of length that is logarithmic in the number of processors) yield substantial computational power, even on very poorly connected topologies.
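A minimal sketch of the stateless model described above: each processor is a pure map from the labels on its incoming links to the label on its outgoing link, and a schedule decides which processors are activated when; stabilization means the global label configuration stops changing. The ring topology and the max-based reaction function are illustrative assumptions, and note that this particular example converges under the fair schedule shown, whereas the paper's impossibility result concerns adversarial asynchronous schedules.

```python
# Stateless processors on a directed ring: a processor's reaction maps the
# labels on its incoming links to the label it places on its outgoing link;
# no internal state is kept between activations.

def reaction(incoming_labels):
    """Pure, stateless mapping from incoming labels to the outgoing label."""
    return max(incoming_labels)

def run(n, schedule, initial):
    # Configuration: label currently carried by each directed ring edge (i -> i+1).
    edges = {(i, (i + 1) % n): initial[i] for i in range(n)}
    for activated in schedule:                 # activation order chosen by the schedule
        for v in activated:
            in_labels = [lab for (u, w), lab in edges.items() if w == v]
            edges[(v, (v + 1) % n)] = reaction(in_labels)
    return edges

initial = [3, 1, 4, 1, 5]
schedule = [range(5)] * 6                      # a fair schedule: everyone activated repeatedly
print(run(5, schedule, initial))               # stabilizes with every edge carrying label 5
```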
In this paper we explore the opportunities, challenges and best practices around designing technologies for those affected by self-harm. Our work contributes to a growing HCI literature on mental health and wellbeing, as well as understandings of how to imbue appropriate value-sensitivity within the digital design process in these contexts. The first phase of our study was centred upon a hackathon during which teams of designers were asked to conceptualise and prototype digital products or services for those affected by self-harm. We discuss how value-sensitive actions and activities, including engagements with those with lived experiences of self-harm, were used to scaffold the conventional hackathon format in such a challenging context. Our approach was then extended through a series of critical engagements with clinicians and charity workers who provided appraisal of the prototypes and designs. Through analysis of these engagements we expose a number of design challenges for future HCI work that considers self-harm; moreover we offer insight into the role of stakeholder critiques in extending and rethinking hackathons as a design method in sensitive contexts.
The restoration of power distribution systems plays a crucial role in the electric utility environment, given both the pressure on operators who must choose the corrective actions to follow in emergency restoration plans and the goals imposed by regulatory agencies. In this sense, decision-aiding systems and self-healing networks are good alternatives, since they either perform an automated analysis of the situation, providing consistent and high-quality restoration plans, or directly perform the restoration quickly and automatically, in both cases reducing the impact of network disturbances. This work proposes a new restoration strategy that is novel in that it deals with the problem from the operator's viewpoint, without the simplifications used in most of the literature. In this proposal, a permutation-based genetic algorithm is employed to restore the maximum amount of load, in real time, without depending on a priori knowledge of the fault location. To validate the proposed methodology, two large real systems were tested: one with 2 substations, 5 feeders, 703 buses, and 132 switches, and the other with 3 substations, 7 feeders, 21,633 buses, and 2,808 switches. These networks were tested under both single and multiple failures. The results were obtained with very low processing times (on the order of ten seconds), while compliance with all operational requirements was ensured.
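A compact sketch of a permutation-based GA for switch restoration, where an individual is an ordering of candidate switches and the fitness rewards restored load under a capacity constraint; the network model, genetic operators, and fitness are deliberately simplified and are not the paper's formulation.

```python
import random

# Hypothetical candidate switches: closing one restores some load (kW) but
# consumes feeder capacity; switches are closed greedily in the order given
# by the permutation until the spare capacity runs out.
switch_load = {"S1": 120, "S2": 300, "S3": 80, "S4": 200, "S5": 150}
CAPACITY = 500

def fitness(perm):
    restored, used = 0, 0
    for s in perm:                         # decode the permutation greedily
        if used + switch_load[s] <= CAPACITY:
            restored += switch_load[s]
            used += switch_load[s]
    return restored

def crossover(p1, p2):                     # order crossover keeps valid permutations
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [s for s in p2 if s not in child]
    return [rest.pop(0) if g is None else g for g in child]

def mutate(perm):                          # swap mutation also preserves validity
    i, j = random.sample(range(len(perm)), 2)
    perm[i], perm[j] = perm[j], perm[i]
    return perm

pop = [random.sample(list(switch_load), len(switch_load)) for _ in range(30)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    pop = elite + [mutate(crossover(*random.sample(elite, 2))) for _ in range(20)]
best = max(pop, key=fitness)
print(best, fitness(best))                 # best restoration order found and restored load
```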
Data provenance describes how data came to be in its present form. It includes the data sources and the transformations that have been applied to them. Data provenance has many uses, from forensics and security to aiding the reproducibility of scientific experiments. We present CamFlow, a whole-system provenance capture mechanism that integrates easily into a PaaS offering. While several prior whole-system provenance systems have captured a comprehensive, systemic, and ubiquitous record of a system's behavior, none have been widely adopted: they either A) impose too much overhead, B) are designed for long-outdated kernel releases and are hard to port to current systems, C) generate too much data, or D) are designed for a single system. CamFlow addresses these shortcomings by: 1) leveraging the latest kernel design advances to achieve efficiency; 2) using a self-contained, easily maintainable implementation relying on a Linux Security Module, NetFilter, and other existing kernel facilities; 3) providing a mechanism to tailor the captured provenance data to the needs of the application; and 4) making it easy to integrate provenance across distributed systems. The provenance we capture is streamed to and consumed by tenant-built auditor applications. We illustrate the usability of our implementation by describing three such applications: demonstrating compliance with data regulations, performing fault/intrusion detection, and implementing data loss prevention. We also show how CamFlow can be leveraged to capture meaningful provenance without modifying existing applications.
The frequency of the power distribution networks in a power grid is called the electrical network frequency (ENF). Because it reflects the spatio-temporal changes of the power grid at a particular location, the ENF is used in many application domains, including prediction of grid instability and blackouts, detection of system breakup, and even digital forensics. To build high-performing applications and systems, it is necessary to capture a large-scale nationwide or worldwide ENF map. Consequently, many studies have distributed specialized physical devices that capture ENF signals. However, this approach is not practical: it requires significant effort from design to setup, it is difficult to monitor and reliably maintain collection equipment distributed throughout the world, and it requires a significant budget. In this paper, we propose a novel approach to constructing a worldwide ENF map by analyzing streaming data obtained from online multimedia services, such as "Youtube", "Earthcam", and "Ustream", instead of expensive specialized hardware. Extracting accurate ENF from streaming data is not straightforward, because multimedia has its own noise and uncertainty; by applying several signal processing techniques, we reduce this noise and uncertainty and improve the quality of the restored ENF. For evaluation, we compared the ENF signals restored by our approach with those collected by a frequency disturbance recorder (FDR) from FNET/GridEye. The experimental results show that our approach outperforms the conventional one in stable acquisition and management of ENF signals.
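A minimal sketch of one common ENF-extraction step: short-time Fourier analysis of a (hypothetically downloaded) audio stream, picking the dominant frequency in a narrow band around the nominal mains frequency for each window. The file name, the 60 Hz nominal, and the window sizes are illustrative assumptions, and the paper's additional denoising steps are not reproduced.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft

NOMINAL_HZ = 60.0   # 50 Hz in most of Europe/Asia; 60 Hz in the Americas
BAND_HZ = 1.0       # search band around the nominal mains frequency

# 'stream_audio.wav' stands in for audio demuxed from an online stream.
rate, audio = wavfile.read("stream_audio.wav")
if audio.ndim > 1:
    audio = audio.mean(axis=1)                 # mix to mono

# STFT with ~8 s windows gives roughly 0.1 Hz resolution around the mains hum.
f, t, Z = stft(audio, fs=rate, nperseg=8 * rate, noverlap=4 * rate)
band = (f >= NOMINAL_HZ - BAND_HZ) & (f <= NOMINAL_HZ + BAND_HZ)
enf = f[band][np.argmax(np.abs(Z[band, :]), axis=0)]   # dominant bin per window

print(np.column_stack((t, enf))[:5])           # time (s) vs. estimated ENF (Hz)
```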