Biblio
This paper considers the problem of system-level fault diagnosis in highly dynamic networks. Existing fault diagnosis models deal mainly with static faults and have limited ability to handle dynamic networks: they rely on timers with a simple timeout mechanism to determine node status, and they often make simplistic assumptions about the system implementation. To overcome these problems, we propose a time-free comparison-based diagnosis model. Unlike traditional models, the proposed model does not rely on timers and is better suited to dynamic network environments. We also develop a novel comparison-based fault diagnosis protocol for identifying and diagnosing dynamic faults. We analyze the protocol's performance and prove its correctness.
In this paper, we study information leakage by control planes of Software Defined Networks. We find that the response time of an OpenFlow control plane depends on its workload, and we develop an inference attack that an adversary with control of a single host could use to learn about network configurations without needing to compromise any network infrastructure (i.e., switches or controller servers). We also demonstrate that our inference attack works on real OpenFlow hardware. To our knowledge, no previous work has evaluated OpenFlow inference attacks outside of simulation.
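A minimal sketch of the timing side channel behind such an inference attack (illustrative only: the target address, port, and threshold are hypothetical, and this is not the paper's methodology). Under reactive OpenFlow rule installation, the first packet of a new flow misses in the switch's flow table and incurs a controller round-trip, so an adversary can compare first-probe latency against subsequent probes:

```python
# Hedged sketch: infer controller involvement from flow-setup latency.
import socket
import statistics
import time

def probe_rtt(host, port, timeout=1.0):
    """Time a TCP connection attempt. The first packet of a new flow may
    incur a controller round-trip (flow-table miss) under reactive OpenFlow."""
    start = time.perf_counter()
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
    except OSError:
        pass  # refused/filtered probes still carry timing information
    return time.perf_counter() - start

def classify(host, port, trials=20, gap_ms=1.5):
    """Compare the first probe (cold flow table) with repeated probes
    (rules likely installed). A large gap suggests reactive installation."""
    first = probe_rtt(host, port)
    rest = [probe_rtt(host, port) for _ in range(trials)]
    gap = (first - statistics.median(rest)) * 1000
    return "reactive (controller consulted)" if gap > gap_ms else "proactive/cached"

print(classify("10.0.0.2", 80))  # hypothetical in-network target
```

Note that if installed rules match full five-tuples, each probe's fresh source port causes another table miss and the gap shrinks, so this sketch is only a starting point.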
Cloud platforms can leverage Trusted Platform Modules to help provide assurance to clients that cloud-based web services are trustworthy and behave as expected. We discuss a variety of approaches to providing this assurance, and we implement one approach based on the concept of a trustworthy certificate authority. TaoCA, our prototype implementation, links cryptographic attestations from a cloud platform, including a Trusted Platform Module, with existing TLS-based authentication mechanisms. TaoCA is designed to enable certificate authorities, browser vendors, system administrators, and end users to define and enforce a range of trust policies for web services. Evaluation of the prototype implementation demonstrates the feasibility of the design, illustrates performance tradeoffs, and serves as an end-to-end, proof-of-concept evaluation of underlying trustworthy computing abstractions. The proposed approach can be deployed incrementally and provides new benefits while retaining compatibility with the existing public key infrastructure used for TLS.
We present the design and implementation of a trust-on-first-use (TOFU) policy for OpenPGP. When an OpenPGP user verifies a signature, TOFU checks that the signer used the same key as in the past. If not, this is a strong indicator that the key is a forgery and that either the message is also a forgery or an active man-in-the-middle (MitM) attack is or was underway. That is, TOFU can proactively detect new attacks if the user had previously verified a message from the signer, and it can reactively detect an attack if the signer gets a message through. TOFU cannot, however, protect against sustained MitM attacks. Despite this weakness, TOFU's practical security is stronger than that of the Web of Trust (WoT), OpenPGP's current trust policy, for most users. The problem with the WoT is that it requires too much user support. TOFU is also better than the most popular alternative, an X.509-based PKI, which relies on central servers whose certification processes are often sloppy. In this paper, we outline how TOFU can be integrated into OpenPGP, address a number of potential attacks against TOFU, and show how TOFU can work alongside the WoT. Our implementation demonstrates the practicality of the approach.
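The core TOFU check is simple to sketch (illustrative only, not GnuPG's actual implementation; the store location, identifiers, and return values are assumptions):

```python
# Minimal trust-on-first-use key store: bind each signer to the first key
# seen, and flag any later mismatch.
import json
import os

STORE = os.path.expanduser("~/.tofu-bindings.json")  # hypothetical store path

def _load():
    if os.path.exists(STORE):
        with open(STORE) as f:
            return json.load(f)
    return {}

def check_signer(signer_id, key_fingerprint):
    """Record signer -> key on first sight; a later mismatch is a strong
    indicator of a forged key or a man-in-the-middle attack."""
    bindings = _load()
    known = bindings.get(signer_id)
    if known is None:  # first use: trust and record the key
        bindings[signer_id] = key_fingerprint
        with open(STORE, "w") as f:
            json.dump(bindings, f)
        return "new binding"
    return "match" if known == key_fingerprint else "CONFLICT"

print(check_signer("alice@example.org", "A1B2C3D4"))
```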
We propose a novel, scalable, and principled graph sketching technique based on minwise hashing of local neighborhoods. For an n-node graph with e edges (e >> n), we incrementally maintain in real time a minwise neighbor-sampled subgraph using k hash functions in O(n x k) memory, with the limit user-configurable via the parameter k. Symmetrization and similarity-based techniques can recover a significant portion of the original graph from these data structures. We present a theoretical analysis of the minwise sampling strategy and derive unbiased estimators for important graph properties such as triangle count and neighborhood overlap. We perform an extensive empirical evaluation of our graph sketch and its derivatives on a wide variety of real-world graph datasets drawn from different application domains, using important large-scale network analysis algorithms: local and global clustering coefficient, PageRank, and local graph sparsification. With bounded memory, the quality of results using the sketch representation is competitive with baselines that use the full graph, and the computational performance is often better. Our framework is flexible and configurable enough to be leveraged by numerous other graph analytics algorithms, potentially reducing the information mining time on large streamed graphs for a variety of applications.
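The core data structure can be sketched compactly (a hedged illustration under assumed details: integer node ids, a linear hash family, and an undirected edge stream; the paper's estimators and symmetrization step are not reproduced):

```python
# Minwise neighbor sampling over an edge stream: per node, keep for each of
# k hash functions the neighbor with the smallest hash value, O(n x k) memory.
import random

class MinwiseGraphSketch:
    def __init__(self, k, seed=0):
        rng = random.Random(seed)
        self.k = k
        self.p = (1 << 61) - 1  # large prime for the (illustrative) hash family
        self.coeffs = [(rng.randrange(1, self.p), rng.randrange(self.p))
                       for _ in range(k)]
        self.sketch = {}  # node -> list of (min_hash, neighbor) slots

    def _h(self, i, v):
        a, b = self.coeffs[i]
        return (a * v + b) % self.p

    def add_edge(self, u, v):
        for x, y in ((u, v), (v, u)):  # undirected: update both endpoints
            slots = self.sketch.setdefault(x, [(float("inf"), None)] * self.k)
            for i in range(self.k):
                hv = self._h(i, y)
                if hv < slots[i][0]:
                    slots[i] = (hv, y)  # y becomes the min-hash neighbor

    def sampled_neighbors(self, u):
        return {n for _, n in self.sketch.get(u, []) if n is not None}

sk = MinwiseGraphSketch(k=4)
for e in [(1, 2), (1, 3), (2, 3), (3, 4)]:
    sk.add_edge(*e)
print(sk.sampled_neighbors(3))
```

A standard minwise-hashing property then yields a neighborhood-overlap estimator: the fraction of hash indices on which two nodes' minimum hash values agree is an unbiased estimate of the Jaccard similarity of their neighbor sets.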
Securely pairing wearables with another device is the key to many promising applications, such as mobile payment, sensitive data transfer, and secure interactions with smart home devices. This paper presents Touch-And-Guard (TAG), a system that uses hand touch as an intuitive way to establish a secure connection between a wristband wearable and the touched device. It generates secret bits from hand resonant properties, which are obtained using accelerometers and vibration motors. The extracted secret bits are used by both sides to authenticate each other and then communicate confidentially. The ubiquity of accelerometers and motors presents an immediate market for our system. We demonstrate the feasibility of our system using an experimental prototype and conduct experiments involving 12 participants with 1440 trials. The results indicate that we can generate secret bits at a rate of 7.84 bit/s, which is 58% faster than conventional text-input PIN authentication. We also show that our system is resistant to acoustic eavesdroppers in proximity.
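The bit-extraction step can be illustrated roughly as follows (a sketch under assumed parameters: the sampling rate, resonance band, and window size are invented here, and this is not TAG's exact pipeline). Both sides observe correlated vibration through the touching hand, so their bit strings largely agree, with residual mismatches removed by reconciliation before use:

```python
# Illustrative quantization of hand-resonance energy into secret bits.
import numpy as np

def extract_bits(accel, fs=1000, band=(20, 80), win=0.25):
    """Per window, measure spectral energy inside the (assumed) resonance
    band and emit 1 if it exceeds the median window energy, else 0."""
    n = int(fs * win)
    windows = [accel[i:i + n] for i in range(0, len(accel) - n + 1, n)]
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    energy = np.array([np.sum(np.abs(np.fft.rfft(w))[mask] ** 2)
                       for w in windows])
    return (energy > np.median(energy)).astype(int)

rng = np.random.default_rng(0)
samples = rng.normal(size=4000)  # stand-in for real accelerometer samples
print(extract_bits(samples)[:16])
```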
Software quality assessment models are either too broad, such as CMMI-DEV and SPICE, which cover the full software development life cycle (SDLC), or too narrow, such as TMMI and TPI, which focus on testing. Quality management, a main concern within the software industry, is broader than the concept of testing. The V-Model takes a broader view with the concepts of verification and validation. Quality assurance (QA) is a broader term still, since it includes the quality of processes, and configuration audits add yet more scope. In parallel, some less visible dimensions of quality, such as the business alignment of QA efforts, are rarely addressed in traditional models. This paper compares the commonly accepted models related to software quality management and proposes a model that fills a gap in this area. The paper also analyzes the concepts of maturity and capability levels and proposes adaptations for quality management assessment.
Embedded systems must address a multitude of potentially conflicting design constraints such as resiliency, energy, heat, cost, performance, security, etc., all in the face of highly dynamic operational behaviors and environmental conditions. By incorporating elements of intelligence, the hope is that the resulting “smart” embedded systems will function correctly and within desired constraints in spite of highly dynamic changes in the applications and the environment, as well as in the underlying software/hardware platforms. Since terms related to “smartness” (e.g., self-awareness, self-adaptivity, and autonomy) have been used loosely in many software and hardware computing contexts, we first present a taxonomy of “self-x” terms and use this taxonomy to relate major “smart” software and hardware computing efforts. A major attribute for smart embedded systems is the notion of self-awareness that enables an embedded system to monitor its own state and behavior, as well as the external environment, so as to adapt intelligently. Toward this end, we use a System-on-Chip perspective to show how the CyberPhysical System-on-Chip (CPSoC) exemplar platform achieves self-awareness through a combination of cross-layer sensing, actuation, self-aware adaptations, and online learning. We conclude with some thoughts on open challenges and research directions.
The Internet has grown into the largest and most important communication network, serving as social, industrial, and public infrastructure, since its invention in the late 1960s. Viewed in historical retrospect, the Internet architecture has evolved repeatedly in response to technical challenges; for instance, in the early 1990s the Internet faced a scalability crisis, which it soon overcame by adopting emerging techniques such as CIDR, NAT, and IPv6. This paper emphasizes scalability as the central technical challenge, forecasting that the Internet of Things era has arrived. First, we describe identifier/locator separation, a scheme that can drive dramatic architectural evolution from this historical perspective. We then review various identifier/locator separation schemes, since the approach has recently become a major design pillar for the future Internet architecture, in both clean-slate and evolutionary designs. Lastly, we present an analysis, summarized in a comparison table, for a future Internet of Everything in which the number of connected devices is expected to grow to more than 20 billion by 2020.
This paper proposes a taxonomy of autonomous vehicle handover situations, with a particular emphasis on situational awareness. It focuses on a number of research challenges, such as legal responsibility, the situational awareness levels of the driver and the vehicle, and the knowledge the vehicle must have of the driver's driving skills as well as of the in-vehicle context. The taxonomy acts as a starting point for researchers and practitioners to frame the discussion of this complex problem.
In the near future, billions of new smart devices will connect to the Internet of Things, playing a key role in our daily life. Bringing IPv6 to low-power, resource-constrained devices has led research to focus on novel approaches that improve the efficiency, security, and performance of the 6LoWPAN adaptation layer. This work-in-progress paper proposes a hardware-based Network Packet Filter (NPF) and an IPv6 link-local address calculator; the filter discards irrelevant received IPv6 packets, offering nearly 18% overhead reduction. The goal is a System-on-Chip implementation that can be deployed in future IEEE 802.15.4 radio modules.
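The link-local address calculation itself is standard (RFC 4291 EUI-64 derivation); the sketch below is a software model of that logic, whereas the paper implements it in hardware. The 48-bit MAC case is shown; IEEE 802.15.4 nodes typically carry 64-bit EUI-64 identifiers already, in which case only the universal/local bit flip applies:

```python
# IPv6 link-local address from a 48-bit MAC: flip the U/L bit, insert FF:FE
# in the middle (RFC 4291), then prefix with fe80::/64.
import ipaddress

def link_local_from_mac(mac: str) -> ipaddress.IPv6Address:
    octets = bytearray(int(b, 16) for b in mac.split(":"))
    octets[0] ^= 0x02  # flip the universal/local bit
    eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
    return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64)

print(link_local_from_mac("00:1A:2B:3C:4D:5E"))  # fe80::21a:2bff:fe3c:4d5e
```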
There are currently no requirements, technical or otherwise, that routing paths be contained within national boundaries. Indeed, some paths experience international detours: they originate in one country, cross international boundaries, and return to the same country. In most cases these are sensible traffic-engineering or peering decisions at ISPs that serve multiple countries; in some cases such detours may be suspicious. Characterizing international detours is useful to a number of players: (a) network engineers trying to diagnose persistent problems, (b) policy makers aiming to adhere to certain national communication policies, (c) entrepreneurs looking for opportunities to deploy new networks, and (d) privacy-conscious states trying to minimize the amount of internal communication traversing different jurisdictions. In this paper we characterize international detours in the Internet during the month of January 2016. To detect detours we sample BGP RIBs every 8 hours from 461 RouteViews and RIPE RIS peers spanning 30 countries. We geolocate each AS, and thus each BGP prefix it announces, by mapping the AS's presence at IXPs and the locations of its infrastructure IPs. Finally, we analyze each global BGP RIB entry looking for detours. Our analysis shows that more than 5K unique BGP prefixes experienced a detour, with 132 prefixes accounting for more than 50% of the roughly 544K detours we observe. Detours either last for a few days or persist for the entire month: more than 90% were transient detours lasting 72 hours or less. We also show that different countries experience detours with different characteristics.
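The detour test itself reduces to a simple pattern over the per-AS country sequence of a path (a hedged sketch: the ASNs and geolocation mapping are hypothetical, and the paper's handling of multi-country ASes is not modeled):

```python
# Flag an AS path as a detour if its country sequence leaves the origin
# country and later returns to it.
def is_detour(as_path, as_country):
    """as_path: ASNs from origin outward; as_country: ASN -> country code."""
    countries = [as_country[a] for a in as_path if a in as_country]
    if not countries:
        return False
    origin, left_home = countries[0], False
    for c in countries:
        if c != origin:
            left_home = True
        elif left_home:  # back in the origin country after leaving it
            return True
    return False

as_country = {65001: "US", 65002: "CA", 65003: "US"}  # hypothetical ASNs
print(is_detour([65001, 65002, 65003], as_country))   # True: US -> CA -> US
```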
Traditionally, network and system configurations are static. Attackers thus have plenty of time to exploit a system's vulnerabilities and can choose when to launch attacks so as to maximize damage. An unpredictable system configuration can significantly raise the bar for attackers. In recent years, moving target defense (MTD) has been advocated for this purpose. An MTD mechanism introduces dynamics into the system by changing its configuration continuously over time; we call these changes adaptations. Though promising, dynamic reconfiguration imposes overhead on the applications currently running in the system, so it is critical to determine the right time to adapt, balancing the overhead incurred against the security level guaranteed. This is known as the MTD timing problem, and little prior work has investigated when adaptations should be made. In this paper, we take the first step toward studying the timing problem in moving target defenses both theoretically and experimentally. For a broad family of attacks, including DDoS attacks and cloud covert-channel attacks, we model the problem as a renewal reward process and propose an optimal algorithm for deciding when to adapt, with the objective of minimizing the long-term cost rate. In our experiments, both DDoS attacks and cloud covert-channel attacks are studied. Simulations based on real network traffic traces demonstrate that our proposed algorithm outperforms known adaptation schemes.
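The flavor of the timing analysis can be conveyed with a toy renewal-reward model (the exponential attack-success time, per-unit damage, and fixed adaptation cost are assumptions for illustration; the paper's model and optimal algorithm are more general). Each adaptation resets the attacker, so one cycle of length T costs the adaptation cost plus the expected damage accumulated after the attacker succeeds:

```python
# Toy MTD timing model: choose the adaptation period T that minimizes the
# long-term cost rate (adaptation cost + expected attack damage) / T.
import math

def cost_rate(T, c_a=10.0, lam=0.2, d=5.0):
    """c_a: cost per adaptation; the attacker succeeds after an Exp(lam)
    delay and then inflicts damage d per unit time until the next adaptation."""
    # expected damage per cycle: d * integral_0^T P(success by t) dt
    damage = d * (T - (1 - math.exp(-lam * T)) / lam)
    return (c_a + damage) / T

Ts = [0.1 * i for i in range(1, 500)]  # simple grid search over periods
T_opt = min(Ts, key=cost_rate)
print("optimal period:", round(T_opt, 1),
      "cost rate:", round(cost_rate(T_opt), 2))
```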
The safety-critical aspects of cyber-physical systems motivate the need for rigorous analysis of these systems. In the literature this work is often done using idealized models of systems where the analysis can be carried out using high-level reasoning techniques such as Lyapunov functions and model checking. In this paper we present VERIDRONE, a foundational framework for reasoning about cyber-physical systems at all levels from high-level models to C code that implements the system. VERIDRONE is a library within the Coq proof assistant enabling us to build on its foundational implementation, its interactive development environments, and its wealth of libraries capturing interesting theories ranging from real numbers and differential equations to verified compilers and floating point numbers. These features make proof assistants in general, and Coq in particular, a powerful platform for unifying foundational results about safety-critical systems and ensuring interesting properties at all levels of the stack.
Opportunistic Situation Identification (OSI) is a new paradigm for situation-aware systems in which the contexts used for situation identification are sensed through whatever sensors happen to be available, rather than pre-deployed, application-specific ones. OSI extends the scale of application usage and reduces system cost. However, designing and implementing the OSI module of a situation-aware system raises several challenges, including uncertain context availability, vulnerable network connectivity, and privacy threats. This paper proposes a novel middleware framework to tackle these challenges. Its key idea is to perform situation reasoning locally on a smartphone without relying on the cloud, thus reducing dependency on the network and better preserving privacy. To realize this, we propose a hybrid learning approach that maximizes reasoning accuracy under the phone's limited storage, combining two state-of-the-art techniques. Specifically, the paper provides a genetic-algorithm-based optimization approach to determine which pre-computed models to store under the storage constraints. Validation on an open dataset indicates that the proposed approach achieves higher accuracy at comparatively small storage cost, and that the proposed utility function for model selection outperforms three baseline utility functions.
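The selection step is essentially a knapsack problem solved with a genetic algorithm; a hedged sketch follows (model sizes, utilities, the storage budget, and the GA hyperparameters are all invented, and the paper's utility function is not reproduced):

```python
# GA-based selection of pre-computed models under a storage budget: a
# bitstring marks which models to keep; infeasible selections are penalized.
import random

random.seed(1)
models = [(random.uniform(1, 10), random.uniform(0.1, 0.9))  # (size MB, utility)
          for _ in range(30)]
BUDGET = 40.0  # hypothetical storage budget in MB

def fitness(bits):
    size = sum(s for (s, u), b in zip(models, bits) if b)
    util = sum(u for (s, u), b in zip(models, bits) if b)
    return util if size <= BUDGET else -1.0  # penalize over-budget selections

def evolve(pop_size=50, gens=100, mut=0.02):
    pop = [[random.randint(0, 1) for _ in models] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(models))  # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - g if random.random() < mut else g
                             for g in child])       # bit-flip mutation
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("selected models:", [i for i, b in enumerate(best) if b])
```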
In this work, we address the problem of designing and implementing honeypots for Industrial Control Systems (ICS). Honeypots are vulnerable systems that are set up with the intent to be probed and compromised by attackers; analysis of those attacks then allows the defender to learn about novel attacks and the general strategies of attackers. Honeypots for ICS need to satisfy both traditional ICT requirements, such as cost and maintainability, and ICS-specific requirements, such as timing and determinism. We propose the design of a virtual, high-interaction, server-based ICS honeypot that satisfies these requirements, and the deployment of a realistic, cost-effective, and maintainable ICS honeypot. An attacker model is introduced to complete the problem statement and requirements. Based on our design and the MiniCPS framework, we implemented a honeypot mimicking a water treatment testbed. To the best of our knowledge, the presented implementation is the first academic work targeting EtherNet/IP-based ICS honeypots, the first virtual ICS honeypot that is high-interaction without using full virtualization technologies (such as a network of virtual machines), and the first ICS honeypot that can be managed with a Software-Defined Networking (SDN) controller.
Defect-prediction techniques can enhance the quality assurance activities for software systems. For instance, they can be used to predict bugs in source files or functions. In the context of a software product line, such techniques could ideally be used for predicting defects in features or combinations of features, which would allow developers to focus quality assurance on the error-prone ones. In this preliminary case study, we investigate how defect prediction models can be used to identify defective features using machine-learning techniques. We adapt process metrics and evaluate and compare three classifiers using an open-source product line. Our results show that the technique can be effective: our best scenario achieves an accuracy of 73% in predicting features as defective or clean using a Naive Bayes classifier. Based on these results, we discuss directions for future work.
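The classification setup can be sketched in a few lines (illustrative: the metric values and labels below are made up, and the paper's exact process metrics and evaluation protocol are not reproduced):

```python
# Feature-level defect prediction with Naive Bayes (scikit-learn).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

# rows = features of the product line; columns = assumed process metrics
# (number of changes, number of distinct authors, lines added)
X = np.array([[12, 3, 240], [2, 1, 30], [25, 5, 610], [4, 2, 55],
              [18, 4, 380], [1, 1, 12], [30, 6, 720], [6, 2, 90]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = defective, 0 = clean

scores = cross_val_score(GaussianNB(), X, y, cv=4)  # accuracy per fold
print("mean accuracy:", round(scores.mean(), 2))
```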
Testing a software product line such as Linux implies building the source with different configurations. Manual approaches to generating configurations that enable code of interest are doomed to fail due to the sheer number of variation points distributed over the feature model, the build system, and the source code. Research has proposed various approaches to generate covering configurations, but the algorithms show many drawbacks related to run time, exhaustiveness, and the number of generated configurations. Analyzing the entire Linux source can yield more than 30 thousand configurations, far exceeding the limited budget and resources for build testing. In this paper, we present an approach to fill the gap between the systematic generation of configurations and the necessity of fully building software in order to test it. By merging previously generated configurations, we reduce the number of necessary builds and enable global variability-aware testing. We reduce the problem of merging configurations to finding maximum cliques in a graph. We evaluate the approach on the Linux kernel, compare the results to common practices in industry, and show that our implementation scales even when facing graphs with millions of edges.
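The merging step can be sketched as a clique computation over a compatibility graph (a hedged illustration: the partial configurations are invented, and real feature models add cross-tree constraints that merged configurations must additionally satisfy):

```python
# Configurations are nodes; an edge joins two configurations that assign no
# shared feature conflicting values. Each maximal clique merges into one build.
import networkx as nx

configs = [  # hypothetical partial configurations: feature -> bool
    {"A": True, "B": False},
    {"A": True, "C": True},
    {"B": True, "C": True},
    {"C": True, "D": False},
]

def compatible(c1, c2):
    return all(c1[f] == c2[f] for f in c1.keys() & c2.keys())

G = nx.Graph()
G.add_nodes_from(range(len(configs)))
G.add_edges_from((i, j) for i in range(len(configs))
                 for j in range(i + 1, len(configs))
                 if compatible(configs[i], configs[j]))

for clique in nx.find_cliques(G):  # maximal cliques of mergeable configs
    merged = {}
    for idx in clique:
        merged.update(configs[idx])
    print(sorted(clique), "->", merged)
```

Merging a clique is sound in this toy setting because compatibility is per-feature equality, so pairwise agreement implies a well-defined union assignment.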
Software-Defined Networks (SDNs) promise to overcome the often complex and error-prone operation of traditional computer networks by enabling programmability, automation, and verifiability. Yet SDNs also introduce new challenges, for example due to the asynchronous communication channel between the logically centralized control platform and the switches in the data plane. In particular, the asynchronous communication of network update commands (e.g., OpenFlow FlowMod messages) may lead to transient inconsistencies such as loops or bypassed waypoints (e.g., firewalls). One approach to ensuring transient consistency even in asynchronous environments is to employ smart scheduling algorithms: algorithms that update only a subset of switches in each communication round, where each subset in itself guarantees consistency. In this demo, we show how to change routing policies in a transiently consistent manner. We demonstrate two algorithms, Wayup [5] and Peacock [4], which partition the network updates sent from the SDN controller to OpenFlow software switches into multiple rounds according to the respective algorithm. Barrier messages are then used to ensure reliable network updates.
Smart contracts are programs that execute autonomously on blockchains. Their key envisioned uses (e.g. financial instruments) require them to consume data from outside the blockchain (e.g. stock quotes). Trustworthy data feeds that support a broad range of data requests will thus be critical to smart contract ecosystems. We present an authenticated data feed system called Town Crier (TC). TC acts as a bridge between smart contracts and existing web sites, which are already commonly trusted for non-blockchain applications. It combines a blockchain front end with a trusted hardware back end to scrape HTTPS-enabled websites and serve source-authenticated data to relying smart contracts. TC also supports confidentiality. It enables private data requests with encrypted parameters. Additionally, in a generalization that executes smart-contract logic within TC, the system permits secure use of user credentials to scrape access-controlled online data sources. We describe TC's design principles and architecture and report on an implementation that uses Intel's recently introduced Software Guard Extensions (SGX) to furnish data to the Ethereum smart contract system. We formally model TC and define and prove its basic security properties in the Universal Composability (UC) framework. Our results include definitions and techniques of general interest relating to resource consumption (Ethereum's "gas" fee system) and TCB minimization. We also report on experiments with three example applications. We plan to launch TC soon as an online public service.
The incorporation of security mechanisms to protect spacecraft TT&C and payload links is becoming a constant requirement in many space missions. More advanced mission concepts will allow spacecraft to have higher levels of autonomy, including performing key management operations independently of control centers. This is especially beneficial for missions operating far from Earth. To support such levels of autonomy, key agreement is one approach that allows spacecraft to establish new cryptographic keys as they deem necessary. This work introduces an approach based on a trusted platform module that allows key agreement to be performed with minimal computational effort and few protocol iterations. It also allows for opportunistic control-center reporting while avoiding man-in-the-middle and replay attacks.
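For context, the generic key-agreement step such a scheme builds on looks as follows (an X25519 sketch using the cryptography library; this is not the paper's TPM-based protocol, and the key names are illustrative):

```python
# Ephemeral Diffie-Hellman key agreement followed by key derivation; on a
# spacecraft, the private-key operations would live inside the TPM.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

sc_priv = X25519PrivateKey.generate()  # spacecraft
gs_priv = X25519PrivateKey.generate()  # ground station / peer spacecraft

shared_sc = sc_priv.exchange(gs_priv.public_key())
shared_gs = gs_priv.exchange(sc_priv.public_key())
assert shared_sc == shared_gs          # both sides hold the same secret

# derive a session key for the TT&C link from the shared secret
session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"ttc-link").derive(shared_sc)
print(session_key.hex())
```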
Behavior-based tracking is an unobtrusive technique that allows observers to monitor user activities on the Internet over long periods of time, in spite of changing IP addresses. Previous work has employed supervised classifiers to link the sessions of individual users, but classifiers need labeled training sessions, which are difficult for observers to obtain. In this paper we show how this limitation can be overcome with an unsupervised learning technique. We present a modified k-means algorithm and evaluate it on a realistic dataset that contains the Domain Name System (DNS) queries of 3,862 users. For this purpose, we simulate an observer that tries to track all users and an Internet Service Provider that assigns a different IP address to every user every day. The highest tracking accuracy is achieved within the subgroup of highly active users: almost all sessions of 73% of the users in this subgroup can be linked over a period of 56 days, and 19% of the highly active users can be traced completely, i.e., all their sessions are assigned to a single cluster. This fraction increases to 40% for shorter periods of seven days. As service providers may engage in behavior-based tracking to complement their existing profiling efforts, it constitutes a severe privacy threat for users of online services. Users can defend against behavior-based tracking by changing their IP address frequently, but this is currently cumbersome.
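The linking pipeline can be sketched with off-the-shelf components (illustrative: the sessions are invented, and plain k-means stands in for the paper's modified variant):

```python
# Unsupervised session linking: each session is a bag of queried domains;
# clustering groups sessions so that one cluster ideally equals one user.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

sessions = [  # hypothetical daily DNS query logs, one string per session
    "news.example mail.example chess.example",
    "mail.example chess.example news.example",
    "cooking.example video.example",
    "video.example cooking.example recipes.example",
]

X = TfidfVectorizer(token_pattern=r"\S+").fit_transform(sessions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)  # sessions sharing a label are linked to the same user
```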
The IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL) was recently introduced as the new routing standard for the Internet of Things. Although RPL defines basic security modes, it remains vulnerable to topological attacks which facilitate blackholing, interception, and resource exhaustion. We are concerned with analyzing the corresponding threats and protecting future RPL deployments from such attacks. Our contributions are twofold. First, we analyze the state of the art, in particular the protective scheme VeRA, and present two new rank-order attacks as well as extensions to mitigate them. Second, we derive and evaluate TRAIL, a generic scheme for topology authentication in RPL. TRAIL relies solely on the basic assumptions of RPL: (1) the root node serves as a trust anchor, and (2) each node interconnects to the root in a straight hierarchy. Using proper reachability tests, TRAIL scalably and reliably identifies any topological attacker without strong cryptographic efforts.
With the popularity of cloud computing, database outsourcing has been adopted by many companies. However, database owners may not fully trust their database service providers, so database privacy becomes a key issue in protecting data from those providers. Much research has addressed this issue, but little of it considers simultaneous, transparent support for existing DBMSs (Database Management Systems), applications, and RADTs (Rapid Application Development Tools). We therefore propose a transparent framework, based on an accessing bridge and a mobile app, for protecting database privacy with PKI (Public Key Infrastructure). The framework uses PKI as its security base and encrypts sensitive data with data owners' public keys to protect data privacy. The mobile app controls the private key and decrypts data, so access to sensitive data is completely controlled by the data owner through a secure, independent channel. The accessing bridge uses the standard database-access middleware interface to transparently support existing DBMSs, applications, and RADTs. This paper presents the framework, analyzes its transparency and security, and evaluates its performance via experiments.
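The core protection is public-key encryption of sensitive values before they reach the outsourced database; a minimal sketch with RSA-OAEP follows (the column value and key handling are illustrative, and the accessing-bridge and mobile-app machinery is not modeled):

```python
# Sensitive fields are stored as ciphertext under the owner's public key;
# only the mobile app holding the private key can decrypt them.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

owner_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

salary_ct = owner_key.public_key().encrypt(b"98000", oaep)  # bridge stores this
print(owner_key.decrypt(salary_ct, oaep))                   # app recovers b'98000'
```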