Bibliography
In this paper, we study the sensor placement problem in urban water networks that maximizes the localization of pipe failures when some sensors give incorrect outputs. A false sensor output might result from degradation in the sensor's hardware, a software fault, or a cyber attack on the sensor. Incorrect outputs from such sensors can take arbitrary values, which could lead to inaccurate localization of a failure event. We formulate the optimal sensor placement problem with erroneous sensors as a set multicover problem, which is NP-hard, and then discuss a polynomial-time heuristic to obtain efficient solutions. In this direction, we first examine the physical model of the disturbance propagating in the network as a result of a failure event, and outline the multi-level sensing model that captures several event features. Second, using a combinatorial approach, we solve the problem of sensor placement that maximizes the localization of pipe failures by selecting m sensors, out of which at most e give incorrect outputs. We propose several localization performance metrics, and numerically evaluate our approach on a benchmark and a real water distribution network. Finally, using computational experiments, we study relationships between design parameters such as the total number of sensors, the number of sensors with errors, and the extracted signal features.
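As a concrete illustration of the set multicover formulation, the sketch below implements the textbook greedy heuristic in Python; the paper's own heuristic and its exact coverage requirement (here, each event must be detected `demand` times, e.g. 2e+1 when at most e sensors err) are assumptions made for this example.

```python
# Greedy heuristic for set multicover: repeatedly place the sensor that
# satisfies the most outstanding coverage demand. A minimal sketch, not the
# paper's algorithm.
def greedy_multicover(events, candidate_sensors, demand):
    """events: iterable of event ids.
    candidate_sensors: dict sensor_id -> set of events it detects.
    demand: dict event_id -> required coverage multiplicity."""
    remaining = dict(demand)
    placed = []
    while any(r > 0 for r in remaining.values()):
        best = max(
            (s for s in candidate_sensors if s not in placed),
            key=lambda s: sum(1 for ev in candidate_sensors[s]
                              if remaining.get(ev, 0) > 0),
            default=None,
        )
        if best is None:  # demands cannot be met with the remaining sensors
            break
        placed.append(best)
        for ev in candidate_sensors[best]:
            if remaining.get(ev, 0) > 0:
                remaining[ev] -= 1
    return placed

# Example: three pipe-failure events, each to be covered 3 times (e = 1).
sensors = {"s1": {"e1", "e2"}, "s2": {"e2", "e3"}, "s3": {"e1", "e3"},
           "s4": {"e1", "e2", "e3"}, "s5": {"e2"}}
print(greedy_multicover(["e1", "e2", "e3"], sensors,
                        {e: 3 for e in ["e1", "e2", "e3"]}))
```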
Hardware Malware Detectors (HMDs) have recently been proposed as a defense against the proliferation of malware. These detectors use low-level features that can be collected by the hardware performance monitoring units on modern CPUs to detect malware as a computational anomaly. Several aspects of the detector construction have been explored, leading to detectors with high accuracy. In this paper, we explore the question of how well evasive malware can avoid detection by HMDs. We show that existing HMDs can be effectively reverse-engineered and subsequently evaded, allowing malware to hide from detection without substantially slowing it down (which is important for certain types of malware). This result demonstrates that the current generation of HMDs can be easily defeated by evasive malware. Next, we explore how well a detector can evolve if it is exposed to this evasive malware during training. We show that simple detectors, such as logistic regression, cannot detect the evasive malware even with retraining. More sophisticated detectors can be retrained to detect evasive malware, but the retrained detectors can be reverse-engineered and evaded again. To address these limitations, we propose a new type of Resilient HMDs (RHMDs) that stochastically switch between different detectors. These detectors can be shown to be provably more difficult to reverse-engineer, based on recent results in probably approximately correct (PAC) learnability theory. We show that such detectors are indeed resilient to both reverse engineering and evasion, and that the resilience increases with the number and diversity of the individual detectors. Our results demonstrate that these HMDs offer an effective defense against evasive malware at low additional complexity.
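The following Python sketch illustrates the stochastic-switching idea behind RHMDs; the detector internals below are toy stand-ins, not the paper's feature set or classifiers.

```python
import random

# At each classification window, one of several independently trained
# detectors is chosen at random, so an adversary probing the system
# observes a moving target.
class ResilientHMD:
    def __init__(self, detectors, rng=None):
        self.detectors = detectors        # objects exposing .predict()
        self.rng = rng or random.Random()

    def classify(self, feature_vector):
        detector = self.rng.choice(self.detectors)   # stochastic switch
        return detector.predict(feature_vector)

# Usage with two toy threshold detectors over a single hardware counter.
class ThresholdDetector:
    def __init__(self, threshold):
        self.threshold = threshold
    def predict(self, x):
        return "malware" if x[0] > self.threshold else "benign"

rhmd = ResilientHMD([ThresholdDetector(0.7), ThresholdDetector(0.5)])
print(rhmd.classify([0.6]))   # verdict depends on which detector was drawn
```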
Constraints such as separation-of-duty are widely used to specify requirements that supplement basic authorization policies. However, the existence of constraints (and authorization policies) may mean that a user is unable to fulfill her/his organizational duties because access to resources is denied. In short, there is a tension between the need to protect resources (using policies and constraints) and the availability of resources. Recent work on workflow satisfiability and resiliency in access control asks whether this tension compromises the ability of an organization to achieve its objectives. In this paper, we develop a new method of specifying constraints which subsumes much related work and allows a wider range of constraints to be specified. The use of such constraints leads naturally to a range of questions related to "policy existence", where a positive answer means that an organization's objectives can be realized. We provide an overview of our results establishing that some policy existence questions, notably for those instances that are restricted to user-independent constraints, are fixed-parameter tractable.
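To make the notion of a policy existence question concrete, here is a brute-force Python sketch (not the paper's fixed-parameter algorithm) that asks whether any assignment of users to workflow steps satisfies both the authorization policy and a separation-of-duty constraint, a typical user-independent constraint:

```python
from itertools import product

# Brute-force policy existence: does any user-to-step assignment satisfy
# the authorization policy and all separation-of-duty (SoD) constraints?
def policy_exists(steps, users, authorized, sod_pairs):
    """authorized: dict step -> set of users allowed to perform it.
    sod_pairs: pairs of steps that must be performed by different users."""
    for assignment in product(users, repeat=len(steps)):
        plan = dict(zip(steps, assignment))
        if (all(plan[s] in authorized[s] for s in steps)
                and all(plan[a] != plan[b] for a, b in sod_pairs)):
            return plan          # a witness that the objectives are realizable
    return None                  # no valid plan: policies are too restrictive

# Two users, three steps, one separation-of-duty constraint.
print(policy_exists(
    ["request", "approve", "pay"], ["alice", "bob"],
    {"request": {"alice", "bob"}, "approve": {"bob"}, "pay": {"alice", "bob"}},
    [("request", "approve")],
))
```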
We study the notions of stability and perturbation resilience introduced by Bilu and Linial (2010) and Awasthi, Blum, and Sheffet (2012). A combinatorial optimization problem is α-stable or α-perturbation-resilient if the optimal solution does not change when we perturb all parameters of the problem by a factor of at most α. In this paper, we give improved algorithms for stable instances of various clustering and combinatorial optimization problems. We also prove several hardness results. We first give an exact algorithm for 2-perturbation-resilient instances of clustering problems with natural center-based objectives. The class of clustering problems with natural center-based objectives includes such problems as k-means, k-median, and k-center. Our result improves upon the result of Balcan and Liang (2016), who gave an algorithm for clustering (1+√2 ≈ 2.41)-perturbation-resilient instances. Our result is tight in the sense that no polynomial-time algorithm can solve (2−ε)-perturbation-resilient instances of k-center unless NP = RP, as was shown by Balcan, Haghtalab, and White (2016). We then give an exact algorithm for (2−2/k)-stable instances of Minimum Multiway Cut with k terminals, improving the previous result of Makarychev, Makarychev, and Vijayaraghavan (2014), who gave an algorithm for 4-stable instances. We also give an algorithm for (2−2/k+δ)-weakly stable instances of Minimum Multiway Cut. Finally, we show that there are no robust polynomial-time algorithms for n^(1−ε)-stable instances of Set Cover, Minimum Vertex Cover, and Min 2-Horn Deletion (unless P = NP).
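For reference, the stability notion can be stated as follows (a standard restatement of the Bilu and Linial definition):

```latex
% A standard restatement (after Bilu and Linial 2010; Awasthi, Blum, and
% Sheffet 2012) of the resilience notion used above.
An instance $\mathcal{I}$ of a combinatorial optimization problem with
parameters $(w_e)_e$ is \emph{$\alpha$-perturbation-resilient}
(equivalently, \emph{$\alpha$-stable}) if every perturbed instance
$\mathcal{I}'$ with parameters $(w'_e)_e$ satisfying
\[
  w_e \le w'_e \le \alpha\, w_e \qquad \text{for every parameter } e
\]
has the same optimal solution as $\mathcal{I}$.
```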
Some lattice-based public key cryptosystems allow one to transform ciphertext from one lattice or ring representation to another efficiently, without knowledge of the public and private keys. In this work we explore this lattice transformation property from a cryptographic engineering viewpoint. We apply ciphertext transformation to compress Ring-LWE ciphertexts and to enable efficient decryption on ultra-lightweight implementation targets such as Internet of Things, Smart Card, and RFID applications. Significantly, this can be done without modifying the original encryption procedure or its security parameters. Such flexibility is unique to lattice-based cryptography and may find additional, unique real-life applications. Ciphertext compression can significantly increase the probability of decryption errors. We show that the frequency of such errors can be analyzed, measured, and used to derive precise failure bounds for n-bit error correction. We introduce XECC, a fast multi-error-correcting code that allows constant-time implementation in software. We use these tools to construct and explore TRUNC8, a concrete Ring-LWE encryption and authentication system. We analyze its implementation, security, and performance. We show that our lattice compression technique reduces ciphertext size by more than 40% at an equivalent security level, while also enabling public key cryptography on previously unreachable ultra-lightweight platforms. The experimental public key encryption and authentication system has been implemented on an 8-bit AVR target, where it easily outperforms elliptic curve and RSA-based proposals at a similar security level. Similar results have been obtained with a Cortex M0 implementation. The new decryption code requires only a fraction of the software footprint of previous Ring-LWE implementations with the same encryption parameters, and is well suited for hardware implementation.
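As a rough illustration of the kind of compression involved, the Python sketch below performs standard modulus rounding on ciphertext coefficients; the parameters and the rounding rule are illustrative assumptions, not the actual TRUNC8 scheme.

```python
# Modulus rounding: each coefficient mod q is re-scaled to d bits, so the
# ciphertext shrinks while decryption gains a small, bounded extra noise
# term. Parameters are illustrative, not those of TRUNC8.
q = 12289            # ciphertext modulus (a common Ring-LWE example value)
d = 11               # bits kept per coefficient, down from 14

def compress(c):
    """Map c in Z_q to d bits; rounding error is at most about q / 2^(d+1)."""
    return round(c * (1 << d) / q) % (1 << d)

def decompress(c):
    return round(c * q / (1 << d)) % q

for c in (5, 5000, 12288):
    r = decompress(compress(c))
    print(c, "->", r, "error", min((r - c) % q, (c - r) % q))
```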
This research aims to design a hardware random number generator running on the Wireless Identification and Sensing Platform (WISP), a lightweight Internet of Things device. The accelerometer sensor on the WISP is used as the entropy source. This entropy source is post-processed with de-biasing and extraction methods to provide more uniformly distributed outputs that can be used in authentication protocols between a radio frequency identification (RFID) tag and an RFID reader. The obtained random numbers are evaluated with the well-known NIST randomness test suite, and they pass all the tests in the suite.
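As an example of the de-biasing step, the sketch below applies the classic von Neumann extractor to raw sensor bits; whether the paper uses this particular extractor is an assumption made for illustration.

```python
# Von Neumann de-biasing: pairs "01" -> 0 and "10" -> 1; the correlated
# pairs "00" and "11" are discarded. If input bits are independent with a
# fixed bias, the output bits are unbiased.
def von_neumann(bits):
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# Example: least-significant bits sampled from a (biased) accelerometer axis.
raw = [1, 0, 1, 1, 0, 0, 0, 1, 1, 0]
print(von_neumann(raw))   # -> [1, 0, 1]
```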
Radio Frequency Identification (RFID), one of the key technologies in the sensing layer of the Internet of Things (IoT) framework, has increasingly been deployed in a wide variety of application domains, but its reliability is still a great concern. This article reviews the group management of RFID passwords method proposed by Yuichi Kobayashi and other researchers, which aims to reduce the risk of privacy disclosure. Because the password and pass key in that method, which are set to protect the ID, never change, and the ID is transmitted directly over an insecure channel, the scheme has serious vulnerabilities that a resourceful adversary may exploit. We therefore propose an improved method that uses a random number to encrypt the password, turning the password into temporally valid information. In addition, the protocol encrypts the ID to avoid transmitting it directly, which significantly increases reliability.
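The sketch below illustrates the underlying principle of the improvement, masking the password with a fresh random number so that each transmission is valid only once; the hash-based construction here is an assumption, not the paper's exact protocol.

```python
import hashlib, os

# Instead of sending a fixed password over the insecure channel, the tag
# masks it with a per-session random number, so every transmitted value is
# temporally valid and replays fail.
def reader_challenge():
    return os.urandom(8)                      # fresh random number per query

def tag_response(password: bytes, nonce: bytes):
    return hashlib.sha256(password + nonce).digest()

def reader_verify(stored_password: bytes, nonce: bytes, response: bytes):
    return response == hashlib.sha256(stored_password + nonce).digest()

nonce = reader_challenge()
resp = tag_response(b"group-password", nonce)
print(reader_verify(b"group-password", nonce, resp))   # True; a replayed
                                                       # response fails under
                                                       # a new nonce
```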
On battery-free IoT devices such as passive RFID tags, it is extremely difficult, if not impossible, to run cryptographic algorithms. Hence, physical-layer identification methods have been proposed to validate the authenticity of passive tags. However, no existing physical-layer authentication method for RFID tags can defend against the signal replay attack. This paper presents Hu-Fu, a new direction and the first physical-layer authentication solution that is resilient to the signal replay attack, based on the inductive coupling of two adjacent tags. We present the theoretical model and system workflow. Experiments based on our implementation using commodity devices show that Hu-Fu is effective for physical-layer authentication.
An RFID grouping proof convinces an offline verifier that multiple tags were scanned simultaneously. Various solutions have been proposed, but most of them have security and privacy vulnerabilities. In this paper, we propose an elliptic-curve-based RFID grouping proof protocol that is provably secure and narrow-strong private. We also demonstrate that our grouping proofs can be batch verified to improve efficiency for large-scale RFID systems, and that the protocol is suitable for low-cost RFID tags.
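To illustrate what batch verification buys, the toy Python sketch below batch-verifies Schnorr-style proofs over a small multiplicative group; the paper's protocol uses elliptic curves and its own message format, so everything here is a simplified stand-in.

```python
import hashlib, random

# Toy Schnorr group: p = 23, q = 11, and g = 4 generates the order-q
# subgroup. Batch verification checks all proofs with one aggregate
# relation instead of one check per tag.
p, q, g = 23, 11, 4

def h(*parts):
    return int.from_bytes(hashlib.sha256(repr(parts).encode()).digest(),
                          "big") % q

def sign(x, m):
    k = random.randrange(1, q)
    R = pow(g, k, p)
    return R, (k + h(R, m) * x) % q          # individually: g^s == R * y^e

def batch_verify(pubs, msgs, sigs):
    # Random weights defeat forgeries crafted to cancel in the aggregate.
    ws = [random.randrange(1, q) for _ in sigs]
    lhs = pow(g, sum(w * s for w, (_, s) in zip(ws, sigs)) % q, p)
    rhs = 1
    for w, y, m, (R, _) in zip(ws, pubs, msgs, sigs):
        rhs = rhs * pow(R, w, p) * pow(y, h(R, m) * w % q, p) % p
    return lhs == rhs

keys = [random.randrange(1, q) for _ in range(3)]          # tag secrets
pubs = [pow(g, x, p) for x in keys]
msgs = ["tag1", "tag2", "tag3"]
sigs = [sign(x, m) for x, m in zip(keys, msgs)]
print(batch_verify(pubs, msgs, sigs))                      # True
```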
There are many everyday situations in which users need to enter their user identification (user ID), such as logging in to computer systems and entering secure offices. In such situations, contactless passive IC cards are convenient because users can input their user ID simply by passing the card over a reader. However, these cards cannot be used for successive interactions. To address this issue, we propose AccelTag, a contactless IC card equipped with an acceleration sensor and a liquid crystal display (LCD). AccelTag utilizes high-function RFID technology so that the acceleration sensor and the LCD can be driven by a wireless power supply. With its built-in acceleration sensor, AccelTag can acquire its direction and movement when it is waved over the reader. We demonstrate several applications of AccelTag, such as displaying different types of information on the card depending on the user's requirements.
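A minimal sketch of how wave direction might be inferred from accelerometer samples is shown below; the axes, thresholds, and decision rule are illustrative assumptions, not AccelTag's actual algorithm.

```python
# Infer wave direction from in-plane acceleration samples: the axis with
# the largest mean magnitude dominates, and its sign gives the direction.
def wave_direction(samples):
    """samples: list of (ax, ay) acceleration readings during the wave."""
    mean = [sum(s[i] for s in samples) / len(samples) for i in (0, 1)]
    axis = 0 if abs(mean[0]) >= abs(mean[1]) else 1
    names = {(0, 1): "right", (0, -1): "left", (1, 1): "up", (1, -1): "down"}
    return names[(axis, 1 if mean[axis] > 0 else -1)]

print(wave_direction([(0.8, 0.1), (1.1, -0.2), (0.9, 0.0)]))   # "right"
```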
Physically unclonable functions (PUFs) are an emerging security primitive that offers a lightweight alternative for highly constrained devices like RFIDs. However, PUFs used in authentication protocols suffer from unreliable outputs. This hinders their scaling, which is necessary for increased security, and also makes them problematic to use with cryptographic functions. We introduce a new Dual Arbiter PUF design that reveals additional information about the stability of its outputs. We then employ a novel filtering scheme that discards unreliable outputs with a minimum number of evaluations, greatly reducing the bit error rate (BER) of the PUF.
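The filtering idea can be sketched as follows: responses on which the two arbiters disagree are treated as unreliable and discarded. The evaluation model below is a toy stand-in for the actual circuit.

```python
# Dual-arbiter filtering: a challenge is kept only when both arbiter
# decisions agree; disagreement signals a metastable, unreliable bit.
def filter_responses(challenges, evaluate):
    """evaluate(challenge) -> (bit_a, bit_b), the two arbiter outputs."""
    kept = {}
    for ch in challenges:
        a, b = evaluate(ch)
        if a == b:              # both arbiters agree: stable response
            kept[ch] = a
    return kept

# Toy evaluation: challenge 2 sits near the metastable region and is dropped.
toy = {0: (1, 1), 1: (0, 0), 2: (0, 1), 3: (1, 1)}
print(filter_responses(toy, toy.get))   # {0: 1, 1: 0, 3: 1}
```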
In this paper, a distributed architecture for the implementation of a smart city is proposed to facilitate various smart features, such as solid waste management, efficient urban mobility and public transport, smart parking, robust IT connectivity, and safety and security of citizens, along with a roadmap for achieving them. We explain how massive volumes of IoT data can be analyzed, and describe a layered IoT architecture. We also discuss why data integration is important for analyzing and processing the data collected by different smart devices such as sensors, actuators, and RFIDs. Wireless sensor networks can sense data at various locations, but there has to be more to it than stuffing sensors everywhere for everything; we therefore explain why sensors alone are not sufficient for data collection and how human beings can be used to collect data. A communication protocol among the volunteers engaged in data collection restricts the sharing of data and ensures that the target area is covered with a minimum number of volunteers, each of whom covers a predefined area. The proposed architecture uses one central server to store all data, and both data processing and the processing of user queries take place on this centralized server.
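One plausible reading of the volunteer-coverage requirement is the classic greedy set-cover heuristic, sketched below in Python; the data model and the greedy rule are assumptions made for illustration.

```python
# Greedy set cover: repeatedly assign the volunteer whose predefined area
# covers the most still-uncovered target cells.
def assign_volunteers(target_cells, reachable):
    """reachable: dict volunteer -> set of grid cells they can cover."""
    uncovered, chosen = set(target_cells), []
    while uncovered:
        v = max(reachable, key=lambda v: len(reachable[v] & uncovered))
        if not reachable[v] & uncovered:
            break                       # remaining cells are unreachable
        chosen.append(v)
        uncovered -= reachable[v]
    return chosen

print(assign_volunteers(
    {"c1", "c2", "c3", "c4"},
    {"v1": {"c1", "c2"}, "v2": {"c2", "c3"}, "v3": {"c3", "c4"}},
))   # e.g. ['v1', 'v3']: the whole area covered by two volunteers
```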
The Internet of Things (IoT) is the latest Internet evolution, interconnecting billions of devices such as cameras, sensors, RFIDs, smartphones, wearable devices, and OBD-II dongles. Federations of such IoT devices (or things) provide the information needed to solve many important problems that have been too difficult to harness before. Despite these great benefits, privacy in the IoT remains a great concern, in particular as the number of things increases. This presses the need for highly scalable and computationally efficient mechanisms to prevent unauthorised access and disclosure of sensitive information generated by things. In this paper, we address this need by proposing a lightweight, yet highly scalable, data obfuscation technique. For this purpose, a digital watermarking technique is used to control the perturbation of sensitive data, enabling legitimate users to de-obfuscate the perturbed data. To enhance the scalability of our solution, we also introduce a contextualisation service that achieves real-time aggregation and filtering of IoT data for a large number of designated users. We then assess the effectiveness of the proposed technique in a health-care scenario involving data streamed from various wearable and stationary sensors capturing health data, such as heart rate and blood pressure. The paper concludes with an analysis of experimental results that illustrates the scalability of our technique.
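The obfuscate/de-obfuscate round trip can be illustrated with key-controlled noise, as in the Python sketch below; the paper embeds a digital watermark, so this keyed-noise construction is a simplified stand-in.

```python
import random

# Noise derived from a shared secret is added to each reading; only holders
# of the secret can regenerate and subtract it to recover the data (up to
# floating-point rounding).
def obfuscate(readings, secret, scale=5.0):
    rng = random.Random(secret)
    return [x + rng.uniform(-scale, scale) for x in readings]

def deobfuscate(perturbed, secret, scale=5.0):
    rng = random.Random(secret)
    return [x - rng.uniform(-scale, scale) for x in perturbed]

heart_rate = [72.0, 75.0, 71.0]
noisy = obfuscate(heart_rate, secret="shared-key")
print(noisy)
print(deobfuscate(noisy, secret="shared-key"))   # ~= the original readings
```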
Software Defined Networking (SDN) is an emerging paradigm that changes the way networks are managed by separating the control plane from the data plane and making networks programmable. The separation brings flexibility, automation, and orchestration, and offers savings in both capital and operational expenditure. Despite all its advantages, SDN introduces new threats that did not exist before or were harder to exploit in traditional networks, potentially making network penetration easier. One of the key threats to SDN is the authentication and authorisation of the network applications that control network behaviour (unlike traditional networks, where devices such as routers and switches are autonomous and run proprietary software and protocols to control the network). This paper proposes a mechanism that helps the control layer authenticate network applications and set authorisation permissions that restrict the manipulation of network resources.
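A minimal sketch of such an authentication-and-authorisation layer is shown below; the permission names and API surface are illustrative assumptions, not the paper's mechanism.

```python
# Each network application authenticates to the control layer and receives
# a permission set; every call that manipulates network state is checked
# against it.
class ControllerAuth:
    def __init__(self):
        self.permissions = {}            # app_id -> set of granted actions

    def register(self, app_id, token, granted):
        if self._authenticate(app_id, token):
            self.permissions[app_id] = set(granted)

    def authorize(self, app_id, action):
        return action in self.permissions.get(app_id, set())

    def _authenticate(self, app_id, token):
        return bool(token)               # stand-in for a real credential check

auth = ControllerAuth()
auth.register("lb-app", token="t0k3n",
              granted={"read_topology", "install_flow"})
print(auth.authorize("lb-app", "install_flow"))   # True
print(auth.authorize("lb-app", "modify_acl"))     # False: not granted
```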
NEtwork MObility (NEMO) has recently gained a lot of attention from a number of standardization and research committees. Although the NEMO Basic Support Protocol (NEMO-BSP) seems suitable in the context of Intelligent Transport Systems (ITS), it has several shortcomings, such as packet loss and lack of security, since it is a host-based mobility scheme. Therefore, in order to improve handoff performance and overcome these limitations, schemes adapting Proxy MIPv6 to NEMO have appeared, but the majority do not deal with the handover of Visiting Mobile Nodes (VMNs) located below the Mobile Router (MR). Thus, this paper proposes a Visiting Mobile Node authentication protocol for Proxy MIPv6-based NEtwork MObility which ensures strong authentication between entities. To evaluate the security of our proposal, we used the AVISPA/SPAN software, which confirms that the proposed protocol is a safe scheme.
Mixed-Criticality Systems (MCS) are real-time systems characterized by two or more distinct levels of criticality. In MCS, it is imperative that high-critical flows meet their deadlines, while low-critical flows can tolerate some delays. Sharing resources between flows in a Network-on-Chip (NoC) can lead to unpredictable latencies and consequently complicates the implementation of MCS on many-core architectures. This paper proposes a new virtual channel router designed for MCS deployed over NoCs. The first objective of this router is to reduce the worst-case communication latency of high-critical flows; the second is to improve the network use rate and reduce the communication latency of low-critical flows. The proposed router, called DAS (Double Arbiter and Switching router), jointly uses Wormhole and Store And Forward switching for low- and high-critical flows, respectively. Simulations with a cycle-accurate SystemC NoC simulator show that, at a 15% network use rate, the communication delay of high-critical flows is reduced by 80% while that of low-critical flows increases by 18% compared to usual solutions based on routers with multiple virtual channels.
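The mode selection at the heart of DAS can be sketched as follows; the data structures are toy stand-ins, and the real arbitration involves virtual channel allocation that is omitted here.

```python
# Simplified double-switching rule: high-critical packets use
# store-and-forward (the packet advances only once fully buffered, keeping
# worst-case latency analyzable), while low-critical packets use wormhole
# switching (flits advance as soon as a channel is free).
def can_forward(packet, buffered_flits, channel_free):
    if packet["criticality"] == "high":
        # Store And Forward: the whole packet must be in the local buffer.
        return buffered_flits == packet["length"] and channel_free
    # Wormhole: any buffered flit may advance immediately.
    return buffered_flits > 0 and channel_free

high = {"criticality": "high", "length": 4}
low = {"criticality": "low", "length": 4}
print(can_forward(high, 2, True))   # False: still buffering the packet
print(can_forward(high, 4, True))   # True: full packet stored, forward now
print(can_forward(low, 1, True))    # True: first flit streams ahead
```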
Decoy routing is an emerging approach to censorship circumvention in which circumvention is implemented with help from a number of volunteer Internet autonomous systems, called decoy ASes. Recent studies consider all decoy routing systems, regardless of their specific designs, to be susceptible to a fundamental attack in which the censors re-route traffic around decoy ASes, thereby preventing censored users from using such systems. In this paper, we propose a new architecture for decoy routing that, by design, is significantly more resistant to rerouting attacks than all previous designs. Unlike previous designs, our new architecture operates decoy routers only on the downstream traffic of the censored users; we therefore call it downstream-only decoy routing. As we demonstrate through Internet-scale BGP simulations, downstream-only decoy routing offers significantly stronger resistance to rerouting attacks, intuitively because a (censoring) ISP has much less control over the downstream BGP routes of its traffic. Designing a downstream-only decoy routing system is a challenging engineering problem, since decoy routers do not intercept the upstream traffic of censored users. We design the first downstream-only decoy routing system, called Waterfall, by devising unique covert communication mechanisms. We also use various techniques to make our Waterfall implementation resistant to traffic analysis attacks. We believe that downstream-only decoy routing is a significant step towards making decoy routing systems practical, because a downstream-only decoy routing system can be deployed using a significantly smaller number of volunteer ASes for a given target resistance to rerouting attacks. For instance, we show that a Waterfall implementation with only a single decoy AS is as resistant to rerouting attacks (against China) as a traditional decoy system (e.g., Telex) with 53 decoy ASes.
It is well known that online services resort to various cookies to track users through users' online service identifiers (IDs): when users access online services, various "fingerprints" are left behind in the cyberspace. As they roam around in the physical world while accessing online services via mobile devices, users also leave a series of "footprints", i.e., hints about their physical locations. This poses a potent new threat to user privacy: one can potentially correlate the "fingerprints" left by users in the cyberspace with the "footprints" left in the physical world to infer and reveal private physical world information, such as a user's frequent locations or mobility trajectories; we refer to this problem as user physical world privacy leakage via user cyberspace privacy leakage. In this paper we address the following fundamental question: what kind, and how much, of user physical world privacy might be leaked from such diverse network datasets, even without any physical location information. In order to conduct an in-depth investigation of this question, we utilize network data collected via a DPI system at the routers of one of the largest Internet operators in Shanghai, China, over a duration of one month. We decompose the fundamental question into three problems: i) linkage of various online user IDs belonging to the same person via mobility pattern mining; ii) physical location classification via aggregate user mobility patterns over time; and iii) tracking user physical mobility. By developing novel and effective methods for solving each of these problems, we demonstrate that user physical world privacy leakage via user cyberspace privacy leakage is not hypothetical, but indeed poses a real and potent threat to user privacy.
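As an illustration of problem i), the Python sketch below links two online IDs when their sets of (location, time-slot) sightings are unusually similar; Jaccard similarity with a fixed threshold is a simple stand-in for the paper's mobility-pattern mining.

```python
# Link online IDs whose mobility traces overlap heavily: two IDs seen at
# nearly the same (cell, hour) points likely belong to the same person.
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def link_ids(traces, threshold=0.6):
    """traces: dict online_id -> set of (cell_tower, hour_slot) sightings."""
    ids = list(traces)
    return [(u, v) for i, u in enumerate(ids) for v in ids[i + 1:]
            if jaccard(traces[u], traces[v]) >= threshold]

traces = {
    "weibo:alice": {("cell17", 8), ("cell3", 12), ("cell17", 19)},
    "taobao:u991": {("cell17", 8), ("cell3", 12), ("cell17", 19)},
    "weibo:bob":   {("cell5", 9), ("cell5", 18)},
}
print(link_ids(traces))   # [('weibo:alice', 'taobao:u991')]
```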
Motivated by recent attacks such as the Australian census website meltdown in 2016, this paper proposes a system for high-level specification and synthesis of intents for geo-blocking and IP spoofing protection at a Software Defined Interconnect. In contrast to today's methods, which use expensive custom hardware and/or manual configuration, our solution allows the operator to specify high-level intents that are automatically compiled to flow-level rules and pushed into the interconnect fabric. We define a grammar for specifying the security policies and a compiler for converting these to connectivity rules. We prototype our system on the open-source ONOS controller platform, demonstrate its functionality in a multi-domain SDN fabric interconnecting legacy border routers, and evaluate its performance and scalability in blocking DDoS attacks.
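The intent-to-rules pipeline can be illustrated as follows; the one-line grammar and the GeoIP prefix table are invented for this sketch, and the paper defines its own grammar targeting the ONOS platform.

```python
# Compile a high-level geo-blocking intent into flow-level drop rules.
GEO_PREFIXES = {"CN": ["1.2.3.0/24"], "RU": ["5.6.7.0/24"]}   # toy GeoIP data

def compile_intent(intent):
    """Grammar (illustrative):  'block' <country-code> ['to' <dst-prefix>]"""
    tokens = intent.split()
    assert tokens[0] == "block", "unsupported intent"
    country = tokens[1]
    dst = tokens[3] if len(tokens) > 3 and tokens[2] == "to" else "0.0.0.0/0"
    return [{"match": {"ip_src": p, "ip_dst": dst}, "action": "drop"}
            for p in GEO_PREFIXES[country]]

for rule in compile_intent("block CN to 10.0.0.0/8"):
    print(rule)   # flow rule ready to be pushed to the interconnect fabric
```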