Bibliography
The Internet-of-Things designates the interconnection of a variety of communication-enabled physical objects. IoT systems and devices must operate with deterministic behavior and respect user-defined system goals in any situation. We thus defined hybrid controller synthesis for decentralized and critical IoT systems, relying on a set of rules to handle situations with asynchronous and synchronous event processing. This framework provides a declarative, rule-driven governance mechanism for locally synchronous sub-systems, enabling hybrid control of IoT systems with formal guarantees on the satisfaction of system-wide QoS requirements. To demonstrate the practicality of our framework, we applied it to a critical medical Internet-of-Things use case, showing its usability for safety-critical IoT applications.
As Internet technology develops rapidly, attacks against the Tor network become increasingly frequent, and it is increasingly difficult for Tor to meet users' demands for protecting private information; a method to improve Tor's anonymity is therefore urgently needed. In this paper, we examine the principles of Tor, the largest anonymous communication system in the world, analyze the reasons for its limited efficiency, and discuss the vulnerabilities of link fingerprinting and node selection. We then establish a node recognition model based on SVM, which verifies that traffic characteristics expose node attributes, thereby revealing the link and breaking anonymity. Based on these findings, we propose measures to improve the Tor protocol and make it more anonymous.
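As an illustration of the SVM-based node recognition model described above, the following is a minimal sketch of an SVM classifier trained on flow-level traffic features; the three features and the guard/middle/exit labels are illustrative assumptions, not the paper's actual feature set.

```python
# Minimal sketch of SVM-based node recognition from traffic features.
# The features (e.g., mean packet size, inter-arrival variance, burst
# count) and labels (guard/middle/exit) are illustrative assumptions,
# not the feature set used in the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))        # one row of features per observed flow
y = rng.integers(0, 3, size=300)     # 0=guard, 1=middle, 2=exit (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```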
Assertions are helpful in program analysis, such as software testing and verification. The most challenging parts of automatically recommending assertions are designing the assertion patterns and inserting assertions at proper locations. In this paper, we develop Weak-Assert, a weakness-oriented assertion recommendation toolkit for program analysis of C code. A weakness-oriented assertion is an assertion that can help find potential program weaknesses. Weak-Assert uses well-designed patterns to automatically match the abstract syntax trees of source code. It collects relevant information from the trees and inserts assertions at proper locations in the programs. These assertions can then be checked using program analysis techniques. The experiments are set up on the Juliet test suite and several real-world projects from GitHub. Experimental results show that Weak-Assert helps find 125 program weaknesses in 26 real-world projects. These weaknesses are confirmed manually to be triggered by some test cases.
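To make the pattern-matching idea concrete, here is a small sketch that walks a C abstract syntax tree with pycparser and recommends a guarding assertion for an unguarded division. The pattern and the recommendation format are illustrative; Weak-Assert's actual patterns are more elaborate.

```python
# Sketch of weakness-oriented assertion recommendation over a C AST,
# in the spirit of Weak-Assert (the real tool's patterns differ).
# Requires: pip install pycparser
from pycparser import c_parser, c_ast

SOURCE = """
int scale(int total, int parts) {
    return total / parts;
}
"""

class DivisionVisitor(c_ast.NodeVisitor):
    """Match the pattern 'x / y' and recommend a guarding assertion."""
    def visit_BinaryOp(self, node):
        if node.op in ('/', '%') and isinstance(node.right, c_ast.ID):
            print(f"line {node.coord.line}: recommend "
                  f"assert({node.right.name} != 0); before this division")
        self.generic_visit(node)

ast = c_parser.CParser().parse(SOURCE)
DivisionVisitor().visit(ast)
```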
A key exchange protocol is an important primitive in the field of information and network security and is used to exchange a common secret key among various parties. A number of key exchange protocols exist in the literature, and most of them are based on the Diffie-Hellman (DH) problem. However, these DH-type protocols cannot resist modern computing technologies such as quantum computing and grid computing. Therefore, a more powerful non-DH-type key exchange protocol is required that can resist quantum and exponential attacks. In 2013, Lei and Liao therefore proposed a lattice-based key exchange protocol. Their protocol is related to NTRU-ENCRYPT and NTRU-SIGN and is thus referred to as NTRU-KE. In this paper, we identify that NTRU-KE lacks an authentication mechanism and suffers from a man-in-the-middle (MITM) attack. This attack may allow an adversary to impersonate authenticated users and cause the parties to exchange a wrong key.
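The attack exploits the absence of authentication rather than any lattice-specific structure, so the pattern can be sketched on any unauthenticated key exchange. In the toy sketch below, plain Diffie-Hellman over a small prime stands in for the NTRU-KE exchange; the attack flow, not the algebra, is the point.

```python
# Toy demonstration of a man-in-the-middle attack on an unauthenticated
# key exchange. Plain Diffie-Hellman over a small prime stands in for
# NTRU-KE; parameters are far too small for real use.
import secrets

P, G = 0xFFFFFFFB, 5                 # toy prime 2^32 - 5 and generator

def keypair():
    x = secrets.randbelow(P - 2) + 2
    return x, pow(G, x, P)

a_priv, a_pub = keypair()            # Alice
b_priv, b_pub = keypair()            # Bob
m_priv, m_pub = keypair()            # Mallory, sitting on the wire

# Mallory intercepts both public values and substitutes her own:
key_alice = pow(m_pub, a_priv, P)    # key Alice computes (with Mallory)
key_bob   = pow(m_pub, b_priv, P)    # key Bob computes (with Mallory)
key_mal_a = pow(a_pub, m_priv, P)    # Mallory's key toward Alice
key_mal_b = pow(b_pub, m_priv, P)    # Mallory's key toward Bob

assert key_alice == key_mal_a and key_bob == key_mal_b
print("Mallory shares a key with each party; Alice and Bob never notice.")
```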
Industrial control systems (ICS) are systems used in critical infrastructures for supervisory control, data acquisition, and industrial automation. ICS have complex, component-based architectures with many different hardware, software, and human factors interacting in real time. Despite the importance of security concerns in industrial control systems, there has been no comprehensive study examining common security architectural weaknesses in this domain. This paper therefore presents the first in-depth analysis of 988 vulnerability advisory reports for industrial control systems developed by 277 vendors. We performed a detailed analysis of the vulnerability reports to measure which components of ICS have been affected the most by known vulnerabilities, which security tactics were affected most often in ICS, and what the common architectural security weaknesses in these systems are. Our key findings were: (1) Human-Machine Interfaces, SCADA configurations, and PLCs were the most affected components; (2) 62.86% of vulnerability disclosures in ICS had an architectural root cause; (3) the most common architectural weaknesses were “Improper Input Validation”, followed by “Improper Neutralization of Input During Web Page Generation” and “Improper Authentication”; and (4) most tactic-related vulnerabilities were related to the tactics “Validate Inputs”, “Authenticate Actors” and “Authorize Actors”.
Symbolic methods have been used extensively for proving security of cryptographic protocols in the Dolev-Yao model, and more recently for proving security of cryptographic primitives and constructions in the computational model. However, existing methods for proving security of cryptographic constructions in the computational model often require significant expertise and interaction, or are fairly limited in scope and expressivity. This paper introduces a symbolic approach for proving security of cryptographic constructions based on the Learning With Errors assumption (Regev, STOC 2005). Such constructions are instances of lattice-based cryptography and are extremely important due to their potential role in post-quantum cryptography. Following (Barthe, Grégoire and Schmidt, CCS 2015), our approach combines a computational logic with deducibility problems, a standard tool for representing the adversary's knowledge in the Dolev-Yao model. The computational logic is used to capture (indistinguishability-based) security notions and drive the security proofs, whereas deducibility problems are used as side conditions to check that rules of the logic are applied correctly. We then use AutoLWE, an implementation of the logic, to deliver very short or even automatic proofs of several emblematic constructions, including CPA-PKE (Gentry et al., STOC 2008), (Hierarchical) Identity-Based Encryption (Agrawal et al., Eurocrypt 2010), Inner Product Encryption (Agrawal et al., Asiacrypt 2011), and CCA-PKE (Micciancio et al., Eurocrypt 2012). The main technical novelty beyond AutoLWE is a set of (semi-)decision procedures for deducibility problems, using extensions of Gröbner basis computations for subalgebras in the (non-)commutative setting (instead of ideals in the commutative setting). Our procedures cover the theory of matrices, which is required for lattice-based assumptions, as well as the theory of non-commutative rings, fields, and Diffie-Hellman exponentiation, in its standard, bilinear and multilinear forms. Additionally, AutoLWE supports oracle-relative assumptions, which are used specifically to apply (advanced forms of) the Leftover Hash Lemma, an information-theoretic tool widely used in lattice-based proofs.
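For reference, the decisional form of the Learning With Errors assumption (Regev, STOC 2005) that drives these proofs can be stated as follows, in standard notation rather than the paper's.

```latex
\[
  \bigl(\mathbf{a}_i,\; \langle \mathbf{a}_i, \mathbf{s}\rangle + e_i\bigr)_{i=1}^{m}
  \;\approx_c\;
  \bigl(\mathbf{a}_i,\; u_i\bigr)_{i=1}^{m},
  \qquad
  \mathbf{a}_i \leftarrow \mathbb{Z}_q^{\,n},\quad
  e_i \leftarrow \chi,\quad
  u_i \leftarrow \mathbb{Z}_q,
\]
% where the secret s in Z_q^n is fixed across all samples and chi is a
% narrow error distribution over Z_q; the assumption is that no efficient
% adversary can distinguish the two distributions.
```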
Zero-knowledge SNARKs (zk-SNARKs) are non-interactive proof systems with short and efficiently verifiable proofs. They elegantly resolve the tension between individual privacy and public trust by providing an efficient way of demonstrating knowledge of secret information without actually revealing it. Today, zk-SNARKs are used for delegating computation, electronic cryptocurrencies, and anonymous credentials. However, all current SNARK implementations rely on pre-quantum assumptions and, for this reason, are not expected to withstand cryptanalytic efforts over the next few decades. In this work, we introduce the first designated-verifier zk-SNARK based on lattice assumptions, which are believed to be post-quantum secure. We provide a generalization, in the spirit of Gennaro et al. (Eurocrypt'13), of the SNARK of Danezis et al. (Asiacrypt'14) that is based on Square Span Programs (SSPs) and relies on weaker computational assumptions. We focus on designated-verifier proofs and propose a protocol in which a proof consists of just 5 LWE encodings. We provide a concrete choice of parameters as well as extensive benchmarks on a C implementation, showing that our construction is practically instantiable.
Nuclear weapons have re-emerged as one of the main global security challenges of our time. Any further reductions in the nuclear arsenals will have to rely on robust verification mechanisms. This requires, in particular, trusted measurement systems to confirm the authenticity of nuclear warheads based on their radiation signatures. These signatures are considered extremely sensitive information, and inspection systems have to be designed to protect them. To accomplish this task, so-called "information barriers" have been proposed. These devices process sensitive information acquired during an inspection, but only display results in a pass/fail manner. Traditional inspection systems rely on complex electronics both for data acquisition and processing. Several research efforts have produced prototype systems, but after almost thirty years of research and development, no viable and widely accepted system has emerged. This talk highlights recent efforts to overcome this impasse. A first approach is to avoid electronics in critical parts of the measurement process altogether and to rely instead on physical phenomena to detect radiation and to confirm a unique fingerprint of the inspected warhead using a zero-knowledge protocol. A second approach is based on a radiation detection system using vintage electronics built around a 6502 processor. Hardware designed in the distant past, at a time when its use for sensitive measurements was never envisioned, may drastically reduce concerns that another party implemented backdoors or hidden switches. Sensitive information is only stored on traditional punched cards. The talk concludes with a roadmap and highlights opportunities for researchers from the hardware security community to make critical contributions to nuclear arms control and global security in the years ahead.
In Vehicle-to-Vehicle (V2V) communications, malicious actors may spread false information to undermine the safety and efficiency of the vehicular traffic stream. Thus, vehicles must determine how to respond to the contents of messages that may be false, even though they are authenticated in the sense that receivers can verify that contents were not tampered with and originated from a verifiable transmitter. Existing solutions for finding appropriate actions are inadequate since they address trust and decision separately, require an honest majority (more honest vehicles than malicious ones), and do not incorporate driver preferences into the decision-making process. In this work, we propose a novel trust-aware decision-making framework that does not require an honest majority. It securely determines the likelihood of reported road events despite the presence of false data, and consequently provides the optimal decision for the vehicles. The basic idea of our framework is to leverage the implied effect of the road event to verify the consistency between each vehicle's reported data and actual behavior, and to determine the data trustworthiness and event belief by integrating Bayes' rule and Dempster-Shafer theory. The resulting belief serves as input to a utility maximization framework focusing on both safety and efficiency. This framework considers the two basic necessities of the Intelligent Transportation System and also incorporates drivers' preferences to decide the optimal action. Simulation results show the robustness of our framework under multiple-vehicle attacks, and different balances between safety and efficiency can be achieved by selecting appropriate human preference factors based on the driver's risk-taking willingness.
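The Dempster-Shafer fusion step used alongside Bayes' rule can be sketched as follows; the frame of discernment and the mass values are illustrative, not taken from the paper.

```python
# Sketch of Dempster's rule of combination, the evidence-fusion step the
# framework uses alongside Bayes' rule. Focal elements: E = "event
# happened", N = "no event", E|N = uncertain (ignorance).
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions keyed by frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2            # mass assigned to disjoint sets
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

E, N = frozenset({"event"}), frozenset({"none"})
EN = E | N                                 # ignorance
report_a = {E: 0.7, EN: 0.3}               # vehicle A's evidence
report_b = {E: 0.5, N: 0.2, EN: 0.3}       # vehicle B's evidence
print(dempster_combine(report_a, report_b))
```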
False alarms and misses are two general kinds of alarm errors, and both can decrease an operator's trust in the alarm system. Specifically, there are two different forms of trust in such systems, represented in this research by two kinds of responses to alarms: one is compliance and the other is reliance. Beyond false alarms and misses, the two responses are differentially affected by properties of the alarm system, situational factors, and operator factors. However, most existing studies have only qualitatively analyzed the relationship between a single variable and the two responses. In this research, all available experimental studies were identified through database searches using the keyword "compliance and reliance", without restriction on year of publication, through December 2017. Six relevant studies and fifty-two sets of key data were obtained as the dataset for this research. A neural network is then adopted as a tool to establish the quantitative relationship between multiple factors and each of the two forms of trust. The result will be of great significance for further study of the influence of human decision making on the overall fault detection rate and false alarm rate of the human-machine system.
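A minimal sketch of the kind of neural-network fit described: several alarm-system factors mapped jointly to the two trust responses. The choice of factors, network shape, and the synthetic data are assumptions; the study itself fits fifty-two data points from six experiments.

```python
# Sketch of a multi-factor -> (compliance, reliance) regression with a
# small neural network. Factors (false-alarm rate, miss rate, workload)
# and all data below are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.uniform(size=(52, 3))    # factors: FA rate, miss rate, workload
Y = rng.uniform(size=(52, 2))    # targets: compliance, reliance

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=1),
)
model.fit(X, Y)                  # MLPRegressor handles two outputs natively
print(model.predict(X[:3]))
```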
In the present paper, the problem of networked control system (NCS) cyber security is considered. The geometric approach is used to evaluate the security and vulnerability level of the controlled system. The results concern so-called false data injection attacks and show how imperfectly known disturbances can be used to perform undetectable, or at least stealthy, attacks, making the NCS vulnerable to malicious outsiders. A numerical example is given to illustrate the approach.
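In the standard geometric-approach formulation of false data injection (notation ours, not necessarily the paper's), the attack model and the undetectability condition read:

```latex
\[
  x_{k+1} = A x_k + B u_k + B_a a_k, \qquad
  y_k = C x_k + D_a a_k ,
\]
% An attack sequence {a_k} is undetectable if its output coincides with
% that of some attack-free trajectory, i.e. there exist x_0, x_0' with
\[
  y_k(x_0, a) = y_k(x_0', 0) \quad \text{for all } k \ge 0 .
\]
% By linearity this reduces to the attack's own output response vanishing
% from some initial state; geometrically, it relates the image of B_a to
% the largest (C, A)-invariant subspace contained in ker C.
```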
In recent years, security in wireless sensor networks (WSNs) has become essential because WSN applications involve sharing important information between nodes. The open deployment of such networks raises a large number of security issues, and attackers disturb the system by attacking the different protocol layers of the WSN. The standard AODV routing protocol faces security issues when the route discovery process takes place, yet data should be transmitted to the destination along a secure path. To support this process, we propose a trust-based intrusion detection system (NL-IDS) for the network layer in WSNs to detect black hole attackers in the network. Each sensor node's trust is calculated from the deviation of key factors at the network layer with respect to the black hole attack. We use the watchdog technique, in which a sensor node continuously monitors its neighbor nodes by computing a periodic trust value. Finally, the overall trust value of a sensor node is evaluated from the gathered network-layer trust metrics (past and current trust values). This NL-IDS scheme efficiently identifies nodes that are malicious with respect to the black hole attack at the network layer. To analyze the performance of NL-IDS, we simulated the model in MATLAB R2015a; the results show that NL-IDS outperforms the scheme of Wang et al. [11] in terms of detection accuracy and false alarm rate.
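A toy sketch of a watchdog-style periodic trust update for black-hole detection follows; the weighting scheme and threshold are illustrative assumptions, not the exact NL-IDS formulas.

```python
# Sketch of a watchdog trust update: a node observes how many of the
# packets it handed to a neighbor were actually forwarded, and blends
# the observation with the trust history. ALPHA and THRESHOLD are
# illustrative assumptions.
ALPHA = 0.6          # weight of current observation vs. trust history
THRESHOLD = 0.4      # below this, the neighbor is flagged as malicious

def observe_forwarding(sent, forwarded):
    """Forwarding ratio seen by the watchdog in one monitoring period."""
    return forwarded / sent if sent else 1.0

def update_trust(previous_trust, sent, forwarded):
    current = observe_forwarding(sent, forwarded)
    return ALPHA * current + (1 - ALPHA) * previous_trust

trust = 1.0
# A black-hole node advertises routes but silently drops packets:
for sent, forwarded in [(20, 19), (20, 4), (20, 0), (20, 0)]:
    trust = update_trust(trust, sent, forwarded)
    flag = "-> black hole suspected" if trust < THRESHOLD else ""
    print(f"trust={trust:.2f} {flag}")
```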
We present attacks that use only the volume of responses to range queries to reconstruct databases. Our focus is on practical attacks that work for large-scale databases with many values and records, without requiring assumptions on the data or query distributions. Our work improves on the previous state-of-the-art due to Kellaris et al. (CCS 2016) in all of these dimensions. Our main attack targets reconstruction of database counts and involves a novel graph-theoretic approach. It generally succeeds when $R$, the number of records, exceeds $N^2/2$, where $N$ is the number of possible values in the database. For a uniform query distribution, we show that it requires volume leakage from only $O(N^2 \log N)$ queries (cf. $O(N^4 \log N)$ in prior work). We present two ancillary attacks. The first identifies the value of a new item added to a database using the volume leakage from fresh queries, in the setting where the adversary knows or has previously recovered the database counts. The second shows how to efficiently recover the ranges involved in queries in an online fashion, given an auxiliary distribution describing the database. Our attacks are all backed with mathematical analyses and extensive simulations using real data.
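To make the leakage concrete: for a database over $N$ values with per-value counts $c_1, \dots, c_N$, a range query $[i, j]$ leaks the volume $c_i + \cdots + c_j$, and the attack inverts the multiset of observed volumes back to the counts. The sketch below generates that leakage only; it does not reproduce the paper's graph-theoretic inversion.

```python
# What the adversary observes: the multiset of response volumes of all
# range queries over a database with N possible values. The paper's
# reconstruction attack inverts this leakage; here we only generate it.
from collections import Counter

counts = [3, 0, 5, 2]                 # hidden per-value record counts, N=4
N = len(counts)

volumes = Counter(
    sum(counts[i:j + 1])              # volume of the range [i, j]
    for i in range(N)
    for j in range(i, N)
)
# N(N+1)/2 = 10 ranges; the attack recovers `counts` from this multiset.
print(volumes)
```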
Information-centric networking (ICN) is a Future Internet paradigm which uses named information (data objects) instead of host-based end-to-end communications. In-network caching is a key pillar of ICN: data objects are cached in ICN routers and, when requested, are retrieved from these network elements if available. It is a particularly promising networking approach due to the expected benefits in data dissemination efficiency, reduced delay, and improved robustness for challenging communication scenarios in the IoT domain. From the security perspective, ICN concentrates on securing data objects themselves instead of securing the end-to-end communication link. However, this inherently raises the security challenge of access control for content, and an efficient access control mechanism is crucial to provide secure information dissemination. In this work, we investigate Attribute-Based Encryption (ABE) as an access control apparatus for information-centric IoT. Moreover, we elaborate on how such a system performs for different parameter settings, such as different numbers of attributes and file sizes.
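A sketch of the kind of measurement involved, using the Bethencourt-Sahai-Waters CP-ABE scheme from the Charm-Crypto library as a stand-in implementation; whether the paper uses Charm, this particular scheme, or these parameters is an assumption on our part.

```python
# Timing sketch for ABE encryption with a varying number of attributes,
# using the BSW07 CP-ABE scheme from Charm-Crypto as a stand-in.
# Requires: charm-crypto with the PBC pairing library installed.
import time
from charm.toolbox.pairinggroup import PairingGroup, GT
from charm.schemes.abenc.abenc_bsw07 import CPabe_BSW07

group = PairingGroup('SS512')
cpabe = CPabe_BSW07(group)
pk, mk = cpabe.setup()

for n_attrs in (2, 4, 8):
    attrs = [f'ATTR{i}' for i in range(n_attrs)]
    policy = ' and '.join(attrs)          # conjunctive policy, illustrative
    sk = cpabe.keygen(pk, mk, attrs)
    msg = group.random(GT)                # symmetric key material in practice
    t0 = time.perf_counter()
    ct = cpabe.encrypt(pk, msg, policy)
    t1 = time.perf_counter()
    assert cpabe.decrypt(pk, sk, ct) == msg
    print(f"{n_attrs} attributes: encrypt {1000 * (t1 - t0):.1f} ms")
```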
We propose using memristor-based TCAMs (Ternary Content Addressable Memory) to accelerate Regular Expression (RegEx) matching. RegEx matching is a key function in network security, where deep packet inspection finds and filters out malicious actors. However, RegEx matching latency and power can be incredibly high, and current proposals struggle to perform wire-speed matching for large-scale rulesets. Our approach dramatically decreases RegEx matching operating power, provides high throughput, and the use of mTCAMs enables novel compression techniques to expand ruleset sizes and allows future exploitation of the multi-state (analog) capabilities of memristors. We fabricated and demonstrated nanoscale memristor TCAM cells. SPICE simulations investigate mTCAM performance at scale, and an mTCAM power model at 22nm demonstrates 0.2 fJ/bit/search energy for a 36x400 mTCAM. We further propose a tiled architecture which implements a Snort ruleset and assess the application performance. Compared to a state-of-the-art FPGA approach (2 Gbps, ~1W), we show 4x throughput (8 Gbps) at 60% of the power (0.62W) before applying standard TCAM power-saving techniques. Our performance comparison improves further when striding (searching multiple characters) is considered, resulting in 47.2 Gbps at 1.3W for our approach compared to 3.9 Gbps at 630mW for the strided FPGA NFA, demonstrating a promising path to wire-speed RegEx matching on large-scale rulesets.
Nowadays, information technology is an important part of human life and of organizations. Organizations face a variety of IT security problems, and to solve them they must improve their security practices; security assessments are therefore needed within organizations to verify their security posture. Security standards and generic metrics can be useful for measuring the security of an organization; however, metrics defined for businesses in general may not be effective in a particular situation. It is thus important to select metric standards suited to different businesses, to improve both cost and organizational security. The selection of suitable security measures depends on an efficient way to identify them. Given the numerous complexities of these metrics and the extent to which they are defined, this paper, based on a comparative study and the benchmarking method, proposes a taxonomy of security metrics intended to help a business choose metrics tailored to its needs and conditions.
Critical infrastructures have suffered from different kinds of cyber attacks over the years. Many of these attacks are performed using malware that exploits the vulnerabilities of these resources. The smart power grid is one of the major victims of such attacks, and its SCADA systems are frequently targeted. In this paper, we describe our proposed framework for analyzing a smart power grid while its SCADA system is under attack by malware. Malware propagation and its effects on the SCADA system are the focal point of our analysis. The OMNeT++ simulator and OpenDSS are used for developing and analyzing the simulated smart power grid environment.
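As an illustration of the propagation analysis, here is a minimal discrete-time infection spread over a device graph; the topology, infection probability, and horizon are illustrative stand-ins and do not reproduce the paper's actual propagation model inside OMNeT++/OpenDSS.

```python
# Minimal discrete-time malware-propagation sketch over a device graph,
# illustrating the kind of spread the framework analyzes. All values
# here are illustrative assumptions.
import random

random.seed(0)
# Adjacency list of a small SCADA-like network: one MTU, RTUs, PLCs.
edges = {
    "MTU": ["RTU1", "RTU2"],
    "RTU1": ["MTU", "PLC1", "PLC2"],
    "RTU2": ["MTU", "PLC3"],
    "PLC1": ["RTU1"], "PLC2": ["RTU1"], "PLC3": ["RTU2"],
}
P_INFECT = 0.3
infected = {"PLC1"}                    # initial compromise

for step in range(5):
    newly = {
        nbr
        for node in infected
        for nbr in edges[node]
        if nbr not in infected and random.random() < P_INFECT
    }
    infected |= newly
    print(f"t={step}: infected={sorted(infected)}")
```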
Security is one of the most important properties of an electric power system (EPS). We consider the state estimation (SE) tool as a barrier to the corruption of data on the current operating conditions of the EPS. A two-level SE algorithm based on SCADA and WAMS measurements is effective in detecting malicious attacks on the energy system. The article suggests a methodology to identify cyberattacks on SCADA and WAMS.
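For context, the classical weighted-least-squares formulation of power system state estimation, on which bad-data and attack detection builds, reads as follows (standard notation, not necessarily the article's):

```latex
\[
  \hat{x} \;=\; \arg\min_{x}\; J(x)
          \;=\; \arg\min_{x}\; \bigl(z - h(x)\bigr)^{\top} R^{-1} \bigl(z - h(x)\bigr),
\]
% where z is the vector of SCADA/WAMS measurements, h(.) the measurement
% model, and R the measurement error covariance. Bad-data (and attack)
% detection then tests the weighted residual || z - h(x_hat) ||_{R^{-1}}
% against a chi-squared threshold.
```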
The advent of cyberspace has converted the entire world into a global village. However, due to vulnerabilities in the SCADA architecture [1], national assets are increasingly prone to cyber attacks. Cyber invasions have a catastrophic effect on the minds of the civilian population and on a state's security posture. Robust cyber security is the need of the hour to protect the critical information infrastructure and critical infrastructure of a country. In this paper, we scrutinize cyber terrorism, vulnerabilities in SCADA network systems [1], [2], and the concept of cyber resilience to combat cyber attacks.
The pervasive use of databases for the storage of critical and sensitive information in many organizations has led to an increase in the rate at which databases are exploited in computer crimes. While there are several techniques and tools available for database forensic analysis, such tools usually assume a priori database preparation, such as relying on tamper-detection software to already be in place and on the use of detailed logging. Further, such tools are built in and thus can be compromised or corrupted along with the database itself. In practice, investigators need forensic and security audit tools that work on poorly configured systems and make no assumptions about the extent of damage or malicious hacking in a database. In this paper, we present our database forensics methods, which are capable of examining database content from a storage (disk or RAM) image without using any log or file system metadata. We describe how these methods can be used to detect security breaches in an untrusted environment where the security threat arose from a privileged user (or someone who has obtained such privileges). Finally, we argue that a comprehensive and independent audit framework is necessary in order to detect and counteract threats in an environment where the security breach originates from an administrator (either at the database or operating system level).
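A toy sketch of signature-based carving of database pages from a raw image, the general technique that log- and metadata-free examination relies on; the page magic and fixed page size here are hypothetical, and real DBMS page formats (which the paper's methods parse) differ.

```python
# Toy signature-based carving of database pages from a raw storage image.
# The 4-byte page magic and the fixed page size are hypothetical; real
# DBMS page layouts differ.
PAGE_SIZE = 4096
PAGE_MAGIC = b"\xDB\x50\x41\x47"        # hypothetical page marker

def carve_pages(image: bytes):
    """Yield offsets of candidate database pages in a disk/RAM image."""
    for off in range(0, len(image) - PAGE_SIZE + 1, PAGE_SIZE):
        if image[off:off + 4] == PAGE_MAGIC:
            yield off

# Synthetic 3-page image with one database page in the middle:
image = bytes(PAGE_SIZE) + PAGE_MAGIC + bytes(PAGE_SIZE - 4) + bytes(PAGE_SIZE)
print(list(carve_pages(image)))          # -> [4096]
```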
The IETF's push towards standardizing the Manufacturer Usage Description (MUD) grammar and mechanism for specifying IoT device behavior is gaining increasing interest from industry. The ability to control inappropriate communication between devices in the form of access control lists (ACLs) is expected to limit the attack surface on IoT devices; however, little is known about how MUD policies will get enforced in operational networks, and how they will interact with current and future intrusion detection systems (IDS). We believe this paper is the first attempt to translate MUD policies into flow rules that can be enforced using SDN, and in relating exception behavior to attacks that can be detected via off-the-shelf IDS. Our first contribution develops and implements a system that translates MUD policies to flow rules that are proactively configured into network switches, as well as reactively inserted based on run-time bindings of DNS. We use traces of 28 consumer IoT devices taken over several months to evaluate the performance of our system in terms of switch flow-table size and fraction of exception traffic that needs software inspection. Our second contribution identifies the limitations of flow-rules derived from MUD in protecting IoT devices from internal and external network attacks, and we show how our system is able to detect such volumetric attacks (including port scanning, TCP/UDP/ICMP flooding, ARP spoofing, and TCP/SSDP/SNMP reflection) by sending only a very small fraction of exception packets to off-the-shelf IDS.
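A sketch of the MUD-to-flow-rule translation step described above; the MUD access-control entry is pared down from the IETF YANG model (ietf-access-control-list), and the flow rule is a generic match/action dict rather than any specific controller's API.

```python
# Sketch: translate a (simplified) MUD ACE into an OpenFlow-style
# match/action rule. Real MUD files follow the IETF YANG model; this
# structure is simplified for illustration.
ace = {
    "name": "cl0-frdev",
    "matches": {"ipv4": {"protocol": 6},              # TCP
                "tcp": {"destination-port": 443},
                "dst-dnsname": "updates.example.com"},  # hypothetical host
    "action": "accept",
}

def ace_to_flow(ace, device_mac, resolved_ip):
    """resolved_ip comes from run-time DNS bindings, as in the paper."""
    m = ace["matches"]
    return {
        "match": {
            "eth_src": device_mac,
            "ipv4_dst": resolved_ip,                  # bound reactively via DNS
            "ip_proto": m["ipv4"]["protocol"],
            "tcp_dst": m["tcp"]["destination-port"],
        },
        "actions": ["forward" if ace["action"] == "accept" else "drop"],
        "priority": 100,
    }

print(ace_to_flow(ace, "aa:bb:cc:dd:ee:ff", "93.184.216.34"))
```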
The use of software-defined networking in wide-area networks (SDN-WAN) has been strongly emerging in the past years. For scalability and economic reasons, SDN-WAN mostly uses an in-band control mechanism, which implies that control and data traffic share the same critical physical links. However, in-band control and the centralized control architecture can be exploited by attackers to launch distributed denial of service (DDoS) attacks on the SDN control plane by flooding the shared links and/or the OpenFlow agents. Constructing a resilient software-defined network therefore requires dynamic isolation and distribution of the control flow to minimize damage and significantly increase attack cost. Existing solutions fall short of addressing this challenge because they require expensive extra dedicated resources or changes in the OpenFlow protocol. In this paper, we propose a moving target technique called REsilient COntrol Network architecture (ReCON), which uses the same SDN network resources to dynamically defend the SDN control plane against DDoS attacks. ReCON essentially (1) minimizes the sharing of critical resources among data and control traffic, and (2) elastically increases the limited capacity of the software control agents on demand by dynamically using under-utilized resources from within the same SDN network. To implement a practical solution, we formalize ReCON as a constraint satisfaction problem using Satisfiability Modulo Theories (SMT) to guarantee a correct-by-construction control plane placement that can handle dynamic network conditions.
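A minimal sketch of the SMT formulation style: assigning each switch's control traffic to a control agent subject to capacity bounds, solved with Z3. The constraints below are illustrative stand-ins for ReCON's actual formulation, which also models link sharing and dynamic conditions.

```python
# Sketch of a control-plane placement as a constraint satisfaction
# problem in Z3: each switch is assigned to one control agent, and no
# agent exceeds its capacity. Constraints are illustrative stand-ins
# for ReCON's formulation. Requires: pip install z3-solver
from z3 import Int, Solver, And, Sum, If, sat

SWITCHES, AGENTS, CAPACITY = 6, 3, 3
assign = [Int(f"sw{i}") for i in range(SWITCHES)]

s = Solver()
for a in assign:                                  # each switch -> one agent
    s.add(And(a >= 0, a < AGENTS))
for j in range(AGENTS):                           # per-agent capacity bound
    s.add(Sum([If(a == j, 1, 0) for a in assign]) <= CAPACITY)

if s.check() == sat:
    m = s.model()
    print({f"sw{i}": m[assign[i]].as_long() for i in range(SWITCHES)})
```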
An abundance of data in many disciplines of science, engineering, national security, health care, and business has led to the emerging field of Big Data Analytics, which runs in cloud computing environments. To process massive quantities of data in the cloud, developers leverage Data-Intensive Scalable Computing (DISC) systems such as Google's MapReduce, Hadoop, and Spark. Currently, developers do not have easy means to debug DISC applications. The use of cloud computing makes application development feel more like batch jobs, and the nature of debugging is therefore post-mortem. Developers of big data applications write code that implements a data processing pipeline and test it on their local workstation with a small sample of data downloaded from a TB-scale data warehouse. They cross their fingers and hope that the program works in the expensive production cloud. When a job fails or they get a suspicious result, data scientists spend hours guessing at the source of the error, digging through post-mortem logs. In such cases, the data scientists may want to pinpoint the root cause of errors by investigating a subset of corresponding input records. The vision of my work is to provide interactive, real-time and automated debugging services for big data processing programs in modern DISC systems with minimum performance impact. My work investigates the following research questions in the context of big data analytics: (1) What are the necessary debugging primitives for interactive big data processing? (2) What scalable fault localization algorithms are needed to help the user localize and characterize the root causes of errors? (3) How can we improve testing efficiency during iterative development of DISC applications by reasoning about the semantics of dataflow operators and the user-defined functions used inside them in tandem? To answer these questions, we synthesize and innovate ideas from software engineering, big data systems, and program analysis, and coordinate innovations across the software stack from the user-facing API all the way down to the systems infrastructure.