Biblio

Found 1897 results

Filters: Keyword is compositionality
2018-05-16
Liu, M., Zhou, C., Tang, Q., Parhi, K. K., Kim, C. H..  2017.  A data remanence based approach to generate 100% stable keys from an SRAM physical unclonable function. 2017 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED). :1–6.

The start-up value of an SRAM cell is unique, random, and unclonable as it is determined by the inherent process mismatch between transistors. These properties make SRAM an attractive circuit for generating encryption keys. The primary challenge for SRAM based key generation, however, is the poor stability when the circuit is subject to random noise, temperature and voltage changes, and device aging. Temporal majority voting (TMV) and bit masking were used in previous works to identify and store the location of unstable or marginally stable SRAM cells. However, TMV requires a long test time and significant hardware resources. In addition, the number of repetitive power-ups required to find the most stable cells is prohibitively high. To overcome the shortcomings of TMV, we propose a novel data remanence based technique to detect SRAM cells with the highest stability for reliable key generation. This approach requires only two remanence tests: writing '1' (or '0') to the entire array and momentarily shutting down the power until a few cells flip. We exploit the fact that the cells that are easily flipped are the most robust cells when written with the opposite data. The proposed method is more effective in finding the most stable cells in a large SRAM array than a TMV scheme with 1,000 power-up tests. Experimental studies show that the 256-bit key generated from a 512 kbit SRAM using the proposed data remanence method is 100% stable under different temperatures, power ramp-up times, and device aging.
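
A toy simulation of this selection logic (the per-cell skew distribution, the decay model, and the hold parameter below are modeling assumptions, not the paper's silicon measurements) might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4096                      # cells in the model array
skew = rng.normal(0, 1, N)    # per-cell mismatch: >0 prefers '1', <0 prefers '0'

def remanence_test(written, hold):
    """Write `written` to all cells, cut power briefly, return cell values.

    Cells whose intrinsic skew most strongly opposes the written value flip
    first as the supply decays; `hold` models how long power stays off
    (a longer hold lets weaker-skew cells flip too). Toy decay model.
    """
    out = np.full(N, written)
    if written == 1:
        out[skew < -hold] = 0     # strongly '0'-skewed cells flip first
    else:
        out[skew > hold] = 1      # strongly '1'-skewed cells flip first
    return out

# The two tests from the abstract: write all '1', then write all '0'
after_ones = remanence_test(1, hold=2.0)
after_zeros = remanence_test(0, hold=2.0)

stable_zero_cells = np.where(after_ones == 0)[0]   # flipped 1 -> 0: robust '0' cells
stable_one_cells = np.where(after_zeros == 1)[0]   # flipped 0 -> 1: robust '1' cells
print(len(stable_zero_cells), len(stable_one_cells))
```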

2018-05-09
Tsujii, Y., Kawakita, K. E., Kumagai, M., Kikuchi, A., Watanabe, M..  2017.  State Estimation Error Detection System for Online Dynamic Security Assessment. 2017 IEEE Power Energy Society Innovative Smart Grid Technologies Conference (ISGT). :1–5.

Online Dynamic Security Assessment (DSA) is a system widely used for assessing and analyzing the dynamics of an electrical power system. The outcomes of DSA are used in many aspects of power system operation, from monitoring the system to determining remedial action schemes (e.g. the amount of generation to be shed in the event of a fault). Measurements from supervisory control and data acquisition (SCADA) and state estimation (SE) results are the inputs for online DSA; however, SE errors, caused by sudden changes in power flow or a low convergence rate, can go unnoticed and skew the outcome. As a result, the generator shedding scheme cannot be optimal and must include some margin, because it is unknown how SE errors caused by these problems will impact power system stability control. To solve this problem, we developed an SE error detection system (EDS), which detects the SE errors that would impact power system transient stability. The method compares a threshold value against an index calculated from the difference between SE results and PMU observation data, using the distance from the fault point and the power flow value. Using this index, the reliability of the SE results can be verified. Consequently, online DSA can use the SE results while avoiding bad ones, assuring the outcome of the DSA assessment and analysis, such as the amount of generator shedding needed to prevent power system instability.
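
The abstract does not give the exact index formula; a hedged sketch of one plausible reading, with hypothetical per-bus data and a placeholder threshold, is:

```python
import numpy as np

# Toy per-bus snapshot: SE voltage phase angles vs. PMU measurements (radians)
se_angle  = np.array([0.10, 0.08, -0.02, 0.05])
pmu_angle = np.array([0.11, 0.02, -0.02, 0.05])
dist_to_fault = np.array([1.0, 0.2, 3.0, 5.0])   # electrical distance to the fault point
flow = np.array([80.0, 120.0, 30.0, 10.0])       # MW on the adjacent corridor

def se_error_index(se, pmu, dist, flow, eps=1e-6):
    # Weight buses that are close to the fault and heavily loaded more strongly,
    # mirroring the abstract's "distance from the fault point and power flow value"
    # (the exact weighting is an assumption).
    w = flow / (dist + eps)
    return float(np.sum(w * np.abs(se - pmu)) / np.sum(w))

THRESHOLD = 0.02   # placeholder threshold (radians)
index = se_error_index(se_angle, pmu_angle, dist_to_fault, flow)
print(index, "-> discard SE snapshot" if index > THRESHOLD else "-> SE usable for DSA")
```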

Vargas, C., Langfinger, M., Vogel-Heuser, B..  2017.  A Tiered Security Analysis of Industrial Control System Devices. 2017 IEEE 15th International Conference on Industrial Informatics (INDIN). :399–404.

The discussion of threats and vulnerabilities in Industrial Control Systems has gained popularity during the last decade due to increased interest in, and growing concern for, securing these systems. To give an overview of the complete landscape of these threats and vulnerabilities, this contribution provides a tiered security analysis of the assets that constitute Industrial Control Systems. The identification of assets is obtained from a generalization of the system's architecture. Additionally, the security analysis is complemented by a discussion of security countermeasures and solutions that can be used to counteract the vulnerabilities and increase the security of control systems.

Hill, Zachary, Chen, Samuel, Wall, Donald, Papa, Mauricio, Hale, John, Hawrylak, Peter.  2017.  Simulation and Analysis Framework for Cyber-Physical Systems. Proceedings of the 12th Annual Conference on Cyber and Information Security Research. :7:1–7:4.

This paper describes a unified framework for the simulation and analysis of cyber-physical systems (CPSs). The framework relies on the FreeBSD-based IMUNES network simulator. Components of the CPS are modeled as nodes within the IMUNES network simulator; nodes that communicate using real TCP/IP traffic. Furthermore, the simulated system can be exposed to other networks and the Internet to make it look like a real SCADA system. The framework has been used to simulate a TRIGA nuclear reactor. This is accomplished by creating nodes within the IMUNES network capable of running system modules simulating different CPS components. Nodes communicate using MODBUS/TCP, a widely used process control protocol. A goal of this work is to eventually integrate the simulator with a honeynet. This allows researchers to not only simulate a digital control system using real TCP/IP traffic to test control strategies and network topologies, but also to explore possible cyber attacks and mitigation strategies.
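
For context, MODBUS/TCP is simple enough that a simulated node can speak it over a raw socket; the following minimal client sketch builds a standard Read Holding Registers (0x03) frame (the target address is hypothetical, not from the paper):

```python
import socket
import struct

def read_holding_registers(host, start_addr, count, unit=1, port=502, tid=1):
    """Send one MODBUS/TCP Read Holding Registers (0x03) request and parse the reply."""
    # MBAP header: transaction id, protocol id (0), remaining length, unit id; then the PDU
    request = struct.pack(">HHHBBHH", tid, 0, 6, unit, 0x03, start_addr, count)
    with socket.create_connection((host, port), timeout=2) as s:
        s.sendall(request)
        header = s.recv(7)                         # MBAP header of the response
        _, _, length, _ = struct.unpack(">HHHB", header)
        body = s.recv(length - 1)                  # function code + payload
        if body[0] & 0x80:
            raise IOError(f"MODBUS exception code {body[1]}")
        byte_count = body[1]
        return struct.unpack(f">{byte_count // 2}H", body[2:2 + byte_count])

# Example: poll a simulated PLC node inside an IMUNES topology (address is an assumption)
# values = read_holding_registers("10.0.0.10", start_addr=0, count=4)
```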

Markman, Chen, Wool, Avishai, Cardenas, Alvaro A..  2017.  A New Burst-DFA Model for SCADA Anomaly Detection. Proceedings of the 2017 Workshop on Cyber-Physical Systems Security and PrivaCy. :1–12.

In Industrial Control Systems (ICS/SCADA), machine-to-machine data traffic is highly periodic. Past work showed that in many cases, it is possible to model the traffic between each individual Programmable Logic Controller (PLC) and the SCADA server by a cyclic Deterministic Finite Automaton (DFA), and to use the model to detect anomalies in the traffic. However, a recent analysis of network traffic in a water facility in the U.S. showed that cyclic-DFA models have limitations. In our research, we examine the same data corpus; our study shows that the communication on all of the channels in the network is done in bursts of packets, and that the bursts have semantic meaning: the order within a burst depends on the messages. Using these observations, we suggest a new burst-DFA model that fits the data much better than previous work. Our model treats the traffic on each channel as a series of bursts, and matches each burst to the DFA, taking the burst's beginning and end into account. Our burst-DFA model successfully explains between 95% and 99% of the packets in the data corpus, and goes a long way toward the construction of a practical anomaly detection system.
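
A minimal sketch of the burst-matching idea (the states, symbols, and polling pattern below are toy assumptions, not the paper's learned automata):

```python
# Each packet is abstracted to a symbol (e.g. its function code and address range);
# a burst is checked from its first to its last packet against a cyclic DFA.
class BurstDFA:
    def __init__(self, transitions, start):
        self.transitions = transitions   # (state, symbol) -> next state
        self.start = start

    def match_burst(self, burst):
        """Return (ok, offending) for one burst; a missing transition or a burst
        that does not end back at the start state is flagged as anomalous."""
        state = self.start
        for symbol in burst:
            nxt = self.transitions.get((state, symbol))
            if nxt is None:
                return False, (state, symbol)   # anomalous packet within the burst
            state = nxt
        return state == self.start, None

# Toy polling channel: query1 -> response1 -> query2 -> response2
dfa = BurstDFA({("s0", "Q1"): "s1", ("s1", "R1"): "s2",
                ("s2", "Q2"): "s3", ("s3", "R2"): "s0"}, "s0")
print(dfa.match_burst(["Q1", "R1", "Q2", "R2"]))   # (True, None)
print(dfa.match_burst(["Q1", "R2"]))               # anomaly: unexpected R2
```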

Formby, David, Walid, Anwar, Beyah, Raheem.  2017.  A Case Study in Power Substation Network Dynamics. Proceedings of the 2017 ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems. :66–66.

The modern world is becoming increasingly dependent on computing and communication technology to function, but unfortunately its application and impact on areas such as critical infrastructure and industrial control system (ICS) networks remain to be thoroughly studied. Significant research has been conducted to address the myriad security concerns in these areas, but it is virtually all based on artificial testbeds or simulations designed on assumptions about their behavior, either from knowledge of traditional IT networking or from basic principles of ICS operation. In this work, we provide the most detailed characterization of an example ICS to date in order to determine if these common assumptions hold true. A live power distribution substation is observed over the course of two and a half years to measure its behavior and evolution over time. Then, a horizontal study is conducted comparing this behavior with that of three other substations from the same company. Although most predictions were found to be correct, some unexpected behavior was observed that highlights the fundamental differences between ICS and IT networks, including round-trip times dominated by processing speed as opposed to network delay, several well-known TCP features being largely irrelevant, and surprisingly large jitter from devices running real-time operating systems. The impact of these observations is discussed in terms of generality to other embedded networks, network security applications, and the suitability of the TCP protocol for this environment.

Alves, Thiago, Morris, Thomas, Yoo, Seong-Moo.  2017.  Securing SCADA Applications Using OpenPLC With End-To-End Encryption. Proceedings of the 3rd Annual Industrial Control System Security Workshop. :1–6.

In their nascent stages, Programmable Logic Controllers (PLCs) were made robust enough to sustain tough industrial environments, but little care was taken to raise defenses against potential cyberthreats. The recent interconnectivity of legacy PLCs and SCADA systems with corporate networks and the internet has significantly increased the threats to critical infrastructure. To counter these threats, researchers have put their efforts into finding defense mechanisms that can protect the SCADA network and the PLCs. Encryption is a critical component of security and therefore has been used by many organizations to protect data on the network. However, since PLC vendors do not make information about their hardware or software available, it becomes challenging to embed encryption into their devices, especially if they rely on legacy protocols. This paper describes an alternative design using an open source PLC that was modified to encrypt all data it sends over the network, independently of the protocol used. Experimental results indicated that the encryption layer increased the security of the link without causing a significant overhead.
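
A hedged sketch of such a protocol-agnostic encryption layer, using AES-GCM from the Python cryptography package (key distribution is out of scope, and this layout is an illustrative assumption rather than the paper's OpenPLC patch):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Wrap every outbound frame in AES-GCM before it reaches the wire, regardless
# of which protocol's bytes are inside.
key = AESGCM.generate_key(bit_length=128)   # in practice, pre-shared or negotiated
aead = AESGCM(key)

def wrap(plaintext_frame: bytes) -> bytes:
    nonce = os.urandom(12)                  # must be unique per message
    return nonce + aead.encrypt(nonce, plaintext_frame, None)

def unwrap(wire_frame: bytes) -> bytes:
    nonce, ct = wire_frame[:12], wire_frame[12:]
    return aead.decrypt(nonce, ct, None)    # raises InvalidTag on tampering

frame = b"\x00\x01\x00\x00\x00\x06\x01\x03\x00\x00\x00\x04"  # any protocol bytes
assert unwrap(wrap(frame)) == frame
```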

Korman, Matus, Välja, Margus, Björkman, Gunnar, Ekstedt, Mathias, Vernotte, Alexandre, Lagerström, Robert.  2017.  Analyzing the Effectiveness of Attack Countermeasures in a SCADA System. Proceedings of the 2nd Workshop on Cyber-Physical Security and Resilience in Smart Grids. :73–78.

The SCADA infrastructure is a key component for power grid operations. Securing the SCADA infrastructure against cyber intrusions is thus vital for a well-functioning power grid. However, the task remains a particular challenge, not least since not all available security mechanisms are easily deployable in these reliability-critical and complex, multi-vendor environments that host modern systems alongside legacy ones, to support a range of sensitive power grid operations. This paper examines how effective a few countermeasures are likely to be in SCADA environments, including those that are commonly considered out of bounds. The results show that granular network segmentation is a particularly effective countermeasure, followed by frequent patching of systems (which is unfortunately still difficult to date). The results also show that the enforcement of a password policy and a restrictive network configuration including whitelisting of devices contribute to increased security, though they work best in combination with granular network segmentation.

Green, Benjamin, Krotofil, Marina, Abbasi, Ali.  2017.  On the Significance of Process Comprehension for Conducting Targeted ICS Attacks. Proceedings of the 2017 Workshop on Cyber-Physical Systems Security and PrivaCy. :57–67.

The exploitation of Industrial Control Systems (ICSs) has been described as both easy and impossible; where does the truth lie? Post-Stuxnet works have included a plethora of ICS-focused cyber security research activities, with topics covering device maturity, network protocols, and overall cyber security culture. We often hear the notion of ICSs being highly vulnerable due to a lack of inbuilt security mechanisms, considered low-hanging fruit for a variety of low-skilled threat actors. While there is substantial evidence to support such a notion, when considering targeted attacks on ICSs, it is hard to believe that an attacker with limited resources, such as a script kiddie or hacktivist, using publicly accessible tools and exploits alone, would have adequate knowledge and resources to achieve targeted operational process manipulation while simultaneously evading detection. Through the use of a testbed environment, this paper provides two practical examples based on a Man-In-The-Middle scenario, demonstrating the types of information an attacker would need to obtain, collate, and comprehend in order to begin targeted process manipulation and detection avoidance. This allows for a clearer view of the associated challenges, and illustrates why targeted ICS exploitation might not be possible for every malicious actor.

Perry, David M., Mattavelli, Andrea, Zhang, Xiangyu, Cadar, Cristian.  2017.  Accelerating Array Constraints in Symbolic Execution. Proceedings of the 26th ACM SIGSOFT International Symposium on Software Testing and Analysis. :68–78.

Despite significant recent advances, the effectiveness of symbolic execution is limited when used to test complex, real-world software. One of the main scalability challenges is related to constraint solving: large applications and long exploration paths lead to complex constraints, often involving big arrays indexed by symbolic expressions. In this paper, we propose a set of semantics-preserving transformations for array operations that take advantage of contextual information collected during symbolic execution. Our transformations lead to simpler encodings and hence better performance in constraint solving. The results we obtain are encouraging: we show, through an extensive experimental analysis, that our transformations help to significantly improve the performance of symbolic execution in the presence of arrays. We also show that our transformations enable the analysis of new code, which would be otherwise out of reach for symbolic execution.
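
For intuition, the classic read-over-write rewrite that such transformations build on can be checked with z3py (this illustrates the general idea, not the paper's specific context-aware transformations):

```python
# select(store(a, i, v), j) is equivalent to If(i == j, v, a[j]); with contextual
# path information (e.g. disjoint index ranges), the If can often be collapsed
# and the array term disappears from the constraint entirely.
from z3 import Array, IntSort, Ints, Store, Select, If, Solver

a = Array("a", IntSort(), IntSort())
i, j, v = Ints("i j v")

read_over_write = Select(Store(a, i, v), j)
rewritten = If(i == j, v, Select(a, j))     # semantics-preserving rewrite

s = Solver()
s.add(read_over_write != rewritten)         # check equivalence by refutation
print(s.check())                            # unsat => the rewrite is sound
```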

Zhang, Xin, Si, Xujie, Naik, Mayur.  2017.  Combining the Logical and the Probabilistic in Program Analysis. Proceedings of the 1st ACM SIGPLAN International Workshop on Machine Learning and Programming Languages. :27–34.

Conventional program analyses have made great strides by leveraging logical reasoning. However, they cannot handle uncertain knowledge, and they lack the ability to learn and adapt. This in turn hinders the accuracy, scalability, and usability of program analysis tools in practice. We seek to address these limitations by proposing a methodology and framework for incorporating probabilistic reasoning directly into existing program analyses that are based on logical reasoning. We demonstrate that the combined approach can benefit a number of important applications of program analysis and thereby facilitate more widespread adoption of this technology.

Abate, Alessandro.  2017.  Formal Verification of Complex Systems: Model-Based and Data-Driven Methods. Proceedings of the 15th ACM-IEEE International Conference on Formal Methods and Models for System Design. :91–93.

Two known shortcomings of standard techniques in formal verification are the limited capability to provide system-level assertions, and the scalability to large, complex models, such as those needed in Cyber-Physical Systems (CPS) applications. Leveraging data, which nowadays is becoming ever more accessible, has the potential to mitigate such limitations. However, this leads to a lack of formal proofs that are needed for modern safety-critical systems. This contribution presents a research initiative that addresses these shortcomings by bringing model-based techniques and data-driven methods together, which can help push the envelope of existing algorithms and tools in formal verification and thus expand their applicability to complex engineering systems, such as CPS. In the first part of the contribution, we discuss a new, formal, measurement-driven and model-based automated technique, for the quantitative verification of physical systems with partly unknown dynamics. We formulate this setup as a data-driven Bayesian inference problem, formally embedded within a quantitative, model-based verification procedure. We argue that the approach can be applied to complex physical systems that are key for CPS applications, dealing with spatially continuous variables, evolving under complex dynamics, driven by external inputs, and accessed under noisy measurements. In the second part of the contribution, we concentrate on systems represented by models that evolve under probabilistic and heterogeneous (continuous/discrete, that is "hybrid", as well as nonlinear) dynamics. Such stochastic hybrid models (also known as SHS) are a natural mathematical framework for CPS. With focus on model-based verification procedures, we provide algorithms for quantitative model checking of temporal specifications on SHS with formal guarantees. This is attained via the development of formal abstraction techniques that are based on quantitative approximations. Theory is complemented by algorithms, all packaged in software tools that are available to users, and which are applied here in the domain of Smart Energy.

Gulzar, Muhammad Ali, Interlandi, Matteo, Han, Xueyuan, Li, Mingda, Condie, Tyson, Kim, Miryung.  2017.  Automated Debugging in Data-Intensive Scalable Computing. Proceedings of the 2017 Symposium on Cloud Computing. :520–534.

Developing Big Data Analytics workloads often involves trial and error debugging, due to the unclean nature of datasets or wrong assumptions made about data. When errors (e.g., program crash, outlier results, etc.) arise, developers are often interested in identifying a subset of the input data that is able to reproduce the problem. BigSift is a new faulty data localization approach that combines insights from automated fault isolation in software engineering and data provenance in database systems to find a minimum set of failure-inducing inputs. BigSift redefines data provenance for the purpose of debugging using a test oracle function and implements several unique optimizations, specifically geared towards the iterative nature of automated debugging workloads. BigSift improves the accuracy of fault localizability by several orders of magnitude (~10^3× to 10^7×) compared to Titian data provenance, and improves performance by up to 66× compared to Delta Debugging, an automated fault-isolation technique. For each faulty output, BigSift is able to localize fault-inducing data within 62% of the original job running time.
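
For intuition, a minimal delta-debugging (ddmin-style) loop over input records with a test-oracle function looks roughly like this (BigSift's provenance-based optimizations are not shown, and the oracle is a toy assumption):

```python
def ddmin(records, fails):
    """Shrink `records` to a small failure-inducing subset; `fails(subset)` is
    the test-oracle function returning True when the job misbehaves."""
    n = 2
    while len(records) >= 2:
        chunk = max(len(records) // n, 1)
        subsets = [records[i:i + chunk] for i in range(0, len(records), chunk)]
        reduced = False
        for s in subsets:
            complement = [r for r in records if r not in s]
            if complement and fails(complement):
                records, n, reduced = complement, max(n - 1, 2), True
                break                       # keep shrinking the failing complement
        if not reduced:
            if n >= len(records):
                break                       # cannot split any finer
            n = min(n * 2, len(records))    # increase granularity
    return records

# Toy oracle: the job "fails" whenever a negative value is present in the input
data = [3, 7, -1, 4, 9, 2]
print(ddmin(data, lambda rs: any(r < 0 for r in rs)))   # -> [-1]
```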

Zaostrovnykh, Arseniy, Pirelli, Solal, Pedrosa, Luis, Argyraki, Katerina, Candea, George.  2017.  A Formally Verified NAT. Proceedings of the Conference of the ACM Special Interest Group on Data Communication. :141–154.

We present a Network Address Translator (NAT) written in C and proven to be semantically correct according to RFC 3022, as well as crash-free and memory-safe. There exists a lot of recent work on network verification, but it mostly assumes models of network functions and proves properties specific to network configuration, such as reachability and absence of loops. Our proof applies directly to the C code of a network function, and it demonstrates the absence of implementation bugs. Prior work argued that this is not feasible (i.e., that verifying a real, stateful network function written in C does not scale) but we demonstrate otherwise: NAT is one of the most popular network functions and maintains per-flow state that needs to be properly updated and expired, which is a typical source of verification challenges. We tackle the scalability challenge with a new combination of symbolic execution and proof checking using separation logic; this combination matches well the typical structure of a network function. We then demonstrate that formally proven correctness in this case does not come at the cost of performance. The NAT code, proof toolchain, and proofs are available at [58].
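
A hedged Python model of the per-flow state such a NAT must keep correct (allocate an external port per flow, refresh on traffic, expire idle entries); the paper proves this kind of logic directly in C, while this sketch only illustrates the invariant:

```python
import time

class FlowTable:
    def __init__(self, expiry_s=60.0, first_port=1024):
        self.expiry = expiry_s
        self.next_port = first_port
        self.by_flow = {}   # (src_ip, src_port, dst_ip, dst_port) -> (ext_port, last_seen)

    def translate(self, flow):
        now = time.monotonic()
        self._expire(now)
        if flow in self.by_flow:
            port, _ = self.by_flow[flow]
        else:
            port, self.next_port = self.next_port, self.next_port + 1
        self.by_flow[flow] = (port, now)    # refresh last-seen on every packet
        return port

    def _expire(self, now):
        dead = [f for f, (_, seen) in self.by_flow.items() if now - seen > self.expiry]
        for f in dead:
            del self.by_flow[f]

nat = FlowTable(expiry_s=1.0)
p = nat.translate(("10.0.0.2", 40000, "8.8.8.8", 53))
assert nat.translate(("10.0.0.2", 40000, "8.8.8.8", 53)) == p   # stable per-flow mapping
```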

Xu, Wen, Kashyap, Sanidhya, Min, Changwoo, Kim, Taesoo.  2017.  Designing New Operating Primitives to Improve Fuzzing Performance. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :2313–2328.

Fuzzing is a software testing technique that finds bugs by repeatedly injecting mutated inputs to a target program. Known to be a highly practical approach, fuzzing is gaining more popularity than ever before. Current research on fuzzing has focused on producing an input that is more likely to trigger a vulnerability. In this paper, we tackle another way to improve the performance of fuzzing, which is to shorten the execution time of each iteration. We observe that AFL, a state-of-the-art fuzzer, slows down by 24x because of file system contention and the scalability of the fork() system call when it runs on 120 cores in parallel. Other fuzzers are expected to suffer from the same scalability bottlenecks in that they follow a similar design pattern. To improve the fuzzing performance, we design and implement three new operating primitives specialized for fuzzing that solve these performance bottlenecks and achieve scalable performance on multi-core machines. Our experiment shows that the proposed primitives speed up AFL and LibFuzzer by 6.1 to 28.9x and 1.1 to 735.7x, respectively, on the overall number of executions per second when targeting Google's fuzzer test suite with 120 cores. In addition, the primitives improve AFL's throughput up to 7.7x with 30 cores, which is a more common setting in data centers. Our fuzzer-agnostic primitives can be easily applied to any fuzzer with fundamental performance improvements and directly benefit large-scale fuzzing and cloud-based fuzzing services.
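
For context, a minimal POSIX fork-per-input loop, the baseline pattern whose kernel-side costs the paper's new primitives are designed to remove, looks roughly like this (the target function is a toy assumption):

```python
import os

# POSIX-only sketch: one forked process per fuzzing iteration, so any
# kernel-side fork/file-system contention multiplies across iterations.
def run_one(target, data: bytes) -> int:
    pid = os.fork()
    if pid == 0:                  # child: becomes one fuzzing iteration
        try:
            target(data)
            os._exit(0)
        except Exception:
            os._exit(1)           # nonzero exit models a crash/bug signal
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)

def target(data: bytes):
    if data.startswith(b"FUZZ"):  # toy "vulnerability"
        raise ValueError("bug reached")

for trial in (b"hello", b"FUZZme"):
    print(trial, run_one(target, trial))
```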

Dering, M. L., Tucker, C. S..  2017.  Generative Adversarial Networks for Increasing the Veracity of Big Data. 2017 IEEE International Conference on Big Data (Big Data). :2595–2602.

This work describes how automated data generation integrates into a big data pipeline. A lack of veracity in big data can cause models that are inaccurate, or biased by trends in the training data. This can lead to issues as a pipeline matures that are difficult to overcome. This work describes the use of a Generative Adversarial Network to generate sketch data, such as those that might be used in a human verification task. These generated sketches are verified as recognizable using a crowd-sourcing methodology, which finds that the generated sketches were correctly recognized 43.8% of the time, in contrast to human-drawn sketches, which were recognized 87.7% of the time. This method is scalable and can be used to generate realistic data in many domains and bootstrap a dataset used for training a model prior to deployment.
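
A toy sketch of the adversarial training setup (1-D data stands in for sketch images; the architectures and sizes below are placeholders, not the paper's networks):

```python
import torch
from torch import nn

# Generator maps noise to samples; discriminator scores real vs. fake;
# the two are trained against each other.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, 1) * 0.5 + 3.0          # "real data": N(3, 0.5)
    fake = G(torch.randn(64, 8))

    # Discriminator step: push real toward 1, fake toward 0
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: fool the discriminator into predicting 1
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

print(float(G(torch.randn(256, 8)).mean()))        # should drift toward 3.0
```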

Hamouda, R. Ben, Hafaiedh, I. Ben.  2017.  Formal Modeling and Verification of a Wireless Body Area Network (WBAN) Protocol: S-TDMA Protocol. 2017 International Conference on Internet of Things, Embedded Systems and Communications (IINTEC). :72–77.

WBANs integrate wearable and implanted devices with wireless communication and information processing systems to monitor the well-being of an individual. Various MAC (Medium Access Control) protocols with different objectives have been proposed for WBANs. The fact that any flaw in these critical systems may lead to the loss of one's life implies that testing and verifying MAC protocols for such systems is of the highest importance. In this paper, we first propose a high-level formal and scalable model with timing aspects for a MAC protocol particularly designed for WBANs, named S-TDMA (Statistical frame based TDMA protocol). The protocol uses TDMA (Time Division Multiple Access) bus arbitration, which requires temporal aspect modeling. Second, we propose a formal validation of several relevant properties such as deadlock freedom, fairness and mutual exclusion of this protocol at a high level of abstraction. The protocol was modeled using a composition of timed automata components, and verification was performed using a real-time model checker.

Zhao, Zhiqiang, Feng, Z..  2017.  A Spectral Graph Sparsification Approach to Scalable Vectorless Power Grid Integrity Verification. 2017 54th ACM/EDAC/IEEE Design Automation Conference (DAC). :1–6.

Vectorless integrity verification is becoming increasingly critical to robust design of nanoscale power delivery networks (PDNs). To dramatically improve the efficiency and capability of vectorless integrity verification, this paper introduces a scalable multilevel integrity verification framework by leveraging a hierarchy of almost linear-sized spectral power grid sparsifiers that can well retain effective resistances between nodes, as well as a recent graph-theoretic algebraic multigrid (AMG) algorithmic framework. As a result, the vectorless integrity verification solutions obtained on coarse-level problems can effectively help find the solution of the original problem. Extensive experimental results show that the proposed vectorless verification framework can always efficiently and accurately obtain worst-case scenarios in even very large power grid designs.
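
For intuition, the quantity a spectral sparsifier must preserve, effective resistance between nodes, can be computed densely from the Laplacian pseudoinverse (a toy computation; the paper's contribution is doing this at scale with multilevel sparsifiers and AMG):

```python
import numpy as np

def effective_resistance(L, u, v):
    """Effective resistance between nodes u and v from the graph Laplacian L."""
    Li = np.linalg.pinv(L)          # Moore-Penrose pseudoinverse (dense, toy-scale only)
    e = np.zeros(L.shape[0])
    e[u], e[v] = 1.0, -1.0
    return float(e @ Li @ e)

# 4-node path graph with unit conductances: end-to-end resistance is 3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A
print(effective_resistance(L, 0, 3))   # ~3.0 (three unit edges in series)
```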

Lokananta, F., Hartono, D., Tang, C. M..  2017.  A Scalable and Reconfigurable Verification and Benchmark Environment for Network on Chip Architecture. 2017 4th International Conference on New Media Studies (CONMEDIA). :6–10.

As the number of on-chip components increases, the use of Network-on-Chip (NoC) interconnection architectures has become a promising way to solve complex on-chip communication problems. However, providing a suitable test base to measure and verify the functionality of any NoC is compulsory. Universal Verification Methodology (UVM) has been introduced as a standardized and reusable methodology for verifying integrated circuit designs. In this research, a scalable and reconfigurable verification and benchmark environment for NoC is proposed.

2018-04-11
Assadi, Sepehr, Khanna, Sanjeev.  2017.  Randomized Composable Coresets for Matching and Vertex Cover. Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures. :3–12.

A common approach for designing scalable algorithms for massive data sets is to distribute the computation across, say, k machines and process the data using limited communication between them. A particularly appealing framework here is the simultaneous communication model, whereby each machine constructs a small representative summary of its own data and one obtains an approximate/exact solution from the union of the representative summaries. If the representative summaries needed for a problem are small, then this results in a communication-efficient and round-optimal (requiring essentially no interaction between the machines) protocol. Some well-known examples of techniques for creating summaries include sampling, linear sketching, and composable coresets. These techniques have been successfully used to design communication-efficient solutions for many fundamental graph problems. However, two prominent problems are notably absent from the list of successes, namely, the maximum matching problem and the minimum vertex cover problem. Indeed, it was shown recently that for both these problems, even achieving a modest approximation factor of polylog(n) requires using representative summaries of size Ω̃(n^2), i.e. essentially no better summary exists than each machine simply sending its entire input graph. The main insight of our work is that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines. We show that when the underlying graph is randomly partitioned across machines, both these problems admit randomized composable coresets of size Õ(n) that yield an Õ(1)-approximate solution. (Here and throughout, the Õ(·) notation suppresses polylog(n) factors, where n is the number of vertices in the graph.) In other words, a small subgraph of the input graph at each machine can be identified as its representative summary and the final answer then is obtained by simply running any maximum matching or minimum vertex cover algorithm on these combined subgraphs. This results in an Õ(1)-approximation simultaneous protocol for these problems with Õ(nk) total communication when the input is randomly partitioned across k machines. We also prove our results are optimal in a very strong sense: we not only rule out the existence of smaller randomized composable coresets for these problems but in fact show that our Õ(nk) bound for total communication is optimal for any simultaneous communication protocol (i.e. not only for randomized coresets) for these two problems. Finally, by a standard application of composable coresets, our results also imply MapReduce algorithms with the same approximation guarantee in one or two rounds of communication, improving the previous best known round complexity for these problems.
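
A toy sketch of the random-partition recipe for matching (greedy maximal matching stands in for each machine's summary; this illustrates the protocol shape, not the paper's exact coreset construction or guarantees):

```python
import random

def greedy_matching(edges):
    """Greedy maximal matching: take each edge whose endpoints are both free."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

def simultaneous_matching(edges, k, seed=0):
    random.seed(seed)
    machines = [[] for _ in range(k)]
    for e in edges:
        machines[random.randrange(k)].append(e)   # random partition is the crucial step
    coresets = [greedy_matching(m) for m in machines]  # each machine's small summary
    union = [e for c in coresets for e in c]
    return greedy_matching(union)                 # coordinator combines the summaries

edges = [(i, i + 1) for i in range(0, 100, 2)] + [(i, i + 2) for i in range(0, 98, 4)]
print(len(simultaneous_matching(edges, k=4)))
```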

Li, Jason, O'Donnell, Ryan.  2017.  Bounding Laconic Proof Systems by Solving CSPs in Parallel. Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures. :95–100.

We show that the basic semidefinite programming relaxation value of any constraint satisfaction problem can be computed in NC; that is, in parallel polylogarithmic time and polynomial work. As a complexity-theoretic consequence we get that MIP₁[k,c,s] ⊆ PSPACE provided s/c ≤ (0.62 - o(1))k/2^k, resolving a question of Austrin, Håstad, and Pass. Here MIP₁[k,c,s] is the class of languages decidable with completeness c and soundness s by an interactive proof system with k provers, each constrained to communicate just 1 bit.

Hoang, Thang, Ozkaptan, Ceyhun D., Yavuz, Attila A., Guajardo, Jorge, Nguyen, Tam.  2017.  S3ORAM: A Computation-Efficient and Constant Client Bandwidth Blowup ORAM with Shamir Secret Sharing. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :491–505.

Oblivious Random Access Machine (ORAM) enables a client to access her data without leaking her access patterns. Existing client-efficient ORAMs either achieve O(log N) client-server communication blowup without heavy computation, or O(1) blowup but with expensive homomorphic encryptions. It has been shown that O(log N) bandwidth blowup might not be practical for certain applications, while schemes with O(1) communication blowup incur even more delay due to costly homomorphic operations. In this paper, we propose a new distributed ORAM scheme referred to as Shamir Secret Sharing ORAM (S3ORAM), which achieves O(1) client-server bandwidth blowup and O(1) blocks of client storage without relying on costly partial homomorphic encryptions. S3ORAM harnesses Shamir Secret Sharing, tree-based ORAM structure and a secure multi-party multiplication protocol to eliminate costly homomorphic operations and, therefore, achieves O(1) client-server bandwidth blowup with a high computational efficiency. We conducted comprehensive experiments to assess the performance of S3ORAM and its counterparts on actual cloud environments, and showed that S3ORAM achieves three orders of magnitude lower end-to-end delay compared to alternatives with O(1) client communication blowup (Onion-ORAM), while it is one order of magnitude faster than Path-ORAM for a network with a moderate bandwidth quality. We have released the implementation of S3ORAM for further improvement and adaptation.
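
For reference, a minimal Shamir t-of-n sharing over a prime field, the primitive S3ORAM builds on (the ORAM tree and the multi-party multiplication protocol are not shown, and the modulus choice is an assumption):

```python
import random

P = 2**61 - 1   # a Mersenne prime as the field modulus (illustrative choice)

def share(secret, t, n):
    """Split `secret` into n shares, any t of which reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P   # Fermat inverse
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any t shares suffice
```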

Harkanson, R., Kim, Y..  2017.  Applications of Elliptic Curve Cryptography: A Light Introduction to Elliptic Curves and a Survey of Their Applications. Proceedings of the 12th Annual Conference on Cyber and Information Security Research. :6:1–6:7.

Elliptic curve cryptography (ECC) is a relatively newer form of public key cryptography that provides more security per bit than other forms of cryptography still being used today. We explore the mathematical structure and operations of elliptic curves and how those properties make curves suitable tools for cryptography. A brief historical context is given followed by the safety of usage in production, as not all curves are free from vulnerabilities. Next, we compare ECC with other popular forms of cryptography for both key exchange and digital signatures, in terms of security and speed. Traditional applications of ECC, both theoretical and in-practice, are presented, including key exchange for web browser usage and DNSSEC. We examine multiple uses of ECC in a mobile context, including cellular phones and the Internet of Things. Modern applications of curves are explored, such as iris recognition, RFID, smart grid, as well as an application for E-health. Finally, we discuss how ECC stacks up in a post-quantum cryptography world.
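
For a concrete taste of the group law the survey builds on, here is affine point addition and double-and-add scalar multiplication on a toy short-Weierstrass curve y^2 = x^3 + ax + b over GF(97) (parameters chosen for illustration only, far too small for real use):

```python
p, a, b = 97, 2, 3
O = None   # point at infinity (the group identity)

def add(P1, P2):
    if P1 is O:
        return P2
    if P2 is O:
        return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                           # P + (-P) = O
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope (doubling)
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope (addition)
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def scalar_mul(k, P1):
    """Double-and-add: the operation underlying ECDH and ECDSA."""
    R = O
    while k:
        if k & 1:
            R = add(R, P1)
        P1, k = add(P1, P1), k >> 1
    return R

G = (3, 6)   # on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)
print(scalar_mul(5, G))
```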

Alderman, James, Crampton, Jason, Farley, Naomi.  2017.  A Framework for the Cryptographic Enforcement of Information Flow Policies. Proceedings of the 22nd ACM Symposium on Access Control Models and Technologies. :143–154.

It is increasingly common to outsource data storage to untrusted, third party (e.g. cloud) servers. However, in such settings, low-level online reference monitors may not be appropriate for enforcing read access, and thus cryptographic enforcement schemes (CESs) may be required. Much of the research on cryptographic access control has focused on the use of specific primitives and, primarily, on how to generate appropriate keys and fails to model the access control system as a whole. Recent work in the context of role-based access control has shown a gap between theoretical policy specification and computationally secure implementations of access control policies, potentially leading to insecure implementations. Without a formal model, it is hard to (i) reason about the correctness and security of a CES, and (ii) show that the security properties of a particular cryptographic primitive are sufficient to guarantee security of the CES as a whole. In this paper, we provide a rigorous definitional framework for a CES that enforces read-only information flow policies (which encompass many practical forms of access control, including role-based policies). This framework (i) provides a tool by which instantiations of CESs can be proven correct and secure, (ii) is independent of any particular cryptographic primitives used to instantiate a CES, and (iii) helps to identify the limitations of current primitives (e.g. key assignment schemes) as components of a CES.
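
As one concrete example of a primitive a CES might instantiate, a classic iterative (hash-based) key assignment scheme for a linear information-flow policy can be sketched as follows (this is a well-known construction in general, not the paper's framework):

```python
import hashlib

def derive(parent_key: bytes, child_label: str) -> bytes:
    """Derive a lower level's key by hashing down the information-flow order."""
    return hashlib.sha256(parent_key + child_label.encode()).digest()

# Toy linear policy: top-secret > secret > public
k_top = b"\x00" * 32                     # provisioned only to top-secret users
k_secret = derive(k_top, "secret")       # derivable from k_top
k_public = derive(k_secret, "public")    # derivable from k_secret (and hence k_top)

# Read-only enforcement: objects at level L are encrypted under L's key.
# A "secret" user derives k_public and reads down, but cannot invert SHA-256
# to obtain k_top and read up.
```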

Zhang, Hao, Zhang, Tao, Chen, Huajin.  2017.  Variance Analysis of Pixel-Value Differencing Steganography. Proceedings of the 2017 International Conference on Cryptography, Security and Privacy. :28–32.

As adaptive steganography selects edge and texture areas for payload embedding, theoretical analysis is limited by modeling difficulty. This paper introduces a novel method to study the pixel-value differencing (PVD) embedding scheme. First, the difference histogram values of the cover image are used as parameters, and a variance formula for PVD stego noise is obtained. The accuracy of this formula has been verified through analysis with standard pictures. Second, the stego noise is divided into six kinds of pixel regions, and the regional noise variances are used to compare the security of PVD and least significant bit matching (LSBM) steganography. A mathematical conclusion is presented that, with an embedding capacity of less than 2.75 bits per pixel, PVD is never safer than LSBM under the same embedding rate, regardless of region selection. Finally, 10,000 image samples are used to check the validity of the mathematical conclusion. For most images and regions, the data are consistent with this judgment. Meanwhile, the exceptional cases are analyzed carefully and found to be caused by the randomness of pixel selection and by abandoned blocks in the PVD scheme. In summary, the unity of theory and practice fully demonstrates the effectiveness of our new method.
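
The variance formula itself is not reproduced in the abstract; as a hedged sketch, the difference histogram it is parameterized by can be computed like this (pair orientation and binning are assumptions):

```python
import numpy as np

def difference_histogram(img: np.ndarray) -> np.ndarray:
    """Histogram of differences of horizontally adjacent, non-overlapping
    pixel pairs, the pairing used by PVD embedding."""
    pairs = img[:, : img.shape[1] // 2 * 2].reshape(img.shape[0], -1, 2)
    d = pairs[..., 1].astype(int) - pairs[..., 0].astype(int)
    hist, _ = np.histogram(d, bins=np.arange(-255, 257))   # one bin per difference value
    return hist

# Toy 8-bit cover image in place of a real one
cover = np.random.default_rng(0).integers(0, 256, (256, 256), dtype=np.uint8)
h = difference_histogram(cover)
print(h.sum())   # total number of pixel pairs
```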