Biblio
In this paper, we claim that cyclic obfuscation, when properly implemented, imposes exponential complexity on the SAT and CycSAT attacks. CycSAT, in order to generate the necessary cycle-avoidance clauses, uses a pre-processing step. We show that this pre-processing step has to compose its cycle-avoidance condition over all cycles in a netlist; otherwise, a missed cycle could trap the SAT solver in an infinite loop or force it to return an incorrect key. We then propose several techniques by which the number of cycles is increased exponentially with respect to the number of inserted feedbacks. We further illustrate that when the number of feedbacks increases, the pre-processing step of CycSAT faces an exponential increase in complexity and runtime, preventing the correct composition of loop-avoidance clauses in a reasonable time before invoking the SAT solver. On the other hand, if the pre-processing is not completed properly, the SAT solver will get stuck or return an incorrect key. Hence, cyclic obfuscation implemented in accordance with the conditions proposed in this paper imposes exponential complexity with respect to the number of inserted feedbacks, even when the CycSAT solution is used.
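To illustrate why exhaustive cycle enumeration is the bottleneck, here is a minimal sketch (not the paper's obfuscation construction; the graph shape, node names, and the networkx dependency are assumptions) of a netlist-like graph in which a single inserted feedback edge closes 2^n distinct simple cycles, so composing avoidance clauses over all cycles blows up quickly:

```python
# Illustrative sketch: one feedback edge over a chain of n reconvergent
# "diamond" segments yields 2**n simple cycles, all of which a cycle-avoidance
# pre-processing step would have to enumerate.
import networkx as nx

def diamond_chain_with_feedback(n):
    g = nx.DiGraph()
    for i in range(n):
        # two parallel paths from stage i to stage i+1
        g.add_edge(("s", i), ("a", i)); g.add_edge(("a", i), ("s", i + 1))
        g.add_edge(("s", i), ("b", i)); g.add_edge(("b", i), ("s", i + 1))
    g.add_edge(("s", n), ("s", 0))     # the single inserted feedback edge
    return g

for n in range(1, 13):
    g = diamond_chain_with_feedback(n)
    print(n, sum(1 for _ in nx.simple_cycles(g)))   # prints n, 2**n
```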
The integration of subset sum in the verifiable secret sharing scheme provides added security measures for multiparty computation, such as immediate identification and removal of an imposter, avoidance or discouragement of man-in-the-middle and lattice-based attacks, and a lighter burden on the dealer in monitoring the integrity of shareholders. This study focuses on the security assessment of a brute-force attack on the subset-sum-based verifiable secret sharing scheme. Using a simulation that generates all possible fixed-length partitions (with k=3 as the least possible) summing to the sum of the original subset generated by the dealer, we show that it would already take 11,408 years to brute-force all possible values even for a small 32-bit-length value, and 3.8455 years for a 128-bit-length value, indicating that brute-force attacks on the subset-sum-based VSSS can be discounted despite the simplicity of the implementation. The attacker's lack of knowledge of the threshold value further compounds the impracticality of a brute-force attack.
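A minimal sketch of the search space such a brute-force attack faces, assuming the attacker must enumerate all k = 3 fixed-length partitions of a known share sum S (the function name and the small example value are illustrative, not from the paper):

```python
# Illustrative sketch: enumerating every k = 3 partition (a <= b <= c) of a
# target sum S, i.e. the space an attacker would have to exhaust. The count
# grows roughly as S^2 / 12, so for a 32-bit or 128-bit sum exhaustive
# enumeration becomes infeasible.

def partitions_k3(total):
    """Yield every (a, b, c) with 0 <= a <= b <= c and a + b + c == total."""
    for a in range(total // 3 + 1):
        for b in range(a, (total - a) // 2 + 1):
            yield a, b, total - a - b

small_sum = 1_000                       # tiny example value, not a real share sum
print(sum(1 for _ in partitions_k3(small_sum)))   # ~84,000 candidate partitions
```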
Symbolic methods have been used extensively for proving security of cryptographic protocols in the Dolev-Yao model, and more recently for proving security of cryptographic primitives and constructions in the computational model. However, existing methods for proving security of cryptographic constructions in the computational model often require significant expertise and interaction, or are fairly limited in scope and expressivity. This paper introduces a symbolic approach for proving security of cryptographic constructions based on the Learning With Errors assumption (Regev, STOC 2005). Such constructions are instances of lattice-based cryptography and are extremely important due to their potential role in post-quantum cryptography. Following (Barthe, Grégoire and Schmidt, CCS 2015), our approach combines a computational logic with deducibility problems, a standard tool for representing the adversary's knowledge in the Dolev-Yao model. The computational logic is used to capture (indistinguishability-based) security notions and drive the security proofs, whereas deducibility problems are used as side conditions to check that rules of the logic are applied correctly. We then use AutoLWE, an implementation of the logic, to deliver very short or even automatic proofs of several emblematic constructions, including CPA-PKE (Gentry et al., STOC 2008), (Hierarchical) Identity-Based Encryption (Agrawal et al., Eurocrypt 2010), Inner Product Encryption (Agrawal et al., Asiacrypt 2011), and CCA-PKE (Micciancio et al., Eurocrypt 2012). The main technical novelty beyond AutoLWE is a set of (semi-)decision procedures for deducibility problems, using extensions of Gröbner basis computations for subalgebras in the (non-)commutative setting (instead of ideals in the commutative setting). Our procedures cover the theory of matrices, which is required for lattice-based assumptions, as well as the theory of non-commutative rings, fields, and Diffie-Hellman exponentiation, in its standard, bilinear and multilinear forms. Additionally, AutoLWE supports oracle-relative assumptions, which are used specifically to apply (advanced forms of) the Leftover Hash Lemma, an information-theoretic tool widely used in lattice-based proofs.
With the exponential growth of Ubiquitous Computing, multiple technologies have gained prominence. One of them is the technology of Wireless Sensor Networks (WSNs). Increasingly used in fields such as smart homes and e-health, sensors now have an established role in the current scenario. These sensors, however, have some shortcomings: limited resources, energy and computing power are points of concern. Besides these, there is also concern about the vulnerability of these devices, both physical and logical. To eliminate, or at least mitigate, these threats it is necessary to create layers of protection. One of these layers is formed by Intrusion Detection Systems (IDS). However, sensors have limited computational capacity, and the development of IDSs for these devices must take this constraint into account. Other important requirements for an intrusion detection system are flexibility, efficiency and the ability to adapt to new situations. A tool that provides such capabilities is intelligent agents. With this in mind, this work proposes a framework for intrusion detection in WSNs based on intelligent agents.
The exponential increase in Internet of Things (IoT) devices has resulted in a range of new and unanticipated vulnerabilities associated with their use. IoT devices, from smart homes to smart enterprises, can easily be compromised. One of the major problems associated with the IoT is maintaining security; the vulnerable nature of IoT devices poses a challenge to many aspects of security, including security testing and analysis. It is far from trivial to perform security analysis for IoT devices so as to understand the loopholes and the very nature of the devices themselves. Given these issues, there has been less emphasis on security testing and analysis of the IoT. In this paper, we present our preliminary efforts in the area of security analysis for IoT devices and introduce an IoT security testbed for performing such analysis. We also discuss the necessary design, requirements and architecture to support the security analysis conducted via the proposed testbed.
Cyber attacks occur on a near-daily basis and are becoming exponentially more common. While some research aims to detect the characteristics of an attack, little focus has been given to patterns of attacks in general. This paper aims to exploit temporal correlations between the number of attacks per day in order to predict the future intensity of cyber incidents. Through analysis of attack data collected from Hackmageddon, correlation was found among reported attack volumes on consecutive days. This paper presents a forecasting system that aims to predict the number of cyber attacks on a given day based only on a set of historical attack count data. Our system conducts ARIMA time series forecasting on all previously collected incidents to predict the expected number of attacks on a future date, and can also restrict itself to the subset of data relevant to a specific attack method. Prediction models are dynamically updated over time as new data is collected to improve accuracy. Our system outperforms naive forecasting methods by 14.1% when predicting attacks of any type, and by up to 21.2% when forecasting attacks of a specific type, and produces a model that more accurately predicts future cyber attack intensity behavior.
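A minimal sketch of the kind of one-step-ahead ARIMA forecast described above (not the authors' tool; the CSV layout, column names, and ARIMA order are assumptions for illustration):

```python
# Minimal sketch: fit ARIMA to a daily attack-count series and forecast the
# expected number of attacks on the next day.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# daily_counts.csv (hypothetical): one row per day, columns "date" and "attacks"
counts = (
    pd.read_csv("daily_counts.csv", parse_dates=["date"])
      .set_index("date")["attacks"]
      .asfreq("D", fill_value=0)          # ensure a complete daily index
)

model = ARIMA(counts, order=(2, 1, 1))    # (p, d, q) chosen for illustration only
fitted = model.fit()
print(fitted.forecast(steps=1))           # expected number of attacks tomorrow

# In a rolling deployment one would refit each day as new incident data
# arrives, mirroring the dynamic model updates described in the abstract.
```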
Virtualization of the Internet of Things (IoT) is a concept of dynamically building customized high-level IoT services that rely on real-time data streams from low-level physical IoT sensors. Security in IoT virtualization is challenging because, with the growing number of available (building-block) services, the number of personalizable virtual services grows exponentially. This paper proposes the Service Object Capability (SOC) ticket system, a decentralized access control mechanism between servers and clients to efficiently authenticate and authorize each other without using public key cryptography. SOC supports decentralized partial delegation of the capabilities specified in each server/client ticket. Unlike PKI certificates, SOC's authentication time and handshake packet overhead stay constant regardless of each capability's delegation hop distance from the root delegator. The paper compares SOC's security benefits with Kerberos, and the experimental results show that SOC's authentication incurs significantly less time and packet overhead than mechanisms based on RSA-PKI and ECC-PKI algorithms. SOC is as secure as existing PKIs and Kerberos, and is more efficient and better suited to IoT environments.
Most architectures are designed to mitigate the usually undesirable phenomenon of device wearout. We take a contrarian view and harness this phenomenon to create hardware security mechanisms that resist attacks by statistically enforcing an upper bound on hardware uses, and consequently on attacks. For example, assume that a user may log into a smartphone a maximum of 50 times a day for 5 years, resulting in approximately 91,250 legitimate uses. If we assume at least 8-character passwords and require login (and retrieval of the storage decryption key) to traverse hardware that wears out in 91,250 uses, then an adversary has a negligible chance of a successful brute-force attack before the hardware wears out, even assuming real-world password cracking by professionals. M-way replication of our hardware and periodic re-encryption of storage can increase the daily usage bound by a factor of M. The key challenge is to achieve practical statistical bounds on both minimum and maximum uses for an architecture, given that individual devices can vary widely in wearout characteristics. We introduce techniques for architecturally controlling these bounds and perform a design space exploration for three use cases: a limited-use connection, a limited-use targeting system, and one-time pads. These techniques include decision trees, parallel structures, Shamir's secret-sharing mechanism, Reed-Solomon codes, and module replication. We explore the cost in area, energy and latency of using these techniques to achieve system-level usage targets given device-level wearout distributions. With redundant encoding, for example, we can improve exponential sensitivity to device lifetime variation to linear sensitivity, reducing the total number of NEMS devices by 4 orders of magnitude, to about 0.8 million for limited-use connections (compared with 4 billion without redundant encoding).
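The usage bound in the smartphone example follows from simple arithmetic; as a worked restatement of the numbers given in the abstract:

```latex
% Worked arithmetic for the smartphone example above.
\[
  50~\tfrac{\text{uses}}{\text{day}} \times 365~\tfrac{\text{days}}{\text{year}}
  \times 5~\text{years} = 91{,}250~\text{legitimate uses},
\]
\[
  \text{and with $M$-way replication plus periodic re-encryption:}\quad
  50M~\tfrac{\text{uses}}{\text{day}}.
\]
```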
How to construct good linear codes is an important problem in coding theory. This paper considers the construction of linear codes from functions with two variables, presents a class of two-weight and three-weight ternary linear codes, and employs Gauss sums and exponential sums to determine the parameters and weight distributions of these codes. Linear codes with few weights have applications in consumer electronics, communication and data storage systems. Linear codes with two weights have applications in strongly regular graphs, and linear codes with three weights can be applied in association schemes.
PAKE protocols, for Password-Authenticated Key Exchange, enable two parties to establish a shared cryptographically strong key over an insecure network using a short common secret as the sole means of authentication. After the seminal work by Bellovin and Merritt, with the famous EKE, for Encrypted Key Exchange, various settings and security notions have been defined, and many protocols have been proposed. In this paper, we revisit the promising SPEKE, for Simple Password Exponential Key Exchange, proposed by Jablon. The only known security analysis works in the random oracle model under the CDH assumption, but only in multiplicative groups of finite fields (subgroups of Zp*), which means large elements and thus heavy communication and computation. Our new instantiation (TBPEKE, for Two-Basis Password Exponential Key Exchange) applies to any group, and our security analysis requires a DLIN-like assumption to hold. In particular, one can use elliptic curves, which leads to better efficiency at both the communication and computation levels. We additionally consider server corruptions, which, with a symmetric PAKE, immediately leak all the passwords to the adversary. We thus study an asymmetric variant, also known as VPAKE, for Verifier-based Password-Authenticated Key Exchange. We then propose a verifier-based variant of TBPEKE, the so-called VTBPEKE, which is also quite efficient and resistant to server compromise.
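A minimal sketch of the SPEKE idea that TBPEKE builds on (not the paper's TBPEKE instantiation): both sides derive the Diffie-Hellman base from the shared password and then run plain DH. The modulus, hash choice, and parameter sizes below are illustrative assumptions and far too weak for real use.

```python
# Minimal SPEKE-style sketch: password-derived DH base, then ordinary DH.
import hashlib
import secrets

P = 2**512 - 569    # illustrative 512-bit modulus (too small for real use);
                    # a real deployment needs a vetted safe prime or, as in
                    # TBPEKE, an elliptic-curve group

def password_base(password: bytes) -> int:
    # Hash the password into the group; squaring confines the base to the
    # quadratic residues, as in SPEKE.
    h = int.from_bytes(hashlib.sha256(password).digest(), "big")
    return pow(h, 2, P)

pw = b"correct horse battery staple"
g = password_base(pw)

a = secrets.randbelow(P - 2) + 1          # Alice's ephemeral exponent
b = secrets.randbelow(P - 2) + 1          # Bob's ephemeral exponent
A, B = pow(g, a, P), pow(g, b, P)         # values exchanged over the network

k_alice = pow(B, a, P)
k_bob   = pow(A, b, P)
assert k_alice == k_bob                   # both derive the same session key
```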
Group exponentiation is an important operation used in many public-key cryptosystems and, more generally, cryptographic protocols. To expand the applicability of these solutions to computationally weaker devices, it has been advocated that this operation be outsourced from a computationally weaker client to a computationally stronger server, possibly implemented in a cloud-based architecture. While preliminary solutions to this problem considered mostly honest servers, or multiple separated servers, some of which are honest, solving this problem in the case of a single (logical), possibly malicious, server has remained open since a formal cryptographic model was introduced in [20]. Several later attempts either failed to achieve privacy or bounded only by a constant the (security) probability that a cheating server convinces a client of an incorrect result. In this paper we solve this problem for a large class of cyclic groups, thus making our solutions applicable to many cryptosystems in the literature that are based on the hardness of the discrete logarithm problem or on related assumptions. Our main protocol satisfies natural correctness, security, privacy and efficiency requirements, where the security probability is exponentially small. In our main protocol, with very limited offline computation and server computation, the client can delegate an exponentiation to an exponent of the same length as a group element by performing an exponentiation to an exponent of short length (i.e., the length of a statistical parameter). We also show an extension protocol that further reduces client computation by a constant factor, while increasing offline computation and server computation by about the same factor.
The semiconductor industry is fully globalized, and integrated circuits (ICs) are commonly defined, designed and fabricated in different premises across the world. This reduces production costs, but also exposes ICs to supply chain attacks, where insiders introduce malicious circuitry into the final products. Additionally, despite extensive post-fabrication testing, it is not uncommon for ICs with subtle fabrication errors to make it into production systems. While many systems may be able to tolerate a few byzantine components, this is not the case for cryptographic hardware, which stores and computes on confidential data. For this reason, many error and backdoor detection techniques have been proposed over the years. So far all attempts have either been quickly circumvented or come with unrealistically high manufacturing costs and complexity. This paper proposes Myst, a practical high-assurance architecture that uses commercial off-the-shelf (COTS) hardware and provides strong security guarantees, even in the presence of multiple malicious or faulty components. The key idea is to combine protective redundancy with modern threshold cryptographic techniques to build a system tolerant to hardware trojans and errors. To evaluate our design, we build a Hardware Security Module that provides the highest level of assurance possible with COTS components. Specifically, we employ more than a hundred COTS secure cryptocoprocessors, verified to FIPS 140-2 Level 4 tamper-resistance standards, and use them to realize high-confidentiality random number generation, key derivation, public key decryption and signing. Our experiments show a reasonable computational overhead (less than 1% for both decryption and signing) and an exponential increase in backdoor-tolerance as more ICs are added.
The symmetry ergodic matrices exponentiation (SEME) problem is to find x, given C^x M D^x, where C and D are the companion matrices of primitive polynomials and M is an invertible matrix over a finite field. This paper proposes a new zero-knowledge identification scheme based on the SEME problem. It is perfect zero-knowledge for honest verifiers. The scheme could provide a candidate cryptographic primitive for post-quantum cryptography. Due to its simplicity and naturalness, and its low memory and computation costs, the proposed scheme is suitable for use in computationally limited devices for identification, such as smart cards.
Current algorithms for context-free parsing impose a trade-off between ease of understanding, ease of implementation, theoretical complexity, and practical performance. No algorithm achieves all of these properties simultaneously. Might et al. introduced parsing with derivatives, which handles arbitrary context-free grammars while being both easy to understand and simple to implement. Despite much initial enthusiasm and a multitude of independent implementations, its worst-case complexity has never been proven to be better than exponential. In fact, high-level arguments claiming it is fundamentally exponential have been advanced and even accepted as part of the folklore. Performance ended up being sluggish in practice, and this sluggishness was taken as informal evidence of exponentiality. In this paper, we reexamine the performance of parsing with derivatives. We have discovered that it is not exponential but, in fact, cubic. Moreover, simple (though perhaps not obvious) modifications to the implementation by Might et al. lead to an implementation that is not only easy to understand but also highly performant in practice.
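To make the underlying idea concrete, here is a minimal sketch of Brzozowski derivatives for plain regular expressions, the regular-language special case of the technique; the paper's subject extends this style of matching to arbitrary context-free grammars (with laziness and memoization), and the class names below are illustrative:

```python
# Illustrative sketch of derivative-based matching for regular expressions.
class Empty: pass                      # matches nothing
class Eps:   pass                      # matches the empty string
class Char:
    def __init__(self, c): self.c = c
class Alt:
    def __init__(self, l, r): self.l, self.r = l, r
class Seq:
    def __init__(self, l, r): self.l, self.r = l, r

def nullable(e):
    """Does the language of e contain the empty string?"""
    if isinstance(e, Eps):  return True
    if isinstance(e, Alt):  return nullable(e.l) or nullable(e.r)
    if isinstance(e, Seq):  return nullable(e.l) and nullable(e.r)
    return False                       # Empty, Char

def derive(e, c):
    """Language of derive(e, c) = { w : c + w is in the language of e }."""
    if isinstance(e, Char):
        return Eps() if e.c == c else Empty()
    if isinstance(e, Alt):
        return Alt(derive(e.l, c), derive(e.r, c))
    if isinstance(e, Seq):
        first = Seq(derive(e.l, c), e.r)
        return Alt(first, derive(e.r, c)) if nullable(e.l) else first
    return Empty()                     # Empty, Eps

def matches(e, s):
    for c in s:
        e = derive(e, c)               # take one derivative per input character
    return nullable(e)

expr = Alt(Seq(Char("a"), Char("b")), Seq(Char("a"), Char("c")))   # "ab" | "ac"
print(matches(expr, "ac"))   # True
print(matches(expr, "aa"))   # False
```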
We construct the first fully succinct garbling scheme for RAM programs, assuming the existence of indistinguishability obfuscation for circuits and one-way functions. That is, the size, space requirements, and runtime of the garbled program are the same as those of the input program, up to poly-logarithmic factors and a polynomial in the security parameter. The scheme can be used to construct indistinguishability obfuscators for RAM programs with comparable efficiency, at the price of requiring sub-exponential security of the underlying primitives. In particular, this opens the door to obfuscated computations that are sublinear in the length of their inputs. The scheme builds on the recent schemes of Koppula-Lewko-Waters and Canetti-Holmgren-Jain-Vaikuntanathan [STOC 15]. A key technical challenge here is how to combine the fixed-prefix technique of KLW, which was developed for deterministic programs, with randomized Oblivious RAM techniques. To overcome that, we develop a method for arguing about the indistinguishability of two obfuscated randomized programs that use correlated randomness. Along the way, we also define and construct garbling schemes that offer only partial protection. These may be of independent interest.
We present essential concepts of a model-based testing framework for probabilistic systems with continuous time. Markov automata are used as the underlying model. The key result of the work is the solid core of a probabilistic test theory that incorporates real-time stochastic behaviour. We connect ioco theory and hypothesis testing to draw inferences about trace probabilities. We show that our conformance relation conservatively extends ioco and discuss the meaning of quiescence in the presence of exponentially distributed time delays.
We present simple, practical, and powerful new techniques for garbled circuits. These techniques result in significant concrete and asymptotic improvements over the state of the art for several natural kinds of computations. For arithmetic circuits over the integers, our construction results in garbled circuits with free addition, weighted threshold gates with cost independent of fan-in, and exponentiation by a fixed exponent with cost independent of the exponent. For boolean circuits, our construction gives an exponential improvement over the state of the art for threshold gates (including AND/OR gates) of high fan-in. Our construction can be efficiently instantiated with practical symmetric-key primitives (e.g., AES), and is proven secure under assumptions similar to those of the Free-XOR garbling scheme (Kolesnikov & Schneider, ICALP 2008). We give an extensive comparison between our scheme and state-of-the-art garbling schemes applied to boolean circuits.
We consider the following natural generalization of binary search: in a given undirected, positively weighted graph, one vertex is a target. The algorithm's task is to identify the target by adaptively querying vertices. In response to querying a node q, the algorithm learns either that q is the target, or is given an edge out of q that lies on a shortest path from q to the target. We study this problem in a general noisy model in which each query independently receives a correct answer with probability p > 1/2 (a known constant), and an (adversarial) incorrect one with probability 1 − p. Our main positive result is that when p = 1 (i.e., all answers are correct), log_2 n queries are always sufficient. For general p, we give an (almost information-theoretically optimal) algorithm that uses, in expectation, no more than (1 − δ) log n / (1 − H(p)) + o(log n) + O(log^2(1/δ)) queries, and identifies the target correctly with probability at least 1 − δ. Here, H(p) = −(p log p + (1 − p) log(1 − p)) denotes the entropy. The first bound is achieved by the algorithm that iteratively queries a 1-median of the nodes not yet ruled out; the second bound by careful repeated invocations of a multiplicative weights algorithm. Even for p = 1, we show several hardness results for the problem of determining whether a target can be found using K queries. Our upper bound of log_2 n implies a quasipolynomial-time algorithm for undirected connected graphs; we show that this is best possible under the Strong Exponential Time Hypothesis (SETH). Furthermore, for directed graphs, or for undirected graphs with non-uniform node querying costs, the problem is PSPACE-complete. For a semi-adaptive version, in which one may query r nodes in each of k rounds, we show membership in Σ_{2k−1} in the polynomial hierarchy, and hardness for Σ_{2k−5}.
We consider a service system where agents (or, servers) are invited on-demand. Customers arrive as a Poisson process and join a customer queue. Customer service times are i.i.d. exponential. Agents' behavior is random in two respects. First, they can be invited into the system exogenously, and join the agent queue after a random time. Second, with some probability they rejoin the agent queue after a service completion, and otherwise leave the system. The objective is to design a real-time adaptive agent invitation scheme that keeps both customer and agent queues/waiting-times small. We study an adaptive scheme, which controls the number of pending agent invitations, based on queue-state feedback. We study the system process fluid limits, in the asymptotic regime where the customer arrival rate goes to infinity. We use the machinery of switched linear systems and common quadratic Lyapunov functions to derive sufficient conditions for the local stability of fluid limits at the desired equilibrium point (with zero queues). We conjecture that, for our model, local stability is in fact sufficient for global stability of fluid limits; the validity of this conjecture is supported by numerical and simulation experiments. When the local stability conditions do hold, simulations show good overall performance of the scheme.
We consider a continuous analogue of Babai et al.'s (1996) and Cai et al.'s (2000) problem of solving multiplicative matrix equations. Given k + 1 square matrices A1, ..., Ak, C, all of the same dimension, whose entries are real algebraic, we examine the problem of deciding whether there exist non-negative reals t1, ..., tk such that exp(A1 t1) exp(A2 t2) ⋯ exp(Ak tk) = C. We show that this problem is undecidable in general, but decidable under the assumption that the matrices A1, ..., Ak commute. Our results have applications to reachability problems for linear hybrid automata. Our decidability proof relies on a number of theorems from algebraic and transcendental number theory, most notably those of Baker, Kronecker, Lindemann, and Masser, as well as some useful geometric and linear-algebraic results, including the Minkowski-Weyl theorem and a new (to the best of our knowledge) result about the uniqueness of strictly upper triangular matrix logarithms of upper unitriangular matrices. On the other hand, our undecidability result is shown by reduction from Hilbert's Tenth Problem.
We introduce the Nondeterministic Strong Exponential Time Hypothesis (NSETH) as a natural extension of the Strong Exponential Time Hypothesis (SETH). We show that both refuting and proving NSETH would have interesting consequences. In particular we show that disproving NSETH would give new nontrivial circuit lower bounds. On the other hand, NSETH implies non-reducibility results, i.e. the absence of (deterministic) fine-grained reductions from SAT to a number of problems. As a consequence we conclude that unless this hypothesis fails, problems such as 3-SUM, APSP and model checking of a large class of first-order graph properties cannot be shown to be SETH-hard using deterministic or zero-error probabilistic reductions.
The problem of securely outsourcing computation has received widespread attention due to the development of cloud computing and mobile devices. In this paper, we first propose a secure verifiable outsourcing algorithm for single modular exponentiation based on the one-malicious model of two untrusted servers. The outsourcer can detect any failure with probability 1 if one of the servers misbehaves. We also present another verifiable outsourcing algorithm for multiple modular exponentiations based on the same model. Compared with the state-of-the-art algorithms, the proposed algorithms improve both checkability and efficiency for the outsourcer. Finally, we utilize the proposed algorithms as two subroutines to achieve outsource-secure polynomial evaluation and a ciphertext-policy attribute-based encryption (CP-ABE) scheme with verifiable outsourced encryption and decryption.
TLS and SSH are two of the most commonly used protocols for securing Internet traffic. Many of the implementations of these protocols rely on the cryptographic primitives provided in the OpenSSL library. In this work we disclose a vulnerability in OpenSSL, affecting all versions and forks (e.g. LibreSSL and BoringSSL) since roughly October 2005, which renders the implementation of the DSA signature scheme vulnerable to cache-based side-channel attacks. Exploiting the software defect, we demonstrate the first published cache-based key-recovery attack on these protocols: 260 SSH-2 handshakes to extract a 1024/160-bit DSA host key from an OpenSSH server, and 580 TLS 1.2 handshakes to extract a 2048/256-bit DSA key from an stunnel server.