Biblio
Fuzzy extractors (Dodis et al., Eurocrypt 2004) turn a noisy secret into a stable, uniformly distributed key. Reusable fuzzy extractors remain secure when multiple keys are produced from a single noisy secret (Boyen, CCS 2004). Boyen showed that information-theoretically secure reusable fuzzy extractors are subject to strong limitations. Simoens et al. (IEEE S&P, 2009) then showed that deployed constructions suffer severe security breaks when reused. Canetti et al. (Eurocrypt 2016) used computational security to sidestep this problem, building a computationally secure reusable fuzzy extractor that corrects a sublinear fraction of errors. We introduce a generic approach to constructing reusable fuzzy extractors. We define a new primitive called a reusable pseudoentropic isometry that projects an input metric space onto an output metric space. This projection preserves distance and entropy even if the same input is mapped to multiple output metric spaces. A reusable pseudoentropic isometry yields a reusable fuzzy extractor by 1) randomizing the noisy secret using the isometry and 2) applying a traditional fuzzy extractor to derive a secret key. We propose reusable pseudoentropic isometries for the set difference and Hamming metrics. The set difference construction is built from composable digital lockers (Canetti and Dakdouk, Eurocrypt 2008). For the Hamming metric, we show that the second construction of Canetti et al. (Eurocrypt 2016) can be seen as an instantiation of our framework. In both cases, the pseudoentropic isometry's reusability requires the noisy secret's distribution to have entropy in each symbol of the alphabet. Our constructions yield the first reusable fuzzy extractors that correct a constant fraction of errors. We also implement our set difference solution and describe two use cases.
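A minimal sketch (not taken verbatim from the paper) of the generic composition described above, writing $(\mathsf{Gen}, \mathsf{Rep})$ for a standard fuzzy extractor in the sense of Dodis et al. and $T_s$ for a member of the pseudoentropic isometry family:

\[
\begin{aligned}
\mathsf{Gen}'(w)&:\ \text{sample } s;\ y \gets T_s(w);\ (\mathit{key}, p) \gets \mathsf{Gen}(y);\ \text{output } (\mathit{key}, (s, p)),\\
\mathsf{Rep}'(w', (s, p))&:\ y' \gets T_s(w');\ \text{output } \mathsf{Rep}(y', p).
\end{aligned}
\]

Correctness is preserved because $d(T_s(w), T_s(w')) = d(w, w')$, so the inner fuzzy extractor sees an input pair at the same distance; reusability intuitively follows because each enrollment exposes only a fresh pseudoentropic image $T_{s_i}(w)$ rather than $w$ itself.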
Post-quantum secure communication has attracted much interest in recent years. Known computationally secure post-quantum key agreement protocols are resource intensive for small devices. These devices may need to securely send frequent short messages, for example to report the measurement of a sensor. Secure communication using physical assumptions provides information-theoretic security (and is therefore quantum-safe) with small computational overhead. The security and efficiency analyses of these systems, however, are asymptotic. In this poster we consider two secure message communication systems, and derive and compare their security and efficiency for finite-length messages. Our results show that these systems indeed provide an attractive alternative for post-quantum security.
Safety-critical system engineering and traditional safety analyses have for decades been focused on problems caused by natural or accidental phenomena. Security analyses, on the other hand, focus on preventing intentional, malicious acts that reduce system availability, degrade user privacy, or enable unauthorized access. In the context of safety-critical systems, safety and security are intertwined, e.g., injecting malicious control commands may lead to system actuation that causes harm. Despite this intertwining, safety and security concerns have traditionally been designed and analyzed independently of one another, and examined in very different ways. In this work we examine a new hazard analysis technique—Systematic Analysis of Faults and Errors (SAFE)—and its deep integration of safety and security concerns. This is achieved by explicitly incorporating a semantic framework of error "effects" that unifies an adversary model long used in security contexts with a fault/error categorization that aligns with previous approaches to hazard analysis. This categorization enables analysts to separate the immediate, component-level effects of errors from their cause or precise deviation from specification. This paper details SAFE's integrated handling of safety and security through a) a methodology grounded in—and adaptable to—different approaches from the literature, b) explicit documentation of system assumptions which are implicit in other analyses, and c) increasing the tractability of analyzing modern, complex, component-based software-driven systems. We then discuss how SAFE's approach supports the long-term goals of increased compositionality and formalization of safety/security analysis.
In recent works, numerous physical-layer security systems have been proposed as alternatives to classic cryptography. Such systems aim to use the intrinsic properties of radio signals and the wireless medium to provide confidentiality and authentication to wireless devices. However, fundamental vulnerabilities are often discovered in these systems shortly after their inception. We therefore challenge the assumptions made by existing physical-layer security systems, and postulate that weaker assumptions are needed in order to adapt to practical scenarios. We also argue that if no computational advantage over an adversary can be ensured, secure communication cannot be realistically achieved.
The vision of smart environments, systems, and services is driven by the development of the Internet of Things (IoT). IoT devices produce large amounts of data, and this data is used to make critical decisions in many systems. The data produced by these devices has to satisfy various security-related requirements in order to be useful in practical scenarios. One of these requirements is data provenance, which allows a user to trust the data regarding its origin and location. The low cost of many IoT devices and the fact that they may be deployed in unprotected spaces require security protocols to be efficient and secure against physical attacks. This paper proposes a lightweight protocol for data provenance in the IoT. The proposed protocol uses physical unclonable functions (PUFs) to provide physical security and uniquely identify an IoT device. Moreover, wireless channel characteristics are used to uniquely identify a wireless link between an IoT device and a server/user. A brief security and performance analysis is presented to give a preliminary validation of the protocol.
Cyber risk management largely reduces to a race for information between defenders of ICT systems and attackers. Defenders can gain advantage in this race by sharing cyber risk information with each other. Yet, they often exchange less information than is socially desirable, because sharing decisions are guided by selfish rather than altruistic reasons. A growing line of research studies these strategic aspects that drive defenders’ sharing decisions. The present survey systematizes these works in a novel framework. It provides a consolidated understanding of defenders’ strategies to privately or publicly share information and enables us to distill trends in the literature and identify future research directions. We reveal that many theoretical works assume cyber risk information sharing to be beneficial, while empirical validations are often missing.
In this paper, a novel secure information exchange scheme is proposed for MIMO vehicular ad hoc networks (VANETs) through a physical-layer approach. In the scheme, a group of On Board Units (OBUs) exchange information with the help of one Road Side Unit (RSU). By utilizing a key signal processing technique, the Direction Rotation Alignment technique, the information to be exchanged by two neighboring OBUs is aligned into the same direction to form a summed signal at the RSU or at external eavesdroppers. With such a summed signal, neither the RSU nor the eavesdropper can recover the individual information from the OBUs. By regulating the transmission rate for each OBU, information-theoretic security can be achieved. The secrecy sum-rates of the proposed scheme are then analyzed. Finally, numerical results are presented to validate the theoretical analysis.
A wireless sensor network (WSN) is composed of sensor nodes and a base station. In WSNs, constructing an efficient key-sharing scheme to ensure secure communication is important. In this paper, we propose a new key-sharing scheme for groups, which shares a group key in a single broadcast, independent of the number of nodes. This scheme is based on geometric characteristics and has information-theoretic security in the analysis of transmitted data. We compared our scheme with conventional schemes in terms of communication traffic, computational complexity, flexibility, and security, and the results showed that our scheme is suitable for an Internet-of-Things (IoT) network.
In Wyner's wiretap II model of communication, Alice and Bob are connected by a channel that can be eavesdropped by an adversary with unlimited computation who can select a fraction of the communication to view, and the goal is to provide perfect information-theoretic security. Information-theoretic security is increasingly important because of the threat of quantum computers that can effectively break algorithms and protocols that are used in today's public key infrastructure. We consider interactive protocols for the wiretap II channel with an active adversary who can eavesdrop and add adversarial noise to the eavesdropped part of the codeword. These channels capture wireless settings where malicious eavesdroppers within reception distance of the transmitter can eavesdrop on the communication and introduce jamming signals into the channel. We derive a new upper bound R ≤ 1 - ρ on the rate of interactive protocols over the two-way wiretap II channel with active adversaries, and construct a perfectly secure protocol family with achievable rate 1 - 2ρ + ρ². This is strictly higher than the rate of the best one-round protocol, which is 1 - 2ρ, hence showing that interaction improves the rate. We also prove that even with interaction, reliable communication is possible only if ρ < 1/2. An interesting aspect of this work is that our bounds also hold in a network setting where two nodes are connected by n paths, a ρ fraction of which is corrupted by the adversary. We discuss our results, give their relations to other works, and propose directions for future work.
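The rate separation claimed in this abstract is a one-line calculation:

\[
(1-\rho)^2 = 1 - 2\rho + \rho^2 > 1 - 2\rho \quad \text{for every } \rho > 0,
\]

so for 0 < ρ < 1/2 the interactive rate 1 - 2ρ + ρ² lies strictly between the one-round rate 1 - 2ρ and the upper bound 1 - ρ.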
In fiber-optic communication networks, research on data security at the lower layers of the protocol stack, and in particular at the physical layer by means of information-theoretic concepts, is only in its beginnings. Nevertheless, it has recently attracted considerable attention as it holds the promise of providing unconditional, perfect security without the need for secret key exchanges. In this paper, we analyze some important constraints that such concepts put on a potential implementation of physical-layer security. We review the fundamentals of physical-layer security on the basis of the commonly used AWGN wiretap channel model. For this channel model we summarize the security metrics which are typically used in information theory and in particular recall that, for secure communication over the AWGN channel, the legitimate receiver needs an SNR advantage over the eavesdropper. Next, we relate the information-theoretic metrics to physically measurable quantities in optical communications engineering, namely the optical signal-to-noise ratio (OSNR) and the bit-error ratio (BER), and translate the information-theoretic wiretap scenario to a simple real-world point-to-point optical transmission link in which part of the light is wiretapped using a bend coupler. We investigate the achievable OSNR advantage under realistic assumptions for fiber loss, tap ratio, and noise budget, and find that secure transmission is limited to a distance of a few tens of kilometers in this case. The maximum secure transmission distance decreases with an increasing tap ratio chosen by the eavesdropper. This can only be counteracted by monitoring the link loss towards the legitimate receiver, which would force the eavesdropper to choose small tap ratios in order to remain undetected. In an outlook towards further research directions we identify information-theoretic approaches which could potentially allow physical-layer security to be realized in more generalized scenarios or over longer distances.
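For context, the SNR-advantage condition recalled in this abstract is captured by the classical secrecy capacity of the degraded Gaussian wiretap channel (Leung-Yan-Cheong and Hellman), which is positive only when the legitimate receiver's SNR exceeds the eavesdropper's:

\[
C_s = \left[\tfrac{1}{2}\log_2\!\left(1+\mathrm{SNR}_{\mathrm{Bob}}\right) - \tfrac{1}{2}\log_2\!\left(1+\mathrm{SNR}_{\mathrm{Eve}}\right)\right]^{+}
\]

bits per real channel use; the paper then relates this information-theoretic condition to the measurable OSNR and BER of the optical link.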
We report on our research on proving the security of multi-party cryptographic protocols using the EasyCrypt proof assistant. We work in the computational model using the sequence-of-games approach, and define honest-but-curious (semi-honest) security using a variation of the real/ideal paradigm in which, for each protocol party, an adversary chooses protocol inputs in an attempt to distinguish the party's real and ideal games. Our proofs are information-theoretic, instead of being based on complexity theory and computational assumptions. We employ oracles (e.g., random oracles for hashing) whose encapsulated states depend on dynamically made, non-programmable random choices. By limiting an adversary's oracle use, one may obtain concrete upper bounds on the distances between a party's real and ideal games that are expressed in terms of game parameters. Furthermore, our proofs work for adaptive adversaries, ones that, when choosing the value of a protocol input, may condition this choice on their current protocol view and oracle knowledge. We provide an analysis in EasyCrypt of a three-party private count retrieval protocol. We emphasize the lessons learned from completing this proof.
In threshold schemes one represents each sensitive variable by a number n of shares such that their (usually) bitwise sum equals that variable. These shares are initially generated in such a way that any subset of n-1 shares gives no information about the sensitive variable. Functions (S-boxes, mixing layers, round functions, etc.) are computed on the shares of the inputs resulting in the output as a number of shares. An essential property of a threshold implementation of a function is that each output share is computed from at most n-1 input shares. This is called incompleteness and guarantees that that computation cannot leak information about sensitive variables. The resulting output is then typically subject to some further computation, again in the form of separate, incomplete, computation on shares. For these subsequent computations to not leak information about the sensitive variables, the output of the previous stage must still be uniform. Hence, in an iterative cryptographic primitive such as a block cipher, we need a threshold implementation of the round function that yields a uniformly shared output if its input is uniformly shared. This property of the threshold implementation is called uniformity. Threshold schemes form a good protection mechanism against differential power analysis (DPA). In particular, using it allows building cryptographic hardware that is guaranteed to be unattackable with first-order DPA, assuming certain leakage models of the cryptographic hardware at hand and for a plausible definition of "first order". Constructing an incomplete threshold implementation of a non-linear function is rather straightforward. To offer resistance against first-order DPA, the number of shares equals the algebraic degree of the function plus one. However, constructing one that is at the same time incomplete and uniform may present a challenge. For instance, for the Keccak non-linear layer, incomplete 3-share threshold implementations are easy to generate but no uniform one is known. Exhaustive investigations have been performed on all small S-boxes (3 to 5 bits) and there are many S-boxes for which it is not known to build uniform threshold implementations with d+1 shares if their algebraic degree is d. Uniformity of a threshold implementation is essential in its information-theoretical proof of resistance against first-order DPA. However, given a non-uniform threshold implementation, it is not immediate how to exploit its non-uniformity in an attack. In my talk I discuss the local and global effects of non-uniformity in iterated functions and their significance on the resistance against DPA. I treat methods to quantitatively limit the amount of non-uniformity and to keep it away from where it may be harmful. These techniques are relatively cheap and can reduce non-uniformity to such a low level that it would require an astronomical amount of samples to measure it.
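A small, self-contained check (not from the talk) of the classical 3-share threshold implementation of a single AND gate, illustrating the correctness and incompleteness properties described above; this particular sharing is known not to be uniform, which is exactly the difficulty the talk addresses:

```python
from itertools import product

def share_and(x1, x2, x3, y1, y2, y3):
    """Classical 3-share threshold implementation of z = x & y.
    Each output share omits one input-share index (incompleteness)."""
    z1 = (x2 & y2) ^ (x2 & y3) ^ (x3 & y2)   # uses no share with index 1
    z2 = (x3 & y3) ^ (x3 & y1) ^ (x1 & y3)   # uses no share with index 2
    z3 = (x1 & y1) ^ (x1 & y2) ^ (x2 & y1)   # uses no share with index 3
    return z1, z2, z3

# Correctness: the recombined output equals the AND of the recombined
# inputs, for every possible sharing of x and y.
for x1, x2, x3, y1, y2, y3 in product((0, 1), repeat=6):
    x, y = x1 ^ x2 ^ x3, y1 ^ y2 ^ y3
    z1, z2, z3 = share_and(x1, x2, x3, y1, y2, y3)
    assert z1 ^ z2 ^ z3 == x & y
print("correctness of the 3-share AND verified over all 64 input sharings")
```

Incompleteness is visible directly in the expressions: z1 never touches x1 or y1, so a first-order probe of z1 alone reveals nothing about x or y.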
In order to retrieve the secret key in a side-channel attack, the attacker computes distinguisher values using all the available data. A profiling stage is very useful for providing some a priori information about the leakage model. However, profiling is essentially empirical and may not be exhaustive. Therefore, during the attack, the attacker may come across previously unseen data, which can be troublesome. A lazy workaround is to ignore all such novel observations altogether. In this paper, we show that this is not optimal and can be avoided. Our proposed techniques eventually improve the performance of classical information-theoretic distinguishers in terms of success rate.
A private information retrieval (PIR) protocol allows a user to privately retrieve an element of a non-replicated database. The security goal of a PIR protocol is that the user can retrieve information from a database without the database learning which information has been retrieved. This is widely applicable to medical files, video or song databases, and even stock exchange share prices. At ISIT 2008, Carlos Aguilar Melchor and Philippe Gaborit presented a lattice-based PIR protocol whose security is based on problems close to coding-theory problems known to be NP-complete. In this paper, we present a practical attack on this PIR protocol when the number of elements in the database is not large. More specifically, we first uncover the hidden linear relationship between the public matrices and the noisy matrices, and then propose an efficient dimension-reduced attack to locate the index of the element which the user retrieved.
Cloud computing platforms are becoming increasingly prevalent and readily available, providing resource-constrained clients with an alternative and economical way to perform large-scale computation. In this work, we address the problem of securely outsourcing large-scale nonnegative matrix factorization (NMF) to a cloud in such a way that the client can verify the correctness of the results with small overhead. The input matrix is protected by a lightweight, permutation-based encryption mechanism. By exploiting the iterative nature of NMF computation, we propose a single-round verification strategy, which can be proved to be effective. Both theoretical and experimental results are given to demonstrate the superior performance of our scheme.
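One way to picture a permutation-based masking of the NMF input (a sketch under assumptions; the paper's actual encryption and verification mechanisms are not specified in this abstract): if the client uploads PAQ for secret permutation matrices P and Q, and the cloud returns a factorization PAQ ≈ W'H', the client can unmask the factors locally because A ≈ (PᵀW')(H'Qᵀ), and permutations preserve nonnegativity.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 6, 5, 2

# Nonnegative input matrix with an exact rank-r factorization A = W @ H.
W = rng.random((m, r))
H = rng.random((r, n))
A = W @ H

# Secret row/column permutation matrices held by the client.
P = np.eye(m)[rng.permutation(m)]
Q = np.eye(n)[rng.permutation(n)]

# Client uploads the masked matrix; the cloud factors it. Here the known
# factors of the masked matrix stand in for an actual NMF solver.
A_masked = P @ A @ Q
W_cloud, H_cloud = P @ W, H @ Q          # satisfies A_masked = W_cloud @ H_cloud

# Client unmasks the returned factors with the inverse permutations.
W_rec, H_rec = P.T @ W_cloud, H_cloud @ Q.T
assert np.allclose(W_rec @ H_rec, A)
print("unmasked factors reproduce the original matrix")
```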
Pseudo-random number generators (PRNGs) are a critical infrastructure for cryptography and security of many computer applications. At the same time, PRNGs are surprisingly difficult to design, implement, and debug. This paper presents the first static analysis technique specifically for quality assurance of cryptographic PRNG implementations. The analysis targets a particular kind of implementation defect, the entropy loss. Entropy loss occurs when the entropy contained in the PRNG seed is not utilized to the full extent for generating the pseudo-random output stream. The Debian OpenSSL disaster, probably the most prominent PRNG-related security incident, was one but not the only manifestation of such a defect. Together with the static analysis technique, we present its implementation, a tool named Entroposcope. The tool offers a high degree of automation and practicality. We have applied the tool to five real-world PRNGs of different designs and show that it effectively detects both known and previously unknown instances of entropy loss.
We consider the task of secure multi-party computation of arithmetic circuits over a finite field. Unlike Boolean circuits, arithmetic circuits allow natural computations on integers to be expressed easily and efficiently. In the strongest setting of malicious security with a dishonest majority, where any number of parties may deviate arbitrarily from the protocol, most existing protocols require expensive public-key cryptography for each multiplication in the preprocessing stage of the protocol, which leads to a high total cost. We present a new protocol that overcomes this limitation by using oblivious transfer to perform secure multiplications in general finite fields with reduced communication and computation. Our protocol is based on an arithmetic view of oblivious transfer, with careful consistency checks and other techniques to obtain malicious security at a cost of less than 6 times that of semi-honest security. We describe a highly optimized implementation together with experimental results for up to five parties. By making extensive use of parallelism and SSE instructions, we improve upon previous runtimes for MPC over arithmetic circuits by more than 200 times.
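The "arithmetic view of oblivious transfer" mentioned here is in the spirit of Gilboa-style OT multiplication, in which two parties obtain additive shares of the product of their private field elements; this is not the paper's full protocol. The sketch below simulates the arithmetic with an insecure stand-in for the OT calls, just to show why the shares recombine correctly; the consistency checks that give malicious security are omitted.

```python
import secrets

P = 2**61 - 1          # a prime modulus, chosen arbitrarily for the sketch
K = P.bit_length()     # one OT per bit of Bob's input

def ot_multiply(a, b):
    """Return additive shares (s_alice, s_bob) with s_alice + s_bob = a*b mod P.
    Each loop iteration models one 1-out-of-2 OT on Bob's i-th bit of b;
    here the 'OT' is executed in the clear, purely to check the arithmetic."""
    s_alice, s_bob = 0, 0
    for i in range(K):
        r = secrets.randbelow(P)                 # Alice's fresh pad for this OT
        m0, m1 = r, (r + a * (1 << i)) % P       # Alice's two OT messages
        choice = (b >> i) & 1                    # Bob's selection bit
        t = m1 if choice else m0                 # what Bob would learn from the OT
        s_alice = (s_alice - r) % P
        s_bob = (s_bob + t) % P
    return s_alice, s_bob

a, b = secrets.randbelow(P), secrets.randbelow(P)
sa, sb = ot_multiply(a, b)
assert (sa + sb) % P == (a * b) % P
print("OT-based multiplication shares recombine to a*b mod P")
```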
We present a unified framework for studying secure multiparty computation (MPC) with arbitrarily restricted interaction patterns such as a chain, a star, a directed tree, or a directed graph. Our study generalizes both standard MPC and recent models for MPC with specific restricted interaction patterns, such as those studied by Halevi et al. (Crypto 2011), Goldwasser et al. (Eurocrypt 2014), and Beimel et al. (Crypto 2014). Since restricted interaction patterns cannot always yield full security for MPC, we start by formalizing the notion of "best possible security" for any interaction pattern. We then obtain the following main results: Completeness theorem. We prove that the star interaction pattern is complete for the problem of MPC with general interaction patterns. Positive results. We present both information-theoretic and computationally secure protocols for computing arbitrary functions with general interaction patterns. We also present more efficient protocols for computing symmetric functions, both in the computational and in the information-theoretic setting. Our computationally secure protocols for general functions necessarily rely on indistinguishability obfuscation while the ones for computing symmetric functions make simple use of multilinear maps. Negative results. We show that, in many cases, the complexity of our information-theoretic protocols is essentially the best that can be achieved. All of our protocols rely on a correlated randomness setup, which is necessary in our setting (for computing general functions). In the computational case, we also present a generic procedure to make any correlated randomness setup reusable, in the common random string model. Although most of our information-theoretic protocols have exponential complexity, they may be practical for functions on small domains (e.g., {0,1}^20), where they are concretely faster than their computational counterparts.
In this work we propose a model for conducting efficient and mutually beneficial information sharing between two competing entities, focusing specifically on software vulnerability sharing. We extend the two-stage game-theoretic model proposed by Khouzani et al. [18] for bug sharing, addressing two key features: we allow security information to be associated with different categories and severities, and we remove a large proportion of the player homogeneity assumptions the previous work makes. We then analyse how these added degrees of realism affect the trading dynamics of the game. Second, we develop a new private set operation (PSO) protocol that enables the removal of the trusted mediation requirement. The PSO functionality allows for bilateral trading between the two entities up to a mutually agreed threshold on the value of information shared, keeping all other input information secret. The protocol scales linearly with set sizes and we give an implementation that establishes the practicality of the design for varying input parameters. The resulting model and protocol provide a framework for practical and secure information sharing between competing entities.
Radio network information is leaked well beyond the perimeter in which the radio network is deployed. We investigate attacks in which a person's location can be inferred from the radio characteristics of wireless links (e.g., the received signal strength). An attacker can deploy a network of receivers that measure the received signal strength of the radio signals transmitted by the legitimate wireless devices inside a perimeter, allowing the attacker to learn the locations of people moving in the vicinity of the devices inside the perimeter. In this paper, we develop the first solution to this location privacy problem in which neither the attacker nodes nor the tracked moving object transmit any RF signals. We first model the radio network leakage attack using a Stackelberg game. Next, we define utility and cost functions related to the defender and attacker actions. Finally, using our utility and cost functions, we find the optimal strategy for the defender by applying a greedy method. We evaluate our game-theoretic framework using experiments and find that our approach significantly reduces the chance of an attacker determining the location of people inside a perimeter.
We give attacks on Feistel-based format-preserving encryption (FPE) schemes that succeed in message recovery (not merely distinguishing scheme outputs from random) when the message space is small. For 4-bit messages, the attacks fully recover the target message using 2^21 examples for the FF3 NIST standard and 2^25 examples for the FF1 NIST standard. The examples include only three messages per tweak, which is what makes the attacks non-trivial even though the total number of examples exceeds the size of the domain. The attacks are rigorously analyzed in a new definitional framework of message-recovery security. The attacks are easily put out of reach by increasing the number of Feistel rounds in the standards.