Biblio
This tutorial presents a systematic overview of kleptography: stealing information subliminally from black-box cryptographic implementations; and cliptography: defense mechanisms that clip the power of kleptographic attacks via specification re-designs (without altering the underlying algorithms). Despite the laudable history of development of modern cryptography, applying cryptographic tools to reliably provide security and privacy in practice is notoriously difficult. One fundamental practical challenge remains: guaranteeing security and privacy without explicit trust in the algorithms and implementations that underlie basic security infrastructure. While the dangers of entertaining adversarial implementations of cryptographic primitives seem obvious, the ramifications of such attacks are surprisingly dire: it turns out that, in wide generality, adversarial implementations of cryptographic algorithms (both deterministic and randomized) may leak private information while producing output that is statistically indistinguishable from that of a faithful implementation. Such attacks were formally studied under the name kleptography. The Snowden revelations have shown us how security and privacy can be lost at a very large scale even when traditional cryptography appears to protect Internet communication, if kleptography is not taken into consideration. We first explain how the above-mentioned kleptographic attacks can be carried out in various settings. We then introduce several simple but rigorous immunizing strategies, inspired by folklore practical wisdom, to protect different algorithms from implementation subversion. These strategies can be applied to secure most of the fundamental cryptographic primitives, such as PRGs, digital signatures, and public-key encryption, against kleptographic attacks when they are implemented accordingly. Our new design principles may suggest new standardization methods that help reduce the threat of subverted implementations. We also hope that this tutorial stimulates a community-wide effort to further tackle the fundamental challenge mentioned at the beginning.
Notable recent security incidents have generated intense interest in adversaries which attempt to subvert, perhaps covertly, cryptographic algorithms. In this paper we develop (IND-CPA) semantically secure encryption in this challenging setting. This fundamental encryption primitive has been previously studied in the "kleptographic setting," though existing results must relax the model by introducing trusted components or otherwise constraining the subversion power of the adversary: designing a public-key system that is kleptographically semantically secure (with minimal trust) has remained elusive to date. In this work, we finally achieve such systems, even when all relevant cryptographic algorithms are subject to adversarial (kleptographic) subversion. To this end we exploit novel inter-component randomized cryptographic checking techniques (with an offline checking component), combined with common and simple software-engineering modular programming techniques (applied at the system's black-box specification level). Moreover, our methodology yields a strong generic technique for preserving the security of any semantically secure cryptosystem when it is placed in the strong kleptographic adversary setting.
In the age of IoT, as more and more devices connect to the Internet through wireless networks, a better security infrastructure is required to protect these devices from massive attacks. SSIDs and passwords have long been used to authenticate and secure Wi-Fi networks, but the SSID and password combination is vulnerable to security exploits like phishing and brute-forcing. In this paper, a completely automated Wi-Fi authentication system is proposed that generates Time-based One-Time Passwords (TOTP) to secure Wi-Fi networks. This approach aims to turn connecting to a Wi-Fi network into a black-box process for the user and to generate periodic secure passwords for the network without human intervention.
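The abstract above does not include an implementation, but the TOTP primitive it builds on is standardized in RFC 6238. Below is a minimal Python sketch of how such periodic passwords could be derived from a shared secret; the base32 secret, the 30-second period, and the 6-digit length are illustrative defaults, not parameters taken from the paper.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Time-based One-Time Password (RFC 6238) from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period               # elapsed time steps since the epoch
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter (RFC 4226)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Both the access point and the client can derive the same code from the shared
# secret, so the Wi-Fi passphrase can rotate every `period` seconds without user input.
print(totp("JBSWY3DPEHPK3PXP"))
```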
In this paper, a practical quantum public-key encryption model is proposed by studying recent quantum public-key encryption schemes. The proposed model makes explicit stipulations on the generation, distribution, authentication, and usage of the secret keys, thus forming a black-box operation. Meanwhile, the model encapsulates the process of encryption and decryption for the users, forming a black-box client side. In our model, each module is independent and can be replaced arbitrarily without affecting the overall design. Therefore, the model provides useful guidance for the design and development of quantum public-key encryption schemes.
In this paper, we initiate the study of garbled protocols - a generalization of Yao's garbled circuits construction to distributed protocols. More specifically, in a garbled protocol construction, each party can independently generate a garbled protocol component along with pairs of input labels. Additionally, it generates an encoding of its input. The evaluation procedure takes as input the set of all garbled protocol components and the labels corresponding to the input encodings of all parties, and outputs the entire transcript of the distributed protocol. We provide constructions for garbling arbitrary protocols based on standard computational assumptions on bilinear maps (in the common random string model). Next, using garbled protocols we obtain a general compiler that compresses any arbitrary-round multiparty secure computation protocol into a two-round UC-secure protocol. Previously, two-round multiparty secure computation protocols were only known assuming witness encryption or learning with errors. Benefiting from our generic approach, we also obtain protocols (i) for the setting of random access machines (RAM programs), keeping communication and computational costs proportional to running times, and (ii) making only black-box use of the underlying group, eliminating the need for any expensive non-black-box group operations. Our results are obtained by a simple but powerful extension of the non-interactive zero-knowledge proof system of Groth, Ostrovsky and Sahai [Journal of the ACM, 2012].
We provide an analysis of IEEE standard P1735, which describes methods for encrypting electronic-design intellectual property (IP), as well as the management of access rights for such IP. We find a surprising number of cryptographic mistakes in the standard. In the most egregious cases, these mistakes enable attack vectors that allow us to recover the entire underlying plaintext IP. Some of these attack vectors are well known, e.g. padding-oracle attacks. Others are new, and are made possible by the need to support the typical uses of the underlying IP; in particular, the need for commercial system-on-chip (SoC) tools to synthesize multiple pieces of IP into a fully specified chip design and to report syntax errors. We exploit these mistakes in a variety of ways, leveraging a commercial SoC tool as a black-box oracle. In addition to recovering entire plaintext IP, we show how to produce standard-compliant ciphertexts of IP that have been modified to include targeted hardware Trojans, for example IP that correctly implements the AES block cipher on all but one (arbitrary) plaintext, which induces the block cipher to return the secret key. We outline a number of other attacks that the standard allows, including on the cryptographic mechanism for IP licensing. Unfortunately, we show that obvious "quick fixes" to the standard (and the tools that support it) do not stop all of our attacks. This suggests that the standard requires a significant overhaul, and that IP authors using P1735 encryption should consider themselves at risk.
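As a rough illustration of the padding-oracle technique the abstract mentions (not the paper's actual attack against P1735), the sketch below recovers one CBC block byte by byte; `oracle` stands in for a hypothetical black box, such as a decrypting tool, that only reveals whether a ciphertext decrypts with valid PKCS#7 padding.

```python
from typing import Callable

def recover_block(prev_block: bytes, target_block: bytes,
                  oracle: Callable[[bytes, bytes], bool]) -> bytes:
    """Recover the plaintext of one CBC block using a padding oracle.

    oracle(iv, block) is assumed to return True iff `block`, decrypted with the
    forged `iv`, ends in valid PKCS#7 padding. (A robust attack would also rule
    out accidental longer paddings; omitted here for brevity.)
    """
    size = len(target_block)
    intermediate = [0] * size                 # block-cipher decryption of target_block
    for pad in range(1, size + 1):            # padding value being forced: 0x01, 0x02, ...
        pos = size - pad
        for guess in range(256):
            forged = bytearray(size)
            forged[pos] = guess
            for i in range(pos + 1, size):    # keep already-recovered tail consistent
                forged[i] = intermediate[i] ^ pad
            if oracle(bytes(forged), target_block):
                intermediate[pos] = guess ^ pad
                break
    return bytes(i ^ p for i, p in zip(intermediate, prev_block))
```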
In order to provide secure data communication in today's cyberspace, a stronger encryption technique becomes a necessity, one that can help people protect their sensitive information from cryptanalysts. This paper proposes a novel symmetric block cipher algorithm that uses multiple access circular queues (MACQs) of variable lengths for diffusion of information to a greater extent. The keys are randomly generated and are of variable lengths depending upon the size of each MACQ. A number of iterations of circular rotations, swapping of elements, and XORing the key with queue elements are performed on each MACQ. An S-box is used so that the relationship between the key and the ciphertext remains indeterminate or obscure. Together, these operations help transform the cipher into a much more complex and secure block cipher. The resulting encryption algorithm aims to be both secure and fast.
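The abstract describes rounds of circular rotation, element swapping, and key XOR over multiple access circular queues. The fragment below is only a sketch of what one such diffusion round on a single queue could look like; the function name, queue layout, rotation schedule, and key handling are illustrative assumptions, not the paper's specification (and the S-box substitution step is omitted).

```python
import secrets
from collections import deque

def macq_round(queue: deque, key: bytes, rotation: int) -> None:
    """One illustrative diffusion round on a multiple access circular queue (MACQ):
    rotate the queue, swap mirrored elements, then XOR key bytes into the elements."""
    queue.rotate(rotation)                                 # circular rotation
    n = len(queue)
    for i in range(n // 2):                                # swap symmetric positions
        queue[i], queue[n - 1 - i] = queue[n - 1 - i], queue[i]
    for i in range(n):                                     # XOR the (repeating) key
        queue[i] ^= key[i % len(key)]

# Illustrative usage: a 16-byte block loaded into one circular queue and diffused
# over a few rounds with a randomly generated variable-length key.
block = deque(b"example block 16")
key = secrets.token_bytes(12)
for r in range(4):
    macq_round(block, key, rotation=r + 1)
```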
Today, cloud vendors host third-party black-box services whose developers usually provide only textual descriptions or purely syntactical interface specifications. Cloud vendors that give substantial support to other third-party developers to integrate hosted services into new software solutions would have a unique selling feature over their competitors. However, to reliably determine whether a service is reusable, comprehensive service specifications are needed. Characteristic for comprehensive, in contrast to syntactical, specifications are the formalization of ontological and behavioral semantics, homogeneity according to a global ontology, and a service grounding that links the abstract service description and its technical realization. Homogeneous, semantical specifications make it possible to reliably identify reusable services, whereas the service grounding is needed for the technical service integration. In general, comprehensive specifications are not available and have to be derived. Existing automated approaches are restricted to certain characteristics of comprehensiveness. In my PhD, I consider an automated approach to derive fully fledged comprehensive specifications for black-box services. Ontological semantics are derived from syntactical interface specifications. Behavioral semantics are mined from call logs that cloud vendors create to monitor the hosted services. The specifications are harmonized over a global ontology. The service grounding is established using traceability information. The approach enables third-party developers to compose services into complex systems and creates new sales channels for cloud and service providers.
Finding differences between programs with similar functionality is an important security problem, as such differences can be used for fingerprinting or for creating evasion attacks against security software like Web Application Firewalls (WAFs), which are designed to detect malicious inputs to web applications. In this paper, we present SFADIFF, a black-box differential testing framework based on Symbolic Finite Automata (SFA) learning. SFADIFF can automatically find differences between a set of programs with comparable functionality. Unlike existing differential testing techniques, instead of searching for each difference individually, SFADIFF infers SFA models of the target programs using black-box queries and systematically enumerates the differences between the inferred SFA models. All differences between the inferred models are checked against the corresponding programs. Any difference between the models that does not result in a difference between the corresponding programs is used as a counterexample for further refinement of the inferred models. SFADIFF's model-based approach, unlike existing differential testing tools, also supports fully automated root cause analysis in a domain-independent manner. We evaluate SFADIFF in three different settings for finding discrepancies between: (i) three TCP implementations, (ii) four WAFs, and (iii) HTML/JavaScript parsing implementations in WAFs and web browsers. Our results demonstrate that SFADIFF is able to identify and enumerate the differences systematically and efficiently in all these settings. We show that SFADIFF is able to find differences not only between different WAFs but also between different versions of the same WAF. SFADIFF is also able to discover three previously unknown differences between the HTML/JavaScript parsers of two popular WAFs (PHPIDS 0.7 and Expose 2.4.0) and the corresponding parsers of Google Chrome, Firefox, Safari, and Internet Explorer. We confirm that all these differences can be used to evade the WAFs and launch successful cross-site scripting attacks.
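Setting aside the SFA learning and model refinement that SFADIFF actually performs, the core black-box differential idea can be pictured with the toy loop below; `waf_a` and `waf_b` are hypothetical black-box classifiers, not interfaces from the paper.

```python
from typing import Callable, Iterable, List, Tuple

def find_discrepancies(waf_a: Callable[[str], bool],
                       waf_b: Callable[[str], bool],
                       inputs: Iterable[str]) -> List[Tuple[str, bool, bool]]:
    """Query two black boxes with identical inputs and collect their disagreements.
    SFADIFF goes further: it learns SFA models of each program from such queries
    and enumerates differences between the models instead of testing inputs one by one."""
    diffs = []
    for payload in inputs:
        a, b = waf_a(payload), waf_b(payload)
        if a != b:                            # one filter blocks what the other accepts
            diffs.append((payload, a, b))
    return diffs
```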
Processes to automate the selection of appropriate algorithms for various matrix computations are described. In particular, processes to check for, and certify, various matrix properties of black-box matrices are presented. These include sparsity patterns and structural properties that allow "superfast" algorithms to be used in place of black-box algorithms. Matrix properties that hold generically, and allow the use of matrix preconditioning to be reduced or eliminated, can also be checked for and certified, notably including the small-field case, where this presently has the greatest impact on the efficiency of the computation.
Tensors are a multi-linear generalization of matrices to their d-way counterparts, and have recently received intense interest due to their natural representation of high-dimensional data and the availability of fast tensor decomposition algorithms. Given the input-output data of a nonlinear system/circuit, this paper presents a nonlinear model identification and simulation framework built on top of Volterra series and its seamless integration with tensor arithmetic. By exploiting partially symmetric polyadic decompositions of sparse Toeplitz tensors, the proposed framework permits a pleasantly scalable way to incorporate high-order Volterra kernels. Such an approach largely eludes the curse of dimensionality and allows computationally fast modeling and simulation beyond weakly nonlinear systems. The black-box nature of the model also hides structural information of the system/circuit and encapsulates it in compact tensors. Numerical examples are given to verify the efficacy, efficiency and generality of this tensor-based modeling and simulation framework.
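For reference, a generic discrete-time Volterra series of degree D and memory length M (standard notation, not necessarily the paper's) expresses the output as a sum of multidimensional convolutions of the input with the Volterra kernels h_d:

```latex
y(k) \;=\; h_0 \;+\; \sum_{d=1}^{D} \;\sum_{m_1=0}^{M-1} \cdots \sum_{m_d=0}^{M-1}
  h_d(m_1,\dots,m_d) \prod_{j=1}^{d} u(k - m_j)
```

Each kernel h_d collects the order-d nonlinear contribution; the tensor view treats h_d as a d-way array, which is what the polyadic decompositions mentioned in the abstract compress.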
One important goal of black-box complexity theory is the development of complexity models that allow deriving meaningful lower bounds for whole classes of randomized search heuristics. Complementing classical runtime analysis, black-box models help us understand how algorithmic choices such as the population size, the variation operators, or the selection rules influence the optimization time. One example for such a result is the Ω(n log n) lower bound for unary unbiased algorithms on functions with a unique global optimum [Lehre/Witt, GECCO 2010], which tells us that higher-arity operators or biased sampling strategies are needed when trying to beat this bound. For lack of analysis techniques, almost no non-trivial bounds are known for other restricted models. Proving such bounds therefore remains one of the main challenges in black-box complexity theory. With this paper we contribute to the technical toolbox for lower bound computations by proposing a new type of information-theoretic argument. We consider the permutation- and bit-invariant version of LeadingOnes and prove that its (1+1) elitist black-box complexity is Ω(n^2), a bound that is matched by (1+1)-type evolutionary algorithms. The (1+1) elitist complexity of LeadingOnes is thus considerably larger than its unrestricted one, which is known to be of order n log log n [Afshani et al., 2013].
While compilers offer a fair trade-off between productivity and executable performance in single-threaded execution, their optimizations remain fragile when addressing compute-intensive code for parallel architectures with deep memory hierarchies. Moreover, these optimizations operate as black boxes, impenetrable for the user, leaving them with no alternative to time-consuming and error-prone manual optimization in cases where an imprecise cost model or a weak analysis resulted in a bad optimization decision. To address this issue, we propose a technique that automatically translates an arbitrary polyhedral optimization, used internally by the loop-level optimization frameworks of several modern compilers, into a sequence of comprehensible syntactic transformations, as long as this optimization focuses on scheduling loop iterations. This approach opens the black box of polyhedral frameworks, enabling users to examine, refine, replay and even design complex optimizations semi-automatically in partnership with the compiler.
Synchronous replication is critical for today's enterprise IT organization. It is mandated by regulation in several countries for some types of organizations, including banks and insurance companies. The technology has been available for a long time, but due to the speed of light and maximum-latency limitations, it is usually limited to a distance of 50-100 miles. Flight data recorders, also known as black boxes, have long been used to record the last actions that happened in airplanes at times of disasters. We present an integration between an Enterprise Data Recorder and an asynchronous replication mechanism, which allows breaking the functional limits that light speed imposes on synchronous replication.
In classical runtime analysis it has been observed that certain working principles of an evolutionary algorithm cannot be understood by only looking at the asymptotic order of the runtime, but that more precise estimates are needed. In this work we demonstrate that the same observation applies to black-box complexity analysis. We prove that the unary unbiased black-box complexity of the classic OneMax function class is n ln(n) – cn ± o(n) for a constant c between 0.2539 and 0.2665. Our analysis yields a simple (1+1)-type algorithm achieving this runtime bound via a fitness-dependent mutation strength. When translated into a fixed-budget perspective, our algorithm with the same budget computes a solution that asymptotically is 13% closer to the optimum (given that the budget is at least 0.2675n).
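As a concrete picture of the kind of (1+1)-type algorithm the result refers to, the sketch below keeps a single parent, creates one offspring by flipping a number of bits chosen as a function of the current fitness, and accepts it only if it is at least as good; the particular rule in `mutation_strength` is a placeholder, not the optimal fitness-dependent schedule derived in the paper.

```python
import random

def one_max(x):
    """Number of one-bits; the classic OneMax fitness function."""
    return sum(x)

def mutation_strength(fitness: int, n: int) -> int:
    """Placeholder fitness-dependent rule: flip many bits far from the optimum,
    few bits close to it. The paper derives the exact schedule achieving the
    n ln(n) - cn +/- o(n) runtime bound."""
    return max(1, round(n / (2 * (fitness + 1))))

def one_plus_one(n: int) -> int:
    """(1+1)-type elitist algorithm on OneMax with fitness-dependent mutation."""
    x = [random.randint(0, 1) for _ in range(n)]
    evaluations = 1
    while one_max(x) < n:
        r = mutation_strength(one_max(x), n)
        y = x[:]
        for i in random.sample(range(n), r):   # flip exactly r distinct bits
            y[i] ^= 1
        evaluations += 1
        if one_max(y) >= one_max(x):           # elitist acceptance
            x = y
    return evaluations

print(one_plus_one(100))
```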
Finding and proving lower bounds on black-box complexities is one of the hardest problems in the theory of randomized search heuristics. Until recently, there were no general ways of doing this, except for information-theoretic arguments similar to that of Droste, Jansen and Wegener. In a recent paper by Buzdalov, Kever and Doerr, a theorem is proven which may yield tighter bounds on unrestricted black-box complexity using certain problem-specific information. To use this theorem, one should split the search process into a finite number of states, describe transitions between states, and for each state specify (and prove) the maximum number of different answers to any query. We augment these state constraints with one more kind of constraint on states, namely the maximum number of different currently possible optima. An algorithm is presented for computing the lower bounds based on these constraints. We also empirically show improved lower bounds on the black-box complexity of OneMax and Mastermind.