Biblio

Filters: Keyword is theoretical cryptography
2020-03-04
Wiese, Moritz, Boche, Holger.  2019.  A Graph-Based Modular Coding Scheme Which Achieves Semantic Security. 2019 IEEE International Symposium on Information Theory (ISIT). :822–826.

It is investigated how to achieve semantic security for the wiretap channel. A new type of functions called biregular irreducible (BRI) functions, similar to universal hash functions, is introduced. BRI functions provide a universal method of establishing secrecy. It is proved that the known secrecy rates of any discrete and Gaussian wiretap channel are achievable with semantic security by modular wiretap codes constructed from a BRI function and an error-correcting code. A characterization of BRI functions in terms of edge-disjoint biregular graphs on a common vertex set is derived. This is used to study examples of BRI functions and to construct new ones.
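
The abstract describes BRI functions as similar to universal hash functions, the standard tool for turning partial secrecy into near-uniform keys. As a hedged illustration only (a textbook 2-universal hash family, not the paper's BRI construction), a minimal Python sketch of the kind of seeded compression used for privacy amplification; the parameters are illustrative assumptions:

```python
import random

def make_universal_hash(p, m):
    """Sample h_{a,b}(x) = ((a*x + b) mod p) mod m from a 2-universal family.
    p is a prime larger than any input x; m is the size of the output range."""
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m

# Toy usage: compress a 32-bit partially secret value down to 16 output bits.
h = make_universal_hash(p=(1 << 61) - 1, m=1 << 16)
raw = random.getrandbits(32)
print(h(raw))
```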

Yi, Zhuo, Du, Xuehui, Liao, Ying, Lu, Xin.  2019.  An Access Authentication Algorithm Based on a Hierarchical Identity-Based Signature over Lattice for the Space-Ground Integrated Network. 2019 International Conference on Advanced Communication Technologies and Networking (CommNet). :1–9.

Access authentication is a key technology for verifying the legitimacy of mobile users accessing the space-ground integrated network (SGIN). A hierarchical identity-based signature over lattice (L-HIBS) based mobile access authentication mechanism is proposed to address the shortcomings of existing access authentication methods in SGIN, such as high computational complexity, large authentication delay and lack of resistance to quantum attacks. Firstly, the idea of hierarchical identity-based cryptography is introduced according to the hierarchical distribution of nodes in SGIN, and a hierarchical access authentication architecture is built. Secondly, a new L-HIBS scheme is constructed based on the Small Integer Solution (SIS) problem to support the hierarchical identity-based cryptography. Thirdly, a mobile access authentication protocol that supports bidirectional authentication and shared session key exchange is designed with the aforementioned L-HIBS scheme. Results of theoretical analysis and simulation experiments suggest that the L-HIBS scheme possesses strong unforgeability under selective-identity and adaptive chosen-message attacks in the standard security model, and that the authentication protocol has smaller computational overhead and shorter private keys and signatures than the given baseline protocols.
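
The scheme's hardness assumption is the Small Integer Solution (SIS) problem. As a hedged illustration of the problem statement only (not the paper's L-HIBS construction), a minimal numpy sketch that checks whether a candidate vector z is a nontrivial short solution of A·z ≡ 0 (mod q); the dimensions and norm bound below are illustrative assumptions:

```python
import numpy as np

def is_sis_solution(A, z, q, beta):
    """Accept z if it is nonzero, short (||z|| <= beta), and A @ z == 0 mod q."""
    z = np.asarray(z, dtype=np.int64)
    if not np.any(z) or np.linalg.norm(z) > beta:
        return False
    return not np.any((A @ z) % q)

# Toy instance with illustrative parameters (far too small to be secure).
q, n, m, beta = 97, 8, 32, 10.0
A = np.random.default_rng(0).integers(0, q, size=(n, m))
print(is_sis_solution(A, np.zeros(m, dtype=np.int64), q, beta))  # False: trivial vector rejected
```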

Korzhik, Valery, Starostin, Vladimir, Morales-Luna, Guillermo, Kabardov, Muaed, Gerasimovich, Aleksandr, Yakovlev, Victor, Zhuvikin, Aleksey.  2019.  Information Theoretical Secure Key Sharing Protocol for Noiseless Public Constant Parameter Channels without Cryptographic Assumptions. 2019 Federated Conference on Computer Science and Information Systems (FedCSIS). :327–332.

We propose a new key sharing protocol executed over any constant-parameter noiseless public channel (such as the Internet itself) without any cryptographic assumptions or protocol restrictions on the SNR in the eavesdropper channels. This protocol is based on the extraction by legitimate users of eigenvalues from randomly generated matrices. A similar protocol was proposed recently by G. Qin and Z. Ding. However, we prove that this protocol is in fact insecure, and we modify it to be both reliable and secure using artificial noise and a privacy amplification procedure. Simulation results confirm these statements.
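
The protocol has the legitimate users extract eigenvalues from randomly generated matrices. As a hedged illustration only (not the Qin-Ding protocol or the authors' modification of it), a minimal numpy sketch of the algebraic fact such schemes lean on: for square matrices A and B, the products AB and BA share the same eigenvalue spectrum, so parties holding the two factors can arrive at a common set of values:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 4
A = rng.standard_normal((n, n))  # one party's random matrix
B = rng.standard_normal((n, n))  # the other party's random matrix

# The two products are generally different matrices, but their spectra coincide.
eig_ab = np.sort_complex(np.linalg.eigvals(A @ B))
eig_ba = np.sort_complex(np.linalg.eigvals(B @ A))
print(np.allclose(eig_ab, eig_ba))  # True: common values usable as key material
```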

Shahsavari, Yahya, Zhang, Kaiwen, Talhi, Chamseddine.  2019.  A Theoretical Model for Fork Analysis in the Bitcoin Network. 2019 IEEE International Conference on Blockchain (Blockchain). :237–244.

Blockchain networks which employ Proof-of-Work in their consensus mechanism may face inconsistencies in the form of forks. These forks are usually resolved through the application of block selection rules (such as the Nakamoto consensus). In this paper, we investigate the cause and length of forks for the Bitcoin network. We develop theoretical formulas which model the Bitcoin consensus and network protocols, based on an Erdős-Rényi random graph construction of the overlay network of peers. Our theoretical model addresses the effect of key parameters on the fork occurrence probability, such as block propagation delay, network bandwidth, and block size. We also leverage this model to estimate the weight of fork branches. Our model is implemented using the network simulator OMNET++ and validated by historical Bitcoin data. We show that under current conditions, Bitcoin will not benefit from increasing the number of connections per node.
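
The paper models fork occurrence as a function of block propagation delay, bandwidth, and block size. As a hedged back-of-the-envelope sketch only (a standard Poisson-arrival approximation, not the paper's Erdős-Rényi overlay model), the chance that a competing block is mined while the previous one is still propagating:

```python
import math

def fork_probability(prop_delay_s, block_interval_s=600.0):
    """P(fork) ~= 1 - exp(-delay/interval): probability that a second block is
    found during the propagation window, assuming Poisson block arrivals."""
    return 1.0 - math.exp(-prop_delay_s / block_interval_s)

# Illustrative propagation delays against Bitcoin's 600 s target interval.
for delay in (1, 5, 10, 30):
    print(f"{delay:>2} s propagation -> fork probability ~ {fork_probability(delay):.4f}")
```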

AL-Mubayedh, Dhoha, AL-Khalis, Mashael, AL-Azman, Ghadeer, AL-Abdali, Manal, Al Fosail, Malak, Nagy, Naya.  2019.  Quantum Cryptography on IBM QX. 2019 2nd International Conference on Computer Applications Information Security (ICCAIS). :1–6.

Due to the importance of securing electronic transactions, many cryptographic protocols have been employed that mainly depend on keys distributed between the intended parties. In classical computers, the security of these protocols depends on the mathematical complexity of the encoding functions and on the length of the key. However, the existing classical algorithms are 100% breakable with enough computational power, which can be provided by quantum machines. Moving to quantum computation, the field of security shifts into a new area of cryptographic solutions: quantum cryptography. The era of quantum computers is at its beginning, and there are few practical implementations and evaluations of quantum protocols. Therefore, the paper describes a well-known quantum key distribution protocol, BB84, and then provides a practical implementation of it on the IBM QX software. The practical implementation showed that there were differences between the theoretically expected results of BB84 and the practical implementation results. Because of this, the paper provides a statistical analysis of the experiments by comparing the standard deviation of the results. Using the BB84 protocol, the existence of a third-party eavesdropper can be detected; thus, calculations of the probability of detecting/not detecting a third-party eavesdropper have been provided. These values are again compared to the theoretical expectation. The calculations showed that with a greater number of qubits, the probability of detecting an eavesdropper is higher.
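
As a hedged classical simulation only (plain Python, no IBM QX or real quantum hardware), a minimal sketch of the BB84 sift that the abstract's eavesdropping analysis rests on: Alice encodes random bits in random bases, Bob measures in random bases, matching-basis positions form the sifted key, and an intercept-resend eavesdropper introduces detectable errors:

```python
import random

def bb84_sift(n_qubits=64, eavesdrop=False):
    alice_bits  = [random.randint(0, 1) for _ in range(n_qubits)]
    alice_bases = [random.randint(0, 1) for _ in range(n_qubits)]  # 0: Z basis, 1: X basis
    bob_bases   = [random.randint(0, 1) for _ in range(n_qubits)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            # Intercept-resend: Eve measures in a random basis and resends.
            e_basis = random.randint(0, 1)
            bit = bit if e_basis == a_basis else random.randint(0, 1)
            a_basis = e_basis
        # Bob's outcome is deterministic only when his basis matches the sent one.
        bob_bits.append(bit if b_basis == a_basis else random.randint(0, 1))

    sifted = [(a, b) for a, b, x, y in zip(alice_bits, bob_bits, alice_bases, bob_bases) if x == y]
    errors = sum(a != b for a, b in sifted)
    return len(sifted), errors

print(bb84_sift(eavesdrop=False))  # no errors in the sifted key
print(bb84_sift(eavesdrop=True))   # roughly 25% of sifted bits disagree, exposing Eve
```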

2019-02-14
Deng, Dong, Tao, Yufei, Li, Guoliang.  2018.  Overlap Set Similarity Joins with Theoretical Guarantees. Proceedings of the 2018 International Conference on Management of Data. :905-920.
This paper studies the set similarity join problem with overlap constraints which, given two collections of sets and a constant c, finds all the set pairs in the datasets that share at least c common elements. This is a fundamental operation in many fields, such as information retrieval, data mining, and machine learning. The time complexity of all existing methods is O(n^2), where n is the total size of all the sets. In this paper, we present a size-aware algorithm with the time complexity of O(n^(2-1/c) · k^(1/(2c))) = o(n^2) + O(k), where k is the number of results. The size-aware algorithm divides all the sets into small and large ones based on their sizes and processes them separately. We can use existing methods to process the large sets and focus on the small sets in this paper. We develop several optimization heuristics for the small sets to improve the practical performance significantly. As the size boundary between the small sets and the large sets is crucial to the efficiency, we propose an effective size boundary selection algorithm to judiciously choose an appropriate size boundary, which works very well in practice. Experimental results on real-world datasets show that our methods achieve high performance and outperform the state-of-the-art approaches by up to an order of magnitude.
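
The abstract's key idea is the split of the input into small and large sets by a size boundary. As a hedged toy sketch only (a brute-force overlap join plus the size split, not the paper's size-aware algorithm or heuristics), with an illustrative boundary value:

```python
from itertools import combinations

def overlap_join(sets, c):
    """Return all pairs of set indices sharing at least c common elements (brute force)."""
    return [(i, j) for (i, s), (j, t) in combinations(enumerate(sets), 2)
            if len(s & t) >= c]

def split_by_size(sets, boundary):
    """Size-aware preprocessing: small and large sets are handled by different strategies."""
    small = [s for s in sets if len(s) <= boundary]
    large = [s for s in sets if len(s) > boundary]
    return small, large

data = [{1, 2, 3}, {2, 3, 4, 5}, {3, 4, 5, 6, 7, 8}, {1, 7}]
print(overlap_join(data, c=2))          # pairs sharing >= 2 elements
print(split_by_size(data, boundary=4))  # illustrative boundary value
```
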
Chida, Koji, Hamada, Koki, Ikarashi, Dai, Kikuchi, Ryo, Pinkas, Benny.  2018.  High-Throughput Secure AES Computation. Proceedings of the 6th Workshop on Encrypted Computing & Applied Homomorphic Cryptography. :13-24.
This work describes a three-times (3×) improvement to the performance of secure computation of AES over a network of three parties with an honest majority. The throughput that is achieved is even better than that of computing AES in some scenarios of local (non-private) computation. The performance improvement is achieved through an optimization of the generic secure protocol, and, more importantly, through an optimization of the description of the AES function to support more efficient secure computation, and an optimization of the protocol to the underlying architecture. This demonstrates that the development process of efficient secure computation must include adapting the description of the computed function to be tailored to the protocol, and adapting the implementation of the protocol to the architecture. This work focuses on the secure computation of AES since it has been widely investigated as a de-facto standard performance benchmark for secure computation, and is also important by itself for many applications. Furthermore, parts of the improvements are general and not specific to AES, and can be applied to secure computation of arbitrary functions.
Liu, Tianren, Vaikuntanathan, Vinod.  2018.  Breaking the Circuit-Size Barrier in Secret Sharing. Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. :699-708.
We study secret sharing schemes for general (non-threshold) access structures. A general secret sharing scheme for n parties is associated to a monotone function F: {0,1}^n → {0,1}. In such a scheme, a dealer distributes shares of a secret s among n parties. Any subset of parties T ⊆ [n] should be able to put together their shares and reconstruct the secret s if F(T)=1, and should have no information about s if F(T)=0. One of the major long-standing questions in information-theoretic cryptography is to minimize the (total) size of the shares in a secret-sharing scheme for arbitrary monotone functions F. There is a large gap between lower and upper bounds for secret sharing. The best known scheme for general F has shares of size 2^(n-o(n)), but the best lower bound is Ω(n^2/log n). Indeed, the exponential share size is a direct result of the fact that in all known secret-sharing schemes, the share size grows with the size of a circuit (or formula, or monotone span program) for F. Indeed, several researchers have suggested the existence of a representation size barrier which implies that the right answer is closer to the upper bound, namely, 2^(n-o(n)). In this work, we overcome this barrier by constructing a secret sharing scheme for any access structure with shares of size 2^(0.994n) and a linear secret sharing scheme for any access structure with shares of size 2^(0.999n). As a contribution of independent interest, we also construct a secret sharing scheme with shares of size 2^(Õ(√n)) for 2^(n choose n/2) monotone access structures, out of a total of 2^((n choose n/2)·(1+O(log n/n))) of them. Our construction builds on recent works that construct better protocols for the conditional disclosure of secrets (CDS) problem.
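
The paper targets general (non-threshold) access structures. As a hedged illustration of the threshold special case only (Shamir's classic scheme, not the paper's construction), a minimal sketch of (k, n) sharing and Lagrange reconstruction over a prime field:

```python
import random

P = 2**127 - 1  # a Mersenne prime defining the field GF(P)

def share(secret, k, n):
    """Shamir (k, n) threshold sharing: any k shares reconstruct the secret,
    any fewer reveal nothing about it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(secret=123456789, k=3, n=5)
print(reconstruct(shares[:3]))  # 123456789, recovered from any 3 of the 5 shares
```
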
Sharaieh, A., Edinat, A., AlFarraji, S..  2018.  An Enhanced Polyalphabetic Algorithm on Vigenerecipher with DNA-Based Cryptography. 2018 IEEE/ACS 15th International Conference on Computer Systems and Applications (AICCSA). :1-6.

Several algorithms have been introduced for data encryption and decryption to protect data from threats and intruders who would steal or destroy it. DNA cryptography is a new concept that has attracted great interest in information security. In this paper, we propose a new enhanced polyalphabetic cipher algorithm (EPCA) as an enhancement of the Vigenère cipher that avoids its limitations and weaknesses. DNA technology is used to convert binary data to a DNA strand. We compared the EPCA with the Vigenère cipher in terms of memory space and run time. The EPCA has a theoretical worst-case run time of O(N). The EPCA shows better performance in average memory space and close results in average running time for the tested data.
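
As a hedged toy sketch only (a plain Vigenère cipher plus a binary-to-DNA mapping, not the paper's EPCA), illustrating the two ingredients the abstract combines; the two-bits-per-base encoding is an illustrative assumption:

```python
DNA = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}

def vigenere_encrypt(plaintext, key):
    """Classic Vigenère cipher over the alphabet A-Z."""
    out = []
    for i, ch in enumerate(plaintext.upper()):
        k = ord(key[i % len(key)].upper()) - ord('A')
        out.append(chr((ord(ch) - ord('A') + k) % 26 + ord('A')))
    return ''.join(out)

def to_dna(text):
    """Encode each byte as four DNA bases (two bits per base)."""
    bits = ''.join(f'{b:08b}' for b in text.encode('ascii'))
    return ''.join(DNA[bits[i:i + 2]] for i in range(0, len(bits), 2))

cipher = vigenere_encrypt('ATTACKATDAWN', 'LEMON')
print(cipher)          # LXFOPVEFRNHR
print(to_dna(cipher))  # DNA-strand representation of the ciphertext
```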

Oohama, Y., Santoso, B..  2018.  Information Theoretical Analysis of Side-Channel Attacks to the Shannon Cipher System. 2018 IEEE International Symposium on Information Theory (ISIT). :581-585.
We study side-channel attacks for the Shannon cipher system. To pose side-channel attacks to the Shannon cipher system, we regard them as signal estimation via encoded data from two distributed sensors. This can be formulated as the one-helper source coding problem posed and investigated by Ahlswede and Körner (1975) and by Wyner (1975). We further investigate the posed problem to derive new secrecy bounds. Our results are derived by coupling the result of Watanabe and Oohama (2012) on the bounded storage eavesdropper with the exponential strong converse theorem that Oohama (2015) established for the one-helper source coding problem.
Narayanan, G., Das, J. K., Rajeswari, M., Kumar, R. S..  2018.  Game Theoretical Approach with Audit Based Misbehavior Detection System. 2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT). :1932-1935.
Mobile ad-hoc networks (MANETs) are dynamic in nature and have no fixed infrastructure to govern the nodes in the network, so the challenge lies in coordinating such dynamically shifting nodes. The root problem of identifying and isolating misbehaving nodes that refuse to forward packets in multi-hop ad hoc networks is solved by the development of a comprehensive system called Audit-based Misbehavior Detection (AMD) that efficiently isolates selective and continuous packet droppers. AMD evaluates node behavior on a per-packet basis, without using energy-expensive overhearing techniques or intensive acknowledgment schemes. Moreover, AMD can detect selective dropping attacks even in end-to-end encrypted traffic and can be applied to multi-channel networks. Game-theoretical approaches are well suited to deciding the reward mechanisms under which the mobile nodes operate. Rewards or penalties have to be decided in a way that ensures a clean and healthy MANET environment, and non-routine, unannounced alterations are also needed when deciding suitable and safe reward strategies. This work focuses on integrating an Audit-based Misbehavior Detection (AMD) scheme and an incentive-based reputation scheme with a game-theoretical approach called the Supervisory Game to analyze the selfish behavior of nodes in the MANET environment. The proposed work, GAMD, significantly reduces the cost of detecting misbehaving nodes in the network.
Arrazola, J. M., Marwah, A., Lovitz, B., Touchette, D., Lutkenhaus, N..  2018.  Cryptographic and Non-Cryptographic Network Applications and Their Optical Implementations. 2018 IEEE Photonics Society Summer Topical Meeting Series (SUM). :9-10.
The use of quantum mechanical signals in communication opens up the opportunity to build new communication systems that accomplish tasks that communication with classical signal structures cannot achieve. Prominent examples are quantum key distribution protocols, which allow the generation of secret keys without computational assumptions on adversaries. Over the past decade, protocols have been developed that achieve tasks that can also be accomplished with classical signals, but where the quantum version of the protocol either uses fewer resources or leaks less information between the involved parties. The gap between quantum and classical can be exponential in the input size of the problems. Examples are the comparison of data, the scheduling of appointments, and others. Until recently, it was thought that these protocols are of mere conceptual value and that the quantum advantage could not be realized. We changed that by developing quantum optical versions of these abstract protocols that can run with simple laser pulses, beam-splitters and detectors [1-3]. By now the first protocols have been successfully implemented [4], showing that a quantum advantage can be realized. The next step is to find and realize protocols that have a high practical value.
Zhang, F., Dong, X., Zhao, X., Wang, Y., Qureshi, S., Zhang, Y., Lou, X., Tang, Y..  2018.  Theoretical Round Modification Fault Analysis on AEGIS-128 with Algebraic Techniques. 2018 IEEE 15th International Conference on Mobile Ad Hoc and Sensor Systems (MASS). :335-343.
This paper proposes an advanced round modification fault analysis (RMFA) at the theoretical level on AEGIS-128, which is one of the seven finalists in the CAESAR competition. First, we clarify our assumptions and simplifications on the attack model, focusing on the encryption security. Then, we emphasize the difficulty of applying vanilla RMFA to AEGIS-128 in the practical case. Finally, we demonstrate our advanced fault analysis on AEGIS-128 using machine-solver-based algebraic techniques. Our enhancement can be used to conquer the practical scenario which is difficult for vanilla RMFA. Simulation results show that when the fault is injected into the initialization phase and the number of rounds is reduced to one, two samples of injections can extract all 128 key bits in less than two hours. This work can also be extended to other versions such as AEGIS-256.
Drăgoi, V., Richmond, T., Bucerzan, D., Legay, A..  2018.  Survey on Cryptanalysis of Code-Based Cryptography: From Theoretical to Physical Attacks. 2018 7th International Conference on Computers Communications and Control (ICCCC). :215-223.
Nowadays public-key cryptography is based on number theory problems, such as computing the discrete logarithm on an elliptic curve or factoring big integers. Even though these problems are considered difficult to solve with the help of a classical computer, they can be solved in polynomial time on a quantum computer. This is why the research community has proposed alternative, quantum-resistant solutions. The process of finding adequate post-quantum cryptographic schemes has moved to the next level, right after NIST's announcement of post-quantum standardization. One of the oldest quantum-resistant propositions goes back to McEliece in 1978, who proposed a public-key cryptosystem based on coding theory. It benefits from really efficient algorithms as well as a strong mathematical background. Nonetheless, its security has been challenged many times and several variants have been cryptanalyzed. However, some versions remain unbroken. In this paper, we propose to give some background on coding theory in order to present some of the main flaws in the protocols. We analyze the existing side-channel attacks and give some recommendations on how to securely implement the most suitable variants. We also detail some structural attacks and potential drawbacks for new variants.
2018-04-11
Assadi, Sepehr, Khanna, Sanjeev.  2017.  Randomized Composable Coresets for Matching and Vertex Cover. Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures. :3–12.

A common approach for designing scalable algorithms for massive data sets is to distribute the computation across, say k, machines and process the data using limited communication between them. A particularly appealing framework here is the simultaneous communication model whereby each machine constructs a small representative summary of its own data and one obtains an approximate/exact solution from the union of the representative summaries. If the representative summaries needed for a problem are small, then this results in a communication-efficient and round-optimal (requiring essentially no interaction between the machines) protocol. Some well-known examples of techniques for creating summaries include sampling, linear sketching, and composable coresets. These techniques have been successfully used to design communication efficient solutions for many fundamental graph problems. However, two prominent problems are notably absent from the list of successes, namely, the maximum matching problem and the minimum vertex cover problem. Indeed, it was shown recently that for both these problems, even achieving a modest approximation factor of polylog(n) requires using representative summaries of size Ω̃(n^2), i.e. essentially no better summary exists than each machine simply sending its entire input graph. The main insight of our work is that the intractability of matching and vertex cover in the simultaneous communication model is inherently connected to an adversarial partitioning of the underlying graph across machines. We show that when the underlying graph is randomly partitioned across machines, both these problems admit randomized composable coresets of size Õ(n) that yield an Õ(1)-approximate solution. (Here and throughout the paper, we use Õ(·) notation to suppress polylog(n) factors, where n is the number of vertices in the graph.) In other words, a small subgraph of the input graph at each machine can be identified as its representative summary and the final answer then is obtained by simply running any maximum matching or minimum vertex cover algorithm on these combined subgraphs. This results in an Õ(1)-approximation simultaneous protocol for these problems with Õ(nk) total communication when the input is randomly partitioned across k machines. We also prove our results are optimal in a very strong sense: we not only rule out existence of smaller randomized composable coresets for these problems but in fact show that our Õ(nk) bound for total communication is optimal for any simultaneous communication protocol (i.e. not only for randomized coresets) for these two problems. Finally, by a standard application of composable coresets, our results also imply MapReduce algorithms with the same approximation guarantee in one or two rounds of communication, improving the previous best known round complexity for these problems.
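
As a hedged toy sketch of the simultaneous-communication workflow the abstract describes (random edge partition, one small summary per machine, final answer computed on the union of the summaries); the greedy maximal matching used as the local summary is a simplification, not the paper's randomized composable coreset:

```python
import random
from collections import defaultdict

def greedy_matching(edges):
    """Greedy maximal matching: a classic 2-approximation of maximum matching."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

def simultaneous_matching(edges, k=4, seed=0):
    """Randomly partition edges across k machines; each machine sends a small
    summary (here: a greedy matching of its part); solve on the union."""
    rng = random.Random(seed)
    parts = defaultdict(list)
    for e in edges:
        parts[rng.randrange(k)].append(e)
    summaries = [e for part in parts.values() for e in greedy_matching(part)]
    return greedy_matching(summaries)

rng = random.Random(1)
edges = [(i, j) for i in range(20) for j in range(i + 1, 20) if rng.random() < 0.2]
print(len(simultaneous_matching(edges)), "edges in the matching computed from the summaries")
```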

Li, Jason, O'Donnell, Ryan.  2017.  Bounding Laconic Proof Systems by Solving CSPs in Parallel. Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures. :95–100.

We show that the basic semidefinite programming relaxation value of any constraint satisfaction problem can be computed in NC; that is, in parallel polylogarithmic time and polynomial work. As a complexity-theoretic consequence we get that MIP1[k,c,s] ⊆ PSPACE provided s/c ≤ (0.62 - o(1)) k/2^k, resolving a question of Austrin, Håstad, and Pass. Here MIP1[k,c,s] is the class of languages decidable with completeness c and soundness s by an interactive proof system with k provers, each constrained to communicate just 1 bit.

Hoang, Thang, Ozkaptan, Ceyhun D., Yavuz, Attila A., Guajardo, Jorge, Nguyen, Tam.  2017.  S3ORAM: A Computation-Efficient and Constant Client Bandwidth Blowup ORAM with Shamir Secret Sharing. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :491–505.

Oblivious Random Access Machine (ORAM) enables a client to access her data without leaking her access patterns. Existing client-efficient ORAMs either achieve O(log N) client-server communication blowup without heavy computation, or O(1) blowup but with expensive homomorphic encryptions. It has been shown that O(log N) bandwidth blowup might not be practical for certain applications, while schemes with O(1) communication blowup incur even more delay due to costly homomorphic operations. In this paper, we propose a new distributed ORAM scheme referred to as Shamir Secret Sharing ORAM (S3ORAM), which achieves O(1) client-server bandwidth blowup and O(1) blocks of client storage without relying on costly partial homomorphic encryptions. S3ORAM harnesses Shamir Secret Sharing, tree-based ORAM structure and a secure multi-party multiplication protocol to eliminate costly homomorphic operations and, therefore, achieves O(1) client-server bandwidth blowup with a high computational efficiency. We conducted comprehensive experiments to assess the performance of S3ORAM and its counterparts on actual cloud environments, and showed that S3ORAM achieves three orders of magnitude lower end-to-end delay compared to alternatives with O(1) client communication blowup (Onion-ORAM), while it is one order of magnitude faster than Path-ORAM for a network with a moderate bandwidth quality. We have released the implementation of S3ORAM for further improvement and adaptation.

Harkanson, R., Kim, Y..  2017.  Applications of Elliptic Curve Cryptography: A Light Introduction to Elliptic Curves and a Survey of Their Applications. Proceedings of the 12th Annual Conference on Cyber and Information Security Research. :6:1–6:7.

Elliptic curve cryptography (ECC) is a relatively newer form of public key cryptography that provides more security per bit than other forms of cryptography still being used today. We explore the mathematical structure and operations of elliptic curves and how those properties make curves suitable tools for cryptography. A brief historical context is given followed by the safety of usage in production, as not all curves are free from vulnerabilities. Next, we compare ECC with other popular forms of cryptography for both key exchange and digital signatures, in terms of security and speed. Traditional applications of ECC, both theoretical and in-practice, are presented, including key exchange for web browser usage and DNSSEC. We examine multiple uses of ECC in a mobile context, including cellular phones and the Internet of Things. Modern applications of curves are explored, such as iris recognition, RFID, smart grid, as well as an application for E-health. Finally, we discuss how ECC stacks up in a post-quantum cryptography world.
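
As a hedged toy sketch of the basic curve arithmetic the survey builds on (point addition, doubling, double-and-add scalar multiplication, and an ECDH-style key agreement); the tiny curve y^2 = x^3 + 2x + 3 over F_97 and the secret scalars are illustrative assumptions, far too small for real security:

```python
# Toy elliptic-curve Diffie-Hellman over y^2 = x^3 + 2x + 3 (mod 97).
# Real deployments use standardized curves (e.g. P-256, Curve25519) via vetted libraries.
P, A = 97, 2
G = (3, 6)  # a point on the curve: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)

def inv(x):
    return pow(x, P - 2, P)  # modular inverse via Fermat's little theorem

def add(p1, p2):
    """Affine point addition; None represents the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        m = (3 * x1 * x1 + A) * inv(2 * y1) % P   # tangent slope (doubling)
    else:
        m = (y2 - y1) * inv(x2 - x1) % P          # chord slope
    x3 = (m * m - x1 - x2) % P
    return (x3, (m * (x1 - x3) - y1) % P)

def mul(k, pt):
    """Double-and-add scalar multiplication."""
    result = None
    while k:
        if k & 1:
            result = add(result, pt)
        pt = add(pt, pt)
        k >>= 1
    return result

a_priv, b_priv = 13, 29                           # illustrative secret scalars
a_pub, b_pub = mul(a_priv, G), mul(b_priv, G)
print(mul(a_priv, b_pub) == mul(b_priv, a_pub))   # True: both sides derive the same secret
```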

Alderman, James, Crampton, Jason, Farley, Naomi.  2017.  A Framework for the Cryptographic Enforcement of Information Flow Policies. Proceedings of the 22Nd ACM on Symposium on Access Control Models and Technologies. :143–154.

It is increasingly common to outsource data storage to untrusted, third party (e.g. cloud) servers. However, in such settings, low-level online reference monitors may not be appropriate for enforcing read access, and thus cryptographic enforcement schemes (CESs) may be required. Much of the research on cryptographic access control has focused on the use of specific primitives and, primarily, on how to generate appropriate keys and fails to model the access control system as a whole. Recent work in the context of role-based access control has shown a gap between theoretical policy specification and computationally secure implementations of access control policies, potentially leading to insecure implementations. Without a formal model, it is hard to (i) reason about the correctness and security of a CES, and (ii) show that the security properties of a particular cryptographic primitive are sufficient to guarantee security of the CES as a whole. In this paper, we provide a rigorous definitional framework for a CES that enforces read-only information flow policies (which encompass many practical forms of access control, including role-based policies). This framework (i) provides a tool by which instantiations of CESs can be proven correct and secure, (ii) is independent of any particular cryptographic primitives used to instantiate a CES, and (iii) helps to identify the limitations of current primitives (e.g. key assignment schemes) as components of a CES.
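
The abstract cites key assignment schemes as typical building blocks of a CES. As a hedged illustration only (a textbook iterated-hash derivation for a totally ordered set of labels, not the paper's framework), a minimal sketch in which the key for a lower label is derived one-way from the key above it, so read access flows downward only:

```python
import hashlib

def derive_key(parent_key: bytes, label: str) -> bytes:
    """Derive a child label's key from its parent's key; the hash is one-way,
    so holders of a lower key cannot recover keys above it."""
    return hashlib.sha256(parent_key + label.encode()).digest()

# Illustrative total order Top > Secret > Public: a higher key derives all keys below it.
k_top = b'\x00' * 32                      # illustrative root key
k_secret = derive_key(k_top, 'Secret')
k_public = derive_key(k_secret, 'Public')
print(k_public.hex()[:16])
```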

Zhang, Hao, Zhang, Tao, Chen, Huajin.  2017.  Variance Analysis of Pixel-Value Differencing Steganography. Proceedings of the 2017 International Conference on Cryptography, Security and Privacy. :28–32.

As adaptive steganography selects edge and texture areas for loading, theoretical analysis is limited by modeling difficulty. This paper introduces a novel method to study the pixel-value differencing (PVD) embedding scheme. First, the difference histogram values of the cover image are used as parameters, and a variance formula for the PVD stego noise is obtained. The accuracy of this formula has been verified through analysis with standard pictures. Second, the stego noise is divided into six kinds of pixel regions, and the regional noise variances are utilized to compare the security between PVD and least significant bit matching (LSBM) steganography. A mathematical conclusion is presented that, with an embedding capacity of less than 2.75 bits per pixel, PVD is never safer than LSBM under the same embedding rate, regardless of region selection. Finally, 10000 image samples are used to observe the validity of the mathematical conclusion. For most images and regions, the data are shown to be consistent with the prior judgment. Meanwhile, the exceptional cases are analyzed carefully and are found to be caused by the randomness of pixel selection and the abandoned blocks in the PVD scheme. In summary, the unity of theory and practice indicates the effectiveness of our new method.
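
As a hedged toy sketch of the PVD embedding step whose stego noise the paper analyzes (in the spirit of Wu-Tsai, with a simplified range table and no pixel-boundary or overflow handling):

```python
# Simplified pixel-value differencing (PVD) embedding for one pixel pair.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def locate(d):
    """Return (lower bound, capacity in bits) of the range containing |d|."""
    for lo, hi in RANGES:
        if lo <= abs(d) <= hi:
            return lo, (hi - lo + 1).bit_length() - 1
    raise ValueError(d)

def embed(p1, p2, bits):
    d = p2 - p1
    lo, t = locate(d)
    b = int(bits[:t], 2)                      # take t message bits
    d_new = lo + b if d >= 0 else -(lo + b)   # new difference encodes the bits
    m = d_new - d
    return p1 - m // 2, p2 + (m - m // 2), bits[t:]

def extract(p1, p2):
    d = p2 - p1
    lo, t = locate(d)
    return format(abs(d) - lo, f'0{t}b')

q1, q2, rest = embed(100, 120, '10110')
print((q1, q2), extract(q1, q2))  # recovers '1011': 4 bits fit in the range [16, 31]
```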

Picek, Stjepan, Mariot, Luca, Yang, Bohan, Jakobovic, Domagoj, Mentens, Nele.  2017.  Design of S-Boxes Defined with Cellular Automata Rules. Proceedings of the Computing Frontiers Conference. :409–414.

The aim of this paper is to find cellular automata (CA) rules that are used to describe S-boxes with good cryptographic properties and low implementation cost. Up to now, CA rules have been used in several ciphers to define an S-box, but in all those ciphers, the same CA rule is used. This CA rule is best known as the one defining the Keccak χ transformation. Since there exists no straightforward method for constructing CA rules that define S-boxes with good cryptographic/implementation properties, we use a special kind of heuristics for that – Genetic Programming (GP). Although it is not possible to theoretically prove the efficiency of such a method, our experimental results show that GP is able to find a large number of CA rules that define good S-boxes in a relatively easy way. We focus on the 4 x 4 and 5 x 5 sizes and we implement the S-boxes in hardware to examine implementation properties like latency, area, and power. Particularly interesting is the internal encoding of the solutions in the considered heuristics using combinatorial circuits; this makes it easy to approximate S-box implementation properties like latency and area a priori.
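
The CA rule the abstract refers to, the one defining the Keccak χ transformation, maps each cell to a_i ⊕ (¬a_{i+1} ∧ a_{i+2}) along a cyclic row. As a hedged illustration (not the paper's GP-evolved rules), a minimal sketch that tabulates the 5-bit S-box this rule induces:

```python
def chi(row_bits):
    """Keccak chi as a cellular-automaton-style rule on a cyclic row of bits:
    each cell becomes a_i XOR ((NOT a_{i+1}) AND a_{i+2})."""
    n = len(row_bits)
    return [row_bits[i] ^ ((1 - row_bits[(i + 1) % n]) & row_bits[(i + 2) % n])
            for i in range(n)]

def sbox_from_chi(width=5):
    """Tabulate the width-bit S-box obtained by applying chi to every input."""
    table = []
    for x in range(1 << width):
        bits = [(x >> i) & 1 for i in range(width)]
        out = chi(bits)
        table.append(sum(b << i for i, b in enumerate(out)))
    return table

print(sbox_from_chi(5))  # the 32-entry S-box induced by the chi rule
```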

Goldwasser, Shafi, Park, Sunoo.  2017.  Public Accountability vs. Secret Laws: Can They Coexist? A Cryptographic Proposal. Proceedings of the 2017 Workshop on Privacy in the Electronic Society. :99–110.

"Our Laws are not generally known; they are kept secret by the small group of nobles who rule us. We are convinced that these ancient laws are scrupulously administered; nevertheless it is an extremely painful thing to be ruled by laws that one does not know."–Franz Kafka, Parables and Paradoxes. Post 9/11, journalists, scholars and activists have pointed out that it secret laws - a body of law whose details and sometime mere existence is classified as top secret - were on the rise in all three branches of the US government due to growing national security concerns. Amid heated current debates on governmental wishes for exceptional access to encrypted digital data, one of the key issues is: which mechanisms can be put in place to ensure that government agencies follow agreed-upon rules in a manner which does not compromise national security objectives? This promises to be especially challenging when the rules, according to which access to encrypted data is granted, may themselves be secret. In this work we show how the use of cryptographic protocols, and in particular, the idea of zero knowledge proofs can ensure accountability and transperancy of the government in this extraordinary, seemingly deadlocked, setting. We propose an efficient record-keeping infrastructure with versatile publicly verifiable audits that preserve (information-theoretic) privacy of record contents as well as of the rules by which the records are attested to abide. Our protocol is based on existing blockchain and cryptographic tools including commitments and zero-knowledge SNARKs, and satisfies the properties of indelibility (i.e., no back-dating), perfect data privacy, public auditability of secret data with secret laws, accountable deletion, and succinctness. We also propose a variant scheme where entities can be required to pay fees based on record contents (e.g., for violating regulations) while still preserving privacy. Our scheme can be directly instantiated on the Ethereum blockchain (and a simplified version with weaker guarantees can be instantiated with Bitcoin).

2018-01-10
Robyns, Pieter, Quax, Peter, Lamotte, Wim.  2017.  PHY-layer Security is No Alternative to Cryptography. Proceedings of the 10th ACM Conference on Security and Privacy in Wireless and Mobile Networks. :160–162.

In recent works, numerous physical-layer security systems have been proposed as alternatives to classic cryptography. Such systems aim to use the intrinsic properties of radio signals and the wireless medium to provide confidentiality and authentication to wireless devices. However, fundamental vulnerabilities are often discovered in these systems shortly after their inception. We therefore challenge the assumptions made by existing physical-layer security systems, and postulate that weaker assumptions are needed in order to adapt for practical scenarios. We also argue that if no computational advantage over an adversary can be ensured, secure communication cannot be realistically achieved.

Aman, Muhammad Naveed, Chua, Kee Chaing, Sikdar, Biplab.  2017.  Secure Data Provenance for the Internet of Things. Proceedings of the 3rd ACM International Workshop on IoT Privacy, Trust, and Security. :11–14.

The vision of smart environments, systems, and services is driven by the development of the Internet of Things (IoT). IoT devices produce large amounts of data and this data is used to make critical decisions in many systems. The data produced by these devices has to satisfy various security related requirements in order to be useful in practical scenarios. One of these requirements is data provenance which allows a user to trust the data regarding its origin and location. The low cost of many IoT devices and the fact that they may be deployed in unprotected spaces requires security protocols to be efficient and secure against physical attacks. This paper proposes a light-weight protocol for data provenance in the IoT. The proposed protocol uses physical unclonable functions (PUFs) to provide physical security and uniquely identify an IoT device. Moreover, wireless channel characteristics are used to uniquely identify a wireless link between an IoT device and a server/user. A brief security and performance analysis is presented to give a preliminary validation of the protocol.

2017-07-24
Chen, Jing, McCauley, Samuel, Singh, Shikha.  2016.  Rational Proofs with Multiple Provers. Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science. :237–248.

Interactive proofs model a world where a verifier delegates computation to an untrustworthy prover, verifying the prover's claims before accepting them. These proofs have applications to delegation of computation, probabilistically checkable proofs, crowdsourcing, and more. In some of these applications, the verifier may pay the prover based on the quality of his work. Rational proofs, introduced by Azar and Micali (2012), are an interactive proof model in which the prover is rational rather than untrustworthy: he may lie, but only to increase his payment. This allows the verifier to leverage the greed of the prover to obtain better protocols: while rational proofs are no more powerful than interactive proofs, the protocols are simpler and more efficient. Azar and Micali posed as an open problem whether multiple provers are more powerful than one for rational proofs. We provide a model that extends rational proofs to allow multiple provers. In this model, a verifier can cross-check the answers received by asking several provers. The verifier can pay the provers according to the quality of their work, incentivizing them to provide correct information. We analyze rational proofs with multiple provers from a complexity-theoretic point of view. We fully characterize this model by giving tight upper and lower bounds on its power. On the way, we resolve Azar and Micali's open problem in the affirmative, showing that multiple rational provers are strictly more powerful than one (under standard complexity-theoretic assumptions). We further show that the full power of rational proofs with multiple provers can be achieved using only two provers and five rounds of interaction. Finally, we consider more demanding models where the verifier wants the provers' payment to decrease significantly when they are lying, and fully characterize the power of the model when the payment gap must be noticeable (i.e., at least 1/p where p is a polynomial).