Biblio

Found 15086 results

Filters: Keyword is pubcrawl
2017-07-24
Aljamea, Moudhi M., Brankovic, Ljiljana, Gao, Jia, Iliopoulos, Costas S., Samiruzzaman, M..  2016.  Smart Meter Data Analysis. Proceedings of the International Conference on Internet of Things and Cloud Computing. :22:1–22:6.

Providing a global understanding of privacy is crucial, because everything is connected. Nowadays companies are providing their customers with more services that give them more access to their data and daily activity; electricity companies are marketing the new smart meters as a service with great potential to reduce electricity usage by monitoring readings in real time. Although the users might benefit from this extra service, it compromises their privacy, since the provider has constant access to the readings. Since the smart meters provide users with real-time electricity readings, one can identify which devices are consuming energy at a specific moment and how much it will cost. This kind of information can be exploited by numerous types of people. Unauthorized use of this information is an invasion of privacy and may lead to much more severe consequences. This paper proposes an algorithmic approach for the comparison and analysis of smart meter data readings, considering the time and temperature factors at each second, to identify the usage patterns at each house by identifying the appliance activities at each second in time complexity O(log(m)).
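
The abstract does not give the algorithm itself, but the core disaggregation idea (matching step changes in the meter reading against a table of appliance signatures, with a logarithmic-time lookup over m signatures) can be sketched as follows. This is a toy Python illustration, not the paper's algorithm: the appliance names, wattages, and thresholds are hypothetical, and the paper additionally uses temperature data.

    import bisect

    # Hypothetical appliance signature table (watts, name), kept sorted by watts.
    SIGNATURES = sorted([(60, "lamp"), (150, "fridge compressor"),
                         (800, "microwave"), (2000, "kettle")])
    WATTS = [w for w, _ in SIGNATURES]

    def classify_delta(delta_watts, tolerance=25):
        """Map a step change in power to the closest known appliance signature.
        The bisect lookup is O(log m) in the number m of signatures."""
        i = bisect.bisect_left(WATTS, delta_watts)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(WATTS)]
        best = min(candidates, key=lambda j: abs(WATTS[j] - delta_watts))
        return SIGNATURES[best][1] if abs(WATTS[best] - delta_watts) <= tolerance else None

    def appliance_events(readings):
        """Yield (timestamp, event) for each significant step in (time, watts) samples."""
        for (t0, p0), (t1, p1) in zip(readings, readings[1:]):
            delta = p1 - p0
            if abs(delta) > 30:
                name = classify_delta(abs(delta))
                if name:
                    yield t1, ("on: " if delta > 0 else "off: ") + name

    print(list(appliance_events([(0, 100), (1, 100), (2, 900), (3, 905), (4, 105)])))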

Karasevich, Aleksandr M., Tutnov, Igor A., Baryshev, Gennady K..  2016.  The Prospects of Application of Information Technologies and the Principles of Intelligent Automated Systems to Manage the Security Status of Objects of Energy Supply of Smart Cities. Proceedings of the International Conference on Electronic Governance and Open Society: Challenges in Eurasia. :9–14.

The paper focuses on one of the methods of designing a highly automated hardware-software complex aimed at controlling the security status of power grids and units that support both the central heating and power systems of smart cities. We understand this condition as a situation in which any energy consumer of a smart city is provided with the amounts of energy and fuel necessary for their living at any time, including possible periods of technogenic and natural hazards. Two main scientific principles lie at the base of the introduced approach. The first is diversification of the risks to the energy security of smart cities by rationally choosing the ratio of different energy generation sources in the fuel-energy balance of a smart city, including large fuel electric power plants and small autonomous power generators, for example wind energy machinery, sun collectors, heat pipes, etc. The second principle is energy efficiency and energy saving in smart cities. In our case this principle is realized by a high level of automation in monitoring and operating the security status of the energy systems and complexes that provide the consumers of smart cities with heat, hot water and electricity, as well as by preventive alerts of possible emergencies and highly reliable functioning of all energy facilities. We formulate the main principle governing the construction of a smart hardware-software complex used to maintain highly automated control over risks connected with the functioning of both power sources and transmission grids. This principle calls for an open block architecture, including highly autonomous block-modules for primary registration of measuring information, data analysis, and automated operation. The paper also describes the general IT tools used to control the risks of supplying smart cities with energy and shows the structure of a highly automated system designed to select technological and managerial solutions for a smart city's energy supply system.

Liao, Xiaojing, Yuan, Kan, Wang, XiaoFeng, Li, Zhou, Xing, Luyi, Beyah, Raheem.  2016.  Acing the IOC Game: Toward Automatic Discovery and Analysis of Open-Source Cyber Threat Intelligence. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :755–766.

To adapt to the rapidly evolving landscape of cyber threats, security professionals are actively exchanging Indicators of Compromise (IOC) (e.g., malware signatures, botnet IPs) through public sources (e.g. blogs, forums, tweets, etc.). Such information, often presented in articles, posts, white papers etc., can be converted into a machine-readable OpenIOC format for automatic analysis and quick deployment to various security mechanisms like an intrusion detection system. With hundreds of thousands of sources in the wild, the IOC data are produced at a high volume and velocity today, which becomes increasingly hard to manage by humans. Efforts to automatically gather such information from unstructured text, however, are impeded by the limitations of today's Natural Language Processing (NLP) techniques, which cannot meet the high standard (in terms of accuracy and coverage) expected from IOCs that could serve as direct input to a defense system. In this paper, we present iACE, an innovative solution for fully automated IOC extraction. Our approach is based upon the observation that the IOCs in technical articles are often described in a predictable way: being connected to a set of context terms (e.g., "download") through stable grammatical relations. Leveraging this observation, iACE is designed to automatically locate a putative IOC token (e.g., a zip file) and its context (e.g., "malware", "download") within the sentences in a technical article, and further analyze their relations through a novel application of graph mining techniques. Once the grammatical connection between the tokens is found to be in line with the way that the IOC is commonly presented, these tokens are extracted to generate an OpenIOC item that describes not only the indicator (e.g., a malicious zip file) but also its context (e.g., download from an external source). Running on 71,000 articles collected from 45 leading technical blogs, this new approach demonstrates a remarkable performance: it generated 900K OpenIOC items with a precision of 95% and a coverage over 90%, which is well beyond what state-of-the-art NLP techniques and industry IOC tools can achieve, at a speed of thousands of articles per hour. Further, by correlating the IOCs mined from the articles published over a 13-year span, our study sheds new light on the links across hundreds of seemingly unrelated attack instances, particularly their shared infrastructure resources, as well as the impacts of such open-source threat intelligence on security protection and evolution of attack strategies.
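
As a rough, much-simplified illustration of the idea of pairing candidate IOC tokens with nearby context terms (the paper does this with dependency parsing and graph mining rather than the regex-and-window heuristic below), a toy extractor might look like this in Python; all patterns and term lists here are hypothetical:

    import re

    # Toy patterns for a few indicator types; real IOC formats are far richer.
    IOC_PATTERNS = {
        "md5":  re.compile(r"\b[a-fA-F0-9]{32}\b"),
        "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
        "zip":  re.compile(r"\b[\w.-]+\.zip\b"),
    }
    CONTEXT_TERMS = {"download", "downloaded", "dropped", "malware", "c2", "payload"}

    def extract_iocs(sentence, window=6):
        """Return (type, token, context) triples when an IOC-looking token
        co-occurs with a context term within a few words of it."""
        words = re.findall(r"[\w.\-:]+", sentence.lower())
        hits = []
        for kind, pattern in IOC_PATTERNS.items():
            for match in pattern.finditer(sentence):
                token = match.group(0)
                pos = words.index(token.lower()) if token.lower() in words else 0
                nearby = set(words[max(0, pos - window):pos + window])
                context = sorted(nearby & CONTEXT_TERMS)
                if context:
                    hits.append((kind, token, context))
        return hits

    print(extract_iocs("The malware is downloaded as update.zip from 203.0.113.7."))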

Cao, Phuong, Badger, Eric C., Kalbarczyk, Zbigniew T., Iyer, Ravishankar K..  2016.  A Framework for Generation, Replay, and Analysis of Real-world Attack Variants. Proceedings of the Symposium and Bootcamp on the Science of Security. :28–37.

This paper presents a framework for (1) generating variants of known attacks, (2) replaying attack variants in an isolated environment, and (3) validating the detection capabilities of attack detection techniques against the variants. Our framework facilitates reproducible security experiments. We generated 648 variants of three real-world attacks (observed at the National Center for Supercomputing Applications at the University of Illinois). Our experiment showed the value of generating attack variants by quantifying the detection capabilities of three detection methods: a signature-based detection technique, an anomaly-based detection technique, and a probabilistic graphical model-based technique.

Chen, Chen, Suciu, Darius, Sion, Radu.  2016.  POSTER: KXRay: Introspecting the Kernel for Rootkit Timing Footprints. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1781–1783.

Kernel rootkits often hide associated malicious processes by altering the task struct information reported to upper layers and applications such as ps and top. Virtualized settings offer a unique opportunity to mitigate this behavior using dynamic virtual machine introspection (VMI). For known kernels, VMI can be deployed to search for kernel objects and identify them by using unique data structure "signatures". In existing work, VMI-detected data structure signatures are based on values and structural features which must be present (often exactly) in the memory snapshots taken for accurate detection. This introduces a certain brittleness: rootkits can escape detection by simply temporarily "un-tangling" the corresponding structures when not running. Here we introduce a new paradigm that defeats such behavior by training for and observing signatures of timing access patterns to any and all kernel-mapped data regions, including objects that are not directly linked in the "official" list of tasks. The use of timing information in training detection signatures renders the defenses resistant to attacks that try to evade detection by removing their corresponding malicious processes before scans. KXRay successfully detected processes hidden by four traditional rootkits.

Kolesnikov, Vladimir, Krawczyk, Hugo, Lindell, Yehuda, Malozemoff, Alex, Rabin, Tal.  2016.  Attribute-based Key Exchange with General Policies. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1451–1463.

Attribute-based methods provide authorization to parties based on whether their set of attributes (e.g., age, organization, etc.) fulfills a policy. In attribute-based encryption (ABE), authorized parties can decrypt, and in attribute-based credentials (ABCs), authorized parties can authenticate themselves. In this paper, we combine elements of ABE and ABCs together with garbled circuits to construct attribute-based key exchange (ABKE). Our focus is on an interactive solution involving a client that holds a certificate (issued by an authority) vouching for that client's attributes and a server that holds a policy computable on such a set of attributes. The goal is for the server to establish a shared key with the client but only if the client's certified attributes satisfy the policy. Our solution enjoys strong privacy guarantees for both the client and the server, including attribute privacy and unlinkability of client sessions. Our main contribution is a construction of ABKE for arbitrary circuits with high (concrete) efficiency. Specifically, we support general policies expressible as boolean circuits computed on a set of attributes. Even for policies containing hundreds of thousands of gates the performance cost is dominated by two pairing computations per policy input. Put another way, for a similar cost to prior ABE/ABC solutions, which can only support small formulas efficiently, we can support vastly richer policies. We implemented our solution and report on its performance. For policies with 100,000 gates and 200 inputs over a realistic network, the server and client spend 957 ms and 176 ms on computation, respectively. When using offline preprocessing and batch signature verification, this drops to only 243 ms and 97 ms.

Haider, Ihtesham, Höberl, Michael, Rinner, Bernhard.  2016.  Trusted Sensors for Participatory Sensing and IoT Applications Based on Physically Unclonable Functions. Proceedings of the 2Nd ACM International Workshop on IoT Privacy, Trust, and Security. :14–21.

With the emergence of the internet of things (IoT) and participatory sensing (PS) paradigms, the trustworthiness of remotely sensed data has become a vital research question. In this work, we present the design of a trusted sensor, which uses physically unclonable functions (PUFs) as an anchor to ensure integrity, authenticity and non-repudiation guarantees on the sensed data. We propose trusted sensors for mobile devices to address the problem of potential manipulation of mobile sensors' readings by exploiting vulnerabilities of the mobile device OS in participatory sensing for IoT applications. Preliminary results from our implementation of a trusted visual sensor node show that the proposed security solution can be realized without consuming a significant amount of the sensor node's resources.

Li, Jing, Wang, Licheng, Zhang, Zonghua, Niu, Xinxin.  2016.  Novel Constructions of Cramer-Shoup Like Cryptosystems Based on Index Exchangeable Family. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :895–900.

The Cramer-Shoup cryptosystem has attracted much attention from the research community, mainly due to its efficiency in encryption/decryption, as well as the provable reductions of security against adaptively chosen ciphertext attacks in the standard model. At TCC 2005, Vasco et al. proposed a method for building Cramer-Shoup like cryptosystems over non-abelian groups and raised an open problem of finding a secure instantiation. Based on this work, we present another general framework for constructing Cramer-Shoup like cryptosystems. We first propose the concept of index exchangeable family (IEF) and an abstract construction of a Cramer-Shoup like encryption scheme over IEF. Concrete instantiations of IEF are then derived from some reasonable hardness assumptions over abelian groups as well as non-abelian groups. These instantiations ultimately lead to simple yet efficient constructions of Cramer-Shoup like cryptosystems, including new non-abelian analogs that can be potential solutions to Vasco et al.'s open problem. Moreover, we propose a secure outsourcing method for the encryption of the non-abelian analog based on the factorization problem over non-commutative groups. The experiments clearly indicate that the computational cost of our outsourcing scheme can be significantly reduced thanks to load sharing with cloud datacenter servers.

Xu, Peng, Li, Jingnan, Wang, Wei, Jin, Hai.  2016.  Anonymous Identity-Based Broadcast Encryption with Constant Decryption Complexity and Strong Security. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :223–233.

Anonymous Identity-Based Broadcast Encryption (AIBBE) allows a sender to broadcast a ciphertext to multiple receivers while keeping the receivers' anonymity. The existing AIBBE schemes fail to achieve efficient decryption or strong security, such as constant decryption complexity, security under adaptive attack, or security in the standard model. Hence, we propose two new AIBBE schemes to overcome the drawbacks of previous state-of-the-art schemes. The biggest contribution of our work is an AIBBE scheme with constant decryption complexity and provable security under adaptive attack in the standard model. This scheme is the first to obtain advantages in all the above-mentioned aspects, and makes a substantial theoretical contribution due to its strong security. We also propose another AIBBE scheme in the Random Oracle (RO) model, which is of practical interest, as our experiments show.

Dang, Hung, Chong, Yun Long, Brun, Francois, Chang, Ee-Chien.  2016.  Practical and Scalable Sharing of Encrypted Data in Cloud Storage with Key Aggregation. Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security. :69–80.

We study a sensor network setting in which samples are encrypted individually using different keys and maintained on a cloud storage. For large systems, e.g. those that generate several millions of samples per day, fine-grained sharing of encrypted samples is challenging. Existing solutions, such as Attribute-Based Encryption (ABE) and Key Aggregation Cryptosystem (KAC), can be utilized to address the challenge, but only to a certain extent. They are often computationally expensive and thus unlikely to operate at scale. We propose an algorithmic enhancement and two heuristics to improve KAC's key reconstruction cost, while preserving its provable security. The improvement is particularly significant for range and down-sampling queries – accelerating the reconstruction cost from quadratic to linear running time. Experimental study shows that for queries of size 32k samples, the proposed fast reconstruction techniques speed up the original KAC by at least 90 times on range and down-sampling queries, and by eight times on general (arbitrary) queries. It also shows that at the expense of splitting the query into 16 sub-queries and correspondingly issuing that number of different aggregated keys, reconstruction time can be reduced by 19 times. As such, the proposed techniques make KAC more applicable in practical scenarios such as sensor networks or the Internet of Things.

Li, Meng, Shamsi, Kaveh, Meade, Travis, Zhao, Zheng, Yu, Bei, Jin, Yier, Pan, David Z..  2016.  Provably Secure Camouflaging Strategy for IC Protection. Proceedings of the 35th International Conference on Computer-Aided Design. :28:1–28:8.

The advance of reverse engineering techniques has complicated efforts in intellectual property protection. Proactive methods have been developed recently, among which layout-level IC camouflaging is the leading example. However, existing camouflaging methods are rarely supported by provably secure criteria, which further leads to over-estimation of the security level when countering the latest de-camouflaging attacks, e.g., the SAT-based attack. In this paper, a quantitative security criterion is proposed for de-camouflaging complexity measurements and formally analyzed through the demonstration of the equivalence between the existing de-camouflaging strategy and the active learning scheme. Supported by the new security criterion, two novel camouflaging techniques are proposed, the low-overhead camouflaging cell library and the AND-tree structure, to help achieve exponentially increasing security levels at the cost of linearly increasing performance overhead on the circuit under protection. A provably secure camouflaging framework is then developed by combining these two techniques. Experimental results using the security criterion show that the camouflaged circuits with the proposed framework are of high resilience against the SAT-based attack with negligible performance overhead.

Berndt, Sebastian, Liśkiewicz, Maciej.  2016.  Provable Secure Universal Steganography of Optimal Rate: Provably Secure Steganography Does Not Necessarily Imply One-Way Functions. Proceedings of the 4th ACM Workshop on Information Hiding and Multimedia Security. :81–92.

We present the first complexity-theoretic secure steganographic protocol which, for any communication channel, is provably secure, reliable, and has nearly optimal bandwidth. Our system is unconditionally secure, i.e. our proof does not rely on any unproven complexity-theoretic assumption, like e.g. the existence of one-way functions. This disproves the claim that the existence of one-way functions and access to a communication channel oracle are both necessary and sufficient conditions for the existence of secure steganography, in the sense that secure and reliable steganography exists independently of the existence of one-way functions.

Zhang, Zhenfeng, Yang, Kang, Hu, Xuexian, Wang, Yuchen.  2016.  Practical Anonymous Password Authentication and TLS with Anonymous Client Authentication.

Anonymous authentication allows one to authenticate herself without revealing her identity, and becomes an important technique for constructing privacy-preserving Internet connections. Anonymous password authentication is highly desirable as it enables a client to authenticate herself by a human-memorable password while preserving her privacy. In this paper, we introduce a novel approach for designing anonymous password-authenticated key exchange (APAKE) protocols using algebraic message authentication codes (MACs), where an algebraic MAC wrapped by a password is used by a client for anonymous authentication, and a server issues algebraic MACs to clients and acts as the verifier of login protocols. Our APAKE construction is secure provided that the algebraic MAC is strongly existentially unforgeable under random message and chosen verification queries attack (suf-rmva), weak pseudorandom and tag-randomization simulatable, and has simulation-sound extractable non-interactive zero-knowledge proofs (SE-NIZKs). To design practical APAKE protocols, we instantiate an algebraic MAC based on the q-SDH assumption which satisfies all the required properties, and construct credential presentation algorithms for the MAC which have optimal efficiency for a randomize-then-prove paradigm. Based on the algebraic MAC, we instantiate a highly practical APAKE protocol and denote it by APAKE, which is much more efficient than the mechanisms specified by ISO/IEC 20009-4. An efficient revocation mechanism for APAKE is also proposed.

We integrate APAKE into TLS to present an anonymous client authentication mode where clients holding passwords can authenticate themselves to a server anonymously. Our implementation with 128-bit security shows that the average connection time of APAKE-based ciphersuite is 2.8 ms. With APAKE integrated into the OpenSSL library and using an Apache web server on a 2-core desktop computer, we could serve 953 ECDHE-ECDSA-AES128-GCM-SHA256 HTTPS connections per second for a 10 KB payload. Compared to ECDSA-signed elliptic curve Diffie-Hellman ciphersuite with mutual authentication, this means a 0.27 KB increased handshake size and a 13% reduction in throughput.

Golla, Maximilian, Beuscher, Benedict, Dürmuth, Markus.  2016.  On the Security of Cracking-Resistant Password Vaults. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1230–1241.

Password vaults are used to store login credentials, usually encrypted by a master password, relieving the user from memorizing a large number of complex passwords. To manage accounts on multiple devices, vaults are often stored at an online service, which substantially increases the risk of leaking the (encrypted) vault. To protect the master password against guessing attacks, previous work has introduced cracking-resistant password vaults based on Honey Encryption. If decryption is attempted with a wrong master password, they output plausible-looking decoy vaults, thus seemingly disabling offline guessing attacks. In this work, we propose attacks against cracking-resistant password vaults that are able to distinguish between real and decoy vaults with high accuracy and thus circumvent the offered protection. These attacks are based on differences in the generated distribution of passwords, which are measured using Kullback-Leibler divergence. Our attack is able to rank the correct vault into the 1.3% most likely vaults (on median), compared to 37.8% of the best-reported attack in previous work. (Note that smaller ranks are better, and 50% is achievable by random guessing.) We demonstrate that this attack is, to a certain extent, a fundamental problem with all static Natural Language Encoders (NLE), where the distribution of decoy vaults is fixed. We propose the notion of adaptive NLEs and demonstrate that they substantially limit the effectiveness of such attacks. We give one example of an adaptive NLE based on Markov models and show that the attack is only able to rank the decoy vaults with a median rank of 35.1%.
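
The core of the attack is a distinguishing statistic: rank candidate vaults by how far their password distribution diverges from the distribution the decoy generator produces. The sketch below is a deliberately crude Python illustration of that ranking idea using Kullback-Leibler divergence over simple structural features; it is not the paper's attack, and the feature choice, smoothing, and example vaults are all hypothetical.

    import math
    from collections import Counter

    def structure(password):
        """Crude structural feature: length plus character-class pattern."""
        pattern = "".join("d" if c.isdigit() else "a" if c.isalpha() else "s"
                          for c in password)
        return (len(password), pattern)

    def kl_divergence(vault, decoy_vaults, eps=1e-6):
        """KL divergence of the candidate vault's feature distribution from the
        distribution induced by decoy vaults; larger means less decoy-like."""
        p = Counter(structure(pw) for pw in vault)
        q = Counter(structure(pw) for v in decoy_vaults for pw in v)
        p_total, q_total = sum(p.values()), sum(q.values())
        kl = 0.0
        for feat, count in p.items():
            p_f = count / p_total
            q_f = (q.get(feat, 0) + eps) / (q_total + eps * len(p))
            kl += p_f * math.log(p_f / q_f)
        return kl

    # Rank candidate vaults: the real vault is expected to diverge most from decoys.
    decoys = [["password1", "letmein1", "qwerty12"], ["dragon99", "abc12345", "iloveyou"]]
    candidates = {"A": ["Tr0ub4dor&3", "correcthorse", "hunter2!"],
                  "B": ["monkey123", "summer16", "football1"]}
    print(sorted(candidates, key=lambda k: kl_divergence(candidates[k], decoys), reverse=True))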

Fang, Fuyang, Li, Bao, Lu, Xianhui, Liu, Yamin, Jia, Dingding, Xue, Haiyang.  2016.  (Deterministic) Hierarchical Identity-based Encryption from Learning with Rounding over Small Modulus. Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security. :907–912.

In this paper, we propose a hierarchical identity-based encryption (HIBE) scheme in the random oracle (RO) model based on the learning with rounding (LWR) problem over small modulus q. Compared with the previous HIBE schemes based on the learning with errors (LWE) problem, the ciphertext expansion ratio of our scheme can be decreased to 1/2. Then, we utilize the HIBE scheme to construct a deterministic hierarchical identity-based encryption (D-HIBE) scheme based on the LWR problem over small modulus. Finally, with the technique of binary tree encryption (BTE) we can construct HIBE and D-HIBE schemes in the standard model based on the LWR problem over small modulus.
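
For context (standard definitions, not the paper's exact parameterization): where an LWE sample hides the inner product with an explicit error term, an LWR sample replaces that error with deterministic rounding from the modulus q down to a smaller modulus p:

    \[
      \text{LWE:}\quad b \;=\; \langle \mathbf{a}, \mathbf{s}\rangle + e \bmod q,
      \qquad
      \text{LWR:}\quad b \;=\; \Big\lfloor \tfrac{p}{q}\,\big(\langle \mathbf{a}, \mathbf{s}\rangle \bmod q\big) \Big\rceil \bmod p,
    \]
    where $\mathbf{a} \in \mathbb{Z}_q^n$ is public, $\mathbf{s} \in \mathbb{Z}_q^n$ is secret, $e$ is a small error term,
    and $\lfloor \cdot \rceil$ denotes rounding to the nearest integer, with $p < q$.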

Bost, Raphaël.  2016.  Σoφoς: Forward Secure Searchable Encryption. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1143–1154.

Searchable Symmetric Encryption aims at making it possible to search over an encrypted database stored on an untrusted server while preserving the privacy of both the queries and the data, by allowing some small controlled leakage to the server. Recent work shows that dynamic schemes – in which the data is efficiently updatable – leaking some information on updated keywords are subject to devastating adaptive attacks breaking the privacy of the queries. The only way to thwart this attack is to design forward private schemes whose update procedure does not leak whether a newly inserted element matches previous search queries. This work proposes Sophos as a forward private SSE scheme with performance similar to existing less secure schemes, and that is conceptually simpler (and also more efficient) than previous forward private constructions. In particular, it only relies on trapdoor permutations and does not use an ORAM-like construction. We also explain why Sophos is an optimal point of the security/performance tradeoff for SSE. Finally, an implementation and evaluation results demonstrate its practical efficiency.

Roche, Daniel S., Apon, Daniel, Choi, Seung Geol, Yerukhimovich, Arkady.  2016.  POPE: Partial Order Preserving Encoding. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1131–1142.

Recently there has been much interest in performing search queries over encrypted data to enable functionality while protecting sensitive data. One particularly efficient mechanism for executing such queries is order-preserving encryption/encoding (OPE) which results in ciphertexts that preserve the relative order of the underlying plaintexts, thus allowing range and comparison queries to be performed directly on ciphertexts. Recently, Popa et al. (SP 2013) gave the first construction of an ideally-secure OPE scheme and Kerschbaum (CCS 2015) showed how to achieve the even stronger notion of frequency-hiding OPE. However, as Naveed et al. (CCS 2015) have recently demonstrated, these constructions remain vulnerable to several attacks. Additionally, all previous ideal OPE schemes (with or without frequency-hiding) either require a large round complexity of O(log n) rounds for each insertion, or a large persistent client storage of size O(n), where n is the number of items in the database. It is thus desirable to achieve a range query scheme addressing both issues gracefully. In this paper, we propose an alternative approach to range queries over encrypted data that is optimized to support insert-heavy workloads as are common in "big data" applications while still maintaining search functionality and achieving stronger security. Specifically, we propose a new primitive called partial order preserving encoding (POPE) that achieves ideal OPE security with frequency hiding and also leaves a sizable fraction of the data pairwise incomparable. The scheme requires only O(1) persistent and O(n^ε) non-persistent client storage, for 0 < ε < 1, while supporting efficient insertion and search queries. This improved security and performance makes our scheme better suited for today's insert-heavy databases.

Melis, Luca, Asghar, Hassan Jameel, De Cristofaro, Emiliano, Kaafar, Mohamed Ali.  2016.  Private Processing of Outsourced Network Functions: Feasibility and Constructions. Proceedings of the 2016 ACM International Workshop on Security in Software Defined Networks & Network Function Virtualization. :39–44.

Aiming to reduce the cost and complexity of maintaining networking infrastructures, organizations are increasingly outsourcing their network functions (e.g., firewalls, traffic shapers and intrusion detection systems) to the cloud, and a number of industrial players have started to offer network function virtualization (NFV)-based solutions. Alas, outsourcing network functions in its current setting implies that sensitive network policies, such as firewall rules, are revealed to the cloud provider. In this paper, we investigate the use of cryptographic primitives for processing outsourced network functions, so that the provider does not learn any sensitive information. More specifically, we present a cryptographic treatment of privacy-preserving outsourcing of network functions, introducing security definitions as well as an abstract model of generic network functions, and then propose a few instantiations using partial homomorphic encryption and public-key encryption with keyword search. We include a proof-of-concept implementation of our constructions and show that network functions can be privately processed by an untrusted cloud provider in a few milliseconds.
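
As a loose illustration of how a partially homomorphic scheme can keep a rule private while the provider still evaluates it (this is not the paper's construction, and the library choice, port numbers, and blinding approach are assumptions), here is a blinded equality check using the python-paillier package (`phe`), assuming it is installed:

    import secrets
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

    # Enterprise gateway: encrypt the sensitive rule (e.g., a blocked port number).
    enc_rule = public_key.encrypt(4444)

    # Cloud provider: for an observed packet field, homomorphically compute
    # r * (rule - observed) without ever learning the rule itself.
    def blinded_difference(enc_rule, observed):
        r = secrets.randbelow(2**32 - 1) + 1   # random non-zero blinding factor
        return (enc_rule - observed) * r

    # Gateway: decrypt; zero means the packet matched the hidden rule.
    for port in (80, 4444):
        result = private_key.decrypt(blinded_difference(enc_rule, port))
        print(port, "match" if result == 0 else "no match")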

Durak, F. Betül, DuBuisson, Thomas M., Cash, David.  2016.  What Else is Revealed by Order-Revealing Encryption? Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1155–1166.

The security of order-revealing encryption (ORE) has been unclear since its invention. Dataset characteristics for which ORE is especially insecure have been identified, such as small message spaces and low-entropy distributions. On the other hand, properties like one-wayness on uniformly-distributed datasets have been proved for ORE constructions. This work shows that more plaintext information can be extracted from ORE ciphertexts than was previously thought. We identify two issues: First, we show that when multiple columns of correlated data are encrypted with ORE, attacks can use the encrypted columns together to reveal more information than prior attacks could extract from the columns individually. Second, we apply known attacks, and develop new attacks, to show that the leakage of concrete ORE schemes on non-uniform data leads to more accurate plaintext recovery than is suggested by the security theorems which only dealt with uniform inputs.

Doerner, Jack, Evans, David, shelat, abhi.  2016.  Secure Stable Matching at Scale. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :1602–1613.

When a group of individuals and organizations wish to compute a stable matching (for example, when medical students are matched to medical residency programs), they often outsource the computation to a trusted arbiter in order to preserve the privacy of participants' preferences. Secure multi-party computation offers the possibility of private matching processes that do not rely on any common trusted third party. However, stable matching algorithms have previously been considered infeasible for execution in a secure multi-party context on non-trivial inputs because they are computationally intensive and involve complex data-dependent memory access patterns. We adapt the classic Gale-Shapley algorithm for use in such a context, and show experimentally that our modifications yield a lower asymptotic complexity and more than an order of magnitude in practical cost improvement over previous techniques. Our main improvements stem from designing new oblivious data structures that exploit the properties of the matching algorithms. We apply a similar strategy to scale the Roth-Peranson instability chaining algorithm, currently in use by the National Resident Matching Program. The resulting protocol is efficient enough to be useful at the scale required for matching medical residents nationwide, taking just over 18 hours to complete an execution simulating the 2016 national resident match with more than 35,000 participants and 30,000 residency slots.
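
For reference, the underlying (non-secure, non-oblivious) Gale-Shapley deferred-acceptance algorithm that the paper adapts can be written as a short Python sketch; the secure version in the paper replaces these data structures with oblivious ones and runs inside a multi-party computation, which this sketch does not attempt:

    def gale_shapley(proposer_prefs, reviewer_prefs):
        """Classic deferred acceptance with one-to-one matching.
        Assumes complete preference lists (name -> list of names, best first)."""
        rank = {r: {p: i for i, p in enumerate(prefs)}
                for r, prefs in reviewer_prefs.items()}
        free = list(proposer_prefs)           # proposers not yet matched
        next_choice = {p: 0 for p in proposer_prefs}
        match = {}                            # reviewer -> proposer
        while free:
            p = free.pop()
            r = proposer_prefs[p][next_choice[p]]
            next_choice[p] += 1
            if r not in match:
                match[r] = p
            elif rank[r][p] < rank[r][match[r]]:
                free.append(match[r])         # current partner becomes free again
                match[r] = p
            else:
                free.append(p)                # rejected; will propose to next choice
        return match

    students = {"s1": ["h1", "h2"], "s2": ["h1", "h2"]}
    hospitals = {"h1": ["s2", "s1"], "h2": ["s1", "s2"]}
    print(gale_shapley(students, hospitals))  # {'h1': 's2', 'h2': 's1'}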

Naghmouchi, M. Yassine, Perrot, Nancy, Kheir, Nizar, Mahjoub, A. Ridha, Wary, Jean-Philippe.  2016.  A New Risk Assessment Framework Using Graph Theory for Complex ICT Systems. Proceedings of the 8th ACM CCS International Workshop on Managing Insider Security Threats. :97–100.

In this paper, we propose a new risk analysis framework that enables the supervision of risks in complex and distributed systems. Our contribution is twofold. First, we provide the Risk Assessment Graphs (RAGs) as a model of risk analysis. This graph-based model is adaptable to system changes over time. We also introduce the potentiality and the accessibility functions which, during each time slot, evaluate respectively the chance of exploiting the RAG's nodes and the connection time between these nodes. In addition, we provide a worst-case risk evaluation approach, based on the assumption that intruders usually aim at maximising their benefits by inflicting the maximum damage to the target system (i.e. choosing the most likely paths in the RAG). We then introduce three security metrics: the propagated risk, the node risk and the global risk. We illustrate the use of our framework through the simple example of an enterprise email service. Our framework achieves both flexibility and generality requirements; it can be used to assess external threats as well as insider ones, and it applies to a wide set of applications.
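
To make the worst-case idea concrete, the sketch below computes, for a toy directed graph, the likelihood of the most likely attack path from an entry point to a target by combining per-node potentiality and per-edge accessibility values. The function names, the multiplicative combination, and the example numbers are hypothetical illustrations, not the metrics defined in the paper:

    # Toy worst-case risk propagation on a directed "risk assessment" graph.
    # potentiality[v] in [0,1]: chance node v can be exploited in a time slot.
    # graph[(u, v)] in [0,1]: accessibility of the edge from u to v.

    def propagated_risk(graph, potentiality, entry_points, target):
        best = {v: 0.0 for v in potentiality}
        for e in entry_points:
            best[e] = potentiality[e]
        # Relax edges |V|-1 times (Bellman-Ford style, maximizing path likelihood).
        for _ in range(len(best) - 1):
            for (u, v), access in graph.items():
                candidate = best[u] * access * potentiality[v]
                if candidate > best[v]:
                    best[v] = candidate
        return best[target]

    graph = {("internet", "mail_server"): 0.9,
             ("mail_server", "workstation"): 0.6,
             ("workstation", "db_server"): 0.4,
             ("internet", "vpn_gateway"): 0.5,
             ("vpn_gateway", "db_server"): 0.7}
    potentiality = {"internet": 1.0, "mail_server": 0.8, "workstation": 0.7,
                    "vpn_gateway": 0.3, "db_server": 0.9}
    print(propagated_risk(graph, potentiality, ["internet"], "db_server"))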

Kumaresan, Ranjit, Vaikuntanathan, Vinod, Vasudevan, Prashant Nalini.  2016.  Improvements to Secure Computation with Penalties. Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security. :406–417.

Motivated by the impossibility of achieving fairness in secure computation [Cleve, STOC 1986], recent works study a model of fairness in which an adversarial party that aborts on receiving output is forced to pay a mutually predefined monetary penalty to every other party that did not receive the output. These works show how to design protocols for secure computation with penalties that tolerate an arbitrary number of corruptions. In this work, we improve the efficiency of protocols for secure computation with penalties in a hybrid model where parties have access to the "claim-or-refund" transaction functionality. Our first improvement is for the ladder protocol of Bentov and Kumaresan (Crypto 2014) where we improve the dependence of the script complexity of the protocol (which corresponds to miner verification load and also space on the blockchain) on the number of parties from quadratic to linear (and in particular, is completely independent of the underlying function). Our second improvement is for the see-saw protocol of Kumaresan et al. (CCS 2015) where we reduce the total number of claim-or-refund transactions and also the script complexity from quadratic to linear in the number of parties. We also present a 'dual-mode' protocol that offers different guarantees depending on the number of corrupt parties: (1) when s

Hibshi, Hanan.  2016.  Systematic Analysis of Qualitative Data in Security. Proceedings of the Symposium and Bootcamp on the Science of Security. :52–52.

This tutorial will introduce participants to Grounded Theory, which is a qualitative framework to discover new theory from an empirical analysis of data. This form of analysis is particularly useful when analyzing text, audio or video artifacts that lack structure, but contain rich descriptions. We will frame Grounded Theory in the context of qualitative methods and case studies, which complement quantitative methods, such as controlled experiments and simulations. We will contrast the approaches developed by Glaser and Strauss, and introduce coding theory - the most prominent qualitative method for performing analysis to discover Grounded Theory. Topics include coding frames, first- and second-cycle coding, and saturation. We will use examples from security interview scripts to teach participants: developing a coding frame, coding a source document to discover relationships in the data, developing heuristics to resolve ambiguities between codes, and performing second-cycle coding to discover relationships within categories. Then, participants will learn how to discover theory from coded data. Participants will further learn about inter-rater reliability statistics, including Cohen's and Fleiss' Kappa, Krippendorf's Alpha, and Vanbelle's Index. Finally, we will review how to present Grounded Theory results in publications, including how to describe the methodology, report observations, and describe threats to validity.
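
Of the inter-rater reliability statistics mentioned, Cohen's kappa is the simplest to compute; a minimal Python sketch (with hypothetical codes and excerpts) is:

    from collections import Counter

    def cohens_kappa(codes_a, codes_b):
        """Cohen's kappa for two raters who each assigned one code per item."""
        assert len(codes_a) == len(codes_b)
        n = len(codes_a)
        p_observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
        freq_a, freq_b = Counter(codes_a), Counter(codes_b)
        p_expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
        return (p_observed - p_expected) / (1 - p_expected)

    # Two analysts coding ten interview excerpts with three hypothetical codes.
    rater1 = ["trust", "risk", "risk", "usability", "trust", "risk",
              "usability", "trust", "risk", "trust"]
    rater2 = ["trust", "risk", "usability", "usability", "trust", "risk",
              "usability", "risk", "risk", "trust"]
    print(round(cohens_kappa(rater1, rater2), 3))  # ~0.697 for this example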

De Santis, Fabrizio, Bauer, Tobias, Sigl, Georg.  2016.  Hiding Higher-Order Univariate Leakages by Shuffling Polynomial Masking Schemes: A More Efficient, Shuffled, and Higher-Order Masked AES S-box. Proceedings of the 2016 ACM Workshop on Theory of Implementation Security. :17–26.

Polynomial masking is a glitch-resistant and higher-order masking scheme based upon Shamir's secret sharing scheme and multi-party computation protocols. Polynomial masking was first introduced at CHES 2011, while a 1st-order implementation of the AES S-box on FPGA was presented at CHES 2013. In this latter work, the authors showed a 2nd-order univariate leakage by side-channel collision analysis on a tuned measurement setup. This negative result motivates the need to evaluate the performance, area-costs, and security margins of combined shuffled and higher-order polynomial masking schemes to counteract trivial univariate leakages. In this work, we provide the following contributions: first, we introduce additional principles for the selection of efficient addition chains, which allow for more compact and faster implementations of cryptographic S-boxes. Our 1st-order AES S-box implementation requires approximately 27% less registers, 20% less clock cycles, and 5% less random bits than the CHES 2013 implementation. Then, we propose a lightweight shuffling countermeasure, which inherently applies to polynomial masking schemes and effectively enhances their univariate security at negligible area expense. Finally, we present the design of a combined, shuffled, and higher-order polynomially masked AES S-box in hardware, while providing ASIC synthesis and side-channel analysis results in the Electro-Magnetic (EM) domain.
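
To give a feel for the sharing substrate (Shamir-style polynomial sharing), here is a minimal Python sketch worked over a small prime field for readability; the scheme in the paper operates on S-box values in GF(2^8), adds the multi-party computation needed to evaluate the S-box on shares, and shuffles the processing order, none of which this sketch attempts:

    import random

    P = 257  # small prime field for illustration; the real scheme uses GF(2^8)

    def share(secret, degree, points):
        """Split `secret` into shares using a random degree-d polynomial."""
        coeffs = [secret] + [random.randrange(P) for _ in range(degree)]
        def poly(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return {x: poly(x) for x in points}

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 to recover the secret (Python 3.8+)."""
        secret = 0
        for xi, yi in shares.items():
            num, den = 1, 1
            for xj in shares:
                if xj != xi:
                    num = (num * (-xj)) % P
                    den = (den * (xi - xj)) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = share(0x42, degree=2, points=[1, 2, 3, 4, 5])
    print(hex(reconstruct({x: shares[x] for x in [1, 3, 5]})))  # 0x42 from any 3 shares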

Smullen, Daniel, Breaux, Travis D..  2016.  Modeling, Analyzing, and Consistency Checking Privacy Requirements Using Eddy. Proceedings of the Symposium and Bootcamp on the Science of Security. :118–120.

Eddy is a privacy requirements specification language that privacy analysts can use to express requirements over data practices; to collect, use, transfer and retain personal and technical information. The language uses a simple SQL-like syntax to express whether an action is permitted or prohibited, and to restrict those statements to particular data subjects and purposes. Eddy also supports the ability to express modifications on data, including perturbation, data append, and redaction. The Eddy specifications are compiled into Description Logic to automatically detect conflicting requirements and to trace data flows within and across specifications. Conflicts are highlighted, showing which rules are in conflict (expressing prohibitions and rights to perform the same action on equivalent interpretations of the same data, data subjects, or purposes), and what definitions caused the rules to conflict. Each specification can describe an organization's data practices, or the data practices of specific components in a software architecture.