Biblio
Information Flow Control (IFC) is a collection of techniques for enforcing a no-write-down, no-read-up style security policy known as noninterference. Traditional methods for both static (e.g. type systems) and dynamic (e.g. runtime monitors) IFC suffer from untenable numbers of false alarms on real-world programs. Secure Multi-Execution (SME) promises to provide secure information flow control without modifying the behaviour of already secure programs, a property commonly referred to as transparency. Implementations of SME exist for the web in the form of the FlowFox browser and as plug-ins to several programming languages. Furthermore, SME can in theory work in a black-box manner, meaning that it can be programming-language agnostic, making it perfect for securing legacy or third-party systems. As such, SME and its variants, like Multiple Facets (MF) and Faceted Secure Multi-Execution (FSME), appear to be a family of panaceas for the security engineer. The question is: given all these advantages, why are these techniques not ubiquitous in practice? The answer lies, partially, in the issue of runtime and memory overhead: SME and its variants are prohibitively expensive to deploy in many non-trivial situations. The natural question is why this is the case. On the surface, the reason is simple. The techniques in the SME family all rely on the idea of multi-execution, running all or parts of a program multiple times to achieve noninterference, which naturally incurs overhead. However, the predominant thinking in the IFC community has been that these overheads can be overcome. In this paper we argue that there are fundamental reasons to expect this not to be the case, and prove two key theorems: (1) all transparent enforcement is polynomial-time equivalent to multi-execution; (2) all black-box enforcement takes time exponential in the number of principals in the security lattice. Our methods also allow us to answer, in the affirmative, an open question about the possibility of secure and transparent enforcement of a security condition known as Termination-Insensitive Noninterference.
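To make the multi-execution idea concrete, here is a minimal Python sketch of two-level SME, assuming a program modelled as a pure function from labelled inputs to labelled outputs; the names (run_sme, the L/H lattice, the default value) are illustrative and not taken from the paper.

```python
# A minimal two-level Secure Multi-Execution (SME) sketch (illustrative).
def run_sme(program, inputs, default=0):
    """Run `program` once per security level; each run only sees inputs
    at or below its level and only produces outputs at its own level."""
    levels = ["L", "H"]                       # low <= high
    visible = {"L": {"L"}, "H": {"L", "H"}}
    outputs = {}
    for lvl in levels:
        # Replace inputs above this run's level with a fixed default,
        # so the low run can never depend on secret (H) data.
        view = {name: (val if lab in visible[lvl] else default)
                for name, (val, lab) in inputs.items()}
        result = program(view)                # dict: name -> (value, label)
        # Keep only the outputs whose label matches this run's level.
        outputs.update({name: val for name, (val, lab) in result.items()
                        if lab == lvl})
    return outputs

# Example: a leaky program that copies a secret to a public output.
def leaky(view):
    return {"public_out": (view["secret"], "L"),
            "private_out": (view["secret"], "H")}

ins = {"secret": (42, "H")}
print(run_sme(leaky, ins))  # public_out sees the default (0), not 42
```

The two runs are exactly the multi-execution overhead the abstract refers to: with k levels in the lattice, a black-box enforcer pays for up to k executions.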
Protocols for securely testing the equality of two encrypted integers are common building blocks in a number of proposals in the literature that aim for privacy preservation. Because equality tests are used repeatedly in many cryptographic protocols, designing efficient equality testing protocols matters for both computation and communication overhead. In this work, we consider a scenario with two parties where party A has two integers encrypted under an additively homomorphic scheme and party B has the decryption key. Party A would like to obtain an encrypted bit that shows whether the integers are equal, but nothing more. We propose three secure equality testing protocols that are more efficient in terms of communication, computation, or both, compared to existing work. To support our claims, we present experimental results showing that our protocols achieve up to 99% computation-wise improvement over state-of-the-art protocols in a fair experimental set-up.
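For intuition, here is a toy sketch of one folklore blinded-difference equality test on Paillier ciphertexts. It is not one of the paper's three protocols, and the parameters are demo-sized and insecure; it only shows how an encrypted equality bit can be produced in the two-party setting the abstract describes.

```python
# Toy Paillier + folklore equality test: A blinds Enc(x - y) by a random
# factor r; B decrypts and returns an encrypted bit for "plaintext == 0".
import math, random

def keygen(p=2011, q=2357):                    # toy primes, NOT secure
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)                       # valid since g = n + 1
    return (n, n * n), (lam, mu, n)

def enc(pk, m):
    n, n2 = pk
    r = random.randrange(1, n)
    return pow(n + 1, m % n, n2) * pow(r, n, n2) % n2

def dec(sk, c):
    lam, mu, n = sk
    return (pow(c, lam, n * n) - 1) // n * mu % n

# --- Party A: holds pk, Enc(x), Enc(y); wants Enc([x == y]) -------------
def blind_difference(pk, cx, cy):
    n, n2 = pk
    diff = cx * pow(cy, n - 1, n2) % n2        # Enc(x - y) via Enc(y)^(-1)
    r = random.randrange(1, n)
    return pow(diff, r, n2)                    # Enc(r * (x - y))

# --- Party B: holds sk; sees only the blinded difference ----------------
def equality_bit(pk, sk, c_blinded):
    return enc(pk, 1 if dec(sk, c_blinded) == 0 else 0)

pk, sk = keygen()
cx, cy = enc(pk, 1234), enc(pk, 1234)
bit = equality_bit(pk, sk, blind_difference(pk, cx, cy))
print(dec(sk, bit))                            # 1: the integers are equal
```

In this toy version B learns r(x - y), which is random-looking (though not perfectly uniform) when x != y; the protocols in the paper are designed to avoid exactly these rough edges while cutting computation and communication.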
Outsourcing data to the cloud has become common IT practice due to its significant benefits. Meanwhile, security and privacy concerns remain critical obstacles hindering further adoption of the cloud. Although data encryption can mitigate the problem, it reduces the functionality of query processing, e.g., disabling SQL queries. Several schemes have been proposed to enable one-dimensional queries on encrypted data, but multi-dimensional range queries have not been well addressed. In this paper, we propose a secure and scalable scheme that can support multi-dimensional range queries over encrypted data. The proposed scheme has three salient features: (1) Privacy: the server cannot learn the contents of queries or data records during query processing. (2) Efficiency: we utilize hierarchical cubes to encode multi-dimensional data records and construct a secure tree index on top of this encoding to achieve sublinear query time. (3) Verifiability: our scheme allows users to verify the correctness and completeness of the query results, guarding against a malicious server. We perform formal security analysis and comprehensive experimental evaluations. The results on real datasets demonstrate that our scheme achieves practical performance while guaranteeing data privacy and result integrity.
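To convey the flavour of hierarchical-cube encoding, the sketch below decomposes a 2D range into a small set of hierarchical cells and matches a point against it by cell-set intersection. All names and the exact decomposition are assumptions for illustration; the paper's scheme additionally encrypts this encoding and builds a secure tree index over it, which the sketch omits.

```python
# Hierarchical-cube flavour of multi-dimensional range matching (2D demo).
def point_cells(pt, depth, size=256):
    """All hierarchical cells (cubes) containing `pt`, one per level."""
    cells = []
    for level in range(depth + 1):
        cell = size >> level                     # cube side at this level
        cells.append((level, tuple(c // cell for c in pt)))
    return set(cells)

def cover(lo, hi, depth, size=256, level=0, origin=(0, 0)):
    """Greedy set of cells covering the axis-aligned range [lo, hi]."""
    cell = size >> level
    box_lo, box_hi = origin, tuple(o + cell - 1 for o in origin)
    if any(box_hi[i] < lo[i] or box_lo[i] > hi[i] for i in range(len(lo))):
        return set()                             # disjoint from the range
    if all(lo[i] <= box_lo[i] and box_hi[i] <= hi[i] for i in range(len(lo))):
        return {(level, tuple(o // cell for o in origin))}  # fully inside
    if level == depth:
        # Leaf-level partial overlap: coarse, may over-cover the boundary.
        return {(level, tuple(o // cell for o in origin))}
    out = set()
    half = cell // 2
    for dx in (0, half):                         # recurse into the 4 quads
        for dy in (0, half):
            out |= cover(lo, hi, depth, size, level + 1,
                         (origin[0] + dx, origin[1] + dy))
    return out

# A point matches the range iff one of its cells appears in the cover;
# over encrypted data this becomes equality tests on (hashed) cell ids.
q = cover((10, 10), (90, 90), depth=4)
print(bool(point_cells((42, 77), depth=4) & q))   # True: inside the range
print(bool(point_cells((200, 5), depth=4) & q))   # False: outside
```

The payoff of such an encoding is that a multi-dimensional range reduces to membership tests over a small set of cell identifiers, which a tree index can answer in sublinear time.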
Secure Two-Party Computation (2PC) has the potential to facilitate a wide range of real-life applications where privacy of the computation and participants is critical. Nevertheless, this potential has not translated into widespread industry acceptance due to performance issues. Over the years, a significant research effort has focused on optimising the performance of 2PC. Computational complexity has been continually improved and, following circuit optimisations and hardware support for cryptographic operations, evaluations of 2PC on a single host now produce efficient results. Unfortunately, when evaluated on remote hosts, the performance remains prohibitive for practical purposes; the bottleneck is believed to be the bandwidth. In this work we explore the networking layer of 2PC implementations and show that the performance bottleneck is inherent in the usage of TCP sockets in implementations of 2PC schemes. Through experimental evaluations, we demonstrate that other transport protocols can significantly improve the performance of 2PC, making it suitable for practical applications.
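As a flavour of the kind of transport-level experiment involved, the sketch below times a chatty write-write-read exchange over loopback TCP with and without TCP_NODELAY (i.e., with Nagle's algorithm on or off). This is an illustration, not the paper's benchmark: the measured gap is platform-dependent and far more pronounced over real network links than over loopback.

```python
# Why chatty protocols such as 2PC can suffer on stock TCP sockets: many
# tiny writes interact badly with Nagle's algorithm and delayed ACKs.
import socket, threading, time

def echo_server(port):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port)); srv.listen(1)
    conn, _ = srv.accept()
    conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    while data := conn.recv(64):
        conn.sendall(data)                   # echo every small message

def ping_pong(port, nodelay, rounds=500):
    threading.Thread(target=echo_server, args=(port,), daemon=True).start()
    time.sleep(0.1)
    c = socket.create_connection(("127.0.0.1", port))
    c.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, int(nodelay))
    t0 = time.perf_counter()
    for _ in range(rounds):
        c.sendall(b"x" * 16)                 # write-write-read pattern:
        c.sendall(b"y" * 16)                 # with Nagle on, the second
        buf = b""                            # small write can stall waiting
        while len(buf) < 32:                 # for the ACK of the first
            buf += c.recv(64)
    return time.perf_counter() - t0

print("default TCP :", ping_pong(5001, nodelay=False))
print("TCP_NODELAY :", ping_pong(5002, nodelay=True))
```

Disabling Nagle is only one knob; the abstract's point is that the bottleneck is inherent in TCP sockets themselves, motivating alternative transport protocols.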
A semi-analytical model for internal optical losses at high power in a 1.5 μm laser diode with strong n-doping in the n-side of the optical confinement layer is developed. The model includes intervalence band absorption by holes supplied both by current flow and by two-photon absorption. The resulting losses are shown to be substantially lower than those in a similar, but weakly doped, structure. Thus, a significant improvement in output power and efficiency through strong n-doping is predicted.
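Schematically, the loss mechanism described in the abstract can be summarised as below. The notation (sigma_p, beta_2, etc.) is generic and assumed for illustration, not necessarily the authors' own, and the paper's semi-analytical model is more detailed.

```latex
% Schematic structure of the loss mechanism (generic, assumed notation).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Internal loss is dominated by intervalence band absorption (IVBA),
proportional to the hole density in the waveguide:
\begin{equation}
  \alpha_{\mathrm{int}} \approx \sigma_{p}\,\bar{p},
\end{equation}
where $\bar{p}$ collects holes supplied by the injection current and by
two-photon absorption (TPA) of the guided mode,
\begin{equation}
  \bar{p} = p_{\mathrm{curr}}(j) + p_{\mathrm{TPA}}(P),
  \qquad
  G_{\mathrm{TPA}} \propto \beta_{2} I^{2},
\end{equation}
with $G_{\mathrm{TPA}}$ the TPA carrier generation rate, $\beta_{2}$ the
TPA coefficient, and $I$ the optical intensity. Per the abstract, strong
$n$-doping suppresses the hole population at a given power, and hence the
loss.
\end{document}
```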
In Wireless Sensor Networks (WSNs), data aggregation has been used to reduce bandwidth and energy costs during data collection. However, while data aggregation brings the benefit of improved bandwidth usage and energy efficiency, it also introduces opportunities for security attacks, thus reducing data delivery reliability; there is a trade-off between bandwidth and energy efficiency on the one hand and data delivery reliability on the other. In this paper, we present a comparative study of the reliability and efficiency characteristics of different data aggregation approaches using both simulation studies and testbed evaluations. We also analyse the factors that contribute to network congestion and affect data delivery reliability. Finally, we investigate an optimal trade-off between the reliability and efficiency properties of the different approaches using an intermediate approach called Multi-Aggregator based Multi-Cast (MAMC) data aggregation. Our evaluation results for MAMC show that it is possible to achieve reliability and efficiency at the same time.
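The bandwidth side of this trade-off can be illustrated with a toy transmission count over a fixed routing tree (illustrative only; the paper's study also covers reliability, congestion, and attacks, which a count like this does not capture).

```python
# Packet transmissions for one collection round over a routing tree,
# with and without in-network aggregation.
TREE = {"sink": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"],
        "a1": [], "a2": [], "b1": []}           # parent -> children

def subtree_size(node):
    return 1 + sum(subtree_size(c) for c in TREE[node])

def tx_no_aggregation(node):
    """Every reading is forwarded hop by hop: per link, one packet
    for each sensor in the child's subtree."""
    return sum(tx_no_aggregation(c) + subtree_size(c) for c in TREE[node])

def tx_aggregated(node):
    """Each node fuses its children's values: one packet per link."""
    return sum(tx_aggregated(c) + 1 for c in TREE[node])

print("transmissions without aggregation:", tx_no_aggregation("sink"))  # 8
print("transmissions with aggregation   :", tx_aggregated("sink"))      # 5
```

The saved transmissions are exactly where the efficiency gain comes from, and the fused packets are exactly where a compromised aggregator can do the most damage.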
While in business and private settings the disruptive impact of advanced information and communication technology (ICT) has already been felt, the legal sector is now starting to face great disruptions due to such ICTs. Bits and pieces of innovation in the legal sector have been emerging for some time, affecting the performance of core functions and the legitimacy of public institutions. In this paper, we present our framework for enabling the smart-government vision, particularly for the case of criminal justice systems, by unifying different isolated ICT-based solutions. Our framework, coined Legal Logistics, supports the well-functioning of a legal system and streamlines innovation within it. The framework targets the exploitation of all relevant data generated by the ICT-based solutions. As will be illustrated for the Dutch criminal justice system, the framework may be used to integrate different ICT-based innovations and to gain insights into the well-functioning of the system. Furthermore, Legal Logistics can be regarded as a roadmap towards a smart and open justice.
Outsourcing huge amounts of local data to remote cloud servers has become a significant trend for industry. Leveraging the considerable cloud storage space, industry can also feed the outsourced data into cloud computing. How to use the data for computation without loss of privacy and confidentiality is a crucial security problem. Searchable encryption has been proposed to protect the confidentiality of outsourced data and the privacy of the corresponding data queries. This technique, however, supports only search functionality and may not be fully applicable to real-world cloud computing scenarios in which secure data search, sharing, and computation are all needed. This work presents a novel encrypted cloud-based data share-and-search system that preserves user privacy and data confidentiality. The new system not only enables users to make conjunctive keyword queries over encrypted data, but also allows encrypted data to be efficiently and repeatedly shared among different users without the need for the "download-decrypt-then-encrypt" mode. Of independent interest, our system provides secure keyword update, so that users can freely and securely update the keyword field of their data. It is worth mentioning that none of the above functionalities incurs any expansion of ciphertext size: the size of a ciphertext remains constant as it is searched, shared, and keyword-updated. The system is proven secure and, meanwhile, our efficiency analysis shows its great potential for use in large-scale databases.
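For contrast, the "download-decrypt-then-encrypt" baseline that the system avoids looks like the following sketch, written with a generic symmetric cipher from the third-party cryptography package; names are illustrative and this is not the paper's construction.

```python
# The naive sharing baseline: download the ciphertext, decrypt it with the
# owner's key, and re-encrypt it under the recipient's key.
from cryptography.fernet import Fernet

def naive_share(cloud_blob, owner_key, recipient_key):
    plaintext = Fernet(owner_key).decrypt(cloud_blob)     # download + decrypt
    return Fernet(recipient_key).encrypt(plaintext)       # re-encrypt

owner, alice = Fernet.generate_key(), Fernet.generate_key()
blob = Fernet(owner).encrypt(b"outsourced record")
shared = naive_share(blob, owner, alice)
print(Fernet(alice).decrypt(shared))                      # b'outsourced record'
```

The cost of this baseline grows with the data and exposes plaintext at the sharing party; the abstract's claim is that sharing can instead happen directly on ciphertexts, with no growth in ciphertext size.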
In this paper, we describe a method to measure the performance of a system that detects various web attacks. The system is capable of identifying and preventing attacks such as SQL injection, cross-site scripting, URL rewriting, and web server 400 error codes. The performance of the system is measured using the system attributes described in this paper, which are also used to determine the efficiency of the system.
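As an illustration of the kind of per-request checks such a detection system might perform, consider the following minimal rule-based sketch. The rules are toy examples, not the project's actual detection logic, and real rule sets are far larger.

```python
# Toy pattern-based checks for the attack classes named above.
import re

RULES = {
    "sql_injection": re.compile(r"('|--|\bUNION\b|\bOR\b\s+1=1)", re.I),
    "xss":           re.compile(r"(<script\b|onerror\s*=|javascript:)", re.I),
    "url_rewriting": re.compile(r"(\.\./|%2e%2e%2f)", re.I),
}

def inspect(request_field):
    """Return the list of rule names the input trips."""
    return [name for name, rx in RULES.items() if rx.search(request_field)]

print(inspect("id=1' OR 1=1 --"))              # ['sql_injection']
print(inspect("q=<script>alert(1)</script>"))  # ['xss']
print(inspect("page=../../etc/passwd"))        # ['url_rewriting']
```

Performance attributes of the kind the paper measures (throughput, detection latency, false-positive rate) would then be gathered by driving such checks with benign and malicious traffic.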
In this work, we seek to optimize the efficiency of secure general-purpose obfuscation schemes. We focus on the problem of optimizing the obfuscation of Boolean formulas and branching programs; this corresponds to optimizing the "core obfuscator" from the work of Garg, Gentry, Halevi, Raykova, Sahai, and Waters (FOCS 2013), and all subsequent works constructing general-purpose obfuscators. This core obfuscator builds upon approximate multilinear maps, where efficiency in proposed instantiations is closely tied to the maximum number of "levels" of multilinearity required. The most efficient previous construction of a core obfuscator, due to Barak, Garg, Kalai, Paneth, and Sahai (Eurocrypt 2014), required the maximum number of levels of multilinearity to be O(ℓs^3.64), where s is the size of the Boolean formula to be obfuscated and ℓ is the number of input bits to the formula. In contrast, our construction only requires the maximum number of levels of multilinearity to be roughly ℓs, or only s when considering a keyed family of formulas, namely a class of functions of the form f_z(x) = φ(z, x), where φ is a formula of size s. This results in significant improvements in both the total size of the obfuscation and the running time of evaluating an obfuscated formula. Our efficiency improvement is obtained by generalizing the class of branching programs that can be directly obfuscated. This generalization allows us to achieve a simple simulation of formulas by branching programs while avoiding the use of Barrington's theorem, on which all previous constructions relied. Furthermore, the ability to directly obfuscate general branching programs (without bootstrapping) allows us to efficiently apply our construction to natural function classes that are not known to have polynomial-size formulas.
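For readers unfamiliar with the objects involved, the following toy sketch evaluates a matrix branching program (MBP), the structure the core obfuscator operates on: a sequence of matrix pairs, one pair per step, with an input bit selecting which matrix to multiply in. The obfuscator additionally randomizes these matrices and encodes them under multilinear maps (one level per step, which is why the number of steps drives cost); this sketch does not attempt any of that.

```python
# Toy matrix branching program evaluation (illustrative only).
import numpy as np

def eval_mbp(steps, inp, x):
    """steps[i] = (M_i0, M_i1); inp[i] says which input bit step i reads.
    Output is 0 if the matrix product is the identity, else 1."""
    acc = np.eye(steps[0][0].shape[0], dtype=int)
    for i, (m0, m1) in enumerate(steps):
        acc = acc @ (m1 if x[inp[i]] else m0)
    return 0 if np.array_equal(acc, np.eye(acc.shape[0], dtype=int)) else 1

# A 2-step MBP over 2x2 permutation matrices computing XOR of two bits:
I = np.eye(2, dtype=int)
S = np.array([[0, 1], [1, 0]])                  # swap matrix
steps = [(I, S), (I, S)]
inp = [0, 1]                                    # step i reads input bit i
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, eval_mbp(steps, inp, x))           # prints XOR of the bits
```

Barrington's theorem simulates any formula with width-5 permutation matrices like these but at polynomial blow-up in length; avoiding it, as the abstract describes, is what shortens the program and hence the number of multilinearity levels.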
In large-scale systems, user authentication usually requires the assistance of a remote central authentication server over the network. The authentication service, however, can be slow or unavailable due to natural disasters or various cyber attacks on communication channels. This has raised serious concerns for systems that need robust authentication in emergency situations. The contribution of this paper is two-fold. For the slow-connection situation, we present a secure generic multi-factor authentication protocol that speeds up the whole authentication process. Compared with another generic protocol in the literature, the new proposal provides the same functionality with significant improvements in computation and communication. The second mechanism, which we name stand-alone authentication, can authenticate users when the connection to the central server is down. We investigate several issues in stand-alone authentication and show how to add it to multi-factor authentication protocols in an efficient and generic way.
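As a flavour of the stand-alone setting, the sketch below verifies two factors (a salted password hash and an RFC 6238-style TOTP code) entirely locally, with no central server. It illustrates only the verification arithmetic under assumed key distribution; it is not the paper's protocol, whose contribution is precisely how to do this securely and generically.

```python
# Local two-factor verification sketch: PBKDF2 password check + TOTP.
import hashlib, hmac, struct, time

def verify_password(password, salt, stored_hash):
    cand = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(cand, stored_hash)

def totp(secret, t=None, step=30, digits=6):
    counter = int((t if t is not None else time.time()) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), "sha1").digest()
    off = mac[-1] & 0x0F                        # RFC 4226 dynamic truncation
    code = struct.unpack(">I", mac[off:off + 4])[0] & 0x7FFFFFFF
    return code % 10 ** digits

def authenticate(password, code, salt, stored_hash, secret):
    return verify_password(password, salt, stored_hash) and code == totp(secret)

salt, secret = b"demo-salt", b"demo-shared-secret"   # assumed pre-provisioned
stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)
print(authenticate("hunter2", totp(secret), salt, stored, secret))  # True
```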