Bibliography
Elliptic curve cryptosystems (ECC) are regarded as the cryptosystems of the next generation because of their smaller key sizes and uncompromised security. ECC is well suited for applications running on resource-constrained devices such as smart cards. At present, no efficient or sub-exponential algorithm is known for breaking ECC theoretically. However, hardware implementations of ECC leak secret-key information through power analysis attacks, particularly the differential power analysis (DPA) attack. These attacks break the system with far less effort than attacks based on algebraic weaknesses of the algorithms. Many countermeasures against power analysis attacks exist, but each of the available solutions compromises either security or performance. In this paper, we present a secure and efficient algorithm for elliptic curve scalar multiplication (ECSM) that randomizes the initial point and delays the point addition operation. The implementation results and performance analysis show that the proposed algorithm is efficient and secure against power analysis attacks.
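As background for this style of countermeasure, the following minimal Python sketch shows double-and-add scalar multiplication on a toy curve with a randomized initial point (BRIP-style blinding). The curve parameters and the correction step are illustrative assumptions; the paper's delayed-addition technique is not reproduced here.

import random

# Toy curve y^2 = x^3 + a*x + b over F_p; the identity is represented as None.
p, a, b = 97, 2, 3
P = (3, 6)  # a point on the curve: 6^2 = 3^3 + 2*3 + 3 (mod 97)

def inv(x):
    return pow(x, p - 2, p)

def neg(Q):
    return None if Q is None else (Q[0], (-Q[1]) % p)

def add(Q, R):
    if Q is None: return R
    if R is None: return Q
    if Q[0] == R[0] and (Q[1] + R[1]) % p == 0:
        return None                                  # Q + (-Q) = identity
    if Q == R:
        lam = (3 * Q[0] * Q[0] + a) * inv(2 * Q[1]) % p
    else:
        lam = (R[1] - Q[1]) * inv(R[0] - Q[0]) % p
    x3 = (lam * lam - Q[0] - R[0]) % p
    return (x3, (lam * (Q[0] - x3) - Q[1]) % p)

def smul(k, Q):                                      # plain double-and-add (leaky)
    acc = None
    for bit in bin(k)[2:]:
        acc = add(acc, acc)
        if bit == '1':
            acc = add(acc, Q)
    return acc

def rip_smul(k, Q, nbits=8):
    # Start the accumulator at a random point R so intermediate values
    # differ on every run, frustrating DPA averaging. Requires k < 2^nbits.
    R = smul(random.randrange(2, p), Q)
    acc = R
    for i in range(nbits - 1, -1, -1):
        acc = add(acc, acc)
        if (k >> i) & 1:
            acc = add(acc, Q)
    # After nbits doublings, acc = k*Q + 2^nbits * R; remove the blinding.
    return add(acc, neg(smul(1 << nbits, R)))

assert rip_smul(23, P) == smul(23, P)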
With the growth of cloud computing, database outsourcing has attracted much interest. Due to the serious privacy threats in cloud computing, databases need to be encrypted before being outsourced to the cloud. Various Top-k query processing algorithms have therefore been studied for encrypted databases, but the existing algorithms are either insecure or inefficient. In this paper, we propose an efficient and secure Top-k query processing algorithm. Our algorithm guarantees the confidentiality of both the data and the user query while hiding data access patterns, and it frees the query issuer from participating in query processing. To achieve a high level of query processing efficiency, we devise new secure protocols based on Yao's garbled circuits and a data packing technique. A performance analysis shows that the proposed algorithm outperforms existing work in terms of query processing cost.
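The data-packing idea can be illustrated independently of the secure protocols: several small values are packed into one plaintext so that a single homomorphic operation acts on all of them at once. The bit width and helper names below are hypothetical.

def pack(values, width=16):
    """Pack small non-negative integers into one big integer, low index first."""
    acc = 0
    for v in reversed(values):
        assert 0 <= v < (1 << width)
        acc = (acc << width) | v
    return acc

def unpack(acc, count, width=16):
    return [(acc >> (i * width)) & ((1 << width) - 1) for i in range(count)]

# Adding two packed integers adds all slots component-wise, as long as no
# slot overflows its width; an additively homomorphic scheme can therefore
# process many values per ciphertext operation.
x, y = [1, 2, 3], [10, 20, 30]
assert unpack(pack(x) + pack(y), 3) == [11, 22, 33]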
A MANET is a collection of self-configuring nodes connected by wireless links. Each node of a mobile ad hoc network acts as a router and discovers a suitable route to forward packets from source to destination. Such networks are applicable where establishing infrastructure is not possible, such as military environments; beyond the military, MANETs are also used in civilian settings such as sports stadiums and meeting rooms. Because every node participates in routing, the routing function is exposed to many security threats. This paper addresses the problem of identifying and isolating the wormhole attack, in which nodes refuse to forward packets in a wireless mobile ad hoc network. The impact of this attack has been shown to be detrimental to network performance, lowering the packet delivery ratio and dramatically increasing the end-to-end delay. The proposed work provides efficient and secure routing in MANETs: by using buffer-length and round-trip time (RTT) calculations, it minimizes routing overhead. The research is based on detecting and preventing wormhole attacks in AODV. The proposed protocol is simulated using NS-2 and its performance is compared with the standard AODV protocol. The statistical analysis shows that the modified AODV protocol detects wormhole attacks efficiently and provides a secure and optimal routing path.
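The abstract does not spell out the detection rule, but an RTT-based check of the kind it mentions can be sketched as follows; the threshold factor and per-neighbor bookkeeping are assumptions for illustration, not the paper's calibrated test.

from statistics import median

def suspicious_links(rtt_samples, factor=2.5):
    """Flag neighbor links whose average RTT is far above the median.

    rtt_samples maps a neighbor id to a list of measured RTTs; a wormhole
    tunnel relays packets over a long out-of-band path, so its end-to-end
    RTT tends to stand out against ordinary one-hop neighbors.
    """
    smoothed = {n: sum(v) / len(v) for n, v in rtt_samples.items()}
    baseline = median(smoothed.values())
    return {n for n, r in smoothed.items() if r > factor * baseline}

# Example: neighbor 'n3' answers via a wormhole tunnel.
print(suspicious_links({'n1': [1.0, 1.2], 'n2': [0.9, 1.1], 'n3': [9.5, 8.7]}))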
This paper introduces the design and implementation of a security scheme for the Internet of Things (IoT) based on ECQV implicit certificates and the Datagram Transport Layer Security (DTLS) protocol. In the proposed scheme, the elliptic-curve-based ECQV implicit certificate plays the key role, allowing mutual authentication and key establishment between two resource-constrained IoT devices. We present how IoT devices obtain ECQV implicit certificates and use them for authenticated key exchange in DTLS. An evaluation of the execution time of the implementation is also conducted to assess the efficiency of the solution.
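For reference, the standard ECQV reconstruction (as standardized in SEC 4) can be summarized as follows; the notation is ours, and this is the generic scheme rather than the paper's specific message flow. With CA key pair $(d_{CA}, Q_{CA} = d_{CA}G)$, the requester sends $R_U = k_U G$; the CA picks a random $k$, forms the public-key reconstruction value $P_U = R_U + kG$, embeds it in the certificate $\mathrm{Cert}_U$, and returns $\mathrm{Cert}_U$ together with $r = e\,k + d_{CA} \bmod n$, where $e = H(\mathrm{Cert}_U)$. The requester derives

$$d_U = e\,k_U + r \bmod n, \qquad Q_U = e\,P_U + Q_{CA},$$

and anyone holding $\mathrm{Cert}_U$ and $Q_{CA}$ can recompute $Q_U$. Correctness follows since $d_U G = e(R_U + kG) + d_{CA}G = e\,P_U + Q_{CA} = Q_U$, which is why the certificate implicitly authenticates the key.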
Non-malleability is an important and intensively studied security notion for many cryptographic primitives. In the context of public key encryption, this notion means that it is infeasible for an adversary to transform an encryption of some message m into one of a related message m' under the given public key. Although it provides a strong security property for many applications, it does not suffice for some scenarios, such as systems where users can issue keys on the fly. In such settings, the adversary may have the power to transform both the given public key and the ciphertext. To withstand such attacks, Fischlin introduced a stronger notion, known as complete non-malleability, which requires that the non-malleability property be preserved even against adversaries attempting to produce a ciphertext of some related message under a transformed public key. To date, many schemes satisfying this stronger notion have been proposed, but they are either inefficient or proved secure only in the random oracle model. In this work, we put forward a new encryption scheme in the common reference string model. Based on the standard DBDH assumption, the proposed scheme is proved completely non-malleable against adaptive chosen-ciphertext attacks in the standard model. In our scheme, well-formed public keys and ciphertexts can be publicly recognized without drawing support from unwieldy techniques such as non-interactive zero-knowledge proofs or one-time signatures, thus achieving better performance.
The emergence and wide availability of remote storage service providers has prompted work in the security community that allows clients to verify the integrity and availability of the data they have outsourced to a not fully trusted remote storage server, at a relatively low cost. Most recent solutions to this problem allow clients to read and update (i.e., insert, modify, or delete) stored data blocks while trying to lower the overhead associated with verifying the integrity of the stored data. In this work, we develop a novel scheme whose performance compares favorably with existing solutions. Our solution additionally enjoys a number of new features, such as natural support for operations on ranges of blocks, revision control, and multi-user access to shared content. The performance guarantees we achieve stem from a novel data structure called a balanced update tree and from removing the need for interaction during update operations, beyond communicating the updates themselves.
A Mobile Ad hoc NETwork (MANET) is a collection of mobile nodes that communicate with each other over a wireless medium without any fixed infrastructure. In a MANET any node can join and leave the network at any time, which makes MANETs vulnerable to malicious attackers. Hence, it is necessary to develop an efficient intrusion-detection system to safeguard a MANET from attacks. In this paper, an Enhanced Adaptive Acknowledgement scheme with the Digital Signature Algorithm (EAACK-DSA) is proposed, which can detect and isolate malicious nodes. The scheme is based on acknowledgement packets, and hence all acknowledgement packets are digitally signed before transmission. The proposed algorithm can be integrated with any source routing protocol, and EAACK-DSA achieves better malicious-behavior detection than conventional approaches.
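Signing an acknowledgement packet with DSA can be sketched as follows, using the third-party `cryptography` package; the packet layout is a hypothetical placeholder, and the paper's scheme additionally governs when acknowledgements are required and how misbehavior is reported.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.exceptions import InvalidSignature

private_key = dsa.generate_private_key(key_size=2048)
public_key = private_key.public_key()

ack = b"ACK|src=n1|dst=n7|seq=42"           # hypothetical ACK packet layout

signature = private_key.sign(ack, hashes.SHA256())  # sign before transmission

try:                                         # the receiving node verifies the ACK
    public_key.verify(signature, ack, hashes.SHA256())
    print("ACK authentic")
except InvalidSignature:
    print("forged or altered ACK; mark sender as suspicious")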
Malicious JavaScript is a common springboard for attackers to launch several types of network attacks, such as drive-by-download and malicious PDF delivery attacks. To evade signature-based detection, malicious JavaScript is often packed ("obfuscated") with diverse algorithms, so the presence of obfuscation is a good indicator of potential maliciousness. In this investigation, we propose a lightweight approach for quickly filtering obfuscated JavaScript, using a novel method that tokenizes JavaScript text at the letter level and applies information-theoretic measures; it builds on previous work on detecting obfuscated malicious code as well as on pattern analysis of natural languages. The new approach is markedly more time-efficient than existing systems, since it processes far fewer objects, while still reaching acceptable detection accuracy.
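One information-theoretic measure of the kind the abstract refers to is character-level Shannon entropy: packed or encoded code tends to have a flatter letter distribution than hand-written JavaScript. The sketch below is ours; the threshold is an illustrative assumption, not the paper's trained cutoff.

import math
from collections import Counter

def char_entropy(text):
    """Shannon entropy (bits per character) of the letter-level distribution."""
    counts = Counter(text)
    total = len(text)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_obfuscated(script, threshold=5.0):
    return char_entropy(script) > threshold    # assumed cutoff, for illustration

plain = "function add(a, b) { return a + b; }"
packed = "eval(unescape('%66%75%6e%63%74%69%6f%6e%20%61%64%64'))"
print(char_entropy(plain), char_entropy(packed))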
Software-defined networking (SDN) is an emerging technology for controlling flows through networks. Used in the context of industrial control systems, one objective is to design configurations with built-in protection against hardware failures, in the sense that the configuration has "baked-in" back-up routes. The aim is to leave the configuration static as long as possible, minimizing the need for the controller to push in new routing and filtering rules. We have designed and implemented a tool that determines the complete connectivity map from an analysis of all switch configurations in the network. We can use this tool to explore the impact of a link failure, in particular to determine whether the failure induces loss of the ability to deliver a flow even after the built-in back-up routes are used. A measure of the original configuration's resilience to link failure is the mean number of link failures required to induce the first such loss of service. The computational cost of each link failure and subsequent analysis is large, so there is much to be gained by reducing the overall cost of obtaining a statistically valid estimate of resiliency. This paper shows that when analysis of a network state can identify all as-yet-unfailed links, any one of whose failure would induce loss of a flow, we can use the technique of importance sampling to estimate the mean number of links that must fail before some flow is lost, and we analyze the potential for reducing the variance of the sample statistic. We provide both theoretical and empirical evidence of significant variance reduction.
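The estimator can be sketched as follows: each trial samples the next link to fail from a biased distribution that favors the currently identified critical links (those whose failure would lose a flow) and reweights the outcome by the likelihood ratio, keeping the estimate unbiased. The toy topology, the criticality oracle, and the boost factor below are all illustrative assumptions.

import random

def failures_until_loss(links, critical, boost=4.0):
    """One importance-sampling trial.

    Returns (n, w): n is the number of failures until the first loss of
    service under the biased sampling, and w is the likelihood ratio that
    makes n*w an unbiased estimate of the uniform-failure-order quantity.
    """
    alive, w, n = set(links), 1.0, 0
    while True:
        n += 1
        crit = critical(alive)
        order = sorted(alive)
        probs = [boost if l in crit else 1.0 for l in order]
        total = sum(probs)
        i = random.choices(range(len(order)), weights=probs)[0]
        w *= (1.0 / len(order)) / (probs[i] / total)   # uniform / biased
        link = order[i]
        alive.remove(link)
        if link in crit:            # failing a critical link loses a flow
            return n, w

# Toy network: a flow survives until both of its redundant links a1, a2 fail.
LINKS = ['a1', 'a2'] + ['b%d' % i for i in range(20)]
def critical(alive):
    redundant = {'a1', 'a2'} & alive
    return redundant if len(redundant) == 1 else set()

trials = [failures_until_loss(LINKS, critical) for _ in range(20000)]
print(sum(n * w for n, w in trials) / len(trials))   # estimated mean failures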
Outsourcing huge amounts of local data to remote cloud servers has become a significant trend for industries. Leveraging the considerable cloud storage space, industries can also subject the outsourced data to cloud computation. How to use the data for computation without loss of privacy and confidentiality is one of the crucial security problems. Searchable encryption has been proposed to protect the confidentiality of the outsourced data and the privacy of the corresponding data queries. This technique, however, supports only search functionality and may not be fully applicable to real-world cloud computing scenarios in which secure data search, sharing, and computation are all needed. This work presents a novel encrypted cloud-based data sharing and search system that preserves user privacy and data confidentiality. The new system not only enables users to make conjunctive keyword queries over encrypted data, but also allows encrypted data to be efficiently and repeatedly shared among different users without the "download-decrypt-then-encrypt" mode. Of independent interest, our system provides secure keyword update, so that users can freely and securely update a data item's keyword field. It is worth mentioning that none of the above functionalities incurs any expansion of ciphertext size: the size of a ciphertext remains constant while it is searched, shared, and keyword-updated. The system is proven secure, and the efficiency analysis shows its great potential for use in large-scale databases.
Resource discovery in unstructured peer-to-peer networks causes a search query to be flooded throughout the network via random nodes, leading to security and privacy issues: the owner of the search query has no control over the transmission of its query through the network. Although algorithms have been proposed for policy-compliant query or data routing in a network, these algorithms mainly deal with authentic route computation and do not provide mechanisms to actually verify the network paths taken by the query. In this work, we propose an approach for verifying the network paths taken by a search query during resource discovery and for detecting malicious forwarding of search queries. Our approach aims to be secure yet highly scalable, even in the presence of a huge number of nodes in the network.
Recommendation systems have become popular in our daily lives. It is well known that the more personal data users release, the better the quality of the recommendations; however, such services raise serious privacy concerns for users. In this paper, focusing on matrix factorization-based recommendation systems, we propose the first privacy-preserving matrix factorization using fully homomorphic encryption. On input of encrypted users' ratings, our protocol performs matrix factorization over the encrypted data and returns encrypted outputs, so that the recommendation system learns nothing about rating values or the resulting user/item profiles. It provides a way to obfuscate the number and list of items a user has rated without harming the accuracy of recommendation, and it additionally protects the recommender's tuning parameters for business benefit while allowing the recommender to optimize the parameters for quality of service. To overcome the performance degradation caused by the use of fully homomorphic encryption, we introduce a novel data structure for performing computations over encrypted vectors, which are essential operations for matrix factorization, partly through secure two-party computation. With this data structure, the proposed protocol requires dozens of times less computation than previous works. Our experiments on a personal computer with a 3.4 GHz 6-core CPU and 64 GB RAM show that the proposed protocol runs in 1.5 minutes per iteration. It is more efficient than the work of Nikolaenko et al. presented at CCS 2013, which took about 170 minutes on two servers with 1.9 GHz 16-core CPUs and 128 GB RAM.
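For orientation, the plaintext computation that such a protocol evaluates under encryption is ordinary gradient-descent matrix factorization; a minimal NumPy sketch is below, with the factor dimension, learning rate, and regularization chosen arbitrarily for illustration.

import numpy as np

def matrix_factorization(R, mask, d=3, iters=500, lr=0.01, reg=0.1):
    """Factor R ~ U @ V.T using only the observed entries (mask == 1)."""
    rng = np.random.default_rng(0)
    n_users, n_items = R.shape
    U = rng.normal(scale=0.1, size=(n_users, d))
    V = rng.normal(scale=0.1, size=(n_items, d))
    for _ in range(iters):
        E = mask * (R - U @ V.T)           # error on observed ratings only
        U += lr * (E @ V - reg * U)        # gradient step on user profiles
        V += lr * (E.T @ U - reg * V)      # gradient step on item profiles
    return U, V

R = np.array([[5., 3., 0.], [4., 0., 1.], [1., 1., 5.]])
mask = (R > 0).astype(float)               # 0 marks an unrated item
U, V = matrix_factorization(R, mask)
print(np.round(U @ V.T, 1))                # predicted ratings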
The massive amount of data being collected by today's society has the potential to advance scientific knowledge and boost innovation. However, people often lack sufficient computing resources to analyze their large-scale data in a cost-effective and timely way. Cloud computing offers access to vast computing resources on an on-demand and pay-per-use basis, which is a practical way for people to analyze their huge data sets. However, since the data contain sensitive information that needs to be kept secret for ethical, security, or legal reasons, many people are reluctant to adopt cloud computing. For the first time in the literature, we propose a secure outsourcing algorithm for large-scale quadratic programs (QPs), which are among the most fundamental problems in data analysis. Specifically, based on simple linear algebra operations, we design a low-complexity QP transformation that protects the private data in a QP. We show that the transformed QP is computationally indistinguishable under a chosen-plaintext attack (CPA), i.e., CPA-secure. We then develop a parallel algorithm to solve the transformed QP at the cloud, and efficiently find the solution to the original QP at the user. We implement the proposed algorithm on the Amazon Elastic Compute Cloud (EC2) and a laptop, and find that it offers significant time savings for the user and is scalable to the size of the QP.
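The flavor of such a transformation can be conveyed by a minimal affine-masking sketch for an unconstrained QP, min_x (1/2) x^T Q x + c^T x: substitute x = M y + r for a secret random invertible M and random shift r, ship the transformed problem, and undo the substitution locally. This sketch is ours and omits the paper's constraint handling, scaling, and CPA-security machinery.

import numpy as np

rng = np.random.default_rng(1)

# Private QP: minimize 0.5 * x^T Q x + c^T x, with Q symmetric positive definite.
A = rng.normal(size=(5, 5))
Q = A @ A.T + 5 * np.eye(5)
c = rng.normal(size=5)

# Secret masking data: random invertible M and random shift r.
M = rng.normal(size=(5, 5))                # invertible with probability 1
r = rng.normal(size=5)

# Transformed QP in y, where x = M y + r; the cloud never sees Q, c, or x.
Qp = M.T @ Q @ M
cp = M.T @ (Q @ r + c)

y = np.linalg.solve(Qp, -cp)               # "cloud" solves the masked QP
x = M @ y + r                              # user recovers the true solution

assert np.allclose(Q @ x, -c)              # x solves the original QP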
In data outsourcing, a client stores a large amount of data on an untrusted server; subsequently, the client can request the server to compute a function on any subset of the data. This setting naturally leads to two security requirements: confidentiality of input data, and authenticity of computations. Existing approaches that satisfy both requirements simultaneously are built on fully homomorphic encryption, which involves expensive computation on the server and client and hence is impractical. In this paper, we propose two verifiable homomorphic encryption schemes that do not rely on fully homomorphic encryption. The first is a simple and efficient scheme for linear functions. The second scheme supports the class of multivariate quadratic functions, by combining the Paillier cryptosystem with a new homomorphic message authentication code (MAC) scheme. Through formal security analysis, we show that the schemes are semantically secure and unforgeable.
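The additively homomorphic building block here can be illustrated with a toy Paillier instance evaluating a linear function over ciphertexts; the primes below are far too small for real use, and the homomorphic MAC that makes the result verifiable in the paper's scheme is omitted.

import math, random

class Paillier:
    """Toy Paillier cryptosystem (g = n + 1); demo-sized primes only."""
    def __init__(self, p, q):
        self.n, self.n2 = p * q, (p * q) ** 2
        self.lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
        self.mu = pow(self.lam, -1, self.n)

    def enc(self, m):
        r = random.randrange(1, self.n)
        while math.gcd(r, self.n) != 1:
            r = random.randrange(1, self.n)
        return pow(self.n + 1, m, self.n2) * pow(r, self.n, self.n2) % self.n2

    def dec(self, c):
        return (pow(c, self.lam, self.n2) - 1) // self.n * self.mu % self.n

    def add(self, c1, c2):          # Enc(m1) * Enc(m2) = Enc(m1 + m2)
        return c1 * c2 % self.n2

    def smul(self, c, k):           # Enc(m) ^ k = Enc(k * m)
        return pow(c, k, self.n2)

pk = Paillier(10007, 10009)
xs, coeffs = [7, 11, 13], [2, 3, 5]
cts = [pk.enc(x) for x in xs]
acc = pk.enc(0)
for a, ct in zip(coeffs, cts):      # homomorphically evaluate sum(a_i * x_i)
    acc = pk.add(acc, pk.smul(ct, a))
assert pk.dec(acc) == sum(a * x for a, x in zip(coeffs, xs))   # = 112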
Cloud storage has been gaining popularity as an online service for archiving, backup, and even primary storage of files. However, due to data outsourcing, cloud storage also introduces new security challenges, which require data audit and data repair services to ensure data availability and integrity in the cloud. In this paper, we present the design and implementation of a network-coding-based proof-of-retrievability scheme called ELAR, which achieves lightweight data auditing and data repair. In particular, we support a direct repair mechanism in which the client is freed from the data repair process. Simultaneously, we allow a third-party auditor (TPA), on behalf of the client, to verify the availability and integrity of the data stored in the cloud servers without the need for an asymmetric-key setting, so the client is also freed from the data audit process. The TPA uses spot-checking, a very efficient probabilistic method for checking a large amount of data. Extensive security and performance analysis shows that the proposed scheme is highly efficient and provably secure.
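The power of spot-checking comes from a simple combinatorial fact: if t of n blocks are corrupted and the auditor samples c blocks without replacement, the corruption is detected with probability 1 - C(n-t, c)/C(n, c). The sketch below computes how many samples a TPA needs for a target confidence; the parameter values are illustrative, not ELAR's.

from math import comb, ceil, log

def detection_prob(n, t, c):
    """P[at least one corrupted block among c samples without replacement]."""
    return 1 - comb(n - t, c) / comb(n, c)

def samples_needed(corrupt_frac, confidence):
    """Approximate sample count via the with-replacement bound."""
    return ceil(log(1 - confidence) / log(1 - corrupt_frac))

# If 1% of 1,000,000 blocks are corrupted, ~459 random checks give 99% detection.
print(samples_needed(0.01, 0.99))                  # -> 459
print(detection_prob(1_000_000, 10_000, 459))      # ~ 0.99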
The microelectronics industry seeks screening tools that can be used to verify the origin of integrated circuits (ICs) and track them throughout their lifecycle. Embedded circuits that measure the process variation of an IC are well known. This paper adds to previous work using these circuits to study manufacturer characteristics on final-product ICs, particularly for the purpose of developing and verifying a signature for a microelectronics manufacturing facility (fab). We present the design, measurements, and analysis of 159 silicon ICs built as a proof of concept for this purpose. 80 copies of our proof-of-concept IC were built at one fab, and 80 more copies were built across two lots at a second fab. Using these ICs, our prototype circuits allowed us to distinguish the two fabs with up to 98.7% accuracy and to distinguish the two lots from the second fab with up to 98.8% accuracy.
The output of 3D volume segmentation is crucial to a wide range of endeavors. Producing accurate segmentations often proves both inefficient and challenging, in part due to poor imaging data quality (contrast and resolution), and because of ambiguity in the data that can only be resolved with higher-level knowledge of the structure and the context in which it resides. Automatic and semi-automatic approaches are improving, but in many cases they still fail or require substantial manual clean-up or intervention. Expert manual segmentation and review therefore remains the gold standard for many applications. Unfortunately, existing tools (both custom-made and commercial) are often designed around the underlying algorithm rather than the best method for expressing higher-level intention. Our goal is to analyze manual (or semi-automatic) segmentation to gain a better understanding of both low-level perceptual tasks and actions and high-level decision making. This understanding can be used to produce segmentation tools that are more accurate, efficient, and easier to use. Questioning or observation alone is insufficient to capture this information, so we employ a hybrid capture protocol that blends observation, surveys, and eye tracking. We then developed and validated data coding schemes capable of discerning low-level actions and overall task structures.
One of the most challenging issues facing academic conferences and educational institutions today is plagiarism detection. Typically, these entities wish to ensure that the work products submitted to them have not been plagiarized from another source (e.g., authors submitting identical papers to multiple journals). Assembling large centralized databases of documents dramatically improves the effectiveness of plagiarism detection techniques, but introduces a number of privacy and legal issues: all document contents must be completely revealed to the database operator, making it an attractive target for abuse or attack. Moreover, this content aggregation involves the disclosure of potentially sensitive private content, and in some cases this disclosure may be prohibited by law. In this work, we introduce Elxa, the first scalable centralized plagiarism detection system that protects the privacy of the submissions. Elxa incorporates techniques from the current state of the art in plagiarism detection, as evaluated by the information retrieval community. Our system is designed to be operated on existing cloud computing infrastructure, and to provide incentives for the untrusted database operator to maintain the availability of the network. Elxa can be used to detect plagiarism in student work, duplicate paper submissions (and their associated peer reviews), similarities between confidential reports (e.g., malware summaries), or any approximate text reuse within a network of private documents. We implement a prototype using the Hadoop MapReduce framework, and demonstrate that it is feasible to achieve competitive detection effectiveness in the private setting.
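The detection techniques Elxa builds on ultimately compare documents by overlapping k-gram fingerprints; a plaintext sketch of that core (ignoring Elxa's privacy layer entirely) might look as follows, with k and the normalization chosen arbitrarily for illustration.

import re

def fingerprints(text, k=5):
    """Set of hashed overlapping k-grams of the normalized text."""
    norm = re.sub(r'[^a-z0-9]', '', text.lower())
    return {hash(norm[i:i + k]) for i in range(len(norm) - k + 1)}

def similarity(doc_a, doc_b, k=5):
    """Jaccard similarity of the two fingerprint sets, in [0, 1]."""
    fa, fb = fingerprints(doc_a, k), fingerprints(doc_b, k)
    return len(fa & fb) / len(fa | fb) if fa | fb else 0.0

original = "Assembling large centralized databases improves detection."
suspect = "Assembling large central databases improves plagiarism detection."
print(similarity(original, suspect))    # high overlap suggests text reuse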