Biblio
Energy use in buildings represents roughly 40% of overall energy consumption, and most national agendas contain goals related to reducing energy consumption and carbon footprint. Timely and accurate fault detection and diagnosis (FDD) in building management systems (BMS) has the potential to reduce energy consumption costs by approximately 15-30%. Most FDD methods are data-based, meaning that their performance is tightly linked to the quality and availability of relevant data. In our experience, data on faults and relevant events are very sparse and inadequate, mostly because those who would need to keep track of faults lack the will and incentive to do so. In this paper we introduce the idea of using crowdsourcing to support FDD data collection processes, and illustrate the idea through a mobile application implemented for this purpose. Furthermore, we propose a strategy for successfully deploying this crowdsourcing application among building occupants.
To achieve accurate fault localization and recognition in analog circuits, feature extraction from the fault information is an extremely important step. The approach is based on an experimental circuit designed with specific failure modes in order to collect a fault sample set. We compare two feature extraction methods, one combining the wavelet transform with principal component analysis and the other using factor analysis, and we train and diagnose the data with an extreme learning machine, comparing the performance of the two methods through diagnostic accuracy. The experimental results show that, after processing the data obtained from the experimental circuit with either method, the fault location can be identified quickly.
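As a concrete illustration of this kind of pipeline, the following minimal sketch (not the authors' code) extracts PCA features from synthetic fault-response signals and classifies them with a basic extreme learning machine; the signal shapes, dimensions, and hyperparameters are all hypothetical, and the wavelet-transform stage is omitted.

```python
# Minimal sketch: PCA feature extraction + extreme learning machine (ELM) classifier
# on synthetic "fault response" signals. Illustrative only, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic circuit responses: 3 fault classes, 60 samples each, 128-point signals.
n_classes, n_per_class, n_points = 3, 60, 128
X, y = [], []
for c in range(n_classes):
    base = np.sin(np.linspace(0, (c + 1) * np.pi, n_points))      # class-specific shape
    X.append(base + 0.1 * rng.standard_normal((n_per_class, n_points)))
    y.append(np.full(n_per_class, c))
X, y = np.vstack(X), np.concatenate(y)

# PCA feature extraction: project onto the top-k principal components.
k = 10
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ Vt[:k].T                                          # (n_samples, k)

# Extreme learning machine: random hidden layer, least-squares output weights.
n_hidden = 50
W = rng.standard_normal((k, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(features @ W + b)                                     # hidden activations
T = np.eye(n_classes)[y]                                          # one-hot targets
beta = np.linalg.pinv(H) @ T                                      # output weights

pred = np.argmax(H @ beta, axis=1)
print("training accuracy:", (pred == y).mean())
```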
This paper considers the problem of system-level fault diagnosis in highly dynamic networks. The existing fault diagnostic models deal mainly with static faults and have limited capabilities to handle dynamic networks. These fault diagnostic models are based on timers that work on a simple timeout mechanism to identify the node status, and often make simplistic assumptions for system implementations. To overcome the above problems, we propose a time-free comparison-based diagnostic model. Unlike the traditional models, the proposed model does not rely on timers and is more suitable for use in dynamic network environments. We also develop a novel comparison-based fault diagnosis protocol for identifying and diagnosing dynamic faults. The performance of the protocol has been analyzed and its correctness has been proved.
Fault diagnosis in IT environments is complicated because (i) most monitors have shared specificity (e.g., high memory utilization can result from a large number of causes), (ii) it is hard to deploy and maintain enough sensors to ensure adequate coverage, and (iii) some functionality may be provided as-a-service by external parties, with limited visibility and limited simultaneous availability of alert data. To systematically incorporate uncertainty and to fuse information from multiple sources, we propose the use of the Dempster-Shafer Theory (DST) of evidential reasoning for fault diagnosis and show its efficacy in the context of a distributed application.
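To make the evidential-reasoning idea concrete, the sketch below (not the paper's system) implements Dempster's rule of combination and fuses two hypothetical monitor alerts over a small frame of possible root causes; the monitor names and mass assignments are invented for illustration.

```python
# Minimal sketch of Dempster's rule of combination for fusing fault evidence
# from two hypothetical monitors. Illustrative only, not the paper's system.
from itertools import product

def combine(m1, m2):
    """Dempster's rule: masses are dicts mapping frozenset hypotheses to belief mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                      # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

APP, DB, NET = "app_bug", "db_overload", "network"
theta = frozenset({APP, DB, NET})                    # frame of discernment

# Monitor 1 (high memory alert): points weakly at app or db; the rest is ignorance.
m_mem = {frozenset({APP, DB}): 0.6, theta: 0.4}
# Monitor 2 (slow queries): points more strongly at the database.
m_sql = {frozenset({DB}): 0.7, theta: 0.3}

fused = combine(m_mem, m_sql)
for hypothesis, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(hypothesis), round(mass, 3))
```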
Self-describing key-value data formats such as JSON are becoming increasingly popular as application developers choose to avoid the rigidity imposed by the relational model. Database systems designed for these self-describing formats, such as MongoDB, encourage users to use denormalized, heavily nested data models so that relationships across records and other schema information need not be predefined or standardized. Such data models contribute to long-term development complexity, as their lack of explicit entity and relationship tracking burdens new developers unfamiliar with the dataset. Furthermore, the large amount of data repetition present in such data layouts can introduce update anomalies and poor scan performance, which reduce both the quality and performance of analytics over the data. In this paper we present an algorithm that automatically transforms the denormalized, nested data commonly found in NoSQL systems into traditional relational data that can be stored in a standard RDBMS. This process includes a schema generation algorithm that discovers relationships across the attributes of the denormalized datasets in order to organize those attributes into relational tables. It further includes a matching algorithm that discovers sets of attributes that represent overlapping entities and merges those sets together. These algorithms reduce data repetition, allow the use of data analysis tools targeted at relational data, accelerate scan-intensive algorithms over the data, and help users gain a semantic understanding of complex, nested datasets.
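The following minimal sketch (not the paper's algorithm, which additionally discovers and merges overlapping entities) illustrates the basic flattening step: nested objects and arrays in a JSON document are pulled out into child tables linked back to their parent rows by surrogate keys. Table and column names are derived mechanically here purely for illustration.

```python
# Minimal sketch: flatten denormalized, nested JSON records into relational-style
# tables with surrogate keys and foreign keys. Illustrative only.
import itertools, json

def normalize(records, root_table="root"):
    tables, ids = {}, itertools.count(1)

    def walk(obj, table, parent=None):
        row_id = next(ids)
        row = {"id": row_id}
        if parent:
            row[f"{parent[0]}_id"] = parent[1]       # foreign key to the parent row
        for key, value in obj.items():
            if isinstance(value, dict):              # nested object -> child table
                walk(value, f"{table}_{key}", (table, row_id))
            elif isinstance(value, list):            # array -> child table, one row per element
                for item in value:
                    child = item if isinstance(item, dict) else {"value": item}
                    walk(child, f"{table}_{key}", (table, row_id))
            else:
                row[key] = value                     # scalar -> column
        tables.setdefault(table, []).append(row)
        return row_id

    for record in records:
        walk(record, root_table)
    return tables

docs = [{"name": "alice", "address": {"city": "Austin"},
         "orders": [{"sku": "A1", "qty": 2}, {"sku": "B7", "qty": 1}]}]
print(json.dumps(normalize(docs, "customer"), indent=2))
```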
With data becoming available in larger quantities and at higher rates, new data processing paradigms have been proposed to handle high-volume, fast-moving data. Data Stream Processing is one such paradigm, wherein transient data streams flow through sets of continuous queries, only returning results when the data is of interest to the querier. To avoid the large costs of maintaining the infrastructure required for processing these data streams, many companies outsource their computation to third-party cloud services. This outsourcing, however, can lead to private data being accessed by parties that a data provider may not trust. The literature offers solutions to this confidentiality and access control problem, but they fall short of a complete solution due to either immense overheads or the trust they place in third-party services. To address these issues, we have developed PolyStream, an enhancement to existing data stream management systems that enables data providers to specify attribute-based access control policies that are cryptographically enforced while simultaneously allowing many types of in-network data processing. We detail the access control models and mechanisms used by PolyStream, and describe a novel use of security punctuations that enables flexible, online policy management and key distribution. We detail how queries are submitted and executed using an unmodified Data Stream Management System, and show through an extensive evaluation that PolyStream yields a 550x performance gain over the state-of-the-art system StreamForce (CODASPY 2014), while providing greater functionality to the querier.
The World Wide Web has become the most common platform for building applications and delivering content. Yet despite years of research, the web continues to face severe security challenges related to data integrity and confidentiality. Rather than continuing the exploit-and-patch cycle, we propose addressing these challenges at an architectural level, by supplementing the web's existing connection-based and server-based security models with a new approach: content-based security. With this approach, content is directly signed and encrypted at rest, enabling it to be delivered via any path and then validated by the browser. We explore how this new architectural approach can be applied to the web and analyze its security benefits. We then discuss a broad research agenda to realize this vision and the challenges that must be overcome.
This paper introduces the design and implementation of a security scheme for the Internet of Things (IoT) based on ECQV implicit certificates and the Datagram Transport Layer Security (DTLS) protocol. In the proposed scheme, the elliptic curve cryptography-based ECQV implicit certificate plays a key role, enabling mutual authentication and key establishment between two resource-constrained IoT devices. We present how IoT devices obtain ECQV implicit certificates and use them for authenticated key exchange in DTLS. An evaluation of the implementation's execution time is also conducted to assess the efficiency of the solution.
We study a sensor network setting in which samples are encrypted individually using different keys and maintained on a cloud storage. For large systems, e.g. those that generate several millions of samples per day, fine-grained sharing of encrypted samples is challenging. Existing solutions, such as Attribute-Based Encryption (ABE) and Key Aggregation Cryptosystem (KAC), can be utilized to address the challenge, but only to a certain extent. They are often computationally expensive and thus unlikely to operate at scale. We propose an algorithmic enhancement and two heuristics to improve KAC's key reconstruction cost, while preserving its provable security. The improvement is particularly significant for range and down-sampling queries – accelerating the reconstruction cost from quadratic to linear running time. Experimental study shows that for queries of size 32k samples, the proposed fast reconstruction techniques speed-up the original KAC by at least 90 times on range and down-sampling queries, and by eight times on general (arbitrary) queries. It also shows that at the expense of splitting the query into 16 sub-queries and correspondingly issuing that number of different aggregated keys, reconstruction time can be reduced by 19 times. As such, the proposed techniques make KAC more applicable in practical scenarios such as sensor networks or the Internet of Things.
The semantics of online authentication in the web are rather straightforward: if Alice has a certificate binding Bob's name to a public key, and if a remote entity can prove knowledge of Bob's private key, then (barring key compromise) that remote entity must be Bob. However, in reality, many websites, including the majority of the most popular ones, are hosted at least in part by third parties such as Content Delivery Networks (CDNs) or web hosting providers. Put simply: administrators of websites who deal with (extremely) sensitive user data are giving their private keys to third parties. Importantly, this sharing of keys is undetectable by most users, and widely unknown even among researchers. In this paper, we perform a large-scale measurement study of key sharing in today's web. We analyze the prevalence with which websites trust third-party hosting providers with their secret keys, as well as the impact that this trust has on responsible key management practices, such as revocation. Our results reveal that key sharing is extremely common, with a small handful of hosting providers having keys from the majority of the most popular websites. We also find that hosting providers often manage their customers' keys, and that they tend to react more slowly yet more thoroughly to compromised or potentially compromised keys.
Security threats may hinder the large-scale adoption of the emerging Internet of Things (IoT) technologies. Although efforts have already been made toward data integrity preservation, confidentiality, and privacy, several issues are still open. Existing solutions are mainly based on encryption techniques, but little attention is actually paid to key management. A clever key distribution system, along with a key replacement mechanism, is essential for assuring a secure approach. In this paper, two popular key management systems conceived for wireless sensor networks are integrated into a real IoT middleware and compared in order to evaluate their performance in terms of overhead, delay, and robustness against malicious attacks.
The storage and management of secrets (encryption keys, passwords, etc) are significant open problems in the age of ephemeral, cloud-based computing infrastructure. How do we store and control access to the secrets necessary to configure and operate a range of modern technologies without sacrificing security and privacy requirements or significantly curtailing the desirable capabilities of our systems? To answer this question, we propose Tutamen: a next-generation secret-storage service. Tutamen offers a number of desirable properties not present in existing secret-storage solutions. These include the ability to operate across administrative domain boundaries and atop minimally trusted infrastructure. Tutamen also supports access control based on contextual, multi-factor, and alternate-band authentication parameters. These properties have allowed us to leverage Tutamen to support a variety of use cases not easily realizable using existing systems, including supporting full-disk encryption on headless servers and providing fully-featured client-side encryption for cloud-based file-storage services. In this paper, we present an overview of the secret-storage challenge, Tutamen's design and architecture, the implementation of our Tutamen prototype, and several of the applications we have built atop Tutamen. We conclude that Tutamen effectively eases the secret-storage burden and allows developers and systems administrators to achieve previously unattainable security-oriented goals while still supporting a wide range of feature-oriented requirements.
For authenticating time-critical broadcast messages, the IEEE 1609.2 security standard for Vehicular Ad hoc Networks (VANETs) suggests the use of the Elliptic Curve Digital Signature Algorithm (ECDSA). Since ECDSA verification is expensive in terms of time, the most commonly suggested alternative algorithms are TESLA and signature amortization. Unfortunately, these algorithms lack immediate authentication and non-repudiation. Therefore, we introduce a probabilistic verification scheme for an ECDSA-based authentication protocol. Using ns-2 simulation tools, we compare the performance of all the above-mentioned broadcast authentication algorithms. The results show that with our proposed scheme, the packet processed ratio increases over that of all the other algorithms.
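The sketch below conveys only the basic idea of probabilistic verification, not the paper's exact scheme: each received ECDSA-signed beacon is verified with some probability p, trading verification load against detection latency. It assumes the third-party Python `cryptography` package; the beacon format and the value of p are hypothetical.

```python
# Minimal sketch: probabilistically verify ECDSA-signed beacons. Illustrative only.
import random
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def make_beacon(key, payload: bytes):
    return payload, key.sign(payload, ec.ECDSA(hashes.SHA256()))

def receive(beacons, sender_pub, p=0.3):
    """Verify each beacon with probability p; others are accepted provisionally."""
    verified, provisional = 0, 0
    for payload, sig in beacons:
        if random.random() < p:
            try:
                sender_pub.verify(sig, payload, ec.ECDSA(hashes.SHA256()))
                verified += 1
            except InvalidSignature:
                print("forged beacon detected:", payload)
        else:
            provisional += 1                # could be queued for deferred verification
    return verified, provisional

sender = ec.generate_private_key(ec.SECP256R1())
beacons = [make_beacon(sender, b"pos=%d" % i) for i in range(100)]
print(receive(beacons, sender.public_key(), p=0.3))
```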
To ensure the authenticity and integrity, data are traditionally signed by digital signatures, which will be invalidated by any processing of the data. With the vast amount of data generated every day, it is however desirable to allow flexible processing of the signed data via applying computations or functions on them, without losing the authenticity. Signatures can also serve as credentials for access control, which appears in many aspects of life, ranging from unlocking security gates of buildings, to virtual access of data by computer programs. With the prolific use of Internet-of-Things (IoT), everything is getting connected together. There is an emerging need for more versatile credentials to secure new application scenarios, for instance, assigning different credentials to different devices, such that they can authenticate and cooperate with each other to jointly perform some computation tasks. To realize the above, we envision a general framework called functional credentials. Functional credentials allow multiple entities to (jointly) issue, combine, delegate, present, verify, escrow, and decrypt different forms of credentials, by operating on the associated "cryptographic objects" including secret keys, attributes, ciphertexts, and auxiliary data (e.g., pseudonym, expiry date, or policies for combination / delegation / revocation). Instantiating this framework with different functions can provide a spectrum of solutions for securing IoT. This talk covers both the practical applications and theoretic foundations. I will first motivate the versatility of functional credentials by case studies on IoT, which identify the need of new credential systems. I will then formulate the definition of functional credentials. Finally, I will share some initial ideas in realizing functional credentials, and discuss the obstacles ahead.
Digital signatures are perhaps the most important base for authentication and trust relationships in large scale systems. More specifically, various applications of signatures provide privacy and anonymity preserving mechanisms and protocols, and these, in turn, are becoming critical (due to the recently recognized need to protect individuals according to national rules and regulations). A specific type of signatures called "signatures with efficient protocols", as introduced by Camenisch and Lysyanskaya (CL), efficiently accommodates various basic protocols and extensions like zero-knowledge proofs, signing committed messages, or re-randomizability. These are, in fact, typical operations associated with signatures used in typical anonymity and privacy-preserving scenarios. To date there are no "signatures with efficient protocols" which are based on simple assumptions and truly practical. These two properties assure us a robust primitive: First, simple assumptions are needed for ensuring that this basic primitive is mathematically robust and does not require special ad hoc assumptions that are more risky, imply less efficiency, are more tuned to the protocol itself, and are perhaps less trusted. In the other dimension, efficiency is a must given the anonymity applications of the protocol, since without proper level of efficiency the future adoption of the primitives is always questionable (in spite of their need). In this work, we present a new CL-type signature scheme that is re-randomizable under a simple, well-studied, and by now standard, assumption (SXDH). The signature is efficient (built on the recent QA-NIZK constructions), and is, by design, suitable to work in extended contexts that typify privacy settings (like anonymous credentials, group signature, and offline e-cash). We demonstrate its power by presenting practical protocols based on it.
A nonrepudiation protocol from party S to party R performs two tasks. First, the protocol enables party S to send to party R some text x along with a proof (that can convince a judge) that x was indeed sent by S. Second, the protocol enables party R to receive text x from S and to send to S a proof (that can convince a judge) that x was indeed received by R. A nonrepudiation protocol from one party to another is called two-phase iff the two parties execute the protocol as specified until one of the two parties receives its complete proof. Then and only then does this party refrain from sending any message specified by the protocol because these messages only help the other party complete its proof. In this paper, we present methods for specifying and verifying two-phase nonrepudiation protocols.
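As a minimal illustration of the evidence such a protocol produces (not the paper's two-phase protocol or its verification method), the sketch below has S sign the text x as proof of origin and R sign a receipt over x as proof of receipt, using Ed25519 from the third-party Python `cryptography` package.

```python
# Minimal sketch of nonrepudiation evidence: proof of origin and proof of receipt.
# Illustrative only; the real protocol governs *how* these proofs are exchanged.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

s_key, r_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()

# S sends x together with its signature (non-repudiation of origin).
x = b"contract: deliver 100 units by 2024-06-01"
proof_of_origin = s_key.sign(x)

# R, having received x, returns a signed receipt (non-repudiation of receipt).
receipt = b"received:" + hashlib.sha256(x).digest()
proof_of_receipt = r_key.sign(receipt)

# A judge holding the public keys can later check both pieces of evidence;
# verify() raises InvalidSignature if either proof is forged.
s_key.public_key().verify(proof_of_origin, x)
r_key.public_key().verify(proof_of_receipt, receipt)
print("both proofs verify")
```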
A nonrepudiation protocol from a sender S to a set of potential receivers {R1, R2, ..., Rn} performs two functions. First, this protocol enables S to send to every potential receiver Ri a copy of file F along with a proof that can convince an unbiased judge that F was indeed sent by S to Ri. Second, this protocol also enables each Ri to receive from S a copy of file F and to send back to S a proof that can convince an unbiased judge that F was indeed received by Ri from S. When a nonrepudiation protocol from S to {R1, R2, ..., Rn} is implemented in a cloud system, the communications between S and the set of potential receivers {R1, R2, ..., Rn} are not carried out directly. Rather, these communications are carried out through a cloud C. In this paper, we present a nonrepudiation protocol that is implemented in a cloud system and show that this protocol is correct. We also show that this protocol has two clear advantages over nonrepudiation protocols that are not implemented in cloud systems.
Non-malleability is an important and intensively studied security notion for many cryptographic primitives. In the context of public key encryption, this notion means it is infeasible for an adversary to transform an encryption of some message m into one of a related message m' under the given public key. Although it provides a strong security property for many applications, it still does not suffice for some scenarios, such as systems where users can issue keys on the fly. In such settings, the adversary may have the power to transform the given public key as well as the ciphertext. To withstand such attacks, Fischlin introduced a stronger notion, known as complete non-malleability, which requires that the non-malleability property be preserved even for adversaries attempting to produce a ciphertext of some related message under the transformed public key. To date, many schemes satisfying this stronger security notion have been proposed, but they are either inefficient or proved secure only in the random oracle model. In this work, we put forward a new encryption scheme in the common reference string model. Based on the standard DBDH assumption, the proposed scheme is proved completely non-malleable against adaptive chosen ciphertext attacks in the standard model. In our scheme, well-formed public keys and ciphertexts can be publicly recognized without resorting to unwieldy techniques such as non-interactive zero-knowledge proofs or one-time signatures, thus achieving better performance.
Large-scale biomedical research projects involve the analysis of huge amounts of genomic data owned by different data owners. Collecting and storing genomic data is sometimes beyond the capability of a single organization, and genomic data sharing is a feasible solution to this problem. These scenarios can be generalized to the problem of aggregating data distributed among multiple databases and owned by different data owners. However, we must guarantee that an adversary cannot learn anything about the data or the individual contribution of each party to the final output of the computation. In this paper, we propose a practical solution for secure sharing and computation of genomic data. We adopt the Paillier cryptosystem and order-preserving encryption to securely execute count queries and ranked queries. Experimental results demonstrate that the computation time is realistic enough to make our system adoptable in the real world.
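As a small illustration of the additively homomorphic building block (not the paper's full protocol, which also handles ranked queries via order-preserving encryption), the sketch below aggregates per-owner counts under Paillier encryption so that the aggregator never sees an individual contribution. It assumes the third-party `phe` (python-paillier) package, and the per-owner counts are hypothetical.

```python
# Minimal sketch: additively homomorphic count query across multiple data owners
# using the Paillier cryptosystem (python-paillier). Illustrative only.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each data owner counts matching records locally (e.g., carriers of some variant)
# and submits only the Paillier encryption of that count.
local_counts = [12, 7, 31]                           # hypothetical per-owner counts
encrypted_counts = [public_key.encrypt(c) for c in local_counts]

# Untrusted aggregator: adds ciphertexts; additive homomorphism keeps inputs hidden.
encrypted_total = sum(encrypted_counts[1:], encrypted_counts[0])

# Only the party holding the private key can recover the aggregate.
print("total count:", private_key.decrypt(encrypted_total))   # -> 50
```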
In data outsourcing, a client stores a large amount of data on an untrusted server; subsequently, the client can request the server to compute a function on any subset of the data. This setting naturally leads to two security requirements: confidentiality of input data, and authenticity of computations. Existing approaches that satisfy both requirements simultaneously are built on fully homomorphic encryption, which involves expensive computation on the server and client and hence is impractical. In this paper, we propose two verifiable homomorphic encryption schemes that do not rely on fully homomorphic encryption. The first is a simple and efficient scheme for linear functions. The second scheme supports the class of multivariate quadratic functions, by combining the Paillier cryptosystem with a new homomorphic message authentication code (MAC) scheme. Through formal security analysis, we show that the schemes are semantically secure and unforgeable.
In the last few years, there has been significant interest in developing methods to search over encrypted data. In the case of range queries, a simple solution is to encrypt the contents of the database using an order-preserving encryption (OPE) scheme (i.e., an encryption scheme that supports comparisons over encrypted values). However, Naveed et al. (CCS 2015) recently showed that OPE-encrypted databases are extremely vulnerable to "inference attacks."
In this work, we consider a related primitive called order-revealing encryption (ORE), which is a generalization of OPE that allows for stronger security. We begin by constructing a new ORE scheme for small message spaces which achieves the "best-possible" notion of security for ORE. Next, we introduce a "domain extension" technique and apply it to our small-message-space ORE. While our domain-extension technique does incur a loss in security, the resulting ORE scheme we obtain is more secure than all existing (stateless and non-interactive) OPE and ORE schemes which are practical. All of our constructions rely only on symmetric primitives. As part of our analysis, we also give a tight lower bound for OPE and show that no efficient OPE scheme can satisfy best-possible security if the message space contains just three messages. Thus, achieving strong notions of security for even small message spaces requires moving beyond OPE.
Finally, we examine the properties of our new ORE scheme and show how to use it to construct an efficient range query protocol that is robust against the inference attacks of Naveed et al. We also give a full implementation of our new ORE scheme, and show that not only is our scheme more secure than existing OPE schemes, it is also faster: encrypting a 32-bit integer requires just 55 microseconds, which is more than 65 times faster than existing OPE schemes.
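To give a flavor of how an ORE ciphertext can support comparisons without revealing plaintexts, the sketch below implements a simple Chenette-et-al.-style ORE for 32-bit integers using HMAC as the PRF. This is not the paper's construction (which achieves stronger security via a small-domain scheme plus domain extension), and it leaks the index of the first differing bit.

```python
# Minimal sketch of a Chenette-et-al.-style ORE for small integers, with HMAC as
# the PRF. Illustrative only; weaker than the paper's construction.
import hmac, hashlib, os

NBITS = 32

def prf(key: bytes, prefix: str) -> int:
    return int.from_bytes(hmac.new(key, prefix.encode(), hashlib.sha256).digest(), "big")

def encrypt(key: bytes, m: int):
    bits = format(m, f"0{NBITS}b")
    # One mod-3 component per bit position: (PRF(key, prefix) + bit) mod 3.
    return [(prf(key, bits[:i]) + int(bits[i])) % 3 for i in range(NBITS)]

def compare(ct1, ct2) -> int:
    """Return -1, 0, or 1 according to the order of the underlying plaintexts."""
    for u, v in zip(ct1, ct2):
        if u != v:
            return 1 if (u - v) % 3 == 1 else -1
    return 0

key = os.urandom(32)
a, b = encrypt(key, 1000), encrypt(key, 2024)
print(compare(a, b))   # -> -1, since 1000 < 2024, without decrypting either value
```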
In cloud computing, computationally weak users are often willing to outsource costly computations to a cloud, and at the same time they need to check the correctness of the results provided by the cloud. Such activities motivate verifiable computation (VC). Recently, Parno, Raykova and Vaikuntanathan showed that a VC protocol can be constructed from an attribute-based encryption (ABE) scheme for the same class of functions. In this paper, we propose two practical and efficient semi-adaptively secure key-policy attribute-based encryption (KP-ABE) schemes with constant-size ciphertexts. Semi-adaptive security requires that the adversary designate the challenge attribute set after it receives the public parameters but before it issues any secret key query, which is stronger than the selective security guarantee. Our first construction deals with a small universe while the second supports a large universe. Both constructions employ the technique underlying the prime-order instantiation of nested dual system groups, which are based on the d-linear assumption, including the SXDH and DLIN assumptions. To evaluate the performance, we implement our ABE schemes in Python using Charm. Compared with previous KP-ABE schemes with constant-size ciphertexts, our constructions achieve shorter ciphertext and secret key sizes and require lower computation costs, especially under the SXDH assumption.
With the growth of cloud computing, database outsourcing has attracted much interest. Due to the serious privacy threats in cloud computing, databases need to be encrypted before being outsourced to the cloud, and various Top-k query processing algorithms have therefore been studied for encrypted databases. However, existing algorithms are either insecure or inefficient. In this paper we propose an efficient and secure Top-k query processing algorithm. Our algorithm guarantees the confidentiality of both the data and a user's query while hiding data access patterns, and it does not require the query issuer to participate in the query processing. To achieve a high level of query processing efficiency, we use new secure protocols based on Yao's garbled circuits and a data packing technique. A performance analysis shows that the proposed algorithm outperforms existing work in terms of query processing cost.
Recommendation systems have become popular in our daily life. It is well known that the more personal data users release, the better the quality of the recommendations; however, such services raise serious privacy concerns for users. In this paper, focusing on matrix factorization-based recommendation systems, we propose the first privacy-preserving matrix factorization using fully homomorphic encryption. On input of encrypted user ratings, our protocol performs matrix factorization over the encrypted data and returns encrypted outputs, so that the recommendation system learns nothing about rating values or the resulting user/item profiles. It provides a way to obfuscate the number and list of items a user has rated without harming recommendation accuracy, and additionally protects the recommender's tuning parameters for business benefit while allowing the recommender to optimize those parameters for quality of service. To overcome the performance degradation caused by the use of fully homomorphic encryption, we introduce a novel data structure to perform computations over encrypted vectors, which are essential operations for matrix factorization, partly through secure two-party computation. With this data structure, the proposed protocol requires dozens of times less computation than previous works. Our experiments on a personal computer with a 3.4 GHz 6-core CPU and 64 GB RAM show that the proposed protocol runs in 1.5 minutes per iteration. It is more efficient than the work of Nikolaenko et al. (CCS 2013), which took about 170 minutes on two servers with 1.9 GHz 16-core CPUs and 128 GB RAM.
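For orientation, the plaintext sketch below shows the core gradient-descent matrix factorization loop whose operations (inner products and additive updates) the paper's protocol must carry out over encrypted ratings; it involves no encryption itself, and the ratings, dimensions, and hyperparameters are invented.

```python
# Minimal plaintext sketch of gradient-descent matrix factorization; the paper
# performs the same inner products and updates over encrypted data.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 5, 6, 3
ratings = [(0, 1, 4.0), (0, 3, 2.0), (2, 1, 5.0), (4, 5, 1.0), (3, 2, 3.0)]

U = 0.1 * rng.standard_normal((n_users, d))     # user profiles
V = 0.1 * rng.standard_normal((n_items, d))     # item profiles
lr, lam = 0.05, 0.02                            # learning rate and regularization

for _ in range(200):
    for u, i, r in ratings:
        err = r - U[u] @ V[i]                   # inner product on (encrypted) ratings
        U[u] += lr * (err * V[i] - lam * U[u])  # additive profile update
        V[i] += lr * (err * U[u] - lam * V[i])

print("predicted rating (user 0, item 1):", round(U[0] @ V[1], 2))
```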
Reconfiguring an FPGA involves downloading the bit-stream file containing the new design onto the FPGA. The option to reconfigure FPGAs dynamically opens up the threat of stealing the intellectual property (IP) of the design: since the configuration is usually stored in external memory, it can easily be tapped and read out by an eavesdropper. This work presents a low-cost solution for securing the reconfiguration of FPGAs. The proposed solution is based on an efficient, compact hardware implementation of AEGIS, one of the candidates in the CAESAR competition. The proposed architecture uses a quarter AES round to reduce the consumed area. We evaluated the presented design using 90 nm and 65 nm technologies. Our comparison with existing AES-based schemes shows that the proposed design is better in terms of hardware performance (throughput/mm²).