Biblio
In recent years, real-world attacks against the PKI have occurred frequently. For example, certificates for malicious domains have been issued by compromised CAs, and revoked certificates are still trusted by clients. Despite extensive research on improving the security of SSL/TLS connections, several problems remain unsolved. On one hand, although log-based schemes provide a certificate audit service to quickly detect CA misbehavior, the security and data consistency of the log servers themselves are ignored. On the other hand, revocation checking is neglected because existing certificate revocation mechanisms are incomplete, insecure, and inefficient. Moreover, existing revocation-checking schemes are centralized, which introduces security bottlenecks. In this paper, we propose CertChain, a blockchain-based public and efficient audit scheme for TLS connections. Specifically, we propose a dependability-rank-based consensus protocol for our blockchain system and a new data structure to support forward traceability of certificates. Furthermore, we present a method that uses a dual counting Bloom filter (DCBF) to eliminate false positives, achieving economical storage and efficient queries for certificate revocation checking. The security analysis and experimental results demonstrate that CertChain is practical with moderate overhead.
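The abstract does not spell out how the dual counting Bloom filter works, so the following is only a minimal Go sketch of one plausible reading: a first counting filter indexes revoked serial numbers and a second filter records serials known to be false positives of the first, so a query that hits the first filter but also hits the second is reported as not revoked. The names (DualCBF, Revoke, Whitelist, IsRevoked) and the whitelisting strategy are illustrative assumptions, not CertChain's actual design.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// cbf is a tiny counting Bloom filter: k counter positions per item.
type cbf struct {
	counters []uint8
	k        int
}

func newCBF(m, k int) *cbf { return &cbf{counters: make([]uint8, m), k: k} }

// positions derives k counter indices for an item via double hashing over FNV.
func (f *cbf) positions(item string) []int {
	h := fnv.New64a()
	h.Write([]byte(item))
	sum := h.Sum64()
	h1, h2 := uint32(sum), uint32(sum>>32)
	idx := make([]int, f.k)
	for i := 0; i < f.k; i++ {
		idx[i] = int((h1 + uint32(i)*h2) % uint32(len(f.counters)))
	}
	return idx
}

func (f *cbf) Add(item string) {
	for _, p := range f.positions(item) {
		f.counters[p]++
	}
}

func (f *cbf) MayContain(item string) bool {
	for _, p := range f.positions(item) {
		if f.counters[p] == 0 {
			return false
		}
	}
	return true
}

// DualCBF (hypothetical): the first filter indexes revoked serials; the second
// whitelists serials known to be false positives of the first, so the combined
// answer avoids reporting a valid certificate as revoked.
type DualCBF struct {
	revoked, whitelist *cbf
}

func (d *DualCBF) Revoke(serial string)    { d.revoked.Add(serial) }
func (d *DualCBF) Whitelist(serial string) { d.whitelist.Add(serial) }
func (d *DualCBF) IsRevoked(serial string) bool {
	return d.revoked.MayContain(serial) && !d.whitelist.MayContain(serial)
}

func main() {
	d := &DualCBF{revoked: newCBF(1024, 4), whitelist: newCBF(1024, 4)}
	d.Revoke("serial-42")
	d.Whitelist("serial-99") // suppose the first filter falsely flags this serial
	fmt.Println(d.IsRevoked("serial-42"), d.IsRevoked("serial-99"))
}
```

Counting (rather than plain) filters are used here so that entries can in principle be decremented when a revoked certificate expires and leaves the list.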
The security of web communication via the SSL/TLS protocols relies on the safe distribution of public keys associated with web domains in the form of X.509 certificates. Certificate authorities (CAs) are trusted third parties that issue these certificates. However, the CA ecosystem is fragile and prone to compromises. Starting with Google's Certificate Transparency project, a number of recent research works have looked at adding transparency for better CA accountability, effectively through public logs of all certificates issued by certificate authorities, to augment the current X.509 certificate validation process in SSL/TLS. In this paper, leveraging recent progress in blockchain technology, we propose a novel system, called CTB, that makes it impossible for a CA to issue a certificate for a domain without obtaining consent from the domain owner. We further equip CTB with a certificate revocation mechanism. We implement CTB using IBM's Hyperledger Fabric blockchain platform. CTB's smart contract, written in Go, is provided for complete reference.
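CTB's actual chaincode is not reproduced in the abstract; the short Go sketch below only illustrates the core rule it enforces, namely that a certificate record for a domain is accepted only with the domain owner's consent, modeled here as an Ed25519 signature by the key previously bound to the domain. The in-memory map standing in for Hyperledger Fabric ledger state, and the RegisterCert name, are assumptions for illustration.

```go
package main

import (
	"crypto/ed25519"
	"errors"
	"fmt"
)

// ledger stands in for blockchain state: domain -> owner key, domain -> certificate hash.
type ledger struct {
	owners map[string]ed25519.PublicKey
	certs  map[string]string
}

// RegisterCert records certHash for domain only if consent is a valid signature
// by the domain owner's registered key over domain||certHash.
func (l *ledger) RegisterCert(domain, certHash string, consent []byte) error {
	owner, ok := l.owners[domain]
	if !ok {
		return errors.New("unknown domain: no owner key on the ledger")
	}
	if !ed25519.Verify(owner, []byte(domain+certHash), consent) {
		return errors.New("owner consent signature invalid: registration rejected")
	}
	l.certs[domain] = certHash
	return nil
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil)
	l := &ledger{
		owners: map[string]ed25519.PublicKey{"example.com": pub},
		certs:  map[string]string{},
	}

	// A CA submits a certificate hash together with the owner's consent signature.
	consent := ed25519.Sign(priv, []byte("example.com"+"sha256:abcd"))
	fmt.Println(l.RegisterCert("example.com", "sha256:abcd", consent)) // accepted: <nil>

	// Reusing the same signature for a different certificate is refused.
	fmt.Println(l.RegisterCert("example.com", "sha256:evil", consent)) // rejected: error
}
```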
Cryptography and encryption are topics whose complexity makes them difficult for most of the public to grasp. Our research focuses on SSL technology involving CAs, a centralized system that manages and issues certificates to web servers and computers for identity validation. We first explain how a certificate provides a secure connection, creating trust between two parties that wish to communicate with one another over the internet. The paper then discusses what happens when that trust is compromised and how transmitted information can end up in the wrong hands. We propose a browser plugin, Certificate Authority Rescue Engine (CAre), to serve as an added source of security with simplicity and visibility. To see why CAre will benefit both average and technical users of the internet, one must understand what website security entails. Therefore, this paper dives deep into website security through the use of public key infrastructure and its core components: certificates, certificate authorities, and their relationship with web browsers.
Trust in SSL-based communications is provided by Certificate Authorities (CAs) in the form of signed certificates. Checking the validity of a certificate involves three steps: (i) checking its expiration date, (ii) verifying its signature, and (iii) ensuring that it is not revoked. Currently, such certificate revocation checks are done either via Certificate Revocation Lists (CRLs) or Online Certificate Status Protocol (OCSP) servers. Unfortunately, despite these revocation checks, sophisticated cyber-attackers may trick web browsers into trusting a revoked certificate, believing that it is still valid. Consequently, the web browser will communicate (over TLS) with web servers controlled by cyber-attackers. Although frequently updated, nonced, and timestamped certificates may reduce the frequency and impact of such cyber-attacks, they impose a very large overhead on the CAs and OCSP servers, which must now timestamp and sign, on a regular basis, responses for every certificate they have issued. To mitigate this overhead and provide a solution to the described cyber-attacks, we present CCSP: a new approach to provide timely information regarding the status of certificates, which capitalizes on a newly introduced notion called signed collections. In this paper, we present the design, preliminary implementation, and evaluation of CCSP in general, and signed collections in particular. Our preliminary results suggest that CCSP (i) reduces space requirements by more than an order of magnitude, (ii) lowers the number of signatures required by six orders of magnitude compared to OCSP-based methods, and (iii) adds only a few milliseconds of overhead to the overall user latency.
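As a concrete reference for the three validity checks listed above, the library-style Go sketch below performs the expiry and issuer-signature checks with the standard crypto/x509 package and leaves the revocation step as a stub, since that missing piece is exactly what CRLs, OCSP, or a CCSP-style signed collection must supply; checkRevocation is a placeholder of our own, not part of CCSP.

```go
package certcheck

import (
	"crypto/x509"
	"errors"
	"time"
)

// checkRevocation is a placeholder for step (iii): in practice this would query
// a CRL, an OCSP responder, or a signed-collection service.
func checkRevocation(cert *x509.Certificate) error {
	return errors.New("revocation status unknown: no revocation source queried")
}

// validate performs the three checks from the text: expiry, signature, revocation.
func validate(cert, issuer *x509.Certificate, now time.Time) error {
	// (i) expiration dates
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		return errors.New("certificate is outside its validity period")
	}
	// (ii) issuer signature over the certificate
	if err := cert.CheckSignatureFrom(issuer); err != nil {
		return err
	}
	// (iii) revocation status
	return checkRevocation(cert)
}
```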
Increasing interest in cyber-physical systems, which integrate computational and physical capabilities and can interact with humans, can be identified in research and practice. Since these systems can be classified as safety- and security-critical, the need for safety and security assurance and certification will grow. Moreover, these systems are typically characterized by fragmentation, interconnectedness, heterogeneity, short release cycles, a cross-organizational nature, and high interference between safety and security requirements. These properties, combined with the need to assure compliance with multiple standards, to carry out certification and re-certification, and the lack of an approach to model, document, and integrate safety and security requirements, represent a major challenge. To address this gap, we developed a domain-agnostic approach to model security and safety requirements in an integrated view to support certification processes during the design and run-time phases of cyber-physical systems.
Because of the nature of vehicular communications, security is a crucial aspect. It involves the continuous development and analysis of existing security architectures, as well as specific theoretical and practical proposals that require ongoing updates and integration with newer technologies. Before any such update, however, good knowledge of the current state is mandatory. Identifying weaknesses and anticipating possible risks of vehicular communication networks through a failure modes and effects analysis (FMEA) is an important part of the security analysis process and a valuable step in finding efficient security solutions for all kinds of problems that might occur in these systems.
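FMEA itself is not detailed in the abstract; as a reminder of how it ranks risks, the sketch below computes the conventional risk priority number (RPN = severity x occurrence x detection, each rated 1-10) for a few hypothetical vehicular-communication failure modes. The failure modes and ratings are invented purely for illustration.

```go
package main

import (
	"fmt"
	"sort"
)

// failureMode holds the three standard FMEA ratings, each on a 1-10 scale.
type failureMode struct {
	name                            string
	severity, occurrence, detection int
}

// rpn is the conventional risk priority number used to rank failure modes.
func (f failureMode) rpn() int { return f.severity * f.occurrence * f.detection }

func main() {
	// Hypothetical failure modes for a vehicular communication stack.
	modes := []failureMode{
		{"GNSS spoofing of position beacons", 8, 4, 6},
		{"Replay of emergency brake messages", 9, 3, 5},
		{"Certificate revocation list not delivered", 7, 5, 7},
	}
	// Rank by descending RPN so the riskiest items are addressed first.
	sort.Slice(modes, func(i, j int) bool { return modes[i].rpn() > modes[j].rpn() })
	for _, m := range modes {
		fmt.Printf("RPN %3d  %s\n", m.rpn(), m.name)
	}
}
```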
Recently perceived vulnerabilities in public key infrastructures (PKIs) make a semantic or cognitive definition of trust essential for augmenting security through trust formulations. In this paper, we examine the meaning of trust in PKIs. Properly categorized trust can help in developing intelligent algorithms that adapt to the security and privacy requirements of clients. We delineate the different types of trust in a generic PKI model.
Advanced Metering Infrastructure (AMI) forms a communication network for the collection of power data from smart meters in the Smart Grid. As the communication within an AMI needs to be secure, key management becomes an issue due to overhead and limited resources. While using public keys eliminates some of the overhead of key management, there are still challenges regarding the certificates that store and certify the public keys. In particular, distribution and storage of the certificate revocation list (CRL) is a major challenge due to the cost of distribution and storage in AMI networks, which typically consist of wireless multi-hop networks. Motivated by the need to keep CRL distribution and storage cost-effective and scalable, in this paper we present a distributed CRL management model utilizing the idea of distributed hash trees (DHTs) from peer-to-peer (P2P) networks. The basic idea is to share the burden of storing CRLs among all the smart meters by exploiting their meshing capability. Using DHTs thus not only reduces the space requirements for CRLs but also makes CRL updates more convenient. We implemented this structure in ns-3 using the IEEE 802.11s mesh standard as a model for AMI and demonstrated its superior performance with respect to traditional methods of CRL management through extensive simulations.
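The abstract leaves the DHT layout implicit; the sketch below shows one plausible way (an assumption, not the paper's protocol) to split a CRL across meters: each revoked serial is hashed onto a ring and stored at the meter whose identifier comes next clockwise, so every node holds only a slice of the list and a lookup is routed to a single responsible node.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// point hashes a string onto a 32-bit ring.
func point(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

// ring maps each meter ID to its ring position and answers "who stores this serial?".
type ring struct {
	points []uint32
	meters map[uint32]string
}

func newRing(meterIDs []string) *ring {
	r := &ring{meters: map[uint32]string{}}
	for _, id := range meterIDs {
		p := point(id)
		r.points = append(r.points, p)
		r.meters[p] = id
	}
	sort.Slice(r.points, func(i, j int) bool { return r.points[i] < r.points[j] })
	return r
}

// responsible returns the meter that stores a revoked serial: the first ring
// position at or after the serial's hash, wrapping around at the end.
func (r *ring) responsible(serial string) string {
	p := point(serial)
	i := sort.Search(len(r.points), func(i int) bool { return r.points[i] >= p })
	if i == len(r.points) {
		i = 0
	}
	return r.meters[r.points[i]]
}

func main() {
	r := newRing([]string{"meter-01", "meter-02", "meter-03", "meter-04"})
	for _, serial := range []string{"cert-1001", "cert-1002", "cert-1003"} {
		fmt.Printf("%s -> stored at %s\n", serial, r.responsible(serial))
	}
}
```

Consistent hashing of this kind also keeps updates local: adding or removing a meter only moves the CRL entries that fall between it and its ring neighbor.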
Evolution in software systems is a necessary activity that occurs when bugs are fixed, functionality is added, or system quality is improved. Systems often need to be shown to comply with regulatory standards. Along with demonstrating compliance, an artifact called an assurance case is often produced to show that the system indeed satisfies the property imposed by the standard (e.g., safety, privacy, security, etc.). Since each of the system, the standard, and the assurance case can be represented as a model, we propose the extension and use of traditional model management operators to aid in the reuse of parts of the assurance case when the system undergoes an evolution. Specifically, we present a model management approach that eventually produces a partial evolved assurance case and guidelines to help the assurance engineer complete it. We demonstrate how our approach works on an automotive subsystem regulated by the ISO 26262 standard.
In highly configurable systems, the configuration space is too big for (re-)certifying every configuration in isolation. In this project, we combine software analysis with network analysis to detect which configuration options interact and which have only local effects. Instead of analyzing a system such as Linux and SELinux for every combination of configuration settings one by one (>10^2000 combinations even when considering compile-time configurations only), we analyze the effect of each configuration option once for the entire configuration space. The analysis will guide us toward designs that separate interacting configuration options in a core system and isolate orthogonal and less trusted configuration options from this core.
Numerous cloud service certifications (CSCs) are emerging in practice. However, in their striving to establish the market standard, CSC initiatives proceed independently, resulting in a disparate collection of CSCs that are predominantly proprietary, based on various standards, and differ in terms of scope, audit process, and underlying certification schemes. Although literature suggests that a certification's design influences its effectiveness, research on CSC design is lacking and there are no commonly agreed structural characteristics of CSCs. Informed by data from 13 expert interviews and 7 cloud computing standards, this paper delineates and structures CSC knowledge by developing a taxonomy for criteria to be assessed in a CSC. The taxonomy consists of 6 dimensions with 28 subordinate characteristics and classifies 328 criteria, thereby building foundations for future research to systematically develop and investigate the efficacy of CSC designs as well as providing a knowledge base for certifiers, cloud providers, and users.
Different organizations or countries may adopt different PKI trust models in real applications. On a large scale, all certification authorities (CAs) and end entities form a huge mesh network, so the PKI trust model as a whole is an unstructured mesh. However, the mesh trust model worsens the computational complexity of certification path processing as the number of PKI domains increases. This paper proposes an enhanced mesh trust model for PKI. Key generation and signing are performed in a Trusted Platform Module (TPM) for a higher security level. An algorithm is suggested to improve the performance of certification path processing in this model. The resulting trust model is less complex yet more efficient and robust than existing PKI trust models.
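The path-processing algorithm itself is not given in the abstract; the Go sketch below only illustrates the baseline problem it targets, treating cross-certifications between CAs as directed edges of a graph and finding a certification path from a relying party's trust anchor to the target entity's issuer with a breadth-first search. The graph, CA names, and edges are invented for illustration and are not the paper's enhanced model.

```go
package main

import "fmt"

// crossCerts maps each CA to the CAs it has cross-certified (directed edges).
var crossCerts = map[string][]string{
	"RootA": {"CA1", "CA2"},
	"CA1":   {"CA3"},
	"CA2":   {"CA3", "CA4"},
	"CA3":   {"IssuerX"},
	"CA4":   {},
}

// findPath returns one certification path from a trust anchor to the target
// issuer using breadth-first search, or nil if no path exists.
func findPath(anchor, target string) []string {
	type node struct {
		name string
		path []string
	}
	seen := map[string]bool{anchor: true}
	queue := []node{{anchor, []string{anchor}}}
	for len(queue) > 0 {
		cur := queue[0]
		queue = queue[1:]
		if cur.name == target {
			return cur.path
		}
		for _, next := range crossCerts[cur.name] {
			if !seen[next] {
				seen[next] = true
				queue = append(queue, node{next, append(append([]string{}, cur.path...), next)})
			}
		}
	}
	return nil
}

func main() {
	fmt.Println(findPath("RootA", "IssuerX")) // e.g. [RootA CA1 CA3 IssuerX]
}
```

In a full implementation each edge traversal would also verify the cross-certificate's signature, validity period, and policy constraints before the path is accepted.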