Biblio
Despite decades of research on Internet security, we constantly hear about mega data breaches and malware infections affecting hundreds of millions of hosts. The key reason is that the current threat model of the Internet relies on two assumptions that no longer hold true: (1) Web servers hosting the content are secure, and (2) each Internet connection starts at the original content provider and terminates at the content consumer. Today, Internet security is merely patched on top of the TCP/IP protocol stack. To achieve comprehensive security for the Internet, we believe a clean-slate approach must be adopted in which a content-based security model is employed. Named Data Networking (NDN), envisioned as the next-generation Internet architecture based on a content-centric communication model, is a step in this direction. NDN is being designed with security as a key requirement, supporting content integrity, authenticity, confidentiality, and privacy. Meeting this requirement, however, raises several challenges, especially in large operational environments and resource-constrained networks. In this paper, we explore the security challenges in achieving comprehensive content security in NDN and propose a research agenda to address some of them.
Vehicular ad hoc networks (VANETs) face many security and privacy challenges because transmitted data are broadcast in an open-access environment. Most drivers want to keep their information private and protected and do not want to share confidential details. The private information of drivers participating in the network must therefore be protected against threats that may damage their privacy; this is why confidentiality, integrity, and availability are the key security requirements in VANETs. This paper focuses on security threats in vehicular networks, particularly threats to availability. We consider a rational attacker who chooses its attack based on the defender's strategy in order to maximize its own gain. Our aim is to provide reliability and privacy for the VANET system by preventing attackers from violating and endangering the network. To this end, we adopt a tree structure, called an attack tree, to model the attacker's potential attack strategies, and we attach countermeasures to the attack tree to build an attack-defense tree for defending against these attacks.
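As an illustration of the data structure this entry relies on, the following is a minimal sketch of an attack-defense tree with AND/OR refinement and boolean countermeasures; the node names and the availability example are hypothetical, not taken from the paper.

    # Toy attack-defense tree: attack nodes combine children with AND/OR;
    # a deployed countermeasure attached to a node neutralizes it.
    class ADNode:
        def __init__(self, name, gate="OR", children=None, countermeasure=None):
            self.name = name
            self.gate = gate                        # "OR" or "AND" refinement
            self.children = children or []          # sub-attacks
            self.countermeasure = countermeasure    # name of a defense, if any
            self.countermeasure_deployed = False

        def succeeds(self):
            """True if the attack rooted at this node still succeeds."""
            if self.countermeasure and self.countermeasure_deployed:
                return False
            if not self.children:                   # leaf attack step
                return True
            results = [c.succeeds() for c in self.children]
            return all(results) if self.gate == "AND" else any(results)

    # Hypothetical availability attack on a VANET
    jam = ADNode("radio jamming", countermeasure="channel hopping")
    flood = ADNode("message flooding", countermeasure="rate limiting")
    dos = ADNode("deny availability", gate="OR", children=[jam, flood])

    flood.countermeasure_deployed = True
    print(dos.succeeds())   # True: the jamming path is still open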
As the Internet has grown, the requirements for online and offline data storage have increased. Large storage IT projects involve high costs and a high level of business risk. A storage service provider (SSP) offers storage space and management, as well as backup and archiving. Despite this, many companies worry about the security, privacy, and integrity of outsourced data. As a solution, File Assured Deletion (FADE) is a system built upon standard cryptographic techniques. By encrypting outsourced data files, it aims to guarantee their privacy and integrity and, most importantly, to assuredly delete files so that they become unrecoverable to anybody (including those who manage the cloud storage) upon revocation of the corresponding file access policies. Unfortunately, this system remains weak if the key manager's security is compromised. Our work provides a new scheme that aims to improve the security of FADE by using a TPM (Trusted Platform Module) to safely store keys, passwords, and digital certificates.
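The core mechanism this entry builds on (wrapping each file's data key with a policy key so that revoking the policy makes the file unrecoverable) can be sketched as follows. This is a simplified illustration using the Python cryptography package's Fernet construction, not FADE's actual key hierarchy; the TPM sealing step proposed in the paper is only indicated by a comment.

    from cryptography.fernet import Fernet

    # Each file is encrypted with its own data key; the data key is wrapped
    # with a policy key held by the key manager (in the proposed scheme the
    # policy key would be protected by the TPM rather than plain software).
    policy_key = Fernet.generate_key()      # would be sealed by the TPM
    data_key = Fernet.generate_key()

    ciphertext = Fernet(data_key).encrypt(b"outsourced file contents")
    wrapped_data_key = Fernet(policy_key).encrypt(data_key)
    # Only `ciphertext` and `wrapped_data_key` are stored in the cloud.

    # Normal access: unwrap the data key with the policy key, then decrypt.
    recovered_key = Fernet(policy_key).decrypt(wrapped_data_key)
    print(Fernet(recovered_key).decrypt(ciphertext))

    # Assured deletion: revoking the policy destroys the policy key, so the
    # wrapped data key (and hence the file) becomes unrecoverable.
    policy_key = None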
Cloud computing is a new computing paradigm that is attracting ever more computer users, government agencies, and businesses. It brings many advantages, particularly ubiquitous services that anyone can access over the Internet. With cloud computing, a company no longer needs to own the physical servers or hardware that support its computer systems, networks, and Internet services. One of the central services offered by cloud technology is storing data in remote storage space. In recent years, data storage has been recognized as an important problem in information technology. Cloud data storage raises a set of significant policy issues, including privacy, anonymity, security, government surveillance, telecommunication capacity, liability, and reliability, among others. Although cloud technology provides many benefits, security is the major issue between the customer and the cloud. Cloud providers typically serve customers such as academia, enterprises, and ordinary users, who have various incentives to move to the cloud. If the clients are academic, security affects computing performance, and for this type of client the provider needs to find a way to combine performance and security. In this paper the most significant issue is security, but viewed from a different perspective: high performance may not be as critical for such clients as it is for academia. We design an efficient, secure, and verifiable outsourcing protocol for outsourced data, and develop an extended quadratic programming (QP) protocol for storing and outsourcing data securely. To check correctness, we validate the result returned by the cloud using the Karush-Kuhn-Tucker (KKT) conditions, which are necessary and sufficient for the optimal solution.
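For reference, the verification step rests on the standard KKT conditions of a convex QP. In generic notation (not necessarily the paper's), for the problem $\min_x \tfrac{1}{2}x^\top Q x + c^\top x$ subject to $Ax \le b$ with $Q$ positive semidefinite, a returned pair $(x^*, \lambda^*)$ is optimal exactly when:

    \begin{aligned}
    Q x^* + c + A^\top \lambda^* &= 0 && \text{(stationarity)}\\
    A x^* &\le b && \text{(primal feasibility)}\\
    \lambda^* &\ge 0 && \text{(dual feasibility)}\\
    \lambda_i^*\,(A x^* - b)_i &= 0 \quad \forall i && \text{(complementary slackness)}
    \end{aligned}

Because $Q$ is positive semidefinite, these conditions are both necessary and sufficient, so the client can check the cloud's answer directly without re-solving the QP.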
Cloud storage can provide outsourced data services for both organizations and individuals. However, cloud storage still faces many challenges, e.g., public integrity auditing, support for dynamic data, and low computational audit cost. To address these problems, a number of techniques have been proposed. Recently, Tian et al. proposed a novel public auditing scheme for secure cloud storage based on a new data structure, the dynamic hash table (DHT). The authors claimed that their scheme was proven secure. Unfortunately, through our security analysis, we find that the scheme suffers from one attack and one security shortcoming. The attack is that an adversary can forge data and destroy the correctness of files without being detected. The shortcoming is that the update operations on data blocks are vulnerable and easy to modify. Finally, we give countermeasures to remedy these security problems.
Complex safety-critical devices require dependable communication. Dependability includes confidentiality and integrity as much as safety. Encrypting gateways with demilitarized zones, Multiple Independent Levels of Security architectures, and the infamous air gap are diverse integration patterns for safety-critical infrastructure. However, resource-restricted embedded safety devices still lack simple, certifiable, and efficient cryptography implementations. Following the recommended formal-methods approach for safety-critical devices, we have implemented proven cryptography algorithms in the qualified model-based language Scade as the Safety Leveraged Implementation of Data Encryption (SLIDE) library. Optimizations for the synchronous dataflow language are discussed in the paper. The implementation of public-key-based encryption and authentication is evaluated for real-world performance. Feasibility is shown by execution-time benchmarks on an industrial safety microcontroller platform running a train control safety application.
Symmetric block ciphers, a core element for building cryptographic communication systems and protocols, are used to provide message confidentiality, authentication, and integrity. Various limitations in hardware and software resources, especially in terminal devices used in mobile communications, affect the selection of an appropriate cryptosystem and its parameters. In this paper, implementations of three symmetric ciphers (DES, 3DES, AES) used in different operating modes are analyzed on the Android platform. The cryptosystems' performance is analyzed in different scenarios using several variable parameters: cipher, key size, plaintext size, and number of threads. The influence of the parallelization supported by multi-core CPUs on cryptosystem performance is also analyzed. Finally, some conclusions about parameter selection for optimal efficiency are given.
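The measurements in this entry are taken on Android; as a language-neutral illustration of the benchmarking methodology (vary the cipher, key size, and plaintext size and time the encryption), here is a minimal Python sketch using the cryptography package for AES in CBC mode only. All sizes and iteration counts are arbitrary choices for the example.

    import os, time
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def bench_aes_cbc(key_bits, plaintext_bytes, iterations=100):
        key = os.urandom(key_bits // 8)
        iv = os.urandom(16)
        data = os.urandom(plaintext_bytes)   # a multiple of the 16-byte block
        start = time.perf_counter()
        for _ in range(iterations):
            enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
            enc.update(data)
            enc.finalize()
        elapsed = time.perf_counter() - start
        return plaintext_bytes * iterations / elapsed / 1e6   # MB/s

    for key_bits in (128, 192, 256):
        for size in (1024, 65536, 1048576):
            print(f"AES-{key_bits} CBC, {size} B: {bench_aes_cbc(key_bits, size):.1f} MB/s")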
Recent proposals for trusted hardware platforms, such as Intel SGX and the MIT Sanctum processor, offer compelling security features but lack formal guarantees. We introduce a verification methodology based on a trusted abstract platform (TAP), a formalization of idealized enclave platforms along with a parameterized adversary. We also formalize the notion of secure remote execution and present machine-checked proofs showing that the TAP satisfies the three key security properties that entail secure remote execution: integrity, confidentiality and secure measurement. We then present machine-checked proofs showing that SGX and Sanctum are refinements of the TAP under certain parameterizations of the adversary, demonstrating that these systems implement secure enclaves for the stated adversary models.
As the use of cloud computing and autonomous computing increases, integrity verification of the software stack used in a system becomes a critical issue. In this paper, we analyze the internal behavior of IMA (Integrity Measurement Architecture), one of the best-known integrity verification frameworks in the Linux kernel. For integrity verification, IMA measures all executables and their configuration files in a trusted manner using the TPM (Trusted Platform Module). Our analysis reveals two obstacles in IMA: measurement overhead and nondeterminism. To address these problems, we propose two novel techniques, called batch extend and core measurement. The former accumulates the measured values of executables/files and extends them into the TPM in a batch fashion. The latter measures only specified executables/files, so that it verifies the core integrity of a system in which a user or a remote party is interested. An evaluation based on a real implementation shows that our proposal can reduce the boot time from 122 to 23 seconds while supporting the same integrity verification capability as the default IMA policy.
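The batch extend technique described here (accumulate the measured digests and fold them into the TPM with a single extend instead of one extend per file) can be sketched as follows. The PCR is simulated in software for illustration; a real implementation would issue the TPM's PCR extend operation once per batch.

    import hashlib

    def measure(contents: bytes) -> bytes:
        """Measurement of one executable/config file (SHA-256 of its contents)."""
        return hashlib.sha256(contents).digest()

    def pcr_extend(pcr: bytes, value: bytes) -> bytes:
        """Simulated PCR extend: new_pcr = H(old_pcr || value)."""
        return hashlib.sha256(pcr + value).digest()

    files = [b"/sbin/init contents", b"/usr/sbin/sshd contents", b"sshd_config contents"]

    # Default IMA behaviour: one TPM extend per measured file (many slow TPM ops).
    pcr = bytes(32)
    for f in files:
        pcr = pcr_extend(pcr, measure(f))

    # Batch extend: accumulate the measurements, then extend the TPM only once.
    accumulator = hashlib.sha256()
    for f in files:
        accumulator.update(measure(f))
    batch_pcr = pcr_extend(bytes(32), accumulator.digest())

    print(pcr.hex())        # per-file chaining
    print(batch_pcr.hex())  # same measurements, a single TPM operation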
Verifying the integrity of outsourced data is a classic, well-studied problem. However, current techniques have fundamental performance and concurrency limitations for update-heavy workloads. In this paper, we investigate the potential advantages of deferred and batched verification rather than the per-operation verification used in prior work. We present Concerto, a comprehensive key-value store designed around this idea. Using Concerto, we argue that deferred verification preserves the utility of online verification and improves concurrency, resulting in orders-of-magnitude performance improvements. On standard benchmarks, the performance of Concerto is within a factor of two of state-of-the-art key-value stores without integrity.
Delegated authorization protocols have become widespread for implementing Web applications and services: popular providers managing identity information and personal data allow their users to delegate third-party Web services to access their data. In this paper, we analyze the risks related to untrusted providers not behaving correctly, and we address this problem by proposing the first verifiable delegated authorization protocol that allows third-party services to verify the correctness of the users' data returned by the provider. The contribution of the paper is twofold: we show how delegated authorization can be cryptographically enforced through authenticated data structure protocols, and we extend the standard OAuth2 protocol to support efficient and verifiable delegated authorization, including database updates and privilege revocation.
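As a minimal sketch of the authenticated-data-structure idea behind verifiable delegated authorization: the third-party service holds (or obtains out of band) a digest of the user's data, and every value the provider returns carries a membership proof checked against that digest. The Merkle-style proof below is purely illustrative and is not the protocol of the paper.

    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    def build_tree(leaves):
        """Return all levels of a Merkle tree over the given leaf hashes."""
        levels = [leaves]
        while len(levels[-1]) > 1:
            prev = levels[-1]
            if len(prev) % 2:                 # duplicate the last node if odd
                prev = prev + [prev[-1]]
            levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
        return levels

    def prove(levels, index):
        """Sibling hashes from leaf to root for the leaf at `index`."""
        proof = []
        for level in levels[:-1]:
            if len(level) % 2:
                level = level + [level[-1]]
            proof.append((level[index ^ 1], index % 2))   # (sibling, is_right_child)
            index //= 2
        return proof

    def verify(leaf_hash, proof, root):
        node = leaf_hash
        for sibling, is_right_child in proof:
            node = h(sibling + node) if is_right_child else h(node + sibling)
        return node == root

    records = [b"email=alice@example.org", b"role=admin", b"phone=555-0100"]
    levels = build_tree([h(r) for r in records])
    root = levels[-1][0]                  # digest published/signed by the provider

    proof = prove(levels, 1)
    print(verify(h(records[1]), proof, root))      # True: authentic response
    print(verify(h(b"role=guest"), proof, root))   # False: tampered response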
Data assurance and resilience are crucial security issues in cloud-based IoT applications. With the widespread adoption of drones in IoT scenarios such as warfare, agriculture, and delivery, effective solutions to protect data integrity and the communications between drones and the control system are in urgent demand to prevent potential vulnerabilities that may cause heavy losses. To secure drone communication during data collection and transmission, as well as to preserve the integrity of collected data, we propose a distributed solution that utilizes blockchain technology alongside the traditional cloud server. Instead of registering the drone itself on the blockchain, we anchor the hashed data records collected from drones to the blockchain network and generate a blockchain receipt for each data record stored in the cloud, reducing the burden on drones, which are limited in battery and processing capability, while gaining enhanced security guarantees for the data. This paper presents the idea of securing drone data collection and communication in combination with a public blockchain for provisioning data integrity and cloud auditing. The evaluation shows that our system is a reliable and distributed system for drone data assurance and resilience, with acceptable overhead and scalability for a large number of drones.
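The anchoring workflow this entry describes (store the full record in the cloud, anchor only its hash on the blockchain, and keep a receipt for later auditing) can be sketched as below. BlockchainClient is a hypothetical stand-in for whatever public-chain anchoring API the system would actually use.

    import hashlib, json, time

    class BlockchainClient:
        """Hypothetical stand-in for a public blockchain anchoring service."""
        def __init__(self):
            self._anchored = {}

        def anchor(self, digest: str) -> dict:
            receipt = {"digest": digest, "txid": f"tx-{len(self._anchored)}",
                       "timestamp": time.time()}
            self._anchored[digest] = receipt
            return receipt

        def lookup(self, digest: str):
            return self._anchored.get(digest)

    cloud_storage = {}                       # stands in for the cloud server
    chain = BlockchainClient()

    def collect(drone_id: str, payload: dict) -> dict:
        record = json.dumps({"drone": drone_id, "payload": payload}, sort_keys=True)
        digest = hashlib.sha256(record.encode()).hexdigest()
        cloud_storage[digest] = record       # the full record goes to the cloud
        return chain.anchor(digest)          # only the hash goes on-chain

    def audit(digest: str) -> bool:
        record = cloud_storage.get(digest)
        receipt = chain.lookup(digest)
        return (record is not None and receipt is not None
                and hashlib.sha256(record.encode()).hexdigest() == receipt["digest"])

    receipt = collect("drone-7", {"gps": [40.0, -74.0], "temp_c": 21.5})
    print(audit(receipt["digest"]))          # True unless the cloud copy is altered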
Cloud computing enables the outsourcing of big data analytics, where a third-party server is responsible for data management and processing. In this paper, we consider the outsourcing model in which a third-party server provides record matching as a service. In particular, given a target record, the service provider returns all records from the outsourced dataset that match the target according to specific distance metrics. Identifying matching records in databases plays an important role in information integration and entity resolution. A major security concern of this outsourcing paradigm is whether the service provider returns correct record matching results. To address the problem, we design EARRING, an Efficient Authentication of outsouRced Record matchING framework. EARRING requires the service provider to construct a verification object (VO) for the record matching results. From the VO, the client is able to catch any incorrect result at low computational cost. Experimental results on real-world datasets demonstrate the efficiency of EARRING.
SoCs implementing security modules should be both testable and secure. Oversights in a design's test structure could expose internal modules, creating security vulnerabilities during test. In this paper, for the first time, we propose a novel automated security vulnerability analysis framework to identify violations of confidentiality, integrity, and availability policies caused by test structures and designer oversights during SoC integration. Results demonstrate existing information-leakage vulnerabilities in implementations of various encryption algorithms and secure microprocessors. These can be exploited to obtain secret keys, control finite state machines, or gain unauthorized access to memory read/write functions.
Cyber-Physical Systems (CPS) represent a fundamental link between information technology (IT) systems and the devices that control industrial production and maintain critical infrastructure services that support our modern world. Increasingly, the interconnections among CPS and IT systems have created exploitable security vulnerabilities due to a number of factors, including a legacy of weak information security applications on CPS and the tendency of CPS operators to prioritize operational availability at the expense of integrity and confidentiality. As a result, CPS are subject to a number of threats from cyber attackers and cyber-physical attackers, including denial of service and even attacks against the integrity of the data in the system. The effects of these attacks extend beyond mere loss of data or the inability to access information system services: attacks against CPS can cause physical damage in the real world. This paper reviews the challenges of providing information assurance services for CPS that operate critical infrastructure systems and industrial control systems. The methods reviewed include thorough measures to close integrity and confidentiality gaps in CPS and processes that highlight the security risks which remain. The paper also outlines approaches to reduce the overhead and complexity of security methods, and examines novel approaches, including covert communication channels, to increase CPS security.
H.264/SVC (Scalable Video Coding) codestreams, which consist of a single base layer and multiple enhancement layers, are designed for quality, spatial, and temporal scalabilities. They can be transmitted over networks of different bandwidths and seamlessly accessed by various terminal devices. With a huge amount of video surveillance and various devices becoming an integral part of the security infrastructure, the industry is currently starting to use the SVC standard to process digital video for surveillance applications such that clients with different network bandwidth connections and display capabilities can seamlessly access various SVC surveillance (sub)codestreams. In order to guarantee the trustworthiness and integrity of received SVC codestreams, engineers and researchers have proposed several authentication schemes to protect video data. However, existing algorithms cannot simultaneously satisfy both efficiency and robustness for SVC surveillance codestreams. Hence, in this article, a highly efficient and robust authentication scheme, named TrustSSV (Trust Scalable Surveillance Video), is proposed. Based on quality/spatial scalable characteristics of SVC codestreams, TrustSSV combines cryptographic and content-based authentication techniques to authenticate the base layer and enhancement layers, respectively. Based on temporal scalable characteristics of surveillance codestreams, TrustSSV extracts, updates, and authenticates foreground features for each access unit dynamically with background model support. Using SVC test sequences, our experimental results indicate that the scheme is able to distinguish between content-preserving and content-changing manipulations and to pinpoint tampered locations. Compared with existing schemes, the proposed scheme incurs very small computation and communication costs.
This paper analyzes the authenticated encryption algorithm ACORN, a candidate in the CAESAR cryptographic competition. We identify weaknesses in the state update function of ACORN which result in collisions in the internal state of ACORN. This paper shows that, for a given set of key and initialization vector values, we can construct two distinct input messages that result in a collision in the ACORN internal state. Using a standard PC, the collision can be found almost instantly when the secret key is known. This flaw can be used by a message sender to create a forged message which will be accepted as legitimate.
A Mobile Ad-Hoc Network (MANET) is a wireless networking paradigm in which mobile hosts are connected by wireless links without the usual routing infrastructure of fixed routers. Dynamic Source Routing (DSR) is one of the most widely used routing protocols for packet transfer from source to destination. It relies on maintaining the most recent information, for which each ad hoc node maintains hop count and sequence number fields. These fields are vulnerable to security attacks due to their mutable nature. Moreover, routing updates are transmitted in clear text, which again poses a security hazard. In this paper, we propose an improved version of the DSR routing protocol using a homomorphic encryption scheme, which prevents pollution attacks and maintains the integrity security standard while following the minimum hop-count path. The HDSR routing scheme is evaluated by simulation, and the results show that improved throughput and end-to-end (ETE) delay can be obtained.
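The abstract does not spell out the homomorphic scheme, so purely as an illustration of the underlying idea (intermediate nodes increment an encrypted hop count without ever decrypting it), here is a sketch using the python-paillier (phe) package; the packet layout and node roles are assumptions made for the example.

    from phe import paillier   # pip install phe

    # Keys generated by the source; the verifier holds the private key.
    public_key, private_key = paillier.generate_paillier_keypair()

    # The route request starts with an encrypted hop count of 0.
    encrypted_hops = public_key.encrypt(0)

    # Each forwarding node adds 1 homomorphically, without seeing the plaintext.
    for _ in ["node_A", "node_B", "node_C"]:
        encrypted_hops = encrypted_hops + 1

    # The verifier decrypts and checks the hop count of the reported path.
    print(private_key.decrypt(encrypted_hops))   # 3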
Cloud storage has by now been accepted by an increasing number of people and is no longer a novel notion. It brings cloud users many conveniences, such as relief from local storage and location-independent access. Nevertheless, the correctness, completeness, and privacy of outsourced data are what worry cloud users. As a result, many people are unwilling to store data in the cloud, for fear that sensitive and important information might be disclosed. Only when people feel worry-free can they accept cloud storage more easily. Certainly, many experts have taken this problem into consideration and tried to solve it. In this paper, we survey the solutions to the problems concerning auditing in cloud computing and compare them. The methods and performance, as well as the pros and cons, of state-of-the-art auditing protocols are discussed.