Bibliography
The proposed framework has two objectives: to issue certificates online and to provide three-level security through DNA cryptography. DNA cryptography converts data into a DNA sequence, a string over the four-letter alphabet A, C, G and T, where each letter corresponds to a nucleotide. DNA can be used to store data, transmit data and perform computation on data. The paper implements three levels of cryptography; the receiver applies decryption to recover the readable message from its unreadable form. DNA cryptography provides stronger security than other cryptographic approaches, but encoding and decoding carry a higher time complexity and the data can still be attacked. This paper therefore implements a fast three-level DNA cryptography for MeeSeva services.
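As a point of reference for the encoding step mentioned above, the following is a minimal sketch of mapping binary data onto the DNA alphabet; the 2-bit-to-base table (00->A, 01->C, 10->G, 11->T) is an illustrative assumption, not the paper's exact coding.

```python
# Minimal binary <-> DNA encoding sketch (illustrative mapping, not the paper's table).
BIN2BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE2BIN = {v: k for k, v in BIN2BASE.items()}

def to_dna(data: bytes) -> str:
    # Expand bytes to a bit string, then map each 2-bit pair to a base.
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BIN2BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(seq: str) -> bytes:
    # Reverse the mapping and regroup bits into bytes.
    bits = "".join(BASE2BIN[b] for b in seq)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

assert from_dna(to_dna(b"MeeSeva")) == b"MeeSeva"
```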
Data security is a challenging issue nowadays, given the growth in information capacity and transmission rates. The most common and widely used techniques in the data-security field are cryptography and steganography, and combining them provides more security for the data. DNA (deoxyribonucleic acid) is now being explored as a new carrier for data security, since it offers strong protection with high capacity and a low modification rate. A new data security method can be developed by combining the advantages of DNA-based AES (Advanced Encryption Standard) cryptography and DNA steganography, providing multilayer security for the secret message. The secret message is first encoded into DNA bases, then a DNA-based AES algorithm is applied to it, and finally the encrypted DNA is concealed in another DNA sequence. This hybrid technique provides triple-layer security for the secret message.
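A rough sketch of the encode-encrypt-conceal pipeline follows, assuming the pyca/cryptography package for the AES layer; the stride-based insertion is a simplified stand-in for DNA steganography and not the paper's concealment algorithm.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_dna_payload(key: bytes, dna_message: str) -> bytes:
    # AES layer: encrypt the DNA-encoded message; the nonce is prepended for decryption.
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, dna_message.encode(), None)

def conceal(carrier: str, payload: str, stride: int = 4) -> str:
    # Steganography stand-in: insert one payload symbol after every `stride` carrier
    # bases (in the full pipeline the ciphertext would first be re-encoded into bases).
    out, i = [], 0
    for symbol in payload:
        out.append(carrier[i:i + stride] + symbol)
        i += stride
    return "".join(out) + carrier[i:]

key = AESGCM.generate_key(bit_length=128)
ciphertext = encrypt_dna_payload(key, "ACGTTGCA")
stego = conceal("ACGT" * 16, "TGCA")   # payload shown as bases purely for illustration
```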
This article presents PrOLoc, a localization system that combines partially homomorphic encryption with a new way of structuring the localization problem to enable efficient and accurate computation of a target's location while preserving the privacy of the observers.
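To illustrate the kind of partially homomorphic primitive such a system can build on, here is a minimal textbook Paillier sketch in which multiplying ciphertexts adds plaintexts; the small fixed primes and all parameter choices are purely illustrative and are not PrOLoc's actual construction.

```python
import math, random

def keygen(p=104729, q=104723):
    # Toy key sizes for illustration only; real deployments use large random primes.
    n = p * q
    return n, math.lcm(p - 1, q - 1)   # public modulus n, secret lambda

def encrypt(n, m):
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(n, lam, c):
    n2 = n * n
    L = lambda x: (x - 1) // n
    mu = pow(L(pow(n + 1, lam, n2)), -1, n)
    return (L(pow(c, lam, n2)) * mu) % n

n, lam = keygen()
a, b = 17, 25
# Additive homomorphism: the product of ciphertexts decrypts to the sum of plaintexts.
c = (encrypt(n, a) * encrypt(n, b)) % (n * n)
assert decrypt(n, lam, c) == a + b
```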
Privacy issues in recommender systems have attracted the attention of researchers for many years, and a number of solutions have been proposed. Unfortunately, most of them are far from practical, as they either degrade utility or are very inefficient. In this paper, we aim at a more practical solution by proposing a privacy-preserving hybrid recommender system that consists of an incremental matrix factorization (IMF) component and a user-based collaborative filtering (UCF) component. The IMF component provides the fundamental utility while allowing the service provider to efficiently learn feature vectors in the plaintext domain, and the UCF component improves utility while allowing users to carry out their computations offline. Leveraging somewhat homomorphic encryption (SWHE) schemes, we provide privacy-preserving candidate instantiations for both components. Our experiments demonstrate that the hybrid solution is much more efficient than existing solutions.
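For orientation, the plaintext IMF step can be pictured as one stochastic-gradient update per newly arrived rating; the sketch below uses illustrative hyperparameters and notation that are not taken from the paper.

```python
import numpy as np

def imf_update(U, V, u, i, rating, lr=0.01, reg=0.05):
    # One SGD step on a newly arrived rating (user u, item i).
    err = rating - U[u] @ V[i]
    u_old = U[u].copy()
    U[u] += lr * (err * V[i] - reg * U[u])
    V[i] += lr * (err * u_old - reg * V[i])
    return U, V

rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(100, 8))   # user feature vectors
V = rng.normal(scale=0.1, size=(50, 8))    # item feature vectors
U, V = imf_update(U, V, u=3, i=7, rating=4.0)
```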
Algorithms for oblivious random access machine (ORAM) simulation allow a client, Alice, to obfuscate a pattern of data accesses with a server, Bob, who maintains Alice's outsourced data while trying to learn information about it. We present a novel ORAM scheme that improves the asymptotic I/O overhead of previous schemes for a wide range of size parameters for client-side private memory and message blocks, from logarithmic to polynomial. Our method achieves statistical security for hiding Alice's access pattern and, with high probability, achieves an I/O overhead that ranges from O(1) to O(log^2 n / (log log n)^2), depending on these size parameters, where n is the size of Alice's outsourced memory. Our scheme, which we call BIOS ORAM, combines multiple uses of B-trees with a reduction of ORAM simulation to isogrammic access sequences.
Supervisory Control and Data Acquisition (SCADA) communications are often subjected to sophisticated cyber-attacks, largely because of their static system characteristics, which allow an attacker to profile the target system(s) more easily and thereby impact Critical Infrastructures (CI). In this paper, a novel approach to mitigating such static vulnerabilities is proposed by implementing a Moving Target Defense (MTD) strategy in a power grid SCADA environment, leveraging the existing communication network with an end-to-end IP-hopping technique among trusted peers. The main contribution is the design and implementation of the MTD architecture on Iowa State's PowerCyber testbed for targeted cyber-attacks, without compromising the availability of the SCADA system, together with a study of the delay and throughput characteristics for different hopping rates in a realistic environment. Finally, we study two cases and provide mitigations for potential weaknesses of the proposed mechanism, and we propose incorporating port mutation to further increase attack complexity as part of future work.
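The following is an illustrative sketch of how two trusted peers could derive a synchronized IP-hopping schedule from a shared secret and a coarse time slot; the address pool, slot length, and key handling are assumptions, not the paper's implementation.

```python
import hmac, hashlib, time

POOL = [f"10.0.0.{i}" for i in range(2, 254)]   # hypothetical pool of hop addresses
SLOT_SECONDS = 5                                 # illustrative hopping interval

def current_hop(secret: bytes, slot: int | None = None) -> str:
    # Both peers compute HMAC(secret, slot) and map it into the shared address pool,
    # so they agree on the active address without exchanging it on the wire.
    slot = int(time.time() // SLOT_SECONDS) if slot is None else slot
    digest = hmac.new(secret, str(slot).encode(), hashlib.sha256).digest()
    return POOL[int.from_bytes(digest[:4], "big") % len(POOL)]

assert current_hop(b"shared-key", slot=42) == current_hop(b"shared-key", slot=42)
```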
Distributed Denial of Service (DDoS) is a sophisticated cyber-attack because of its variety of types and techniques. The traditional mitigation method is to deploy dedicated security appliances such as firewalls and load balancers. However, given the limited capacity of such hardware and the potentially high volume of DDoS traffic, these appliances may not be able to defend against all attacks. Cloud-based DDoS protection services were therefore introduced, allowing organizations to redirect their traffic to scrubbing centers in the cloud for filtering; this solution has drawbacks such as privacy violations and added latency. More recently, Network Functions Virtualization (NFV) and edge computing have been proposed as new networking service models. In this paper, we design a framework that leverages NFV and edge computing for DDoS mitigation through a two-stage process.
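A toy per-source token-bucket filter gives a feel for what a lightweight first stage at the edge might do before unmatched traffic is escalated to a second stage; the thresholds and the two-stage split here are assumptions, not the paper's framework.

```python
import time
from collections import defaultdict

RATE, BURST = 100.0, 200.0                       # tokens/second and bucket size (illustrative)
buckets = defaultdict(lambda: [BURST, time.monotonic()])

def allow(src_ip: str) -> bool:
    # Refill the source's bucket, then spend one token per packet if available.
    tokens, last = buckets[src_ip]
    now = time.monotonic()
    tokens = min(BURST, tokens + (now - last) * RATE)
    allowed = tokens >= 1.0
    buckets[src_ip] = [tokens - 1.0 if allowed else tokens, now]
    return allowed   # False could mean "divert to the second-stage scrubbing function"
```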
Privacy preservation in data publication has been an important research field over the past few decades. One of its fundamental challenges is the trade-off between privacy and utility for a single, independent data set. However, recent research has shown that even an advanced privacy mechanism such as differential privacy is vulnerable when multiple data sets are correlated. In this case, the privacy-utility trade-off evolves into a game problem in which each player's payoff depends not only on his own privacy parameter but also on his neighbors' privacy parameters. In this paper, we first present the definition of correlated differential privacy to evaluate the real privacy level of a single data set as influenced by the other data sets. Then we construct a game model of multiple players, each of whom publishes a data set sanitized by differential privacy. Next, we analyze the existence and uniqueness of the pure Nash equilibrium and demonstrate the sufficient conditions for it in the game. Finally, we use the notion of the price of anarchy to evaluate the efficiency of the pure Nash equilibrium.
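As a concrete anchor for the sanitization step each player performs, the standard Laplace mechanism for epsilon-differential privacy is sketched below; the sensitivity and epsilon values are illustrative, and this is the generic mechanism rather than the paper's specific construction.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng=np.random.default_rng()) -> float:
    # Add Laplace noise with scale sensitivity/epsilon to the exact answer.
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

# Example: a count query (sensitivity 1) released under epsilon = 0.5.
noisy_count = laplace_mechanism(1234, sensitivity=1.0, epsilon=0.5)
```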
Cloud computing is a developing IT concept that faces issues which are slowing its evolution and adoption by users across the world, the main concern being the lack of security. Organizations and entities need to ensure, inter alia, the integrity and confidentiality of the sensitive data they outsource to a cloud provider's servers. Solutions have been examined to strengthen security models (strong authentication, encryption and fragmentation before storing, access control policies, and so on). Data remanence, in particular, is undoubtedly a major threat: how can we be sure that data are truly and appropriately deleted from remote servers when deletion is requested? In this paper, we survey this subject and address the problem of residual data in a cloud-computing environment, which is characterized by the use of virtual machines instantiated on remote servers owned by a third party.
With the growing number of cyber-attack incidents, organizations need proactive knowledge of the cybersecurity landscape to defend their resources efficiently. To achieve this, organizations must develop a culture of sharing their threat information with others so that the associated risks can be assessed effectively. However, sharing cybersecurity information is costly for organizations because the information conveys sensitive and private data. Hence, deciding whether to share information is a challenging task that requires resolving the trade-off between the advantages of sharing and the exposure of privacy. On the other hand, cybersecurity information exchange (CYBEX) management is crucial for stabilizing the system by setting correct values for participation fees and sharing incentives. In this work, we model the interaction of organizations, CYBEX, and attackers in a sharing system as a dynamic game. By devising appropriate payoff models for each player, we analyze the best strategies of the entities, incorporating the organizations' privacy component into the sharing model. Using best-response analysis, the simulation results demonstrate the efficiency of our proposed framework.
In recent years, many companies have been shifting toward the cloud to expand their business profits with minimal additional cost. Cloud computing is a growing technology that has emerged from the development of grid computing, virtualization, and utility computing. It is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources, such as networks, servers, storage, applications, and services, that can be rapidly provisioned and released with minimal management effort or service-provider interaction. Huge amounts of data were lost during the Chennai floods of December 2015; had these data been stored at distributed data centers, much of the loss could have been prevented. Although such natural calamities are tempting many users to move to cloud storage, security threats are inhibiting them from doing so. Many solutions have been proposed for these security issues, but they do not give guaranteed security, by which we mean confidentiality, integrity, and availability. Existing techniques for providing security include cryptographic protocols, data sanitization, predicate logic, access control mechanisms, honeypots, sandboxing, erasure coding, RAID (Redundant Arrays of Independent Disks), homomorphic encryption, and split-key encryption. These techniques either cannot work alone or add computational and time complexity. An alternate scheme that combines encryption and channel coding in one go is proposed to increase the level of security. A hybrid encryption scheme is proposed for use in the interleaver block of a turbo coder to avoid burst errors. Hybrid encryption avoids sharing the secret key over an unsecured channel, providing both security and reliability by reducing the error-propagation effect with small additional cost and computational overhead. Time complexity can be reduced when encryption and encoding are done as a single process.
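A compact sketch of hybrid encryption (a fresh AES key wrapped with RSA-OAEP) illustrates why no symmetric key has to travel in the clear; it assumes the pyca/cryptography package and does not include the turbo-coding integration described above.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def hybrid_encrypt(public_key, plaintext: bytes):
    # Symmetric layer: a one-time AES-GCM session key encrypts the data.
    session_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
    # Asymmetric layer: the session key is wrapped with the recipient's RSA public key.
    wrapped_key = public_key.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return wrapped_key, nonce, ciphertext

wrapped, nonce, ct = hybrid_encrypt(recipient_key.public_key(), b"outsourced block")
```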
In the present time, there has been a huge increase in large data repositories held by corporations, governments, and healthcare organizations. These repositories provide opportunities to design and improve decision-making systems by mining trends and patterns from the data, which can provide credible information, for example to improve customer service in healthcare. Consequently, while data sharing is essential, maintaining the privacy of the data donors is an obligation, as data custodians have legal and ethical responsibilities to secure confidentiality. This research proposes a 2-layer privacy-preserving (2-LPP) data sanitization algorithm that satisfies ε-differential privacy for publishing sanitized data and also reduces the re-identification risk of the sanitized data. The proposed algorithm has been implemented and tested with two different data sets, and compared to other existing works the results show promising performance.
Compared to other remote attestation methods, the binary-based approach is the most direct and complete one, but privacy protection becomes an important problem. In this paper, we present an Extended Hash Algorithm (EHA) for privacy protection in binary-based remote attestation. Building on the traditional Merkle hash tree, EHA alters the node-connection algorithm so that the same result is obtained regardless of the measurement order. A secret key is added when the node-connection calculation is performed, which protects the values computed at the Merkle nodes. Our analysis shows that remote attestation using EHA offers better privacy protection and execution performance than other methods.
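One way to picture a keyed, order-insensitive node connection is to hash sibling values in a canonical (sorted) order together with a secret key, so that swapping two siblings does not change their parent; this is an interpretation of the EHA idea under stated assumptions, not the paper's exact construction.

```python
import hmac, hashlib

def combine(key: bytes, left: bytes, right: bytes) -> bytes:
    # Sort the siblings before hashing so their order does not affect the parent value;
    # the HMAC key protects the intermediate node values.
    first, second = sorted((left, right))
    return hmac.new(key, first + second, hashlib.sha256).digest()

def root(key: bytes, leaves: list[bytes]) -> bytes:
    nodes = [hashlib.sha256(x).digest() for x in leaves]
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])            # duplicate the last node on odd levels
        nodes = [combine(key, nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

k = b"attestation-key"
assert root(k, [b"m1", b"m2"]) == root(k, [b"m2", b"m1"])
```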
Privacy and security have been discussed on many occasions, and the importance of these two aspects in the information-systems domain is frequently emphasized. Research often addresses individual security or privacy measures, either focusing on a particular measure or treating privacy and security as a single subject. However, there have been no attempts to establish a proper method for categorizing objects of protection. In this review, we investigate the relationship between privacy and security and break down their aspects in order to provide a better understanding of whether a given measure or methodology is security-oriented, privacy-oriented, or both. We recommend that further research develop a more refined formulation for carrying out this determination. As a result, we propose a Privacy-Security Tree (PST) that distinguishes privacy measures from security measures.
Security issues in IoT-based CPS are exacerbated by human participation in CPHS, owing to vulnerabilities both in the technologies and in the human involvement. A holistic framework for mitigating security threats in the IoT-based CPHS environment is presented to address these issues. We develop a threat model involving human elements in the CPHS environment, and we present research questions, directions, and ideas for securing IoT-based CPHS against collaborative attacks.
After a brief introduction to optical chaotic cryptography, we compare the standard short-cavity, closed-loop, two-laser and three-laser schemes for secure transmission, showing that both are suitable for secure data exchange, with the three-laser scheme offering a slightly better level of privacy due to its symmetrical topology.
Interconnect opens are known to be among the predominant defects in nanoscale technologies. Automatic test pattern generation for open faults is challenging because of their rather unstable behavior and the numerous electrical parameters that need to be considered. Thus, most approaches avoid accurately modeling all constraints, such as the influence of the aggressors on the open net, and use simplified fault models in order to detect as many faults as possible, or make assumptions that decrease both complexity and accuracy. This leads to the problem that not only may generated tests be invalidated, but the localization of a specific fault may also fail when such a model is used as the basis for diagnosis. Furthermore, most of the models do not consider oscillating behavior, caused by feedback introduced by coupling capacitances, which occurs in almost all designs. The Robust Enhanced Aggressor Victim model (REAV) was introduced in [1], and an extension addressing the problem of oscillating behavior in [2]. The resulting model not only considers the influence of all aggressors accurately but also guarantees robustness against oscillating behavior as well as process variations affecting the thresholds of gates driven by an open interconnect. In this work we present the first diagnostic classification algorithm for this model. The algorithm considers all constraints enforced by the REAV model accurately, and hence handles unknown values as well as oscillating behavior. In addition, it allows faults at the same interconnect to be distinguished, thus reducing the area that has to be considered for physical failure analysis. Experimental results show the high efficiency of the new method, which handles circuits with up to 500,000 non-equivalent faults and considerably increases the diagnostic resolution.
Under the Internet of Things (IoT), proof of the coexistence of multiple RFID-tagged objects becomes a very useful mechanism in many application areas such as health care, court evidence, and stores. Yoking-proof schemes address this issue, but all existing schemes require two or more rounds of communication to generate the yoking-proof. In this paper, we investigate the design of one-round yoking-proof schemes. Our contributions are threefold: (1) to confirm the coexistence of an RFID tag pair, we propose a one-round offline yoking-proof scheme with privacy protection; (2) we define a privacy model for yoking-proof schemes, enhance Moriyama's security model for them, and prove the security and privacy of the proposed scheme under our models; (3) we further extend the scheme to the coexistence of m RFID tags, where m > 2, while keeping it one-round. In addition, the proposed technique has an efficiency advantage compared with previous work.
Radio Frequency Identification (RFID) is one of the most important sensing techniques for the Internet of Things (IoT), and RFID systems have been applied in many different fields. However, an RFID system usually communicates over an open wireless radio channel, which poses a serious threat to its privacy and security. The currently popular RFID tags are low-cost passive tags whose computation and storage resources are very limited, so it is not feasible for them to perform complicated cryptographic operations, which makes protecting the security and privacy of an RFID system very difficult. Lightweight authentication protocols are considered an effective approach. Many typical authentication protocols use hash functions and therefore require more computation and storage resources. Based on a CRC function, we propose a lightweight RFID authentication protocol that needs fewer computation and storage resources than hash-based protocols. The protocol exploits an on-chip CRC function and a pseudorandom number generator to ensure the anonymity and freshness of communications between reader and tag. It provides forward security and confidential communication, and it can effectively prevent eavesdropping, location tracing, replay attacks, spoofing, and DoS attacks. It is well suited to RFID systems.
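To convey the flavor of a CRC-plus-PRNG challenge-response exchange, here is a toy sketch; the identifiers, message layout, and key handling are assumptions for illustration and are not the protocol proposed in the paper.

```python
import zlib, secrets

TAG_ID, SHARED_KEY = 0x1A2B3C4D, 0x0F0F0F0F   # hypothetical 32-bit tag ID and key

def tag_response(challenge: int) -> tuple[int, int]:
    # The tag mixes its (masked) identity, the reader's challenge, and a fresh nonce,
    # then answers with a CRC32 over the result.
    nonce = secrets.randbits(32)
    msg = ((TAG_ID ^ SHARED_KEY).to_bytes(4, "big")
           + challenge.to_bytes(4, "big") + nonce.to_bytes(4, "big"))
    return nonce, zlib.crc32(msg)

def reader_verifies(challenge: int, nonce: int, resp: int) -> bool:
    msg = ((TAG_ID ^ SHARED_KEY).to_bytes(4, "big")
           + challenge.to_bytes(4, "big") + nonce.to_bytes(4, "big"))
    return zlib.crc32(msg) == resp

c = secrets.randbits(32)
assert reader_verifies(c, *tag_response(c))
```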
This work investigates the anonymous tag cardinality estimation problem in radio frequency identification systems with a frame-slotted-aloha-based protocol. Each tag, instead of sending its identity upon receiving the reader's request, randomly responds with only one bit in one of the time slots of the frame, for privacy and security. As a result, each slot with no response is observed as empty, while the others are non-empty, and this information can be used for tag cardinality estimation. Nevertheless, under the effects of fading and noise, time slots with tag responses might be observed as empty, while those with no response might be detected as non-empty, a phenomenon known as false detection. The performance of conventional estimation methods is thus degraded by inaccurate observations. To cope with this issue, we propose a new estimation algorithm based on the expectation-maximization method, in which both the tag cardinality and the probability of false detection are iteratively estimated to maximize a likelihood function. Computer simulations show the merit of the proposed method.
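For context, the conventional empty-slot estimate that such an EM method refines can be sketched as follows: with L slots, n tags, and no false detections, a slot is empty with probability (1 - 1/L)^n, so n is recovered by inverting the observed empty-slot fraction. This is the baseline estimator, not the paper's algorithm.

```python
import math, random

def simulate_frame(n_tags: int, n_slots: int) -> int:
    # Each tag responds in one uniformly chosen slot; return the number of empty slots.
    slots = [0] * n_slots
    for _ in range(n_tags):
        slots[random.randrange(n_slots)] += 1
    return sum(1 for s in slots if s == 0)

def estimate_tags(n_slots: int, empty_slots: int) -> float:
    # Invert P(empty) = (1 - 1/L)^n to estimate the tag cardinality n.
    return math.log(empty_slots / n_slots) / math.log(1 - 1 / n_slots)

empty = simulate_frame(n_tags=120, n_slots=256)
print(round(estimate_tags(256, empty)))   # roughly 120 on average
```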
Radio frequency identification (RFID) is one of the key technologies of the Internet of Things and has many security issues in an open environment. To address the communication problems between RFID tags and readers, security protocols have been continually improved as the first line of defense, but attack forms also keep changing as technology develops. In this paper we classify the security protocols and discuss some problems in recent security protocols.