Bibliography
This paper first gives an overall introduction to fingerprint encryption algorithms and then designs a fingerprint encryption algorithm with an added error-correction mechanism. The new algorithm uses a binary sequence generator to produce a random key in the form of polynomial coefficients, encrypts the fingerprint with it, and reconstructs the polynomial by Lagrange interpolation during authentication. Because a cyclic redundancy check (CRC) code is used to identify the correct key, the accuracy of the algorithm is ensured. Experimental results indicate that the fuzzy vault algorithm with error correction realizes template protection well and meets the requirements of biometric information security. They also indicate that the system's security can be enhanced by changing the key length.
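As a concrete illustration of the authentication step described above, the sketch below interpolates a polynomial over a prime field with Lagrange's formula and uses a CRC to pick out the correct key among candidate point subsets. The field modulus, the 4-byte coefficient encoding, and all helper names are illustrative assumptions, not details from the paper.

```python
import binascii
from itertools import combinations

P = 2**31 - 1  # illustrative prime field modulus (assumption, not the paper's)

def poly_mul_linear(b, xj, p):
    """Multiply polynomial b (coefficients low to high) by (x - xj) mod p."""
    out = [0] * (len(b) + 1)
    for t, c in enumerate(b):
        out[t] = (out[t] - xj * c) % p       # -xj * c contributes to x^t
        out[t + 1] = (out[t + 1] + c) % p    # c contributes to x^(t+1)
    return out

def lagrange_interpolate(points, p=P):
    """Recover polynomial coefficients from k points (x, y) over GF(p)."""
    k = len(points)
    coeffs = [0] * k
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis = poly_mul_linear(basis, xj, p)
                denom = denom * (xi - xj) % p
        scale = yi * pow(denom, -1, p) % p   # yi / prod(xi - xj) mod p
        for t, c in enumerate(basis):
            coeffs[t] = (coeffs[t] + scale * c) % p
    return coeffs

def crc_matches(coeffs, expected_crc):
    """The CRC stored at enrollment identifies the correct reconstruction."""
    data = b"".join(c.to_bytes(4, "big") for c in coeffs)
    return binascii.crc32(data) == expected_crc

def unlock_vault(matched_points, degree, expected_crc, p=P):
    """Try (degree + 1)-point subsets until the CRC confirms the key."""
    for subset in combinations(matched_points, degree + 1):
        coeffs = lagrange_interpolate(list(subset), p)
        if crc_matches(coeffs, expected_crc):
            return coeffs  # key recovered as polynomial coefficients
    return None  # authentication fails: no consistent reconstruction
```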
It is notably challenging to design an efficient and secure signature scheme based on error-correcting codes. One approach to building such signature schemes is to derive them from an identification protocol through the Fiat-Shamir transform. All such code-based protocols must be run for several rounds, since each run of the protocol allows a cheating probability of either 2/3 or 1/2. The resulting signature size is proportional to the number of rounds, making the 1/2 cheating-probability version more attractive. We present a signature scheme based on double circulant codes in the rank metric, derived from an identification protocol with a cheating probability of 2/3. We reduce this probability to almost 1/2 to obtain the smallest signature among code-based signature schemes based on the Fiat-Shamir paradigm, around 22 KBytes for a 128-bit security level. Furthermore, among all code-based signature schemes, our proposal has the lowest value of signature plus public key size, and the smallest secret and public key sizes. We provide a security proof in the Random Oracle Model, implementation performance figures, and a comparison with the parameters of similar signature schemes.
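A quick back-of-the-envelope check of the round counts implied above: r independent rounds leave a cheating probability of q^r, so roughly lambda/log2(1/q) rounds are needed for lambda-bit soundness. The snippet below just evaluates that bound; it is not code from the paper.

```python
import math

def rounds_needed(cheat_prob, security_bits=128):
    """Smallest r with cheat_prob**r <= 2**(-security_bits)."""
    return math.ceil(security_bits / -math.log2(cheat_prob))

print(rounds_needed(2 / 3))  # ~219 rounds for a 2/3 cheating probability
print(rounds_needed(1 / 2))  # 128 rounds for a 1/2 cheating probability
```

Since signature size grows linearly with the number of rounds, pushing the cheating probability from 2/3 toward 1/2 cuts the round count by roughly 40%, which is what makes the reduction worthwhile.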
Polar codes polarize the quality of channels into either completely noisy or completely noiseless channels. This paper presents an extrinsic information transfer (EXIT) analysis for iterative decoding of Polar codes to reveal the mechanism of this channel transformation. The purpose of understanding the transformation process is to comprehend the placement of information bits and frozen bits and the security standard of Polar codes. Mutual information is derived based on the EXIT-chart concept for the check nodes and variable nodes of low-density parity-check (LDPC) codes and is applied to Polar codes. This paper explores the quality of the polarized channels at finite block length, which is of interest since the block length is limited in the fifth generation of telecommunications (5G). The paper reveals how the EXIT curves of Polar codes change and explores their polarization characteristics, showing that high mutual-information values for the frozen bits are needed for them to be detectable; otherwise, the error-correction capability of Polar codes decreases drastically. These results are expected to serve as a reference for the development of Polar codes for 5G technologies and beyond.
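For reference, here is a minimal sketch of the EXIT-chart machinery such an analysis builds on: the widely used J-function approximation and the standard variable-node and check-node EXIT curves of LDPC decoding. The node degrees and channel parameter below are illustrative choices, not values from the paper.

```python
import math

# J-function fit constants from the common approximation by Brannstrom et al.
H1, H2, H3 = 0.3073, 0.8935, 1.1064

def J(sigma):
    """Mutual information of a consistent Gaussian LLR with std dev sigma."""
    if sigma <= 0:
        return 0.0
    return (1 - 2 ** (-H1 * sigma ** (2 * H2))) ** H3

def J_inv(I):
    """Inverse of J, valid for 0 < I < 1."""
    return (-(1 / H1) * math.log2(1 - I ** (1 / H3))) ** (1 / (2 * H2))

def exit_variable(I_A, d_v, sigma_ch):
    """Extrinsic MI of a degree-d_v variable node with channel LLR std sigma_ch."""
    return J(math.sqrt((d_v - 1) * J_inv(I_A) ** 2 + sigma_ch ** 2))

def exit_check(I_A, d_c):
    """Extrinsic MI of a degree-d_c check node (duality approximation)."""
    return 1 - J(math.sqrt(d_c - 1) * J_inv(1 - I_A))

# One point of each EXIT curve: a priori MI 0.5, d_v = 3, d_c = 6, sigma_ch = 1.5
print(exit_variable(0.5, 3, 1.5), exit_check(0.5, 6))
```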
We propose a coding scheme for covert communication over additive white Gaussian noise channels, which extends a previous construction for discrete memoryless channels. We first show that sparse signaling with on-off keying fails to achieve the covert capacity, but that a modification allowing the use of binary phase-shift keying for "on" symbols recovers the loss. We then construct a modified pulse-position modulation scheme that, combined with multilevel coding, can achieve the covert capacity with low-complexity error-control codes. The main contribution of this work is to reconcile the tension between diffuse and sparse signaling suggested by earlier information-theoretic results.
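To make the "on"-symbol modification concrete, here is a minimal, hypothetical sketch of a PPM block in which the single active slot carries a BPSK symbol, so the amplitude sign conveys one extra bit. The block size and amplitude are illustrative; this is not the paper's construction.

```python
def ppm_bpsk_block(position_bits, sign_bit, n, amplitude):
    """Encode log2(n) position bits plus one BPSK sign bit in an n-slot block."""
    assert n == 2 ** len(position_bits)
    pos = int("".join(map(str, position_bits)), 2)  # index of the "on" slot
    block = [0.0] * n                                # all other slots stay off
    block[pos] = amplitude if sign_bit == 0 else -amplitude
    return block

# 2 position bits select one of 4 slots; the BPSK sign carries a third bit
print(ppm_bpsk_block([1, 0], 1, 4, 1.0))  # -> [0.0, 0.0, -1.0, 0.0]
```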
As data security has become one of the most crucial issues in modern storage system and application designs, data sanitization techniques are regarded as a promising solution for 3D NAND flash-memory-based devices. Many excellent works have proposed exploiting in-place reprogramming, erasure, and encryption techniques to implement sanitization functionality. However, existing sanitization approaches can incur performance or disturbance overheads, or even leave data decipherable. Different from existing works, this work explores an instantaneous data sanitization scheme that takes advantage of programming-disturbance properties. Our proposed design not only achieves instantaneous data sanitization by properly exploiting programming disturbance and the error-correction code, but also enhances performance with a recycling programming design. The feasibility and capability of the proposed design are evaluated by a series of experiments on 3D NAND flash memory chips, with very encouraging results: the proposed design achieves instantaneous data sanitization with low overhead while improving the average response time by up to 86.8% and reducing the block erase count by up to 88.8%.
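The core idea can be sketched in a few lines: once strictly more bits in a codeword are corrupted than the ECC can correct, the stored data becomes unrecoverable. The model below is purely illustrative; the page size and correction capability are assumed values, and the paper's chip-level mechanism is far more involved.

```python
import random

ECC_CORRECTION_CAPABILITY = 40  # correctable bits per codeword (assumption)

def sanitize_by_disturbance(page_bits, t=ECC_CORRECTION_CAPABILITY):
    """Flip strictly more than t bit positions so ECC decoding must fail."""
    for pos in random.sample(range(len(page_bits)), t + 1):
        page_bits[pos] ^= 1
    return page_bits

page = [0] * 16384            # a 2 KiB page modeled as a bit list
sanitize_by_disturbance(page)
print(sum(page))              # 41 flipped bits: one more than ECC can correct
```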
Originally implemented by Google, QUIC has gathered growing interest by providing, on top of UDP, the same service as the classical TCP/TLS/HTTP/2 stack. The IETF will finalise the QUIC specification in 2019. A key feature of QUIC is that almost all of its packets, including most of their headers, are fully encrypted. This prevents eavesdropping and interference caused by middleboxes. Thanks to this feature and its clean design, QUIC is easier to extend than TCP. In this paper, we revisit the reliable transmission mechanisms included in QUIC. More specifically, we design, implement, and evaluate Forward Erasure Correction (FEC) extensions to QUIC. These extensions are mainly intended for high-delay and lossy communications such as in-flight communications. Our design includes a generic FEC frame, and our implementation supports the XOR, Reed-Solomon, and Convolutional RLC error-correcting codes. We also conservatively avoid hindering the loss-based congestion signal by distinguishing the packets that have been received from the packets that have been recovered by the FEC. We evaluate the performance by applying an experimental design covering a wide range of delay and packet-loss conditions with reproducible experiments. These experiments confirm that our modular design allows the protocol to adapt to the network conditions. For long data transfers, or when the loss rate and delay are small, the FEC overhead negatively impacts the download completion time. However, with high packet-loss rates and long delays, or with smaller files, FEC drastically reduces the download completion time by avoiding costly retransmission timeouts. These results show the need to use FEC adaptively to the network conditions.
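For intuition, the simplest of the three codes, XOR, works as sketched below: one repair symbol is the XOR of k equal-length source packets, so any single loss in the block can be rebuilt. This sketch ignores QUIC framing and the extension's actual wire format.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_repair(packets):
    """Build one repair symbol over equal-length source packets."""
    repair = bytes(len(packets[0]))  # all-zero accumulator
    for p in packets:
        repair = xor_bytes(repair, p)
    return repair

def recover(received, repair):
    """Recover the single missing packet from the survivors and the repair."""
    missing = repair
    for p in received:
        missing = xor_bytes(missing, p)
    return missing

src = [b"pkt0", b"pkt1", b"pkt2"]
rep = make_repair(src)
print(recover([src[0], src[2]], rep))  # -> b'pkt1', the lost packet
```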
We consider a setup in which the channel from Alice to Bob is less noisy than the channel from Eve to Bob. We show that there exist encoding and decoding which accomplish error correction and authentication simultaneously; that is, Bob is able to correctly decode a message coming from Alice and reject a message coming from Eve with high probability. The system does not require any secret key shared between Alice and Bob, provides information theoretic security, and can safely be composed with other protocols in an arbitrary context.
Error correction in quantum cryptography based on artificial neural networks is a new and promising solution. In this paper, the security verification of this method is discussed and the results of many simulations with different parameters are presented. The test scenarios assumed partially synchronized neural networks, with error rates typical of quantum cryptography. The results were also compared with scenarios based on neural networks with randomly chosen weights to show the difficulty of passive attacks.
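The abstract does not name the network architecture; neural key agreement of this kind is commonly modeled with tree parity machines (TPMs), so the sketch below shows two TPMs synchronizing by mutual learning under that assumption, with purely illustrative sizes.

```python
import random

K, N, L = 3, 4, 3  # hidden units, inputs per unit, weight bound (illustrative)

def output(w, x):
    """TPM output: the product of the signs of the hidden-unit fields."""
    sigma = []
    for k in range(K):
        field = sum(wi * xi for wi, xi in zip(w[k], x[k]))
        sigma.append(1 if field > 0 else -1)
    tau = 1
    for s in sigma:
        tau *= s
    return sigma, tau

def update(w, x, sigma, tau):
    """Hebbian rule: adjust only hidden units that agreed with the output."""
    for k in range(K):
        if sigma[k] == tau:
            for i in range(N):
                w[k][i] = max(-L, min(L, w[k][i] + x[k][i] * tau))

wa = [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]
wb = [[random.randint(-L, L) for _ in range(N)] for _ in range(K)]
steps = 0
while wa != wb and steps < 200_000:   # cap as a safety net
    # both parties see the same public random input each round
    x = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(K)]
    sa, ta = output(wa, x)
    sb, tb = output(wb, x)
    if ta == tb:                      # learn only when outputs agree
        update(wa, x, sa, ta)
        update(wb, x, sb, tb)
    steps += 1
print("synchronized after", steps, "exchanged outputs")
```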