Bibliography
In this paper, we introduce an optical network with cross-layer security, which can enhance security performance. At the transmitter, the user's data is first encrypted; physical-layer encryption is then implemented through optical encoding. At the receiver, after the corresponding optical decoding process, a decryption algorithm is used to restore the user's data. The security performance of the scheme is evaluated quantitatively.
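The abstract gives no implementation details; the sketch below only illustrates the cross-layer structure, with a hash-derived keystream standing in for both the data-layer cipher and the optical-encoding layer (all function names and keys are our own, hypothetical choices).

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Simple counter-mode-style keystream from SHA-256 (illustrative only).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def transmit(data: bytes, data_key: bytes, phys_key: bytes) -> bytes:
    ciphertext = xor(data, keystream(data_key, len(data)))        # layer 1: data encryption
    return xor(ciphertext, keystream(phys_key, len(ciphertext)))  # layer 2: stand-in for optical encoding

def receive(signal: bytes, data_key: bytes, phys_key: bytes) -> bytes:
    ciphertext = xor(signal, keystream(phys_key, len(signal)))    # optical decoding
    return xor(ciphertext, keystream(data_key, len(ciphertext)))  # data decryption

assert receive(transmit(b"hello", b"k1", b"k2"), b"k1", b"k2") == b"hello"
```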
Audio steganography is the technique of hiding secret information behind a cover audio file without impairing its quality. Data hiding in audio signals has various applications, such as secret communications and concealing data that may affect the security and safety of governments and personnel, and has potentially important applications in 5G communication systems. This paper proposes an efficient secure steganography scheme based on the high correlation between successive audio samples. This is similar to differential pulse code modulation (DPCM), where the encoder exploits the redundancy in sample values to encode the signal at a lower bit rate. A Discrete Wavelet Transform (DWT) of the audio samples is used to store hidden data in the least important coefficients of the Haar transform. We exploit the small differences between successive samples generated from encoding the cover audio signal's wavelet coefficients to hide image data without making a remarkable change in the cover audio signal. Because these differences are modified instead of the actual audio samples, the scheme does not perceptually degrade the audio signal and provides higher hiding capacity with lower distortion. To further increase the security of the image hiding process, the image to be hidden is divided into blocks, and the bits of each block are XORed with a different random sequence generated by logistic maps using a hopping technique. The performance of the proposed algorithm has been evaluated extensively against attacks, and experimental results show that the proposed method achieves good robustness and imperceptibility.
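As a rough illustration of the pipeline described above, the sketch below combines a single-level Haar transform, a logistic-map keystream for the XOR step, and LSB embedding in the detail coefficients. It assumes even-length integer PCM samples and at least as many detail coefficients as secret bits; the parameters x0 and r are hypothetical, and a real implementation would quantize the stego samples.

```python
def haar_1d(x):
    # Single-level Haar transform: approximation and detail coefficients.
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def inv_haar_1d(approx, detail):
    x = []
    for a, d in zip(approx, detail):
        x += [a + d, a - d]
    return x

def logistic_bits(x0, n, r=3.99):
    # Chaotic keystream from a logistic map (hypothetical parameters).
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

def embed(samples, secret_bits, x0=0.3141):
    approx, detail = haar_1d(samples)
    key = logistic_bits(x0, len(secret_bits))
    for i, (b, k) in enumerate(zip(secret_bits, key)):
        q = int(detail[i] * 2)      # detail coefficients are exact half-integers
        q = (q & ~1) | (b ^ k)      # place the encrypted bit in the LSB
        detail[i] = q / 2
    return inv_haar_1d(approx, detail)

def extract(stego, n_bits, x0=0.3141):
    _, detail = haar_1d(stego)
    key = logistic_bits(x0, n_bits)
    return [(int(detail[i] * 2) & 1) ^ key[i] for i in range(n_bits)]

cover = [100, 102, 98, 97, 101, 103, 99, 96]
stego = embed(cover, [1, 0, 1, 1])
assert extract(stego, 4) == [1, 0, 1, 1]
```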
The best practice to prevent Cross Site Scripting (XSS) attacks is to apply encoders to sanitize untrusted data. To balance security and functionality, encoders should be applied to match the web page context, such as HTML body, JavaScript, and style sheets. A common programming error is the use of a wrong encoder to sanitize untrusted data, leaving the application vulnerable. We present a security unit testing approach to detect XSS vulnerabilities caused by improper encoding of untrusted data. Unit tests for the XSS vulnerability are automatically constructed from each web page and then evaluated by a unit test execution framework. A grammar-based attack generator is used to automatically generate test inputs. We evaluate our approach on a large open source medical records application, demonstrating that we can detect many 0-day XSS vulnerabilities with a very low false-positive rate, and that the grammar-based attack generator has better test coverage than industry best practices.
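A grammar-based attack generator can be sketched in a few lines; the toy grammar below is our own illustration of the technique, not the paper's actual attack grammar.

```python
import random

# Toy XSS attack grammar (illustrative, not the paper's grammar).
GRAMMAR = {
    "<attack>": [["<tag>"], ["<handler>"]],
    "<tag>": [["<script>", "<payload>", "</script>"]],
    "<handler>": [["<img src=x onerror=", "<payload>", ">"]],
    "<payload>": [["alert(1)"], ["alert(document.cookie)"]],
}

def generate(symbol="<attack>"):
    # Expand non-terminals recursively, choosing productions at random.
    if symbol not in GRAMMAR:
        return symbol
    production = random.choice(GRAMMAR[symbol])
    return "".join(generate(s) for s in production)

for _ in range(3):
    print(generate())   # feed these into the generated unit tests
```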
This paper focuses on one type of Covert Storage Channel (CSC) that uses the 6-bit TCP flag header in TCP/IP network packets to transmit secret messages between accomplices. We use relative entropy to characterize the irregularity of network flows in comparison to normal traffic. A normal profile is created from the frequency distribution of TCP flags in regular traffic packets. In detection, the TCP flag frequency distribution of network traffic is computed for each unique IP pair. To evaluate the accuracy and efficiency of the proposed method, this study uses real regular traffic data sets as well as CSC messages using coding schemes under assumptions of both clear text, composed from a list of keywords common in Unix systems, and encrypted text. Moreover, smart accomplices may use only those TCP flags that appear in normal traffic; even then, relative entropy can reveal the dissimilarity of their frequency distribution from the normal profile. We have also used two different data processing methods in detection: one method summarizes all the packets for a pair of IP addresses into one flow, and the other uses a sliding window over such a flow to generate multiple frames of packets. The experimental results, displayed by Receiver Operating Characteristic (ROC) curves, show that the method is promising for differentiating normal and CSC traffic packet streams. Furthermore, the delay in raising an alert is analyzed for CSC messages to show the method's efficiency.
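The detection statistic can be sketched directly: build the normal profile from TCP flag frequencies, then compute the relative entropy (Kullback-Leibler divergence) of each observed flow against it. The smoothing constant and toy traffic below are our own assumptions.

```python
from collections import Counter
from math import log2

FLAGS = ["SYN", "ACK", "FIN", "RST", "PSH", "URG"]

def flag_distribution(packets, eps=1e-6):
    # Frequency distribution of TCP flags, smoothed to avoid log(0).
    counts = Counter(packets)
    total = len(packets) + eps * len(FLAGS)
    return {f: (counts[f] + eps) / total for f in FLAGS}

def relative_entropy(p, q):
    # D(p || q): irregularity of an observed flow p w.r.t. normal profile q.
    return sum(p[f] * log2(p[f] / q[f]) for f in FLAGS)

normal = flag_distribution(["ACK"] * 90 + ["SYN"] * 5 + ["FIN"] * 5)
covert = flag_distribution(["SYN", "FIN", "URG", "PSH"] * 25)  # flag-encoded message
print(relative_entropy(covert, normal))  # large value -> raise an alert
```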
We consider the problem of covert communication over a state-dependent channel, where the transmitter has non-causal knowledge of the channel states. Here, “covert” means that the probability that a warden on the channel can detect the communication must be small. In contrast with traditional models without non-causal channel-state information at the transmitter, we show that covert communication is possible at a positive rate. We derive closed-form formulas for the maximum achievable covert communication rate (“covert capacity”) in this setting for discrete memoryless channels as well as additive white Gaussian noise channels. We also derive lower bounds on the rate of the secret key that is needed for the transmitter and the receiver to achieve the covert capacity.
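The abstract does not reproduce the formulas; for context, covertness is commonly formalized by bounding the divergence between the warden's observation with and without communication. The block below states this standard formulation in our own notation, which is not necessarily the paper's exact definition.

```latex
% Standard covertness constraint (illustrative notation): the warden's
% observed distribution under communication, \hat{Q}^n, must stay close
% to the no-communication distribution Q_0^n. By Pinsker's inequality,
% this forces the warden's false-alarm probability \alpha and
% missed-detection probability \beta to satisfy:
\[
  D\!\left(\hat{Q}^n \,\middle\|\, Q_0^n\right) \le \delta
  \quad\Longrightarrow\quad
  \alpha + \beta \;\ge\; 1 - \sqrt{\delta/2}.
\]
```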
Encryption is often not sufficient to secure communication, since it does not hide that communication takes place or who is communicating with whom. Covert channels hide the very existence of communication, enabling individuals to communicate secretly. Previous work proposed a covert channel hidden inside multi-player first person shooter online game traffic (FPSCC). FPSCC has a low bit rate, but it is practically impossible to eliminate other than by blocking the overt game traffic. This paper shows that with knowledge of the channel’s encoding and using machine learning techniques, FPSCC can be detected with an accuracy of 95% or higher.
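A detection pipeline in this spirit might look as follows; the features, model choice, and synthetic flows below are our own assumptions, not the paper's actual setup.

```python
# Illustrative detector: classify traffic flows as normal game traffic or
# FPSCC from simple per-flow statistics, assuming labelled flows exist.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def features(flow):
    # flow: sequence of (packet_size, inter_arrival_time) pairs
    sizes = np.array([p[0] for p in flow], dtype=float)
    gaps = np.array([p[1] for p in flow], dtype=float)
    return [sizes.mean(), sizes.std(), gaps.mean(), gaps.std()]

rng = np.random.default_rng(0)   # synthetic stand-in for captured traffic
normal = [[(rng.normal(120, 10), rng.normal(0.05, 0.01)) for _ in range(50)] for _ in range(100)]
covert = [[(rng.normal(124, 14), rng.normal(0.05, 0.01)) for _ in range(50)] for _ in range(100)]
X = np.array([features(f) for f in normal + covert])
y = np.array([0] * 100 + [1] * 100)
print(cross_val_score(RandomForestClassifier(), X, y, cv=5).mean())
```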
Provenance forgery and packet-drop attacks are considered threats in large-scale wireless sensor networks, which are deployed for diverse application domains. The variety of data sources makes it necessary to ensure the trustworthiness of data, such that only truthful data is considered in the decision process. Information about the sensor nodes plays a major role in determining their trust values. In this paper, a novel lightweight secure provenance method is introduced for improving the security of provenance data transmission. The proposed system comprises provenance verification and reconstruction at the base station by means of Merkle-Hellman knapsack-based secure provenance encoding in a Bloom filter framework. Side Channel Monitoring (SCM) is exploited for detecting the presence of selfish nodes and packet-drop behavior. This lightweight secure provenance method decreases energy and bandwidth utilization while providing well-organized storage and secure data transmission. The experimental results establish the efficacy and efficiency of the secure provenance system by detecting provenance forgery and packet-drop attacks, as shown by the assessment in terms of provenance verification failure rate, collection error, packet drop rate, space complexity, energy consumption, true positive rate, false positive rate, and packet-drop attack detection.
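The Bloom-filter side of the encoding is easy to sketch; the minimal filter below encodes node IDs along a packet's path for later verification at the base station. It is our simplification and omits the Merkle-Hellman knapsack layer the paper pairs with it.

```python
import hashlib

class BloomFilter:
    # Minimal Bloom filter for encoding a packet's provenance path.
    def __init__(self, m=128, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item: str):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item: str):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

# Each forwarding node embeds its ID; the base station verifies the path.
bf = BloomFilter()
for node in ["n1", "n7", "n42"]:    # provenance path
    bf.add(node)
print("n7" in bf, "n99" in bf)      # True, (almost certainly) False
```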
Integrating security testing into the workflow of software developers not only saves resources compared to separate security testing but also reduces the cost of fixing security vulnerabilities by detecting them early in the development cycle. We present an automatic testing approach to detect a common type of Cross Site Scripting (XSS) vulnerability caused by improper encoding of untrusted data. We automatically extract the encoding functions used in a web application to sanitize untrusted inputs and then evaluate their effectiveness by automatically generating XSS attack strings. Our evaluations show that this technique can detect 0-day XSS vulnerabilities that cannot be found by static analysis tools. We also show that our approach can efficiently cover a common type of XSS vulnerability. This approach can be generalized to test input validation against other types of injection, such as command-line injection.
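The evaluation step can be illustrated simply: run each extracted encoder against attack strings and flag encoders that leave an attack intact for their context. The attack strings and the deliberately buggy encoder below are our own examples; only the HTML-body context is sketched.

```python
import html

ATTACKS = ["<script>alert(1)</script>", '"><img src=x onerror=alert(1)>']

def identity_encoder(s):
    # A buggy "encoder" that sanitizes nothing, for demonstration.
    return s

def test_encoder(encoder, attacks=ATTACKS):
    # An encoder fails if an attack passes through unchanged or still
    # contains an executable script tag after encoding.
    return [a for a in attacks if encoder(a) == a or "<script" in encoder(a).lower()]

print(test_encoder(html.escape))        # [] -> safe for the HTML-body context
print(test_encoder(identity_encoder))   # attacks survive -> vulnerable
```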
Video surveillance has been widely adopted to ensure home security in recent years. Most video encoding standards, such as H.264 and MPEG-4, compress the temporal redundancy in a video stream using difference coding, which only encodes the residual image between a frame and its reference frame. Difference coding can efficiently compress a video stream, but it causes side-channel information leakage even though the video stream is encrypted, as reported in this paper. In particular, we observe that the traffic patterns of an encrypted video stream differ when a user conducts different basic activities of daily living, which must be kept private from third parties as obliged by HIPAA regulations. We also observe that by exploiting this side-channel information leakage, attackers can readily infer a user's basic activities of daily living based only on the traffic size data of an encrypted video stream. We validate such an attack using two off-the-shelf cameras, and the results indicate that the user's basic activities of daily living can be recognized with high accuracy.
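Because difference coding makes per-frame sizes track scene motion, activity classes can separate on simple statistics of the encrypted stream's traffic sizes. The windowed features, nearest-centroid matching, and profile values below are our own illustration, not necessarily the paper's inference method.

```python
import numpy as np

def traffic_features(frame_sizes, window=30):
    # Mean and variability of traffic volume per fixed-size window.
    s = np.asarray(frame_sizes, dtype=float)
    windows = s[: len(s) // window * window].reshape(-1, window)
    return np.stack([windows.mean(1), windows.std(1)], axis=1)

def nearest_profile(feature, profiles):
    # profiles: activity label -> feature centroid (hypothetical values below)
    return min(profiles, key=lambda k: np.linalg.norm(profiles[k] - feature))

profiles = {"idle": np.array([2e3, 1e2]), "moving": np.array([9e3, 3e3])}
sizes = np.abs(np.random.default_rng(0).normal(9e3, 3e3, 300))  # captured sizes
print([nearest_profile(f, profiles) for f in traffic_features(sizes)])
```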
Compression is desirable for network applications as it saves bandwidth. However, when data is compressed before being encrypted, the amount of compression leaks information about the amount of redundancy in the plaintext. This side channel has led to the “Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext (BREACH)” attack on web traffic protected by the TLS protocol. The general guidance to prevent this attack is to disable HTTP compression, preserving confidentiality but sacrificing bandwidth. As a more sophisticated countermeasure, fixed-dictionary compression was introduced in 2015, enabling compression while protecting high-value secrets, such as cookies, from attacks. The fixed-dictionary compression method is a cryptographically sound countermeasure against the BREACH attack, since it is proven secure in a suitable security model. In this project, we integrate the fixed-dictionary compression method as a countermeasure against the BREACH attack in a real-world client-server setting. Further, we measure the performance of the fixed-dictionary compression algorithm against the DEFLATE compression algorithm. The results show that it is possible to save some amount of bandwidth, with reasonable compression/decompression time compared to DEFLATE operations. The countermeasure is easy to implement and deploy; hence, it is a promising direction for mitigating the BREACH attack efficiently, rather than stripping off HTTP compression entirely.
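The underlying side channel is easy to reproduce with a general-purpose compressor. The toy oracle below (our own construction, with a made-up secret) shows how compressed length leaks a secret one character at a time, which is exactly what fixed-dictionary compression is designed to prevent.

```python
import zlib

SECRET = "csrf_token=7f3a9c"   # hypothetical reflected secret

def oracle(guess: str) -> int:
    # Attacker-controlled input reflected next to the secret, then
    # compressed; under a stream cipher, ciphertext length equals this.
    body = f"search={guess}&{SECRET}"
    return len(zlib.compress(body.encode()))

recovered = "csrf_token="
for _ in range(6):
    # A correct extension matches more of the secret, so it usually
    # compresses shorter; real attacks average repeated measurements
    # to break ties at byte granularity.
    recovered += min("0123456789abcdef", key=lambda c: oracle(recovered + c))
print(recovered)
```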
In recent years, binary coding techniques have become increasingly popular because of their high efficiency in handling large-scale computer vision applications. It has been demonstrated that supervised binary coding techniques that leverage supervised information can significantly enhance the coding quality, and hence greatly benefit visual search tasks. Typically, a modern binary coding method seeks to learn a group of coding functions which compress data samples into binary codes. However, few methods pursue coding functions such that the precision at the top of a ranking list ordered by Hamming distances of the generated binary codes is optimized. In this paper, we propose a novel supervised binary coding approach, namely Top Rank Supervised Binary Coding (Top-RSBC), which explicitly focuses on optimizing the precision of top positions in a Hamming-distance ranking list towards preserving the supervision information. The core idea is to train disciplined coding functions, by which mistakes at the top of a Hamming-distance ranking list are penalized more than those at the bottom. To solve for such coding functions, we relax the original discrete optimization objective with a continuous surrogate and derive a stochastic gradient descent method to optimize the surrogate objective. To further reduce the training time cost, we also design an online learning algorithm to optimize the surrogate objective more efficiently. Empirical studies based upon three benchmark image datasets demonstrate that the proposed binary coding approach achieves superior image search accuracy over state-of-the-art methods.
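The relax-then-SGD idea can be sketched as follows. This is our simplification, not the paper's exact objective: linear coding functions sign(Wx), a tanh relaxation of Hamming distance, and a pairwise ranking loss whose weight decays down the ranking list so that top-of-list mistakes cost more.

```python
import numpy as np

def soft_hamming(W, q, x):
    # tanh relaxes sign(), making Hamming distance differentiable.
    return 0.5 * np.sum(1 - np.tanh(W @ q) * np.tanh(W @ x))

def sgd_step(W, query, pos, neg, rank, lr=1e-2, eps=1e-4):
    # Similar item `pos` should rank above dissimilar `neg`; the weight
    # 1/log2(2+rank) penalizes mistakes near the top more heavily.
    weight = 1.0 / np.log2(2.0 + rank)
    def loss(W_):
        margin = soft_hamming(W_, query, pos) - soft_hamming(W_, query, neg) + 1.0
        return weight * max(margin, 0.0)
    # Numerical gradient keeps the sketch short (use autograd in practice).
    G = np.zeros_like(W)
    for idx in np.ndindex(W.shape):
        dW = np.zeros_like(W); dW[idx] = eps
        G[idx] = (loss(W + dW) - loss(W - dW)) / (2 * eps)
    return W - lr * G

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))          # 8-bit codes over 4-dim features
q, p, n = rng.normal(size=(3, 4))
W = sgd_step(W, q, p, n, rank=0)     # one top-weighted update
```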
More and more systems use mobile devices to perform sensing tasks, but these increase the risk of leaking personal privacy and data. Data hiding is one of the important ways to achieve information security. Even though many data hiding algorithms work on providing more hiding capacity or higher PSNR, few algorithms can control PSNR effectively while ensuring hiding capacity. In this paper, we propose PSNR-Controllable Data Hiding (PCDH), a novel encoding scheme for data hiding with controllable PSNR based on LSB substitution. In PCDH, we use the remainder algorithm to calculate the hidden information and hide the secret information in the last x LSBs of every pixel. Theoretical proof shows that this method can control the variation of the stego image from the cover image, and can control PSNR by adjusting parameters in the remainder calculation. We then design encoding and decoding algorithms with low computational complexity. Experimental results show that PCDH can control the PSNR within a given range while ensuring high hiding capacity. In addition, it resists some steganalysis methods well. Compared to other algorithms, PCDH achieves a better tradeoff among PSNR, hiding capacity, and computational complexity.
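For background, the generic LSB-substitution primitive PCDH builds on looks like the sketch below (our illustration, without PCDH's remainder-based control): hiding in the last x LSBs of each pixel, where larger x raises capacity and lowers PSNR, which is the tradeoff PCDH regulates.

```python
def embed(pixels, secret_bits, x=2):
    # Replace the last x LSBs of each pixel with secret bits.
    stego, i = [], 0
    for p in pixels:
        chunk = secret_bits[i:i + x]
        i += len(chunk)
        value = int("".join(map(str, chunk)).ljust(x, "0"), 2) if chunk else p & ((1 << x) - 1)
        stego.append((p & ~((1 << x) - 1)) | value)
    return stego

def extract(pixels, n_bits, x=2):
    bits = []
    for p in pixels:
        bits += [(p >> (x - 1 - j)) & 1 for j in range(x)]
    return bits[:n_bits]

cover = [120, 37, 201, 88]
stego = embed(cover, [1, 0, 1, 1], x=2)
assert extract(stego, 4) == [1, 0, 1, 1]
```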
Soft microprocessors are vital components of many embedded FPGA systems. As the application domain for FPGAs expands, the security of the software used by soft processors increases in importance. Although software confidentiality approaches (e.g., encryption) are effective, code obfuscation is known to be an effective enhancement that further deters code understanding for attackers. The availability of specialization in FPGAs provides a unique opportunity for code obfuscation on a per-application basis with minimal hardware overhead. In this paper, we describe a new technique to obfuscate soft microprocessor code that is located outside the FPGA chip in an unprotected area. Our approach provides customizable, data-dependent control flow modification to make it difficult for attackers to easily understand program behavior. The application of the approach to three benchmarks illustrates a control flow cyclomatic complexity increase of about 7× with a modest logic overhead for the soft processor.
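The paper's mechanism lives in soft-processor hardware; as a software analogue only, the sketch below shows the flavor of key-dependent control-flow modification: a dispatcher whose block ordering is derived from a key, so the stored program's apparent flow differs from the real one (key and block layout are our invention).

```python
def obfuscated(data, key=0b1001):
    # Block IDs derived from the key: entry = key & 3, next = (key >> 2) & 3.
    state, acc = key & 3, 0
    while state != 3:                    # 3 is the exit state
        if state == (key & 3):           # block A: accumulate
            acc += data
            state = (key >> 2) & 3
        elif state == ((key >> 2) & 3):  # block B: double, then exit
            acc *= 2
            state = 3
    return acc

print(obfuscated(5))  # 10, same result as the straight-line acc = (0 + 5) * 2
```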
The main problem in designing effective code obfuscation is to guarantee security. State-of-the-art obfuscation techniques rely on an unproven concept of security and therefore are not regarded as provably secure. In this paper, we undertake a theoretical investigation of code obfuscation security based on Kolmogorov complexity and algorithmic mutual information. We introduce a new definition of code obfuscation that requires the algorithmic mutual information between a code and its obfuscated version to be minimal, allowing a controlled amount of information to be leaked to an adversary. We argue that our definition avoids the impossibility results of Barak et al. and is more advantageous than the indistinguishability definition of obfuscation, in the sense that it is more intuitive and is algorithmic rather than probabilistic.
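A paraphrase of the requirement in our own notation (the paper's exact definition may differ) is given below.

```latex
% An obfuscator O is secure up to leakage \lambda if the algorithmic
% mutual information between a program P and its obfuscation O(P) is
% small, where K denotes Kolmogorov complexity:
\[
  I_K\bigl(P : O(P)\bigr) \;=\; K(P) - K\bigl(P \mid O(P)\bigr) \;\le\; \lambda .
\]
% Intuitively: seeing O(P) should shorten the description of P by at
% most \lambda bits, the controlled leakage allowed to the adversary.
```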
This paper considers the two-user interference relay channel where each source wishes to communicate to its destination a message that is confidential from the other destination. Furthermore, the relay, which enables communication in the absence of direct links, is untrusted. Thus, the messages from both sources need to be kept secret from the relay as well. We provide an achievable secure rate region for this network. The achievability scheme utilizes structured codes for message transmission, cooperative jamming, and scaled compute-and-forward. In particular, the sources use nested lattice codes and stochastic encoding, while the destinations jam using lattice points. The relay decodes two integer combinations of the received lattice points and forwards them, using Gaussian codewords, to both destinations. The achievability technique provides the insight that we can utilize the untrusted relay node as an encryption block in a two-hop interference relay channel with confidential messages.
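For context, the compute-and-forward operation at the relay can be written as below; this is the standard formulation in our notation, not necessarily the paper's exact scheme.

```latex
% With nested lattice codewords x_1, x_2 and coarse lattice \Lambda,
% the relay decodes only integer combinations, never an individual
% message, which is why it can stay untrusted:
\[
  t_\ell \;=\; \Bigl[\, a_{\ell 1}\, x_1 + a_{\ell 2}\, x_2 \,\Bigr] \bmod \Lambda,
  \qquad a_{\ell i} \in \mathbb{Z},\quad \ell = 1, 2 .
\]
```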
The speedy advancement in computer hardware has caused data encryption to no longer be a 100% safe solution for secure communications. To battle adversaries, a countermeasure is to avoid message routing through certain insecure areas, e.g., malicious countries and nodes. To this end, avoidance routing has been proposed over the past few years. However, the existing avoidance protocols are single-path-based, which means that there must be a safe path such that no adversary is in the proximity of the whole path. This condition is difficult to satisfy. As a result, routing opportunities based on the existing avoidance schemes are limited. To tackle this issue, we propose an avoidance routing framework, namely Multi-Path Avoidance Routing (MPAR). In our approach, a source node first encodes a message into k different pieces, and the k pieces are sent via k different paths. The destination can assemble the original message easily, while an adversary cannot recover the original message unless she obtains all the pieces. We prove that the coding scheme achieves perfect secrecy against eavesdropping under the condition that an adversary has incomplete information regarding the message. The simulation results validate that the proposed MPAR protocol achieves its design goals.
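A standard construction with exactly this property is one-time-pad splitting, sketched below (the paper's coding scheme may differ): any k-1 pieces are uniformly random, so an eavesdropper on fewer than k paths learns nothing, while the destination XORs all pieces to recover the message.

```python
import secrets

def split(message: bytes, k: int):
    # k-1 random pads; the last piece is the message XORed with all pads.
    shares = [secrets.token_bytes(len(message)) for _ in range(k - 1)]
    last = bytearray(message)
    for s in shares:
        for i, b in enumerate(s):
            last[i] ^= b
    return shares + [bytes(last)]

def assemble(pieces):
    out = bytearray(len(pieces[0]))
    for p in pieces:
        for i, b in enumerate(p):
            out[i] ^= b
    return bytes(out)

pieces = split(b"rendezvous at dawn", k=3)   # each piece travels on its own path
assert assemble(pieces) == b"rendezvous at dawn"
```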
Data security has always been a major concern and a huge challenge for governments and individuals throughout the world since early times. Recent advances in technology, such as the introduction of cloud computing, make it an even bigger challenge to keep data secure. In parallel, high-throughput mobile devices such as smartphones and tablets are designed to support these new technologies. The high throughput requires power-efficient designs to maintain battery life. In this paper, we propose a novel Joint Security and Advanced Low Density Parity Check (LDPC) Coding (JSALC) method. The JSALC is composed of two parts: the Joint Security and Advanced LDPC-based Encryption (JSALE) and the dual-step Secure LDPC code for Channel Coding (SLCC). The JSALE is obtained by interlacing Advanced Encryption Standard (AES)-like rounds and Quasi-Cyclic (QC)-LDPC rows into a single primitive. Both the JSALE code and the SLCC code share the same base quasi-cyclic parity-check matrix (PCM), which retains the power efficiency compared to conventional systems. We show that the overall JSALC Frame-Error-Rate (FER) performance outperforms other cryptcoding methods by over 1.5 dB while maintaining the AES-128 security level. Moreover, the JSALC enables error resilience and has higher diffusion than AES-128.
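For background on the shared structure, a quasi-cyclic PCM is assembled from shifted identity circulants; the sketch below shows the construction with arbitrary placeholder shift exponents, not the paper's actual base matrix.

```python
import numpy as np

def circulant(shift, z):
    # z x z identity with columns cyclically shifted.
    return np.roll(np.eye(z, dtype=int), shift, axis=1)

def qc_pcm(exponents, z):
    # exponents: base matrix of shift values; -1 denotes an all-zero block.
    blocks = [[circulant(e, z) if e >= 0 else np.zeros((z, z), dtype=int)
               for e in row] for row in exponents]
    return np.block(blocks)

H = qc_pcm([[0, 1, -1, 2],
            [2, -1, 0, 1]], z=4)
print(H.shape)  # (8, 16): each base entry expanded to a 4x4 circulant
```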
Traditional encryption techniques require packet overhead, produce processing time delay, and suffer from severe quality-of-service deterioration due to fades and interference in wireless channels. These issues reduce the effective transmission data rate (throughput) considerably in wireless communications, where data rate with limited bandwidth is the main constraint. In this paper, performance evaluation analyses are conducted for an integrated signaling-encryption mechanism that is secure and enables improved throughput and bit-error probability in wireless channels. This mechanism eliminates the drawbacks stated herein by encrypting only a small portion of an entire transmitted frame, while the rest is not subject to traditional encryption but goes through a signaling process (a designed transformation) with the plaintext of the portion selected for encryption. We also propose to incorporate error correction coding solely on the small encrypted portion of the data to drastically improve the overall bit-error-rate performance while not noticeably increasing the required bit rate. We focus on validating the signaling-encryption mechanism utilizing Hamming and convolutional error correction coding by conducting an end-to-end system-level simulation-based study. The average probability of bit error and the throughput of the encryption mechanism are evaluated over standard Gaussian and Rayleigh fading channels and compared to those of the conventional Advanced Encryption Standard (AES).
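The error-protection component named above is standard; the sketch below implements classic Hamming(7,4) encoding and single-error-correcting decoding, the kind of code that would be applied only to the small encrypted portion of a frame (the partitioning itself is not shown).

```python
def hamming74_encode(nibble):
    # Codeword layout (1-based): p1 p2 d1 p3 d2 d3 d4.
    d = nibble
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(code):
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # 1-based position of a single error
    if syndrome:
        c[syndrome - 1] ^= 1             # correct the flipped bit
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
tx = hamming74_encode(word)
tx[3] ^= 1                               # single channel bit error
assert hamming74_decode(tx) == word
```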
Compression, encryption, encoding, and modulation at the transmitter side, and the reverse processes at the receiver side, are the major steps in any wireless communication system. All these steps were traditionally carried out separately, but in 1978 R. J. McEliece proposed the concept of combining security and channel encoding techniques. Many schemes have been proposed by different researchers for this combined approach. Sharing information securely while maintaining an acceptable bit error rate in such a combined system is difficult. In this paper, a new technique for robust and secure wireless transmission of images, combining Turbo Product Code (TPC) with chaotic encryption, is proposed. A logistic map is used for chaotic encryption and TPC for channel encoding. Simulation results for the combined system are analyzed and show that the TPC and chaos combination gives secure transmission with an acceptable data rate.
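The chaotic-encryption half of the scheme is easy to sketch: derive a keystream from the logistic map and XOR it with the image bytes before TPC channel encoding. The parameters x0 and r below are illustrative, and the TPC stage is omitted.

```python
def logistic_keystream(x0, r, n):
    # Iterate the logistic map and quantize each state to one byte.
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def chaotic_xor(data: bytes, x0=0.7, r=3.999):
    ks = logistic_keystream(x0, r, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

img = bytes(range(16))                 # stand-in for image pixel bytes
enc = chaotic_xor(img)                 # encrypt, then pass to TPC encoding
assert chaotic_xor(enc) == img         # same (x0, r) on both sides decrypts
```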
This paper addresses the minimum transmission broadcast (MTB) problem for the many-to-all scenario in wireless multihop networks and presents a network-coding broadcast protocol with priority-based deadlock prevention. Our main contributions are as follows: First, we relate the many-to-all-with-network-coding MTB problem to a maximum out-degree problem, whose solution can serve as a lower bound on the number of transmissions. Second, we propose a distributed network-coding broadcast protocol, which constructs efficient broadcast trees and directs nodes to transmit packets in a network-coding manner. In addition, we present a priority-based deadlock prevention mechanism to avoid deadlocks. Simulation results confirm that, compared with existing protocols in the literature and the performance bound we present, our proposed network-coding broadcast protocol performs very well in terms of the number of transmissions.
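The transmission savings come from the classic XOR network-coding gain, illustrated below: a relay that XORs two packets serves two neighbors with one transmission instead of two.

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"pkt-from-A", b"pkt-from-B"
coded = xor_packets(p1, p2)            # relay broadcasts p1 XOR p2 once
assert xor_packets(coded, p1) == p2    # neighbor holding p1 recovers p2
assert xor_packets(coded, p2) == p1    # neighbor holding p2 recovers p1
```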
In this paper, we propose an accumulated-loss recovery algorithm for an overlay multicast system using Fountain codes. Fountain codes successfully recover packet losses, but they are weak against losses that accumulate along a multicast tree. The proposed algorithm overcomes accumulated losses and significantly reduces delay on the overlay multicast tree.
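For background on the Fountain-code building block, the sketch below shows an LT-style encoder (with a uniform degree choice, not a robust soliton distribution) and the iterative peeling decoder; the hand-picked received symbols keep the demo deterministic.

```python
import random

def encode_symbol(blocks, rng):
    # Rateless encoding: XOR a random subset of source blocks.
    degree = rng.randint(1, len(blocks))
    idx = set(rng.sample(range(len(blocks)), degree))
    sym = 0
    for i in idx:
        sym ^= blocks[i]
    return idx, sym

def peel_decode(symbols, k):
    # Iteratively substitute known blocks and peel degree-one symbols.
    out = [None] * k
    work = [[set(idx), val] for idx, val in symbols]
    progress = True
    while progress and None in out:
        progress = False
        for item in work:
            idx, val = item
            for i in [j for j in idx if out[j] is not None]:
                val ^= out[i]
                idx.discard(i)
            item[1] = val
            if len(idx) == 1:
                out[idx.pop()] = val
                progress = True
    return out

blocks = [0x5A, 0x3C, 0xF0, 0x11]
sym = encode_symbol(blocks, random.Random(7))  # encoder side, one symbol
received = [({0}, 0x5A), ({0, 1}, 0x5A ^ 0x3C),
            ({1, 2}, 0x3C ^ 0xF0), ({2, 3}, 0xF0 ^ 0x11)]
assert peel_decode(received, 4) == blocks
```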
Modern storage systems stripe redundant data across multiple nodes to provide availability guarantees against node failures. One form of data redundancy is based on XOR-based erasure codes, which use only XOR operations for encoding and decoding. In addition to tolerating failures, a storage system must also provide fast failure recovery to reduce the window of vulnerability. This work addresses the problem of speeding up the recovery of a single-node failure for general XOR-based erasure codes. We propose a replace recovery algorithm, which uses a hill-climbing technique to search for a fast recovery solution, such that the solution search can be completed within a short time period. We further extend the algorithm to adapt to the scenario where nodes have heterogeneous capabilities (e.g., processing power and transmission bandwidth). We implement our replace recovery algorithm atop a parallelized architecture to demonstrate its feasibility. We conduct experiments on a networked storage system testbed, and show that our replace recovery algorithm uses less recovery time than the conventional recovery approach.
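The search idea can be sketched generically: each lost symbol can be rebuilt from any of several parity equations, and hill climbing over the equation choices minimizes the number of distinct symbols read. The toy code below uses made-up equations, not the paper's algorithm or a real erasure code.

```python
import random

EQUATIONS = {   # lost symbol -> candidate recovery sets over surviving symbols
    "a0": [{"b0", "c0", "row_p0"}, {"b1", "diag_p0"}],
    "a1": [{"b1", "c1", "row_p1"}, {"c0", "diag_p1"}],
}

def reads(choice):
    # Total distinct symbols fetched for one equation choice per lost symbol.
    return len(set().union(*(EQUATIONS[s][i] for s, i in choice.items())))

def hill_climb(lost, iters=100, seed=0):
    rng = random.Random(seed)
    choice = {s: 0 for s in lost}       # start from the row-parity baseline
    best = reads(choice)
    for _ in range(iters):
        trial = dict(choice)
        s = rng.choice(lost)
        trial[s] = rng.randrange(len(EQUATIONS[s]))  # replace one equation
        if reads(trial) <= best:
            choice, best = trial, reads(trial)
    return choice, best

# Baseline (row parity only) reads 6 symbols; the search finds a 4-read plan.
print(hill_climb(["a0", "a1"]))
```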
In modern parallel storage systems (e.g., cloud storage and data centers), it is important to provide data availability guarantees against disk (or storage node) failures via redundancy coding schemes. One such coding scheme is X-code, which is double-fault tolerant while achieving optimal update complexity. When a disk/node fails, recovery must be carried out to reduce the possibility of data unavailability. We propose an X-code-based optimal recovery scheme called minimum-disk-read-recovery (MDRR), which minimizes the number of disk reads for single-disk failure recovery. We make several contributions. First, we show that MDRR provides optimal single-disk failure recovery and reduces disk reads by about 25 percent compared to the conventional recovery approach. Second, we prove that any optimal recovery scheme for X-code cannot balance disk reads among different disks within a single stripe in general cases. Third, we propose an efficient logical encoding scheme that issues balanced disk reads across a group of stripes for any recovery algorithm (including MDRR). Finally, we implement our proposed recovery schemes and conduct extensive testbed experiments in a networked storage system prototype. Experiments indicate that MDRR reduces recovery time by around 20 percent compared to the conventional approach, showing that our theoretical findings are applicable in practice.
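The balancing idea across a group of stripes can be illustrated with a toy rotation: even if the per-stripe optimal recovery reads disks unevenly, rotating the read pattern stripe by stripe evens out the per-disk totals. The pattern below is invented; X-code's actual geometry is more involved.

```python
from collections import Counter

def balanced_reads(pattern, n_disks, n_stripes):
    # pattern: disk indices read in stripe 0; rotate it for each stripe.
    load = Counter()
    for s in range(n_stripes):
        for d in pattern:
            load[(d + s) % n_disks] += 1
    return load

# The per-stripe pattern reads disks 0, 1, 2 unevenly; after rotating
# across 5 stripes, every disk serves exactly 4 reads.
print(balanced_reads(pattern=[0, 0, 1, 2], n_disks=5, n_stripes=5))
```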