Biblio

Found 1897 results

Filters: Keyword is compositionality
2018-01-10
Zhang, Y., Wang, L., You, Y., Yi, L..  2017.  A Remote-Attestation-Based Extended Hash Algorithm for Privacy Protection. 2017 International Conference on Computer Network, Electronic and Automation (ICCNEA). :254–257.

Compared to other remote attestation methods, the binary-based approach is the most direct and complete, but privacy protection becomes an important problem. In this paper, we present an Extended Hash Algorithm (EHA) for privacy protection in binary-based remote attestation. Building on the traditional Merkle hash tree, EHA alters the node-connection algorithm so that the same result is obtained regardless of measurement order. A security key is added when the node-connection calculation is performed, which ensures the security of the values computed at the Merkle nodes. Our analysis shows that remote attestation using EHA offers better privacy protection and execution performance than other methods.
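
The abstract does not spell out EHA's node-connection rule, but the order-independence property it claims can be illustrated with a keyed, commutative combination of child digests. Below is a minimal Python sketch, assuming HMAC-SHA256 as the keyed hash and XOR as a hypothetical order-insensitive combiner; all names are illustrative, not EHA's actual construction.

```python
# Hedged sketch of an order-insensitive, keyed Merkle-style aggregation.
# Child digests are combined by XOR (commutative) before a keyed HMAC,
# so the root is identical for any measurement order.
import hmac, hashlib

def leaf(measurement: bytes) -> bytes:
    return hashlib.sha256(measurement).digest()

def combine(key: bytes, children: list[bytes]) -> bytes:
    acc = bytes(32)
    for d in children:  # XOR is commutative: child order is irrelevant
        acc = bytes(a ^ b for a, b in zip(acc, d))
    return hmac.new(key, acc, hashlib.sha256).digest()

key = b"shared-attestation-key"
a, b, c = leaf(b"module-A"), leaf(b"module-B"), leaf(b"module-C")
assert combine(key, [a, b, c]) == combine(key, [c, a, b])  # same root, any order
```

One caveat on the design choice: XOR combining buys order independence at the cost of some structure (for example, duplicate digests cancel), which is why the keyed outer hash, or a more careful combiner, matters in practice.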

Kogan, Dmitry, Manohar, Nathan, Boneh, Dan.  2017.  T/Key: Second-Factor Authentication From Secure Hash Chains. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :983–999.

Time-based one-time password (TOTP) systems in use today require storing secrets on both the client and the server. As a result, an attack on the server can expose all second factors for all users in the system. We present T/Key, a time-based one-time password system that requires no secrets on the server. Our work modernizes the classic S/Key system and addresses the challenges in making such a system secure and practical. At the heart of our construction is a new lower bound analyzing the hardness of inverting hash chains composed of independent random functions, which formalizes the security of this widely used primitive. Additionally, we develop a near-optimal algorithm for quickly generating the required elements in a hash chain with little memory on the client. We report on our implementation of T/Key as an Android application. T/Key can be used as a replacement for current TOTP systems, and it remains secure in the event of a server-side compromise. The cost, as with S/Key, is that one-time passwords are longer than the standard six characters used in TOTP.
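
To convey how an S/Key-style chain avoids server-side secrets, here is a hedged Python sketch: the server stores only the chain head, the client reveals an earlier chain element for each time slot, and verification hashes forward. T/Key's actual design uses per-position independent functions and client-side checkpointing for fast computation; the index-mixing and parameters below are illustrative only.

```python
# Hedged sketch of a time-based hash-chain second factor in the S/Key style.
import hashlib

K = 1000  # chain length = number of time slots

def h(i: int, x: bytes) -> bytes:
    # position-dependent hash, approximating "independent" functions per link
    return hashlib.sha256(i.to_bytes(4, "big") + x).digest()

def chain(seed: bytes, start: int, steps: int) -> bytes:
    x = seed
    for i in range(start, start + steps):
        x = h(i, x)
    return x

seed = b"client-secret-seed"
server_head = chain(seed, 0, K)        # enrollment: server stores only the head

def otp(t: int) -> bytes:              # client, at time slot t (1..K)
    return chain(seed, 0, K - t)

def verify(t: int, p: bytes) -> bool:  # server: hash forward t steps to the head
    return chain(p, K - t, t) == server_head

assert verify(3, otp(3))
```

A real deployment would also advance the server's stored value to the most recently verified element, so each one-time password is accepted at most once.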

Shen, Fumin, Gao, Xin, Liu, Li, Yang, Yang, Shen, Heng Tao.  2017.  Deep Asymmetric Pairwise Hashing. Proceedings of the 2017 ACM on Multimedia Conference. :1522–1530.

Recently, deep-neural-network-based hashing methods have greatly improved multimedia retrieval performance by simultaneously learning feature representations and binary hash functions. Inspired by the latest advances in asymmetric hashing schemes, in this work we propose a novel Deep Asymmetric Pairwise Hashing (DAPH) approach for supervised hashing. The core idea is that two deep convolutional models are jointly trained such that their output codes for a pair of images reveal the similarity indicated by the images' semantic labels. A pairwise loss is carefully designed to preserve the pairwise similarities between images while incorporating the independence and balance criteria for hash code learning. By taking advantage of the flexibility of asymmetric hash functions, we devise an efficient alternating algorithm to jointly optimize the asymmetric deep hash functions and high-quality binary codes. Experiments on three image benchmarks show that DAPH achieves state-of-the-art performance in large-scale image retrieval.
Andoni, Alexandr, Razenshteyn, Ilya, Nosatzki, Negev Shekel.  2017.  LSH Forest: Practical Algorithms Made Theoretical. Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms. :67–78.

We analyze LSH Forest [BCG05]—a popular heuristic for the nearest neighbor search—and show that a careful yet simple modification of it outperforms "vanilla" LSH algorithms. The end result is the first instance of a simple, practical algorithm that provably leverages data-dependent hashing to improve upon data-oblivious LSH. Here is the entire algorithm for the d-dimensional Hamming space. The LSH Forest, for a given dataset, applies a random permutation to all the d coordinates, and builds a trie on the resulting strings. In our modification, we further augment this trie: for each node, we store a constant number of points close to the mean of the corresponding subset of the dataset, which are compared to any query point reaching that node. The overall data structure is simply several such tries sampled independently. While the new algorithm does not quantitatively improve upon the best data-dependent hashing algorithms from [AR15] (which are known to be optimal), it is significantly simpler, being based on a practical heuristic, and is provably better than the best LSH algorithm for the Hamming space [IM98, HIM12].
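
The trie construction described above is compact enough to sketch. The following Python toy builds several tries over randomly permuted bitstrings and answers queries from the deepest matching prefix; the paper's key addition (storing a few points near each node's subset mean and comparing the query against them) is elided, so this shows only the data-oblivious skeleton.

```python
# Hedged sketch of the permuted-trie skeleton for the Hamming space.
import random

def build_trie(points, perm):
    trie = {"pts": [], "kids": {}}
    for idx, p in enumerate(points):
        node = trie
        for bit in (p[j] for j in perm):
            node = node["kids"].setdefault(bit, {"pts": [], "kids": {}})
            node["pts"].append(idx)           # points under this prefix
    return trie

def query_trie(trie, q, perm):
    node, last = trie, None
    for bit in (q[j] for j in perm):
        if bit not in node["kids"]:
            break
        node = node["kids"][bit]
        if node["pts"]:
            last = node                        # deepest non-empty node reached
    return last["pts"] if last else []

d, n, T = 16, 100, 5                           # dimension, points, number of tries
rng = random.Random(0)
data = [[rng.randint(0, 1) for _ in range(d)] for _ in range(n)]
forest = []
for _ in range(T):
    perm = rng.sample(range(d), d)             # one random permutation per trie
    forest.append((build_trie(data, perm), perm))
q = data[0]
candidates = set().union(*(set(query_trie(t, q, p)) for t, p in forest))
```
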
Alwen, Joel, Blocki, Jeremiah, Harsha, Ben.  2017.  Practical Graphs for Optimal Side-Channel Resistant Memory-Hard Functions. Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. :1001–1017.

A memory-hard function (MHF) f_n with parameter n can be computed in sequential time and space n. Simultaneously, a high amortized parallel area-time complexity (aAT) is incurred per evaluation. In practice, MHFs are used to limit the rate at which an adversary (using a custom computational device) can evaluate a security-sensitive function that still occasionally needs to be evaluated by honest users (using an off-the-shelf general-purpose device). The most prevalent examples of such sensitive functions are key derivation functions (KDFs) and password hashing algorithms, where rate limits help mitigate off-line dictionary attacks. As the honest users' inputs to these functions are often (low-entropy) passwords, special attention is given to a class of side-channel resistant MHFs called iMHFs. Essentially all iMHFs can be viewed as some mode of operation (making n calls to some round function) given by a directed acyclic graph (DAG) with very low indegree. Recently, a combinatorial property of a DAG has been identified (called "depth-robustness") which results in good provable security for an iMHF based on that DAG. Depth-robust DAGs have also proven useful in other cryptographic applications. Unfortunately, until now, all known very depth-robust DAGs have been impractically complicated, and little is known about their exact (i.e., non-asymptotic) depth-robustness, both in theory and in practice. In this work we build and analyze (both formally and empirically) several exceedingly simple, efficiently navigable, practical DAGs for use in iMHFs and other applications. For each DAG we: (i) prove that its depth-robustness is asymptotically maximal; (ii) prove bounds at least three orders of magnitude better on its exact depth-robustness than known bounds for other practical iMHFs; and (iii) implement and empirically evaluate its depth-robustness and aAT against a variety of state-of-the-art (and several new) depth-reduction and low-aAT attacks. We find that, against all attacks, the new DAGs perform significantly better in practice than Argon2i, the most widely deployed iMHF in practice. Along the way we also improve the best known empirical attacks on the aAT of Argon2i by implementing and testing several heuristic versions of a (hitherto purely theoretical) depth-reduction attack. Finally, we demonstrate the practicality of our constructions by modifying the Argon2i code base to use one of the new high-aAT DAGs. Experimental benchmarks on a standard off-the-shelf CPU show that the new modifications do not adversely affect the impressive throughput of Argon2i (despite seemingly enjoying significantly higher aAT).
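
The "mode of operation given by a DAG" is easy to make concrete. Below is a hedged Python sketch of a generic data-independent MHF: labels are computed by n calls to a round function, each depending on the previous label and one extra parent chosen from a fixed pseudorandom schedule (Argon2i-like). The paper's actual contribution, DAGs with provably and practically high depth-robustness, lies precisely in choosing better edge structures than this toy's.

```python
# Hedged sketch of a generic iMHF mode of operation over a low-indegree DAG.
# The parent schedule is fixed by the salt, so memory access is
# input-independent (the "i" in iMHF); edge structure is illustrative.
import hashlib, random

def imhf(password: bytes, salt: bytes, n: int) -> bytes:
    rng = random.Random(salt)                 # fixed schedule, input-independent
    parents = [rng.randrange(max(i, 1)) for i in range(n)]
    labels = [hashlib.sha256(password + salt).digest()]
    for i in range(1, n):
        h = hashlib.sha256()
        h.update(i.to_bytes(8, "big"))
        h.update(labels[i - 1])               # sequential edge (i-1) -> i
        h.update(labels[parents[i]])          # DAG edge parent(i) -> i
        labels.append(h.digest())
    return labels[-1]

tag = imhf(b"hunter2", b"salt", n=1 << 15)    # space grows as n * 32 bytes
```
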
Bai, Jiale, Ni, Bingbing, Wang, Minsi, Shen, Yang, Lai, Hanjiang, Zhang, Chongyang, Mei, Lin, Hu, Chuanping, Yao, Chen.  2017.  Deep Progressive Hashing for Image Retrieval. Proceedings of the 2017 ACM on Multimedia Conference. :208–216.

This paper proposes a novel recursive hashing scheme, in contrast to conventional "one-off" based hashing algorithms. Inspired by human's "nonsalient-to-salient" perception path, the proposed hashing scheme generates a series of binary codes based on progressively expanded salient regions. Built on a recurrent deep network, i.e., LSTM structure, the binary codes generated from later output nodes naturally inherit information aggregated from previously codes while explore novel information from the extended salient region, and therefore it possesses good scalability property. The proposed deep hashing network is trained via minimizing a triplet ranking loss, which is end-to-end trainable. Extensive experimental results on several image retrieval benchmarks demonstrate good performance gain over state-of-the-art image retrieval methods and its scalability property.

Hu, Qinghao, Wu, Jiaxiang, Cheng, Jian, Wu, Lifang, Lu, Hanqing.  2017.  Pseudo Label Based Unsupervised Deep Discriminative Hashing for Image Retrieval. Proceedings of the 2017 ACM on Multimedia Conference. :1584–1590.

Hashing methods play an important role in large scale image retrieval. Traditional hashing methods use hand-crafted features to learn hash functions, which can not capture the high level semantic information. Deep hashing algorithms use deep neural networks to learn feature representation and hash functions simultaneously. Most of these algorithms exploit supervised information to train the deep network. However, supervised information is expensive to obtain. In this paper, we propose a pseudo label based unsupervised deep discriminative hashing algorithm. First, we cluster images via K-means and the cluster labels are treated as pseudo labels. Then we train a deep hashing network with pseudo labels by minimizing the classification loss and quantization loss. Experiments on two datasets demonstrate that our unsupervised deep discriminative hashing method outperforms the state-of-art unsupervised hashing methods.
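
The pipeline in this abstract is straightforward to sketch. The Python toy below clusters stand-in features with K-means, uses the cluster IDs as pseudo labels for a classifier trained with a classification loss, and binarizes the resulting class scores into codes; the paper instead trains a deep network with an added quantization loss, so treat this as the shape of the method, not its implementation.

```python
# Hedged sketch of the pseudo-label pipeline: K-means labels, then a
# supervised model whose score signs serve as binary retrieval codes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 128))                 # stand-in for deep CNN features

pseudo = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
clf = LogisticRegression(max_iter=1000).fit(X, pseudo)  # classification loss

scores = clf.decision_function(X)                # one score per pseudo-class
codes = (scores > 0).astype(np.uint8)            # 10-bit codes; the paper adds a
                                                 # quantization loss before thresholding
```
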

Yu, Ye, Belazzougui, Djamal, Qian, Chen, Zhang, Qin.  2017.  A Fast, Small, and Dynamic Forwarding Information Base. Proceedings of the 2017 ACM SIGMETRICS / International Conference on Measurement and Modeling of Computer Systems. :41–42.

Concise is a forwarding information base (FIB) design that uses very little memory to support fast queries of a large number of dynamic network names or flow IDs. Concise makes use of minimal perfect hashing and the SDN framework to design and implement the data structure, protocols, and system. Experimental results show that Concise uses significantly less memory to achieve faster query speed than existing FIB solutions, and it can be updated very efficiently.
2017-12-28
Henretty, T., Baskaran, M., Ezick, J., Bruns-Smith, D., Simon, T. A..  2017.  A quantitative and qualitative analysis of tensor decompositions on spatiotemporal data. 2017 IEEE High Performance Extreme Computing Conference (HPEC). :1–7.

Summary form only given. Strong light-matter coupling has recently been successfully explored in the GHz and THz [1] range with on-chip platforms. New and intriguing quantum optical phenomena have been predicted in the ultrastrong coupling regime [2], when the coupling strength Ω becomes comparable to the unperturbed frequency of the system ω. We recently proposed a new experimental platform where we couple the inter-Landau-level transition of a high-mobility 2DEG to the highly subwavelength photonic mode of an LC meta-atom [3], showing a very large Ω/ωc = 0.87. Our system benefits from the collective enhancement of the light-matter coupling, which comes from the scaling of the coupling Ω ∝ √n, where n is the number of optically active electrons. In our previous experiments [3] and in the literature [4] this number varies from 10⁴ to 10³ electrons per meta-atom. We now engineer a new cavity, resonant at 290 GHz, with an extremely reduced effective mode surface S_eff = 4 × 10⁻¹⁴ m² (FE simulations, CST), yielding large field enhancements above 1500 and allowing entry into the few-electron (<100) regime. It consists of a complementary metasurface with two very sharp metallic tips separated by a 60 nm gap (Fig. 1(a, b)) on top of a single triangular quantum well. THz-TDS transmission experiments as a function of the applied magnetic field reveal strong anticrossing of the cavity mode with the linear cyclotron dispersion. Measurements for arrays of only 12 cavities are reported in Fig. 1(c). On the top horizontal axis we report the number of electrons occupying the topmost Landau level as a function of the magnetic field. At the anticrossing field of B = 0.73 T we measure approximately 60 electrons ultra-strongly coupled (Ω/ω-

Danesh, W., Rahman, M..  2017.  Linear regression based multi-state logic decomposition approach for efficient hardware implementation. 2017 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH). :153–154.

Multi-state logic presents a promising avenue for more-than-Moore scaling, since efficient implementation of multi-valued logic (MVL) can significantly reduce switching and interconnection requirements and yield significant benefits compared to binary CMOS. So far, traditional approaches lag behind binary CMOS due to: (a) reliance on logic decomposition approaches [4][5][6] that result in many multi-valued minterms [4], complex polynomials [5], and decision diagrams [6], which are difficult to implement, and (b) emulation of multi-valued computation and communication through binary switches and media, which requires data conversion and large circuits. In this paper, we propose a fundamentally different approach to MVL decomposition, merging concepts from data science and nanoelectronics to tackle these problems. (a) First, we perform linear regression on all inputs and outputs of a multi-valued function and find an expression that fits most input and output combinations. For the unmatched combinations, we perform successive regressions to find further linear expressions. Next, using our novel visual pattern matching technique, we find selection conditions, based on inputs and outputs, for each expression. These expressions, together with their associated selection criteria, ensure that the correct output is reached for all possible inputs of a specific function. Our choice of regression model to find linear expressions, coefficients, and conditions allows efficient hardware implementation. We discuss an approach for solving problem (b) and show an example of a quaternary sum circuit. Our estimates show a 65.6% saving in switching components compared with a 4-bit CMOS adder.
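
The successive-regression idea can be made concrete on the paper's own quaternary-sum example. In the hedged numpy sketch below, the two linear regions of (a + b) mod 4 are each fit exactly; note that the paper discovers the regions with its visual pattern-matching technique, whereas here the wrap-around condition a + b >= 4 is supplied by inspection.

```python
# Hedged sketch: piecewise-linear decomposition of the quaternary sum.
import numpy as np
from itertools import product

rows = np.array(list(product(range(4), repeat=2)))   # all quaternary pairs (a, b)
y = (rows[:, 0] + rows[:, 1]) % 4                    # quaternary sum output

A = np.c_[rows, np.ones(len(rows))]                  # design matrix [a, b, 1]
region1 = rows.sum(axis=1) < 4                       # no-wrap-around region
c1, *_ = np.linalg.lstsq(A[region1], y[region1], rcond=None)   # fits a + b
c2, *_ = np.linalg.lstsq(A[~region1], y[~region1], rcond=None) # fits a + b - 4
print(np.round(c1, 6), np.round(c2, 6))              # [1 1 0] and [1 1 -4]
```
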

Rolinger, T. B., Simon, T. A., Krieger, C. D..  2017.  Performance challenges for heterogeneous distributed tensor decompositions. 2017 IEEE High Performance Extreme Computing Conference (HPEC). :1–7.

Tensor decompositions, which are factorizations of multi-dimensional arrays, are becoming increasingly important in large-scale data analytics. A popular tensor decomposition algorithm is Canonical Decomposition/Parallel Factorization using alternating least squares fitting (CP-ALS). Tensors that model real-world applications are often very large and sparse, driving the need for high-performance implementations of decomposition algorithms, such as CP-ALS, that can take advantage of many types of compute resources. In this work we present ReFacTo, a heterogeneous distributed tensor decomposition implementation based on DFacTo, an existing distributed-memory approach to CP-ALS. DFacTo reduces the critical routine of CP-ALS to a series of sparse matrix-vector multiplications (SpMVs). ReFacTo leverages GPUs within a cluster via MPI to perform these SpMVs and uses OpenMP threads to parallelize other routines. We evaluate the performance of ReFacTo when using NVIDIA's GPU-based cuSPARSE library and compare it to an alternative implementation that uses Intel's CPU-based Math Kernel Library (MKL) for the SpMV. Furthermore, we provide a discussion of the performance challenges of heterogeneous distributed tensor decompositions based on the results we observed. We find that on up to 32 nodes, the SpMV of ReFacTo when using MKL is up to 6.8× faster than ReFacTo when using cuSPARSE.
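
The critical kernel the abstract refers to is easy to pin down. The hedged scipy sketch below shows the DFacTo-style reduction in miniature: a sparse matrix in CSR format (the layout cuSPARSE and MKL consume) applied to a block of dense vectors, one SpMV per CP factor column; sizes and density are illustrative.

```python
# Hedged sketch of the SpMV core that dominates DFacTo-style CP-ALS.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n, R = 100_000, 10                            # matrix size and CP rank
M = sp.random(n, n, density=1e-5, format="csr", random_state=0)  # sparse data

V = rng.normal(size=(n, R))                   # dense factor-matrix columns
Y = M @ V                                     # R independent SpMVs: the kernel
                                              # offloaded to cuSPARSE or MKL
```
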

Maslovskiy, A., Kolchigin, N., Legenkiy, M., Antyufeyeva, M..  2017.  Decomposition method for complex target RCS measuring. 2017 IEEE First Ukraine Conference on Electrical and Computer Engineering (UKRCON). :156–159.

This paper proposes a method for monostatic RCS measurement of complex-shaped objects under real conditions. The basic idea is to measure different parts (fragments) of the object separately in the near-field zone. This technique is termed the "decomposition method". After such measurements, the RCS data are summed to obtain the average RCS of the investigated object. This method is far more accessible than full-scale measurements in the far-field zone. In this paper the decomposition method is tested numerically: a model of a complex-shaped object (the T-90 tank) is divided into fragments for a given viewing direction. It is shown that the sum of the fragments' RCS is close to the full-object RCS for the corresponding direction.

El-Khamy, S. E., Korany, N. O., El-Sherif, M. H..  2017.  Correlation based highly secure image hiding in audio signals using wavelet decomposition and chaotic maps hopping for 5G multimedia communications. 2017 XXXIInd General Assembly and Scientific Symposium of the International Union of Radio Science (URSI GASS). :1–3.

Audio steganography is the technique of hiding secret information behind a cover audio file without impairing its quality. Data hiding in audio signals has various applications, such as secret communications and concealing data that may affect the security and safety of governments and personnel, and has potentially important applications in 5G communication systems. This paper proposes an efficient secure steganography scheme based on the high correlation between successive audio samples, similar to differential pulse code modulation (DPCM), where encoding uses the redundancy in sample values to encode the signal at a lower bit rate. The Discrete Wavelet Transform (DWT) of audio samples is used to store hidden data in the least important coefficients of the Haar transform. We exploit the small differences between successive samples, obtained from encoding the cover audio signal's wavelet coefficients, to hide image data without making a noticeable change in the cover audio signal. Because the actual audio samples are not modified directly, the scheme does not perceptually degrade the audio signal and provides higher hiding capacity with lower distortion. To further increase the security of the image hiding process, the image to be hidden is divided into blocks, and the bits of each block are XORed with a different random sequence of logistic maps using a hopping technique. The performance of the proposed algorithm has been evaluated extensively against attacks, and experimental results show that the proposed method achieves good robustness and imperceptibility.
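
Two ingredients of the scheme, the Haar-DWT embedding and the logistic-map XOR, can be sketched briefly. The Python toy below (using the PyWavelets package) quantizes a slice of detail coefficients and writes keystream-encrypted bits into their LSBs; the paper's correlation/DPCM-style coding and chaotic-map hopping are omitted, and all parameters are illustrative.

```python
# Hedged sketch: logistic-map keystream + LSB embedding in Haar detail coeffs.
import numpy as np
import pywt

def logistic_keystream(x0: float, r: float, n: int) -> np.ndarray:
    x, bits = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)                        # chaotic logistic map
        bits[i] = x > 0.5
    return bits

rng = np.random.default_rng(0)
audio = rng.normal(scale=0.1, size=4096)             # stand-in cover signal
secret = rng.integers(0, 2, 64).astype(np.uint8)     # image bits, flattened
enc = secret ^ logistic_keystream(0.3141, 3.99, 64)  # chaotic XOR "encryption"

approx, detail = pywt.dwt(audio, "haar")
q = np.round(detail[:64] / 1e-3).astype(np.int64)    # quantize, step = 1e-3
detail[:64] = ((q & ~1) | enc) * 1e-3                # write bits into LSBs
stego = pywt.idwt(approx, detail, "haar")
# Extraction inverts the same steps with the shared (x0, r) key.
```
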

Vu, Q. H., Ruta, D., Cen, L..  2017.  An ensemble model with hierarchical decomposition and aggregation for highly scalable and robust classification. 2017 Federated Conference on Computer Science and Information Systems (FedCSIS). :149–152.

This paper introduces an ensemble model that solves the binary classification problem by combining basic logistic regression with two recent advanced paradigms: extreme gradient boosted decision trees (xgboost) and deep learning. To obtain the best result when integrating sub-models, we introduce a solution to split and select sets of features for sub-model training. In addition to the ensemble model, we propose a flexible, robust, and highly scalable new scheme for building a composite classifier that tries to simultaneously implement multiple layers of model decomposition and output aggregation to maximally reduce both the bias and variance (spread) components of classification errors. We demonstrate the power of our ensemble model on the problem of predicting the outcome of Hearthstone, a turn-based computer game, based on game state information. The excellent predictive performance of our model was confirmed by second place in the final ranking among 188 competing teams.
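
A minimal version of this ensemble shape can be written with scikit-learn alone. The sketch below stacks logistic regression with gradient-boosted trees (HistGradientBoostingClassifier standing in for xgboost; the deep-learning branch and the paper's feature-subset selection are omitted) and aggregates them with a second-level logistic regression trained on out-of-fold predictions, which is what attacks the variance component.

```python
# Hedged sketch of a two-layer decomposition/aggregation ensemble.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=40, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=500))),
        ("gbt", HistGradientBoostingClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(),     # aggregation layer
    cv=5,                                     # out-of-fold predictions cut variance
)
print(stack.fit(X[:1500], y[:1500]).score(X[1500:], y[1500:]))
```
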

Nair, A. S., Ranganathan, P., Kaabouch, N..  2017.  A constrained topological decomposition method for the next-generation smart grid. 2017 Second International Conference on Electrical, Computer and Communication Technologies (ICECCT). :1–6.

The inherent heterogeneity in the uncertainty of variable generation (e.g., wind, solar, tidal, and wave power) in the electric grid, coupled with the dynamic nature of the distributed architecture of sub-systems and the need for information synchronization, has made resource allocation and monitoring a tremendous challenge for the next-generation smart grid. Unfortunately, the deployment of distributed algorithms across micro grids has been overlooked in the electric grid sector. In particular, centralized methods for managing resources and data may not be sufficient to monitor a complex electric grid. This paper discusses a decentralized constrained decomposition using Linear Programming (LP) that optimizes inter-area transfer across micro grids, reducing the total generation cost for the grid. A test grid of the IEEE 14-bus system is sectioned into two and three areas, and the effect on inter-area transfer is analyzed.
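
The core optimization can be illustrated with a two-area toy. The scipy sketch below minimizes total generation cost subject to per-area power balance and a tie-line limit on the inter-area transfer variable; the costs, loads, and limits are invented for illustration and are not the paper's IEEE 14-bus data.

```python
# Hedged sketch: LP dispatch of two areas coupled by a bounded tie line.
from scipy.optimize import linprog

# variables: [g1, g2, t]  (area generations; t = transfer from area 1 to 2)
c = [20.0, 35.0, 0.0]                    # $/MWh for g1 and g2; transfer is free
A_eq = [[1, 0, -1],                      # area 1 balance: g1 - t = load1
        [0, 1,  1]]                      # area 2 balance: g2 + t = load2
b_eq = [100.0, 80.0]
bounds = [(0, 150), (0, 150), (-50, 50)] # generator and tie-line limits

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print(res.x)   # cheap area 1 exports up to the 50 MW tie-line limit
```
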

Godfrey, L. B., Gashler, M. S..  2017.  Neural decomposition of time-series data. 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :2796–2801.

We present a neural network technique for the analysis and extrapolation of time-series data called Neural Decomposition (ND). Units with a sinusoidal activation function are used to perform a Fourier-like decomposition of training samples into a sum of sinusoids, augmented by units with nonperiodic activation functions to capture linear trends and other nonperiodic components. We show how careful weight initialization can be combined with regularization to form a simple model that generalizes well. Our method generalizes effectively on the Mackey-Glass series, a dataset of unemployment rates as reported by the U.S. Department of Labor Statistics, a time-series of monthly international airline passengers, and an unevenly sampled time-series of oxygen isotope measurements from a cave in north India. We find that ND outperforms popular time-series forecasting techniques including LSTM, echo state networks, (S)ARIMA, and SVR with a radial basis function.
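
The model family is simple to demonstrate. In the hedged numpy sketch below, a series is decomposed into sinusoids plus a linear trend; unlike ND, which learns frequencies, amplitudes, and phases jointly by gradient descent with careful initialization and regularization, this toy fixes the frequencies on the Fourier grid and solves a plain least-squares problem, then extrapolates.

```python
# Hedged sketch: sinusoids + linear trend, fit by least squares, then forecast.
import numpy as np

t = np.arange(200, dtype=float)
signal = 0.05 * t + 2.0 * np.sin(2 * np.pi * t / 25) + \
         np.random.default_rng(0).normal(scale=0.1, size=t.size)

K = 10                                          # number of sinusoid pairs
freqs = 2 * np.pi * np.arange(1, K + 1) / t.size

def features(x):                                # sin/cos basis + trend + bias
    return np.column_stack([np.sin(f * x) for f in freqs] +
                           [np.cos(f * x) for f in freqs] +
                           [x, np.ones_like(x)])

w, *_ = np.linalg.lstsq(features(t), signal, rcond=None)

future = np.arange(200, 260, dtype=float)       # extrapolate past training range
forecast = features(future) @ w
```
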

Kabi, B., Sahadevan, A. S., Pradhan, T..  2017.  An overflow free fixed-point eigenvalue decomposition algorithm: Case study of dimensionality reduction in hyperspectral images. 2017 Conference on Design and Architectures for Signal and Image Processing (DASIP). :1–9.

We consider the problem of enabling robust range estimation of the eigenvalue decomposition (EVD) algorithm for a reliable fixed-point design. The simplicity of fixed-point circuitry has always made it tempting to implement EVD algorithms in fixed-point arithmetic. Working towards an effective fixed-point design, integer bit-width allocation is a significant step that has a crucial impact on accuracy and hardware efficiency. This paper investigates the shortcomings of existing range estimation methods in deriving bounds for the variables of the EVD algorithm. In light of these circumstances, we introduce a range estimation approach based on vector and matrix norm properties, together with a scaling procedure, that retains all the assets of an analytical method. The method derives robust and tight bounds for the variables of the EVD algorithm. The bounds derived with the proposed approach remain the same for any input matrix and are independent of the number of iterations and the size of the problem. Some benchmark hyperspectral data sets have been used to evaluate the efficiency of the proposed technique. It was found that with the proposed range estimation approach, all the variables generated during the computation of the Jacobi EVD are bounded within ±1.
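
The flavor of the norm-based argument can be shown directly. Because Jacobi EVD applies orthogonal similarity transforms, which preserve the Frobenius norm, pre-scaling the input by its Frobenius norm keeps every intermediate matrix entry within ±1 on any input and at any iteration; the numpy sketch below asserts this invariant through one Jacobi sweep. The paper's analysis derives finer per-variable bounds, so this is only the basic idea.

```python
# Hedged sketch: Frobenius pre-scaling bounds all Jacobi EVD entries in [-1, 1].
import numpy as np

def jacobi_sweep(A):
    n = A.shape[0]
    for p in range(n - 1):
        for q in range(p + 1, n):
            if abs(A[p, q]) < 1e-12:
                continue
            theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
            c, s = np.cos(theta), np.sin(theta)
            J = np.eye(n)
            J[p, p] = J[q, q] = c
            J[p, q], J[q, p] = s, -s
            A = J.T @ A @ J                         # orthogonal similarity transform
            assert np.abs(A).max() <= 1.0 + 1e-12   # entries stay in [-1, 1]
    return A

rng = np.random.default_rng(0)
S = rng.normal(size=(6, 6)); S = S + S.T            # random symmetric input
A = S / np.linalg.norm(S, "fro")                    # scaling step: ||A||_F = 1
A = jacobi_sweep(A)                                 # off-diagonals shrink per sweep
```
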

Zamani, S., Nanjundaswamy, T., Rose, K..  2017.  Frequency domain singular value decomposition for efficient spatial audio coding. 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). :126–130.

Advances in virtual reality have generated substantial interest in accurately reproducing and storing spatial audio in the higher order ambisonics (HOA) representation, given its rendering flexibility. Recent standardization for HOA compression adopted a framework wherein HOA data are decomposed into principal components that are then encoded by standard audio coding, i.e., frequency domain quantization and entropy coding to exploit psychoacoustic redundancy. A noted shortcoming of this approach is the occasional mismatch in principal components across blocks, and the resulting suboptimal transitions in the data fed to the audio coder. Instead, we propose a framework where singular value decomposition (SVD) is performed after transformation to the frequency domain via the modified discrete cosine transform (MDCT). This framework not only ensures smooth transition across blocks, but also enables frequency dependent SVD for better energy compaction. Moreover, we introduce a novel noise substitution technique to compensate for suppressed ambient energy in discarded higher order ambisonics channels, which significantly enhances the perceptual quality of the reconstructed HOA signal. Objective and subjective evaluation results provide evidence for the effectiveness of the proposed framework in terms of both higher compression gains and better perceptual quality, compared to existing methods.
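
The reordered pipeline is easy to prototype. In the hedged sketch below, each HOA channel is transformed to the frequency domain and an SVD is taken per frequency band for energy compaction; scipy's DCT stands in for the MDCT (which additionally requires 50%-overlapped windowing), and the channel count, band split, and retained rank are illustrative.

```python
# Hedged sketch: frequency-domain, frequency-dependent SVD of an HOA block.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
n_ch, n_samp = 16, 1024                        # 3rd-order HOA: 16 channels
hoa = rng.normal(size=(n_ch, n_samp))          # stand-in HOA block

spec = dct(hoa, type=2, norm="ortho", axis=1)  # per-channel frequency domain
bands = np.split(spec, 8, axis=1)              # 8 bands of 128 bins each

compressed = []
for band in bands:                             # frequency-dependent SVD
    U, s, Vt = np.linalg.svd(band, full_matrices=False)
    k = 4                                      # keep 4 principal components
    compressed.append((U[:, :k], s[:k, None] * Vt[:k]))  # fed to the core coder
```
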

Guo, J., Li, Z..  2017.  A Mean-Covariance Decomposition Modeling Method for Battery Capacity Prognostics. 2017 International Conference on Sensing, Diagnostics, Prognostics, and Control (SDPC). :549–556.

Lithium-ion batteries usually degrade to an unacceptable capacity level after hundreds or even thousands of cycles. The continuously observed capacity fade data over time, and their internal structure, can be informative for constructing capacity fade models. This paper applies a mean-covariance decomposition modeling method to analyze the capacity fade data. The proposed approach directly examines the variances and correlations in the data of interest and expresses the correlation matrix in hyper-spherical coordinates using angles and trigonometric functions. The proposed method is applied to model and predict key battery performance metrics using testing data under various testing conditions.

Shahbazi, M., Lee, J., Caldwell, D., Tsagarakis, N..  2017.  Inverse dynamics control of bimanual object manipulation using orthogonal decomposition: An analytic approach. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :4791–4796.

In this paper, the well-known problem of codependence between inverse dynamics torque and contact force in bimanual object manipulation is addressed. The common contact constraint, namely rigid grasping, is exploited to decompose the set of dynamics equations into two orthogonally decoupled sets. Subsequently, the inverse dynamics control is formulated in a sub-manifold that is independent of the contact force, leading to analytically correct solutions that do not need to resort to common approximations for the aforementioned codependence problem. The contact force is also analytically computed and, therefore, can be optimally distributed using the torque redundancy. Relying on this prediction is most significant in situations where a force sensor at the end-effector is not present or is faulty. Even when sensory data are available, the predicted force may be used to correct measurements that are typically noisy, or delayed when filtered, resulting in improved robustness. Simulation experiments on a planar bimanual manipulation model are presented.

2017-12-12
Taylor, J. M., Sharif, H. R..  2017.  Security challenges and methods for protecting critical infrastructure cyber-physical systems. 2017 International Conference on Selected Topics in Mobile and Wireless Networking (MoWNeT). :1–6.

Cyber-Physical Systems (CPS) represent a fundamental link between information technology (IT) systems and the devices that control industrial production and maintain critical infrastructure services that support our modern world. Increasingly, the interconnections among CPS and IT systems have created exploitable security vulnerabilities due to a number of factors, including a legacy of weak information security applications on CPS and the tendency of CPS operators to prioritize operational availability at the expense of integrity and confidentiality. As a result, CPS are subject to a number of threats from cyber attackers and cyber-physical attackers, including denial of service and even attacks against the integrity of the data in the system. The effects of these attacks extend beyond mere loss of data or the inability to access information system services: attacks against CPS can cause physical damage in the real world. This paper reviews the challenges of providing information assurance services for CPS that operate critical infrastructure systems and industrial control systems. It examines thorough measures to close integrity and confidentiality gaps in CPS, along with processes to highlight the security risks that remain. The paper also outlines approaches to reduce the overhead and complexity of security methods, and examines novel approaches, including covert communication channels, to increase CPS security.

Thimmaraju, K., Schiff, L., Schmid, S..  2017.  Outsmarting Network Security with SDN Teleportation. 2017 IEEE European Symposium on Security and Privacy (EuroS P). :563–578.

Software-defined networking is considered a promising new paradigm, enabling more reliable and formally verifiable communication networks. However, this paper shows that the separation of the control plane from the data plane, which lies at the heart of Software-Defined Networks (SDNs), introduces a new vulnerability which we call teleportation. An attacker (e.g., a malicious switch in the data plane or a host connected to the network) can use teleportation to transmit information via the control plane and bypass critical network functions in the data plane (e.g., a firewall), and to violate security policies as well as logical and even physical separations. This paper characterizes the design space for teleportation attacks theoretically, and then identifies four different teleportation techniques. We demonstrate and discuss how these techniques can be exploited for different attacks (e.g., exfiltrating confidential data at high rates), and also initiate the discussion of possible countermeasures. Generally, and given today's trend toward more intent-based networking, we believe that our findings are relevant beyond the use cases considered in this paper.

Byrenheid, M., Rossberg, M., Schaefer, G., Dorn, R..  2017.  Covert-channel-resistant congestion control for traffic normalization in uncontrolled networks. 2017 IEEE International Conference on Communications (ICC). :1–7.

Traffic normalization, i.e., enforcing a constant stream of fixed-length packets, is a well-known measure to completely prevent attacks based on traffic analysis. In simple configurations, the enforced traffic rate can be statically configured by a human operator, but in large virtual private networks (VPNs) the traffic pattern of many connections may need to be adjusted whenever the overlay topology or the transport capacity of the underlying infrastructure changes. We propose a rate-based congestion control mechanism for automatic adjustment of traffic patterns that does not leak any information about the actual communication. Overly strong rate throttling in response to packet loss is avoided, as the control mechanism does not change the sending rate immediately when a packet loss is detected. Instead, an estimate of the current packet loss rate is obtained and the sending rate is adjusted proportionally. We evaluate our control scheme based on a measurement study in a local network testbed. The results indicate that the proposed approach avoids network congestion, enables protected TCP flows to achieve an increased goodput, and yet ensures appropriate traffic flow confidentiality.
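
The described control law reduces to a few lines. In the hedged Python sketch below, the sender maintains an exponentially weighted estimate of the loss rate and scales its sending rate proportionally to that estimate's distance from a target, rather than cutting the rate on each loss event; since every packet is fixed-length (padded or dummy), the rate alone determines the visible traffic pattern. The constants and the estimator are illustrative, not the paper's tuned values.

```python
# Hedged sketch: loss-rate-proportional adjustment of a normalized send rate.
def update_rate(rate_pps: float, acked: int, lost: int, loss_est: float,
                alpha: float = 0.1, target_loss: float = 0.01,
                gain: float = 0.5, floor: float = 10.0) -> tuple[float, float]:
    sample = lost / max(acked + lost, 1)                 # loss in this interval
    loss_est = (1 - alpha) * loss_est + alpha * sample   # smoothed estimate
    rate_pps *= 1.0 + gain * (target_loss - loss_est)    # proportional adjustment
    return max(rate_pps, floor), loss_est

rate, est = 1000.0, 0.0
for acked, lost in [(990, 10), (980, 20), (995, 5)]:     # per-interval feedback
    rate, est = update_rate(rate, acked, lost, est)
```
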

Chow, J., Li, X., Mountrouidou, X..  2017.  Raising flags: Detecting covert storage channels using relative entropy. 2017 IEEE International Conference on Intelligence and Security Informatics (ISI). :25–30.

This paper focuses on one type of Covert Storage Channel (CSC) that uses the 6-bit TCP flag header in TCP/IP network packets to transmit secret messages between accomplices. We use relative entropy to characterize the irregularity of network flows in comparison to normal traffic. A normal profile is created from the frequency distribution of TCP flags in regular traffic packets. In detection, the TCP flag frequency distribution of network traffic is computed for each unique IP pair. In order to evaluate the accuracy and efficiency of the proposed method, this study uses real regular traffic data sets as well as CSC messages using coding schemes under assumptions of both clear text, composed from a list of keywords common in Unix systems, and encrypted text. Moreover, smart accomplices may use only those TCP flags that appear in normal traffic. Even then, in detection, the relative entropy can reveal the dissimilarity of a different frequency distribution from the normal profile. We have also used different data processing methods in detection: one method summarizes all the packets for a pair of IP addresses into one flow, and the other uses a sliding window over such a flow to generate multiple frames of packets. The experimental results, displayed as Receiver Operating Characteristic (ROC) curves, show that the method is promising for differentiating normal and CSC traffic packet streams. Furthermore, the delay in raising an alert is analyzed for CSC messages to demonstrate the method's efficiency.
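
The detector's core statistic is compact. The numpy sketch below builds a normal profile from TCP-flag frequencies and scores a window of traffic by its relative entropy (Kullback-Leibler divergence) against that profile; the flag vocabulary, counts, and alert threshold are invented for illustration.

```python
# Hedged sketch: relative entropy of TCP-flag distributions vs. a normal profile.
import numpy as np

FLAGS = ["SYN", "SYN-ACK", "ACK", "PSH-ACK", "FIN-ACK", "RST"]

def distribution(counts, eps=1e-6):
    p = np.asarray(counts, dtype=float) + eps        # smooth empty bins
    return p / p.sum()

normal = distribution([120, 120, 5000, 900, 120, 10])   # training traffic profile

def kl_score(window_counts):
    q = distribution(window_counts)
    return float(np.sum(q * np.log2(q / normal)))    # D(window || normal)

benign = kl_score([12, 12, 480, 85, 12, 1])          # near the profile: low score
csc = kl_score([150, 5, 150, 5, 150, 140])           # flag-encoded data: high score
alert = csc > 0.5                                    # hypothetical threshold
```
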

Lee, S. H., Wang, L., Khisti, A., Wornell, G. W..  2017.  Covert communication with noncausal channel-state information at the transmitter. 2017 IEEE International Symposium on Information Theory (ISIT). :2830–2834.

We consider the problem of covert communication over a state-dependent channel, where the transmitter has non-causal knowledge of the channel states. Here, “covert” means that the probability that a warden on the channel can detect the communication must be small. In contrast with traditional models without noncausal channel-state information at the transmitter, we show that covert communication can be possible with positive rate. We derive closed-form formulas for the maximum achievable covert communication rate (“covert capacity”) in this setting for discrete memoryless channels as well as additive white Gaussian noise channels. We also derive lower bounds on the rate of the secret key that is needed for the transmitter and the receiver to achieve the covert capacity.
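
The covertness constraint this line of work builds on can be stated precisely. The LaTeX fragment below gives the standard formalization from the covert-communication literature; the paper's exact definition may differ in the choice of divergence or constants.

```latex
% Standard covertness constraint (hedged: illustrative, not the paper's exact one).
% The warden's observation Z^n under transmission must be nearly indistinguishable
% from its distribution Q_0 under silence:
\[
  D\!\left(P_{Z^n} \,\middle\|\, Q_0^{\times n}\right) \le \delta .
\]
% By Pinsker's inequality, any detector the warden runs then satisfies
\[
  \alpha + \beta \;\ge\; 1 - \sqrt{\delta/2},
\]
% where $\alpha$ is the false-alarm and $\beta$ the missed-detection probability,
% so a small $\delta$ forces the warden's test to be close to blind guessing.
```
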