Biblio
Compared to other remote attestation methods, the binary-based approach is the most direct and complete one, but privacy protection becomes an important problem. In this paper, we present an Extended Hash Algorithm (EHA) for privacy-preserving remote attestation. Building on the traditional Merkle hash tree, EHA alters the node-connection algorithm so that the same result is obtained regardless of measurement order. A secret key is incorporated into the node-connection calculation, which protects the values computed at the Merkle nodes. Our analysis shows that remote attestation using EHA offers better privacy protection and execution performance than other methods.
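The abstract does not give the exact node-connection rule, so the following is a minimal sketch under two assumptions: leaf values are keyed digests, and children are combined with a commutative operation so the root is independent of measurement order. It illustrates the two claimed properties, not the paper's EHA itself; the key name and combiner are hypothetical.

```python
# Illustrative sketch only (not the paper's EHA): keyed leaf digests combined
# with a commutative XOR step, so any measurement order yields the same root.
import hmac, hashlib

KEY = b"attestation-secret"          # hypothetical shared measurement key

def leaf_digest(measurement: bytes) -> bytes:
    # Keyed digest of a single measurement (e.g., a binary's hash).
    return hmac.new(KEY, measurement, hashlib.sha256).digest()

def combine(digests) -> bytes:
    # Commutative, associative combination (bytewise XOR), then one keyed
    # compression step so the final value stays bound to the key.
    acc = bytes(32)
    for d in digests:
        acc = bytes(a ^ b for a, b in zip(acc, d))
    return hmac.new(KEY, acc, hashlib.sha256).digest()

m = [b"kernel", b"initrd", b"app"]
root1 = combine(leaf_digest(x) for x in m)
root2 = combine(leaf_digest(x) for x in reversed(m))
assert root1 == root2                # same root for any measurement order
```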
Time-based one-time password (TOTP) systems in use today require storing secrets on both the client and the server. As a result, an attack on the server can expose all second factors for all users in the system. We present T/Key, a time-based one-time password system that requires no secrets on the server. Our work modernizes the classic S/Key system and addresses the challenges in making such a system secure and practical. At the heart of our construction is a new lower bound analyzing the hardness of inverting hash chains composed of independent random functions, which formalizes the security of this widely used primitive. Additionally, we develop a near-optimal algorithm for quickly generating the required elements in a hash chain with little memory on the client. We report on our implementation of T/Key as an Android application. T/Key can be used as a replacement for current TOTP systems, and it remains secure in the event of a server-side compromise. The cost, as with S/Key, is that one-time passwords are longer than the standard six characters used in TOTP.
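The core primitive is a hash chain whose head is the only value the server stores. The sketch below is a simplified S/Key-style version: T/Key additionally binds every chain position to a time slot and uses independent hash functions per link, whereas here each link is only domain-separated by its index. Chain length and function names are illustrative.

```python
# Minimal S/Key-style sketch of the hash-chain idea behind T/Key (simplified:
# no time binding; each link is domain-separated only by its index).
import hashlib, os

N = 1000                                   # chain length (number of one-time passwords)

def step(i: int, x: bytes) -> bytes:
    return hashlib.sha256(i.to_bytes(4, "big") + x).digest()

def chain_head(seed: bytes) -> bytes:
    x = seed
    for i in range(N):
        x = step(i, x)
    return x                               # the only value the server stores

def otp(seed: bytes, k: int) -> bytes:
    # k-th one-time password: the value N-k steps from the seed.
    x = seed
    for i in range(N - k):
        x = step(i, x)
    return x

def verify(stored: bytes, candidate: bytes, k: int) -> bool:
    # Hash the candidate forward k steps; it must reach the stored head.
    x = candidate
    for i in range(N - k, N):
        x = step(i, x)
    return x == stored

seed = os.urandom(32)                      # kept only on the client
head = chain_head(seed)                    # registered with the server
assert verify(head, otp(seed, 1), 1)       # server then replaces head with the revealed value
```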
This paper proposes a novel recursive hashing scheme, in contrast to conventional "one-off" hashing algorithms. Inspired by the human "nonsalient-to-salient" perception path, the proposed scheme generates a series of binary codes based on progressively expanded salient regions. Built on a recurrent deep network, i.e., an LSTM structure, the binary codes generated at later output nodes naturally inherit information aggregated from previously generated codes while exploring novel information from the extended salient region, so the scheme possesses good scalability. The proposed deep hashing network is trained by minimizing a triplet ranking loss and is end-to-end trainable. Extensive experimental results on several image retrieval benchmarks demonstrate a clear performance gain over state-of-the-art image retrieval methods, as well as the scheme's scalability.
Hashing methods play an important role in large-scale image retrieval. Traditional hashing methods use hand-crafted features to learn hash functions, which cannot capture high-level semantic information. Deep hashing algorithms use deep neural networks to learn feature representations and hash functions simultaneously. Most of these algorithms exploit supervised information to train the deep network; however, supervised information is expensive to obtain. In this paper, we propose a pseudo-label-based unsupervised deep discriminative hashing algorithm. First, we cluster images via K-means and treat the cluster labels as pseudo labels. Then we train a deep hashing network with the pseudo labels by minimizing a classification loss and a quantization loss. Experiments on two datasets demonstrate that our unsupervised deep discriminative hashing method outperforms state-of-the-art unsupervised hashing methods.
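A hedged sketch of the pseudo-label pipeline described above: cluster features with K-means, treat cluster ids as labels, then train a network whose hash layer is pushed toward binary codes by a quantization penalty. The architecture, loss weights, and the use of random stand-in features are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: K-means pseudo labels + classification loss + quantization loss.
import torch, torch.nn as nn
from sklearn.cluster import KMeans

features = torch.randn(1000, 512)                 # stand-in for deep image features
pseudo = torch.tensor(KMeans(n_clusters=10, n_init=10)
                      .fit_predict(features.numpy()))

code_bits, n_classes, lam = 48, 10, 0.1
net = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, code_bits))
classifier = nn.Linear(code_bits, n_classes)
opt = torch.optim.Adam(list(net.parameters()) + list(classifier.parameters()), 1e-3)

for epoch in range(10):
    h = torch.tanh(net(features))                 # relaxed hash codes in (-1, 1)
    cls_loss = nn.functional.cross_entropy(classifier(h), pseudo)
    quant_loss = ((h - h.sign()) ** 2).mean()     # push codes toward +/-1
    loss = cls_loss + lam * quant_loss
    opt.zero_grad(); loss.backward(); opt.step()

binary_codes = torch.tanh(net(features)).sign()   # final binary hash codes
```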
Strong light-matter coupling has recently been explored successfully in the GHz and THz range [1] with on-chip platforms. New and intriguing quantum optical phenomena have been predicted in the ultrastrong coupling regime [2], when the coupling strength Ω becomes comparable to the unperturbed frequency of the system ω. We recently proposed a new experimental platform in which the inter-Landau-level transition of a high-mobility 2DEG is coupled to the highly subwavelength photonic mode of an LC meta-atom [3], showing a very large Ω/ωc = 0.87. Our system benefits from the collective enhancement of the light-matter coupling, which comes from the scaling of the coupling Ω ∝ √n, where n is the number of optically active electrons. In our previous experiments [3] and in the literature [4] this number ranges from 10⁴ down to 10³ electrons per meta-atom. We now engineer a new cavity, resonant at 290 GHz, with an extremely reduced effective mode surface Seff = 4 × 10⁻¹⁴ m² (FE simulations, CST), yielding large field enhancements above 1500 and allowing us to enter the few-electron (<100) regime. It consists of a complementary metasurface with two very sharp metallic tips separated by a 60 nm gap (Fig. 1(a, b)) on top of a single triangular quantum well. THz-TDS transmission experiments as a function of the applied magnetic field reveal a strong anticrossing of the cavity mode with the linear cyclotron dispersion. Measurements for arrays of only 12 cavities are reported in Fig. 1(c). On the top horizontal axis we report the number of electrons occupying the topmost Landau level as a function of the magnetic field. At the anticrossing field of B = 0.73 T we measure approximately 60 electrons ultrastrongly coupled to the cavity mode.
Multi-state logic presents a promising avenue for more-than-Moore scaling, since efficient implementation of multi-valued logic (MVL) can significantly reduce switching and interconnection requirements and yield significant benefits compared to binary CMOS. So far, traditional approaches lag behind binary CMOS due to: (a) reliance on logic decomposition approaches [4][5][6] that result in many multi-valued minterms [4], complex polynomials [5], and decision diagrams [6], which are difficult to implement, and (b) emulation of multi-valued computation and communication through binary switches and media, which requires data conversion and large circuits. In this paper, we propose a fundamentally different approach to MVL decomposition, merging concepts from data science and nanoelectronics to tackle these problems. (a) First, we perform linear regression on all inputs and outputs of a multi-valued function and find an expression that fits most input-output combinations. For the unmatched combinations, we perform successive regressions to find further linear expressions. Next, using our novel visual pattern matching technique, we find conditions based on the inputs and outputs to select each expression. These expressions, along with their associated selection criteria, ensure that the correct output is reached for every possible input of a given function. Our choice of regression model for finding the linear expressions, coefficients, and conditions allows efficient hardware implementation. We discuss an approach for solving problem (b) and show an example of a quaternary sum circuit. Our estimates show a 65.6% saving in switching components compared with a 4-bit CMOS adder.
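An illustrative sketch of the successive-regression idea (not the paper's exact algorithm, and without the visual pattern matching step): fit a linear expression over a multi-valued truth table, keep the input combinations it reproduces exactly, and recurse on the leftovers. The quaternary sum (a + b) mod 4 is used as a toy target.

```python
# Sketch: successive linear regressions over a multi-valued truth table.
import numpy as np

def successive_regression(X, y, max_exprs=8):
    exprs, remaining = [], np.arange(len(y))
    while remaining.size and len(exprs) < max_exprs:
        A = np.column_stack([X[remaining], np.ones(remaining.size)])
        coef, *_ = np.linalg.lstsq(A, y[remaining], rcond=None)
        pred = np.rint(A @ coef).astype(int)
        hit = pred == y[remaining]
        if not hit.any():                       # degenerate case: fall back to a constant expression
            coef = np.array([0.0, 0.0, float(y[remaining][0])])
            pred = np.rint(A @ coef).astype(int)
            hit = pred == y[remaining]
        exprs.append((coef, remaining[hit]))    # expression + combinations it covers
        remaining = remaining[~hit]
    return exprs

a, b = np.meshgrid(np.arange(4), np.arange(4))
X = np.column_stack([a.ravel(), b.ravel()])
y = (X[:, 0] + X[:, 1]) % 4                     # quaternary sum
for i, (coef, covered) in enumerate(successive_regression(X, y)):
    print(f"expr {i}: coefficients {coef.round(2)} cover {covered.size} input combinations")
```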
Tensor decompositions, which are factorizations of multi-dimensional arrays, are becoming increasingly important in large-scale data analytics. A popular tensor decomposition algorithm is Canonical Decomposition/Parallel Factorization using alternating least squares fitting (CP-ALS). Tensors that model real-world applications are often very large and sparse, driving the need for high-performance implementations of decomposition algorithms, such as CP-ALS, that can take advantage of many types of compute resources. In this work we present ReFacTo, a heterogeneous distributed tensor decomposition implementation based on DFacTo, an existing distributed-memory approach to CP-ALS. DFacTo reduces the critical routine of CP-ALS to a series of sparse matrix-vector multiplications (SpMVs). ReFacTo leverages GPUs within a cluster via MPI to perform these SpMVs and uses OpenMP threads to parallelize other routines. We evaluate the performance of ReFacTo when using NVIDIA's GPU-based cuSPARSE library and compare it to an alternative implementation that uses Intel's CPU-based Math Kernel Library (MKL) for the SpMV. Furthermore, we discuss the performance challenges of heterogeneous distributed tensor decompositions based on the results we observed. We find that, on up to 32 nodes, the SpMV of ReFacTo is up to 6.8× faster with MKL than with cuSPARSE.
In this paper, a method for monostatic RCS measurement of complex-shaped objects in real conditions is proposed. The basic idea of the method is to measure different parts of the object (fragments) separately in the near-field zone; this technique is termed the "decomposition method". After such measurements, the RCS data are summed to obtain the average RCS of the investigated object. This approach is much more accessible than full-scale measurements in the far-field zone. In this paper the decomposition method is tested numerically: a model of a complex-shaped object (the T-90 tank) is divided into fragments for a given viewing direction. It is shown that the sum of the fragments' RCS is close to the RCS of the full object for the corresponding direction.
Audio steganography is the technique of hiding secret information inside a cover audio file without impairing its quality. Data hiding in audio signals has various applications, such as secret communication and concealing data that may affect the security and safety of governments and personnel, and it has potentially important applications in 5G communication systems. This paper proposes an efficient, secure steganography scheme based on the high correlation between successive audio samples. This is similar to differential pulse-code modulation (DPCM), where encoding exploits the redundancy in sample values to encode the signal at a lower bit rate. A Discrete Wavelet Transform (DWT) of the audio samples is used so that hidden data are stored in the least important coefficients of the Haar transform. We exploit the small differences between successive wavelet coefficients of the encoded cover audio signal to hide image data without making a noticeable change in the cover audio; because the actual audio samples are not modified directly, the audio is not perceptually degraded, and the scheme provides higher hiding capacity with lower distortion. To further increase the security of the image-hiding process, the image to be hidden is divided into blocks, and the bits of each block are XORed with a different random logistic-map sequence using a hopping technique. The performance of the proposed algorithm has been evaluated extensively against attacks, and experimental results show that the proposed method achieves good robustness and imperceptibility.
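A hedged sketch of the overall pipeline (Haar DWT, embedding in the less important detail coefficients, and logistic-map scrambling). The exact embedding rule, coefficient selection, and block hopping of the paper are not reproduced; LSB embedding in quantized detail coefficients is an illustrative stand-in, and all parameter values are assumptions.

```python
# Sketch: single-level Haar DWT + LSB embedding in detail coefficients
# + XOR scrambling with a logistic-map key stream.
import numpy as np

def haar_dwt(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)        # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)        # detail ("least important") coefficients
    return a, d

def haar_idwt(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def logistic_bits(n, x0=0.7, r=3.99):
    bits, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        bits.append(1 if x > 0.5 else 0)
    return np.array(bits, dtype=np.uint8)

audio = np.random.randn(1024)                    # stand-in for a cover audio frame
secret = np.random.randint(0, 2, 64).astype(np.uint8)
scrambled = secret ^ logistic_bits(len(secret))  # XOR with chaotic key stream

a, d = haar_dwt(audio)
q = np.round(d[:len(scrambled)] * 100).astype(int)
d_mod = d.copy()
d_mod[:len(scrambled)] = ((q & ~1) | scrambled) / 100.0   # embed bits in the LSB
stego = haar_idwt(a, d_mod)

# Extraction: redo the DWT, read the LSBs, and undo the XOR scrambling.
_, d2 = haar_dwt(stego)
recovered = (np.round(d2[:len(scrambled)] * 100).astype(int) & 1).astype(np.uint8)
assert np.array_equal(recovered ^ logistic_bits(len(secret)), secret)
```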
This paper introduces an ensemble model that solves a binary classification problem by combining basic logistic regression with two recent advanced paradigms: extreme gradient boosted decision trees (xgboost) and deep learning. To obtain the best result when integrating the sub-models, we introduce a method to split and select sets of features for sub-model training. In addition to the ensemble model, we propose a flexible, robust, and highly scalable scheme for building a composite classifier that simultaneously applies multiple layers of model decomposition and output aggregation to maximally reduce both the bias and variance (spread) components of the classification error. We demonstrate the power of our ensemble model on the problem of predicting the outcome of Hearthstone, a turn-based computer game, from game-state information. The strong predictive performance of our model is reflected in its second-place finish in the final ranking among 188 competing teams.
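A hedged sketch of the described stacking idea: sub-models trained on different feature subsets, with their out-of-fold predictions aggregated by a logistic regression meta-learner. GradientBoostingClassifier stands in for xgboost and a small MLP for the deep model; the synthetic data and the feature split are illustrative assumptions.

```python
# Sketch: feature-split sub-models + out-of-fold stacking with a meta-learner.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each sub-model sees its own slice of the feature set.
splits = [slice(0, 10), slice(10, 20), slice(20, 30)]
subs = [LogisticRegression(max_iter=1000),
        GradientBoostingClassifier(),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)]

# Out-of-fold probabilities become the meta-learner's training features.
meta_tr = np.column_stack([
    cross_val_predict(m, X_tr[:, s], y_tr, cv=5, method="predict_proba")[:, 1]
    for m, s in zip(subs, splits)])
meta = LogisticRegression().fit(meta_tr, y_tr)

for m, s in zip(subs, splits):
    m.fit(X_tr[:, s], y_tr)
meta_te = np.column_stack([m.predict_proba(X_te[:, s])[:, 1]
                           for m, s in zip(subs, splits)])
print("ensemble accuracy:", meta.score(meta_te, y_te))
```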
The inherent heterogeneity in the uncertainty of variable generation (e.g., wind, solar, tidal, and wave power) in the electric grid, coupled with the dynamic, distributed architecture of its sub-systems and the need for information synchronization, has made resource allocation and monitoring a tremendous challenge for the next-generation smart grid. Unfortunately, the deployment of distributed algorithms across microgrids has been overlooked in the electric grid sector. In particular, centralized methods for managing resources and data may not be sufficient to monitor a complex electric grid. This paper discusses a decentralized constrained decomposition using Linear Programming (LP) that optimizes inter-area transfers across microgrids and reduces the total generation cost of the grid. A test grid based on the IEEE 14-bus system is sectioned into two and three areas, and the effect on inter-area transfers is analyzed.
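A toy two-area dispatch LP in the spirit of the decomposition described above: each area meets its own demand, and a bounded inter-area transfer lets the cheaper area export power to reduce total generation cost. Costs, demands, and limits are made-up illustrative numbers, not IEEE 14-bus data, and the decentralized solution procedure of the paper is not reproduced.

```python
# Sketch: minimal two-area economic dispatch with a tie-line transfer variable.
from scipy.optimize import linprog

# Decision variables: [g1, g2, t]  (generation in areas 1 and 2, transfer 1 -> 2)
c = [20.0, 35.0, 0.0]                  # $/MWh for g1, g2; the transfer itself is free
A_eq = [[1.0, 0.0, -1.0],              # g1 - t = demand in area 1
        [0.0, 1.0,  1.0]]              # g2 + t = demand in area 2
b_eq = [100.0, 80.0]
bounds = [(0, 150), (0, 150), (-60, 60)]   # generator limits and tie-line limit

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
g1, g2, t = res.x
print(f"g1={g1:.1f} MW, g2={g2:.1f} MW, transfer={t:.1f} MW, cost=${res.fun:.0f}")
```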
We present a neural network technique for the analysis and extrapolation of time-series data called Neural Decomposition (ND). Units with a sinusoidal activation function are used to perform a Fourier-like decomposition of training samples into a sum of sinusoids, augmented by units with nonperiodic activation functions to capture linear trends and other nonperiodic components. We show how careful weight initialization can be combined with regularization to form a simple model that generalizes well. Our method generalizes effectively on the Mackey-Glass series, a dataset of unemployment rates as reported by the U.S. Department of Labor Statistics, a time-series of monthly international airline passengers, and an unevenly sampled time-series of oxygen isotope measurements from a cave in north India. We find that ND outperforms popular time-series forecasting techniques including LSTM, echo state networks, (S)ARIMA, and SVR with a radial basis function.
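A hedged sketch of the ND idea: a sum of sinusoid units whose frequencies are initialized at the Fourier frequencies of the training window, plus a linear-trend unit, all trained jointly by gradient descent. The paper's initialization details, regularization, and additional nonperiodic units are simplified away, and the toy series is an assumption.

```python
# Sketch: trainable sinusoid units + linear trend for decomposition and extrapolation.
import math
import torch

t = torch.linspace(0, 1, 200).unsqueeze(1)                 # normalized time
y = 0.8 * torch.sin(2 * math.pi * 3 * t) + 0.5 * t         # toy series: sinusoid + trend

K = 32                                                      # number of sinusoid units
w = torch.nn.Parameter(2 * math.pi * torch.arange(1, K + 1, dtype=torch.float32))
phi = torch.nn.Parameter(torch.zeros(K))
a = torch.nn.Parameter(torch.zeros(K))                      # sinusoid amplitudes
lin = torch.nn.Parameter(torch.zeros(2))                    # slope and intercept

opt = torch.optim.Adam([w, phi, a, lin], lr=0.01)
for step in range(2000):
    pred = (a * torch.sin(t * w + phi)).sum(dim=1, keepdim=True) \
           + lin[0] * t + lin[1]
    loss = ((pred - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Extrapolate beyond the training window with the learned decomposition.
t_future = torch.linspace(1, 1.5, 100).unsqueeze(1)
forecast = (a * torch.sin(t_future * w + phi)).sum(dim=1, keepdim=True) \
           + lin[0] * t_future + lin[1]
```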
We consider the problem of enabling robust range estimation for the eigenvalue decomposition (EVD) algorithm to support a reliable fixed-point design. The simplicity of fixed-point circuitry has always made it tempting to implement EVD algorithms in fixed-point arithmetic. Working towards an effective fixed-point design, integer bit-width allocation is a significant step that has a crucial impact on accuracy and hardware efficiency. This paper investigates the shortcomings of existing range estimation methods when deriving bounds for the variables of the EVD algorithm. To address these shortcomings, we introduce a range estimation approach based on vector and matrix norm properties, together with a scaling procedure, that retains all the advantages of an analytical method. The method derives robust and tight bounds for the variables of the EVD algorithm. The bounds derived using the proposed approach remain the same for any input matrix and are also independent of the number of iterations and the size of the problem. Benchmark hyperspectral data sets have been used to evaluate the efficiency of the proposed technique. We find that, with the proposed range estimation approach, all variables generated during the computation of the Jacobi EVD are bounded within ±1.
Advances in virtual reality have generated substantial interest in accurately reproducing and storing spatial audio in the higher order ambisonics (HOA) representation, given its rendering flexibility. Recent standardization for HOA compression adopted a framework wherein HOA data are decomposed into principal components that are then encoded by standard audio coding, i.e., frequency domain quantization and entropy coding to exploit psychoacoustic redundancy. A noted shortcoming of this approach is the occasional mismatch in principal components across blocks, and the resulting suboptimal transitions in the data fed to the audio coder. Instead, we propose a framework where singular value decomposition (SVD) is performed after transformation to the frequency domain via the modified discrete cosine transform (MDCT). This framework not only ensures smooth transition across blocks, but also enables frequency dependent SVD for better energy compaction. Moreover, we introduce a novel noise substitution technique to compensate for suppressed ambient energy in discarded higher order ambisonics channels, which significantly enhances the perceptual quality of the reconstructed HOA signal. Objective and subjective evaluation results provide evidence for the effectiveness of the proposed framework in terms of both higher compression gains and better perceptual quality, compared to existing methods.
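A hedged sketch of the frequency-domain decomposition idea: each block of HOA channels is transformed to the frequency domain (a DCT is used here as a stand-in for the MDCT of the proposed framework), and an SVD is applied per frequency band so the retained principal components adapt to each band. Block size, the band split, the rank kept, and the random stand-in data are illustrative assumptions; the noise-substitution step is not shown.

```python
# Sketch: per-band SVD of a frequency-domain HOA block for energy compaction.
import numpy as np
from scipy.fft import dct

n_channels, block = 16, 1024                        # e.g., 3rd-order HOA, one block
hoa_block = np.random.randn(n_channels, block)      # stand-in for HOA time samples

spec = dct(hoa_block, type=2, norm="ortho", axis=1) # frequency-domain block
bands = np.array_split(np.arange(block), 8)         # 8 frequency bands
kept = 4                                            # components retained per band

compressed, basis = [], []
for b in bands:
    U, s, Vt = np.linalg.svd(spec[:, b], full_matrices=False)
    basis.append(U[:, :kept])                        # spatial mixing per band
    compressed.append(np.diag(s[:kept]) @ Vt[:kept]) # reduced transport signals

# Reconstruction: invert the per-band truncation, then the transform.
recon_spec = np.hstack([B @ C for B, C in zip(basis, compressed)])
recon = dct(recon_spec, type=3, norm="ortho", axis=1)   # inverse of the type-2 DCT
print("relative error:", np.linalg.norm(recon - hoa_block) / np.linalg.norm(hoa_block))
```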
Lithium-ion batteries usually degrade to an unacceptable capacity level after hundreds or even thousands of cycles. The continuously observed capacity fade data over time, and their internal structure, can be informative for constructing capacity fade models. This paper applies a mean-covariance decomposition modeling method to analyze capacity fade data. The proposed approach directly examines the variances and correlations in the data of interest and expresses the correlation matrix in hyper-spherical coordinates using angles and trigonometric functions. The method is applied to model and predict key battery performance metrics using testing data obtained under various testing conditions.
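A hedged sketch of the hyper-spherical idea: a correlation matrix is encoded by angles through a trigonometric Cholesky-type factor, so any angle values yield a valid (positive semi-definite, unit-diagonal) correlation matrix. The regression structure the paper places on the angles and the mean model are not reproduced; the angle values below are arbitrary.

```python
# Sketch: hyper-spherical (angle) parametrization of a correlation matrix.
import numpy as np

def correlation_from_angles(theta):
    # theta: strictly lower-triangular matrix of angles in (0, pi).
    p = theta.shape[0]
    B = np.zeros((p, p))
    B[0, 0] = 1.0
    for i in range(1, p):
        prod = 1.0
        for j in range(i):
            B[i, j] = np.cos(theta[i, j]) * prod
            prod *= np.sin(theta[i, j])
        B[i, i] = prod
    return B @ B.T                        # rows of B have unit norm by construction

p = 4
theta = np.tril(np.full((p, p), np.pi / 3), k=-1)   # illustrative common angle
R = correlation_from_angles(theta)
print(np.round(R, 3))
print("unit diagonal:", bool(np.allclose(np.diag(R), 1.0)))
```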
In this paper, the well-known problem of codependence between inverse dynamics torque and contact force in bimanual object manipulation is addressed. The common contact constraint, namely rigid grasping, is exploited to decompose the set of dynamics equations into two orthogonally decoupled sets. Subsequently, the inverse dynamics control is formulated in a sub-manifold that is independent of the contact force, leading to analytically correct solutions that do not need to resort to common approximations for the aforementioned codependence problem. The contact force is also computed analytically and can therefore be optimally distributed using the torque redundancy. Relying on this prediction is most significant in situations where a force sensor at the end-effector is not present or is faulty. Even when sensory data are available, the predicted force may be used to correct measurements that are typically noisy, or delayed when filtered, resulting in improved robustness. Simulation experiments on a planar bimanual manipulation model are presented.
Cyber-Physical Systems (CPS) represent a fundamental link between information technology (IT) systems and the devices that control industrial production and maintain critical infrastructure services that support our modern world. Increasingly, the interconnections among CPS and IT systems have created exploitable security vulnerabilities due to a number of factors, including a legacy of weak information security applications on CPS and the tendency of CPS operators to prioritize operational availability at the expense of integrity and confidentiality. As a result, CPS are subject to a number of threats from cyber attackers and cyber-physical attackers, including denial of service and even attacks against the integrity of the data in the system. The effects of these attacks extend beyond mere loss of data or the inability to access information system services: attacks against CPS can cause physical damage in the real world. This paper reviews the challenges of providing information assurance services for CPS that operate critical infrastructure systems and industrial control systems. These include thorough measures to close integrity and confidentiality gaps in CPS and processes to highlight the security risks that remain. This paper also outlines approaches to reduce the overhead and complexity of security methods, and examines novel approaches, including covert communication channels, to increase CPS security.
Software-defined networking is considered a promising new paradigm, enabling more reliable and formally verifiable communication networks. However, this paper shows that the separation of the control plane from the data plane, which lies at the heart of Software-Defined Networks (SDNs), introduces a new vulnerability which we call teleportation. An attacker (e.g., a malicious switch in the data plane or a host connected to the network) can use teleportation to transmit information via the control plane and bypass critical network functions in the data plane (e.g., a firewall), and to violate security policies as well as logical and even physical separations. This paper characterizes the design space for teleportation attacks theoretically, and then identifies four different teleportation techniques. We demonstrate and discuss how these techniques can be exploited for different attacks (e.g., exfiltrating confidential data at high rates), and also initiate the discussion of possible countermeasures. Generally, and given today's trend toward more intent-based networking, we believe that our findings are relevant beyond the use cases considered in this paper.
Traffic normalization, i.e., enforcing a constant stream of fixed-length packets, is a well-known measure to completely prevent attacks based on traffic analysis. In simple configurations, the enforced traffic rate can be statically configured by a human operator, but in large virtual private networks (VPNs) the traffic pattern of many connections may need to be adjusted whenever the overlay topology or the transport capacity of the underlying infrastructure changes. We propose a rate-based congestion control mechanism for automatic adjustment of traffic patterns that does not leak any information about the actual communication. Overly strong rate throttling in response to packet loss is avoided, as the control mechanism does not change the sending rate immediately when a packet loss is detected. Instead, an estimate of the current packet loss rate is obtained and the sending rate is adjusted proportionally. We evaluate our control scheme in a measurement study in a local network testbed. The results indicate that the proposed approach avoids network congestion, enables protected TCP flows to achieve increased goodput, and still ensures appropriate traffic flow confidentiality.
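A hedged sketch of the control idea described above: instead of throttling on every loss event, the sender periodically estimates the packet-loss rate over a control interval and scales the constant (normalized) sending rate proportionally. Gains, bounds, the target loss rate, and the update interval are illustrative assumptions; the paper's exact controller is not reproduced.

```python
# Sketch: loss-rate-proportional adjustment of a normalized sending rate.
class LossProportionalRateController:
    def __init__(self, rate_pps=1000.0, target_loss=0.01, gain=0.5,
                 min_rate=100.0, max_rate=10000.0):
        self.rate = rate_pps            # current normalized sending rate (packets/s)
        self.target = target_loss       # loss rate the controller tolerates
        self.gain = gain
        self.min_rate, self.max_rate = min_rate, max_rate
        self.sent = self.lost = 0

    def record(self, sent, lost):
        self.sent += sent
        self.lost += lost

    def update(self):
        # Called once per control interval, not on every loss event.
        loss_rate = self.lost / self.sent if self.sent else 0.0
        # Proportional adjustment toward the target loss rate.
        self.rate *= 1.0 - self.gain * (loss_rate - self.target)
        self.rate = max(self.min_rate, min(self.max_rate, self.rate))
        self.sent = self.lost = 0
        return self.rate

ctrl = LossProportionalRateController()
ctrl.record(sent=1000, lost=50)         # 5% loss in the last interval
print("new rate:", ctrl.update())       # rate reduced proportionally, not halved
```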
This paper focuses on one type of Covert Storage Channel (CSC) that uses the 6-bit TCP flag header in TCP/IP network packets to transmit secret messages between accomplices. We use relative entropy to characterize the irregularity of network flows in comparison to normal traffic. A normal profile is created from the frequency distribution of TCP flags in regular traffic packets. In detection, the TCP flag frequency distribution of network traffic is computed for each unique IP pair. To evaluate the accuracy and efficiency of the proposed method, this study uses real regular traffic data sets as well as CSC messages encoded under assumptions of both clear text, composed of a list of keywords common in Unix systems, and encrypted text. Moreover, smart accomplices may use only those TCP flags that appear in normal traffic; even then, the relative entropy can reveal the dissimilarity of such a frequency distribution from the normal profile. We have also used two data processing methods in detection: one summarizes all the packets for a pair of IP addresses into one flow, and the other uses a sliding window over such a flow to generate multiple frames of packets. The experimental results, displayed as Receiver Operating Characteristic (ROC) curves, show that the method is promising for differentiating normal and CSC traffic packet streams. Furthermore, the delay in raising an alert is analyzed for CSC messages to show the method's efficiency.
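A hedged sketch of the detection statistic: build a frequency distribution over the 64 possible TCP flag combinations per IP pair (or per sliding window) and score it with relative entropy (KL divergence) against a normal-traffic profile. The smoothing constant, threshold, and example flag mixtures are illustrative assumptions.

```python
# Sketch: relative entropy of TCP flag distributions against a normal profile.
import math
from collections import Counter

def flag_distribution(flags, alpha=1e-3):
    # flags: observed 6-bit TCP flag values (0..63); Laplace-style smoothing
    # keeps the KL divergence finite for unseen combinations.
    counts = Counter(flags)
    total = len(flags) + alpha * 64
    return [(counts.get(f, 0) + alpha) / total for f in range(64)]

def relative_entropy(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

normal_profile = flag_distribution([0x18] * 900 + [0x10] * 80 + [0x02, 0x12, 0x11] * 6)
suspect_flow   = flag_distribution([0x29, 0x07, 0x3F] * 100)   # odd flag mixtures
benign_flow    = flag_distribution([0x18] * 450 + [0x10] * 50)

threshold = 0.5                                                # illustrative
for name, flow in [("suspect", suspect_flow), ("benign", benign_flow)]:
    d = relative_entropy(flow, normal_profile)
    print(f"{name}: KL={d:.3f} -> {'alert' if d > threshold else 'ok'}")
```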
We consider the problem of covert communication over a state-dependent channel, where the transmitter has non-causal knowledge of the channel states. Here, “covert” means that the probability that a warden on the channel can detect the communication must be small. In contrast with traditional models without noncausal channel-state information at the transmitter, we show that covert communication can be possible with positive rate. We derive closed-form formulas for the maximum achievable covert communication rate (“covert capacity”) in this setting for discrete memoryless channels as well as additive white Gaussian noise channels. We also derive lower bounds on the rate of the secret key that is needed for the transmitter and the receiver to achieve the covert capacity.