Biblio
Secure information flow guarantees the secrecy and integrity of data, preventing an attacker from learning secret information (secrecy) or injecting untrusted information (integrity). Covert channels can be used to subvert these guarantees: timing and termination channels, for example, can either intentionally or inadvertently violate them by modifying the timing or termination behavior of a program based on secret or untrusted data. Attacks using these covert channels have been published and are known to work in practice; as techniques to prevent non-covert channels become increasingly practical, covert channels are likely to become even more attractive for attackers to exploit. The goal of this paper is to understand the subtleties of timing- and termination-sensitive noninterference, explore the space of possible strategies for enforcing noninterference guarantees, and formalize the exact guarantees that these strategies can enforce. As a result of this effort, we develop a novel strategy that provides stronger security guarantees than existing work, and we clarify claims in existing work about what guarantees can be made.
Information fusion deals with the integration and merging of data and information from multiple (heterogeneous) sources. In many cases, the information that needs to be fused has a security classification. The result of the fusion process is then by necessity restricted according to the strictest security classification among the inputs. This has severe drawbacks and limits the possible dissemination of the fusion results. It leads to decreased situational awareness: the organization holds information that would enable a better situation picture, but since parts of the information are restricted, it is not possible to distribute the most accurate situational information. In this paper, we take steps towards defining fusion and data mining processes that can be used even when not all of the underlying data can be disseminated. The method we propose here could be used to produce a classifier from which all sensitive information has been removed and for which it can be shown that an antagonist cannot, even in principle, obtain knowledge about the classified information from the classifier or the situation picture.
This paper discusses the detection of hardware Trojans (HTs) by their breaking of symmetries within integrated circuits (ICs), as measured by path delays. Typically, path delay or side channel methods rely on comparisons to a golden, or trusted, sample. However, golden standards are affected by inter- and intra-die variations, which limit the confidence in such comparisons. Symmetry offers a way to detect modifications to an IC with increased confidence by confirming that subcircuits remain consistent with one another as originally designed. The difference in delays from a given path to a set of symmetric paths will be the same unless an inserted HT breaks the symmetry. Symmetry can exist naturally in ICs or be added artificially. We describe methods to find symmetric paths and measure path delays against them, as well as the advantages and disadvantages of this method. We discuss results from benchmark circuit examples demonstrating the detection of hardware Trojans.
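As a hedged illustration of the comparison described above (the symbols are ours, not the paper's notation): for a path p and a symmetric partner path q with measured delays t_p and t_q, the pairwise difference

$$ \Delta_{p,q} = t_p - t_q $$

is, by design, the same across all symmetric partners q of p; a measured difference that deviates from this common value by more than the expected process-variation margin signals a broken symmetry and hence a possible HT.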
With the growth of the Internet, web applications have become very popular among users. However, the presence of security vulnerabilities in the source code of these applications is rapidly raising the cybercrime rate. These vulnerabilities must be detected and mitigated before they can be exploited in the execution environment. Recently, the Open Web Application Security Project (OWASP) and the Common Weakness Enumeration (CWE) reported Cross-Site Scripting (XSS) as one of the most serious vulnerabilities in web applications. Though many vulnerability detection approaches have been proposed in the past, existing approaches have limitations in terms of false positives and false negatives. This paper proposes a context-sensitive approach based on static taint analysis and pattern matching techniques to detect and mitigate XSS vulnerabilities in the source code of web applications. The proposed approach has been implemented in a prototype tool and evaluated on a public data set of 9408 samples. Experimental results show that the tool based on the proposed approach outperforms popular existing open-source tools in detecting XSS vulnerabilities.
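For concreteness, here is a minimal, hypothetical example of the source-to-sink taint flow such an analysis looks for, written as a Flask handler rather than in any language or sample from the paper's data set:

```python
# Hypothetical illustration of a tainted source-to-sink flow (reflected XSS)
# and its sanitized counterpart; not an example from the paper's evaluation.
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/greet")
def greet():
    name = request.args.get("name", "")      # taint source: user-controlled input
    return f"<h1>Hello {name}</h1>"          # sink: reflected into HTML unescaped -> XSS

@app.route("/greet-safe")
def greet_safe():
    name = request.args.get("name", "")
    return f"<h1>Hello {escape(name)}</h1>"  # sanitizer breaks the tainted flow
```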
The manufacturing process of electrical machines influences the geometric dimensions and material properties, e.g. the yoke thickness. These influences appear as statistical variation within manufacturing tolerances. The effect of these tolerances and their potential impact on the mechanical torque output has not yet been fully studied. This paper conducts a sensitivity analysis for geometric and material parameters. In the general approach, these parameters are varied uniformly within a range of 10 %. Two-dimensional finite element analysis is used to simulate the influences at three characteristic operating points. The studied object is an internal permanent magnet machine in the 100 kW range used for hybrid drive applications. The results show a significant dependency on the rotational speed. The general validity is studied by varying the boundary conditions and by examining two further machine designs; this procedure yields qualitatively matching results with only small quantitative deviations. To assess the impact of the manufacturing process, realistic tolerance ranges are used. The investigation identifies the air gap and the magnet remanence induction as the main parameters for potential torque fluctuation.
In this paper, we introduce a fast, secure and robust scheme for digital image encryption using the Lorenz chaotic system, a 4D hyper-chaotic system and the Secure Hash Algorithm SHA-1. The encryption process consists of three layers: a sub-vector confusion layer and two diffusion layers. In the first layer we divide the plain image into sub-vectors; the position of each one is then changed using the chaotic index sequence generated with the Lorenz chaotic attractor, while the diffusion layers use the hyper-chaotic system to modify the pixel values with an XOR operation. The results of the security analysis, including statistical tests, differential attacks, key space, key sensitivity, information entropy and running time, are presented and compared with recent encryption schemes, showing improvements in both security level and speed.
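A minimal sketch of the confusion/XOR-diffusion pattern described above, with a logistic map standing in for the Lorenz and hyper-chaotic generators (this is an illustrative assumption, not the paper's scheme, which also involves SHA-1 key derivation):

```python
# Illustrative confusion + XOR diffusion; a logistic map replaces the paper's
# Lorenz and 4D hyper-chaotic generators for brevity.
import numpy as np

def keystream(x0, r, n):
    """Iterate a logistic map to produce n chaotic values in (0, 1)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(image, x0=0.3141592, r=3.9999):
    flat = image.flatten().astype(np.uint8)
    n = flat.size
    ks = keystream(x0, r, 2 * n)
    perm = np.argsort(ks[:n])                 # confusion: chaotic permutation of positions
    pad = (ks[n:] * 255).astype(np.uint8)     # diffusion keystream bytes
    return (flat[perm] ^ pad).reshape(image.shape)

def decrypt(cipher, x0=0.3141592, r=3.9999):
    flat = cipher.flatten().astype(np.uint8)
    n = flat.size
    ks = keystream(x0, r, 2 * n)
    perm = np.argsort(ks[:n])
    pad = (ks[n:] * 255).astype(np.uint8)
    plain = np.empty_like(flat)
    plain[perm] = flat ^ pad                  # undo diffusion, then invert the permutation
    return plain.reshape(cipher.shape)
```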
Enterprises usually provide strong controls to prevent cyberattacks and inadvertent leakage of data to external entities. However, where employees and data scientists have legitimate access to analyze and derive insights from the data, there are insufficient controls, and employees are usually permitted access to all information about the customers of the enterprise, including sensitive and private information. Though it is important to be able to identify useful patterns among one's customers for better customization and service, customers' privacy must not be sacrificed to do so. We propose an alternative: a framework that allows privacy-preserving data analytics over big data. In this paper, we present an efficient and scalable framework for Apache Spark, a cluster computing framework, that provides strong privacy guarantees for users even in the presence of an informed adversary, while still providing high utility for analysts. The framework, titled Shade, includes two mechanisms: SparkLAP, which provides Laplacian perturbation based on a user's query, and SparkSAM, which uses the contents of the database itself to calculate the perturbation. We show that the performance of Shade is substantially better than that of earlier differential privacy systems without loss of accuracy, particularly when run on datasets small enough to fit in memory, and find that SparkSAM can even exceed the performance of an identical non-private Spark query.
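A generic Laplace-mechanism sketch of the kind of perturbation SparkLAP applies to a query result (an illustration under standard differential-privacy assumptions, not Shade's actual implementation):

```python
# Laplace mechanism: true answer plus noise scaled to sensitivity/epsilon.
import numpy as np

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Return a differentially private answer for a numeric query."""
    rng = rng or np.random.default_rng()
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A counting query has sensitivity 1: adding or removing one customer changes
# the count by at most 1. The numbers below are placeholders.
private_count = laplace_mechanism(true_answer=12873, sensitivity=1.0, epsilon=0.5)
```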
This paper presents a new fractional-order hidden strange attractor generated by a chaotic system without equilibria. The proposed non-equilibrium fractional-order chaotic system (FOCS) is asymmetric, dissimilar and topologically inequivalent to typical chaotic systems, and it challenges the conventional notion that the presence of unstable equilibria is mandatory to ensure the existence of chaos. The new fractional-order model displays rich bifurcation behavior, undergoing a period-doubling route to chaos with the fractional order α as the bifurcation parameter. The dynamics of the hidden attractor are studied with the aid of phase portraits, sensitivity to initial conditions, the fractal Lyapunov dimension, the spectrum of maximum Lyapunov exponents and bifurcation analysis. The minimum commensurate dimension required to display chaos is determined. With a view to utilizing the system in chaos-based cryptology and information coding, a synchronisation control scheme is designed. Finally, the theoretical analyses are validated by numerical simulation results, which are in good agreement.
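For background (standard definitions only, not the paper's specific equations): a commensurate FOCS of dimension n takes the form

$$ {}^{C}D_t^{\alpha} x_i(t) = f_i\big(x_1(t),\dots,x_n(t)\big), \qquad i = 1,\dots,n,\quad 0 < \alpha \le 1, $$

where the Caputo fractional derivative for 0 < α < 1 is

$$ {}^{C}D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^{t} \frac{f'(\tau)}{(t-\tau)^{\alpha}}\, d\tau, $$

and α doubles as the bifurcation parameter in the study above.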
This paper presents a new approach to dynamic curtailment of renewable energy sources that guarantees fulfillment of the (n-1) security criterion of the system. It is therefore applicable to high-voltage distribution grids and complies with their planning guidelines. The proposed dynamic curtailment method specifically reduces the power feed-in of renewable energy sources to a level at which no thermal constraint is exceeded in the (n-1) state of the system. Based on AC distribution factors, a new formulation of line outage distribution factors is presented that is applicable to outages of a single line or of multi-segment lines. The proposed method is tested in a planning study of a real German high-voltage distribution grid. The results show that no thermal loading limits are exceeded when the dynamic curtailment approach is used. A significant reduction in grid reinforcement can thus be achieved at the cost of curtailing only a small amount of annual energy from renewable energy sources.
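For reference, the classical DC-approximation form of a line outage distribution factor is shown below; the paper derives an AC-based counterpart and an extension to multi-segment outages, which are not reproduced here:

$$ \Delta P_{l} = \mathrm{LODF}_{l,k}\, P_{k}^{(0)}, \qquad \mathrm{LODF}_{l,k} = \frac{\mathrm{PTDF}_{l,k}}{1 - \mathrm{PTDF}_{k,k}}, $$

where $P_{k}^{(0)}$ is the pre-outage flow on the outaged line k and $\mathrm{PTDF}_{l,k}$ is the power transfer distribution factor of line l for an injection across line k.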
This paper investigates the privacy-preserving problem of distributed consensus-based energy management considering both generation units and responsive demands in the smart grid. First, we show that without any privacy-preserving strategy, private consumer information, including electricity consumption and the sensitivity of that consumption to the electricity price, can be disclosed. Then, we propose a privacy-preserving algorithm that protects this information by designing secret functions and adding zero-sum, exponentially decreasing noises. We also prove that the proposed algorithm preserves privacy while leaving the optimality of the final state and the convergence performance unchanged. Extensive simulations validate the theoretical results and demonstrate the effectiveness of the proposed algorithm.
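A minimal sketch of the zero-sum, exponentially decreasing noise idea (the construction, the parameter names and the decay rate are illustrative assumptions; the paper's secret functions are not reproduced):

```python
# Illustrative zero-sum, geometrically decaying noise for consensus iterations.
import numpy as np

def zero_sum_noise(n_agents, iteration, rho=0.9, scale=1.0, rng=None):
    """Draw one noise value per agent such that the noises sum exactly to zero
    and their magnitude decays with the iteration index, so the consensus
    optimum and convergence are left unchanged."""
    rng = rng or np.random.default_rng()
    raw = rng.normal(0.0, scale * rho**iteration, size=n_agents)
    return raw - raw.mean()   # subtracting the mean makes the sum exactly zero
```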
Analytics in big data is maturing and moving towards mass adoption. The emergence of analytics increases the need for innovative tools and methodologies to protect data against privacy violation. Many data anonymization methods have been proposed to provide some degree of privacy protection by applying data suppression and other distortion techniques. However, currently available methods suffer from poor scalability and performance and from a lack of framework standardization, and they are unable to cope with the massive scale of data processing. Some of these methods were proposed specifically for the MapReduce framework to operate on big data; however, they still rely on conventional data management approaches, so no remarkable performance gains were achieved. We introduce a framework that operates in the MapReduce environment to benefit from its advantages, as well as from those of the Hadoop ecosystem. Our framework provides granular user access that can be tuned to different authorization levels. The proposed solution provides fine-grained alteration based on the user's authorization level for accessing the MapReduce domain for analytics. Using well-developed role-based access control approaches, the framework is capable of assigning roles to users and mapping them to relevant data attributes.
High detection sensitivity in the presence of process variation is a key challenge for hardware Trojan detection through side channel analysis. In this work, we present an efficient Trojan detection approach in the presence of elevated process variations. The detection sensitivity is sharpened by 1) comparing power levels from neighboring regions within the same chip so that the two measured values exhibit a common trend in terms of process variation, and 2) generating test patterns that toggle each cell multiple times to increase Trojan activation probability. Detection sensitivity is analyzed and its effectiveness demonstrated by means of RPD (relative power difference). We evaluate our approach on ISCAS'89 and ITC'99 benchmarks and the AES-128 circuit for both combinational and sequential type Trojans. High detection sensitivity is demonstrated by analysis on RPD under a variety of process variation levels and experiments for Trojan inserted circuits.
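The abstract does not define RPD; one common way to express such a relative power difference between measurements with and without a Trojan (our assumption, not necessarily the paper's exact metric) is

$$ \mathrm{RPD} = \frac{\left|P_{\text{Trojan}} - P_{\text{genuine}}\right|}{P_{\text{genuine}}}, $$

so a larger RPD at a fixed process-variation level corresponds to an easier-to-detect Trojan.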
Vectorless integrity verification is becoming increasingly critical to the robust design of nanoscale power delivery networks (PDNs). To dramatically improve the efficiency and capability of vectorless integrity verification, this paper introduces a scalable multilevel verification framework that leverages a hierarchy of almost-linear-sized spectral power grid sparsifiers, which well preserve the effective resistances between nodes, together with a recent graph-theoretic algebraic multigrid (AMG) algorithmic framework. As a result, vectorless verification solutions obtained on coarse-level problems can effectively help find the solution of the original problem. Extensive experimental results show that the proposed vectorless verification framework can efficiently and accurately obtain worst-case scenarios even in very large power grid designs.
In order to investigate the relationships among the applied DC current, the excitation source, the excitation loop current, the sensitivity and the induced voltage of the detecting winding, and their effect on the performance of a magnetic modulator, this paper measures the initial permeability, maximum permeability, saturation magnetic induction intensity, remanent magnetic induction intensity, coercivity, saturation magnetic field intensity, magnetization curve, permeability curve and hysteresis loop of the modulator's main core, 1J85 permalloy, using the ballistic method. On this foundation, the curve fitting tool of MATLAB is employed and a multiple regression method is adopted to compare and analyze the sum of squares due to error (SSE), the coefficient of determination (R-square), the degrees-of-freedom adjusted coefficient of determination (Adjusted R-square), and the root mean squared error (RMSE) of the fitting results. Finally, a mathematical model of the B-H curve is established as the sum of an arc-hyperbolic sine function and a polynomial.
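Based on the description above, the fitted model has the form (the coefficient names are ours, introduced only for illustration):

$$ B(H) = a\,\operatorname{arsinh}(bH) + \sum_{i=0}^{n} c_i H^{i}, $$

with the coefficients a, b and c_i obtained from the multiple-regression fit and assessed via SSE, R-square, Adjusted R-square and RMSE.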
Poisoning attacks, in which an adversary misleads the learning process by manipulating its training set, significantly affect the performance of classifiers in security applications. This paper proposes a robust learning method which reduces the influence of attack samples on learning. The sensitivity, defined as the fluctuation of the output under small perturbations of the input, in the Localized Generalization Error Model (L-GEM) is measured for each training sample. The classifier's output on attack samples may be sensitive and inaccurate since these samples differ from the untainted samples. An importance score is assigned to each sample according to its localized generalization error bound, and the classifier is trained on a new training set obtained by resampling the samples according to their importance scores. An RBF neural network (RBFNN) is used as the classifier in the experimental evaluation. The proposed model outperforms the traditional one under the well-known label-flip poisoning attacks, including nearest-first and farthest-first flip attacks.
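A sketch of the importance-based resampling step; the rule mapping the L-GEM sensitivity to an importance score is an assumption for illustration, not the paper's exact computation:

```python
# Resample the training set so that samples with a large localized
# generalization error bound (high sensitivity) are drawn less often.
import numpy as np

def resample_by_importance(X, y, sensitivity, rng=None):
    rng = rng or np.random.default_rng()
    scores = 1.0 / (1.0 + np.asarray(sensitivity))   # high sensitivity -> low importance
    probs = scores / scores.sum()
    idx = rng.choice(len(X), size=len(X), replace=True, p=probs)
    return X[idx], y[idx]                            # X, y assumed to be numpy arrays
```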
Distributed data aggregation via summation (counting) helps us learn the insights behind raw data. However, such computation suffers from a high privacy risk of malicious collusion attacks: colluding adversaries infer a victim's private data from the gaps between the aggregation outputs and their own source data. Among the solutions against such collusion attacks, Distributed Differential Privacy (DDP) shows a significant privacy-preservation effect. Specifically, a DDP scheme guarantees global differential privacy (the presence or absence of any data curator barely impacts the aggregation outputs) by ensuring local differential privacy at each data curator. To guarantee the overall privacy of a distributed data aggregation system against malicious collusion attacks, part of the existing work on such DDP schemes aims to provide an estimated lower bound on the privacy budget for global differential privacy. However, there are two main problems: low data utility caused by using a large global function sensitivity, and an unknown privacy guarantee when the aggregation sensitivity of the whole system is less than the sum of the data curators' aggregation sensitivities. To address these problems while ensuring distributed differential privacy, we provide a new lower bound on the privacy budget, which works with an unconditional aggregation sensitivity of the whole distributed system. Moreover, we study the performance of our privacy bound under different scenarios of data updates. Both theoretical and experimental evaluations show that our privacy bound offers better global privacy performance than existing work.
Compared with the traditional grid, the energy internet collects data more widely and is more broadly connected. Analysis of electricity data using Non-Intrusive Load Monitoring (NILM) can infer private user behavior, so taking both data security and availability into account is a problem that must be addressed. Owing to its rigorous and provable privacy guarantee, differential privacy has been widely applied to privacy-preserving data release and data mining. However, because of the high sensitivity involved, directly increasing the noise renders the data unusable. In this paper, we propose a differentially private mechanism to protect energy internet privacy. Our focus is on aggregated data that the data owner releases after adding noise to the disaggregated data. Theoretical proofs and experiments show that our scheme achieves both privacy preservation and data availability.
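A minimal sketch of the release pattern described above: Laplace noise is added to the disaggregated (per-appliance) readings before the owner publishes the aggregate. The epsilon and sensitivity values are illustrative assumptions, not the paper's calibration.

```python
# Perturb disaggregated readings, release only the noisy aggregate.
import numpy as np

def private_aggregate(appliance_readings, epsilon=1.0, sensitivity=1.0, rng=None):
    rng = rng or np.random.default_rng()
    readings = np.asarray(appliance_readings, dtype=float)
    noisy = readings + rng.laplace(0.0, sensitivity / epsilon, size=readings.size)
    return noisy.sum()   # only the perturbed aggregate is released
```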
It is a challenging problem to preserve the friendship correlations between individuals when publishing social-network data. To alleviate this problem, the uncertain graph has recently been proposed. The main idea of the uncertain graph is to convert an original graph into an uncertain form in which the correlation between individuals is an associated probability. However, existing uncertain-graph methods lack rigorous privacy guarantees and rely on assumptions about the adversary's knowledge. In this paper we first introduce a general model for constructing uncertain graphs. We then propose an algorithm under this model that is based on differential privacy and analyze its privacy. Our algorithm provides rigorous privacy guarantees and resists background-knowledge attacks. Experiments show that the proposed algorithm satisfies differential privacy and is feasible in practice. Finally, we compare our algorithm with the (k, ε)-obfuscation algorithm in terms of data utility; the importance of nodes for the network under our algorithm is similar to that under the (k, ε)-obfuscation algorithm.
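A sketch of converting a graph into an uncertain graph, where every node pair gets an existence probability with true edges perturbed downward from 1 and non-edges upward from 0; the perturbation rule here is an illustrative assumption, not the paper's differentially private mechanism:

```python
# Assign an existence probability to every node pair of a simple graph.
import itertools
import random

def uncertain_graph(nodes, edges, sigma=0.1, seed=None):
    rng = random.Random(seed)
    edge_set = {frozenset(e) for e in edges}
    probs = {}
    for u, v in itertools.combinations(nodes, 2):
        noise = rng.uniform(0.0, sigma)
        if frozenset((u, v)) in edge_set:
            probs[(u, v)] = 1.0 - noise   # existing edge: probability close to 1
        else:
            probs[(u, v)] = noise         # non-edge: probability close to 0
    return probs
```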
Differential privacy is an approach that preserves patient privacy while permitting researchers access to medical data. This paper presents mechanisms proposed to satisfy differential privacy while answering a given workload of range queries. Representing the input data as a vector of counts, these methods partition the vector according to relationships between the data and the ranges of the given queries. After partitioning the vector into buckets, the count of each bucket is estimated privately and split among the bucket's positions to answer the given query set. The performance of the proposed method was evaluated using different workloads over several attributes. The results show that partitioning the vector based on the data can produce more accurate answers, while partitioning the vector based on the given workload improves privacy. This paper's two main contributions are: (1) improving earlier work on partitioning mechanisms by building a greedy algorithm that partitions the counts vector efficiently, and (2) an adaptive algorithm that considers the sensitivity of the given queries before providing results.
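A sketch of the bucket-and-split answering step: each bucket's total is estimated with the Laplace mechanism and split uniformly over its positions. The bucket boundaries are taken as given here; the paper's greedy and adaptive partitioning algorithms are not shown.

```python
# Estimate a counts vector from noisy bucket totals; range queries then sum
# over the estimated vector.
import numpy as np

def answer_with_buckets(counts, buckets, epsilon, rng=None):
    rng = rng or np.random.default_rng()
    counts = np.asarray(counts, dtype=float)
    estimate = np.zeros_like(counts)
    for start, end in buckets:                              # half-open [start, end) ranges
        noisy_total = counts[start:end].sum() + rng.laplace(0.0, 1.0 / epsilon)
        estimate[start:end] = noisy_total / (end - start)   # uniform split inside the bucket
    return estimate
```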
Hardware Trojans have become a major threat to the integrated circuit industry over the last decade. Many techniques have been proposed in the literature for detecting such malicious modifications in fabricated ICs. For the most critical circuits, prevention methods are also of interest; their goal is to prevent the insertion of a Hardware Trojan through ad-hoc design rules. In this paper, we present a novel prevention technique based on approximation. An approximate logic circuit is a circuit that performs a possibly different but closely related logic function, so that it can be used for error detection or error masking where it overlaps with the original circuit. We show how this technique can successfully detect the presence of Hardware Trojans, with a solution that has a smaller impact than triplication.
A hardware Trojan (HT) denotes a malicious addition or modification of circuit elements. The purpose of this work is to improve HT detection sensitivity in ICs using power side-channel analysis. This paper presents three detection techniques for power-based side-channel analysis that increase the Trojan-to-circuit power consumption and reduce the effect of variation on the detection threshold. Incorporating the three proposed methods demonstrates that realistic fine-grain circuit partitioning and an improved pattern set that increases HT activation chances can magnify Trojan detectability.
This paper presents a possible implementation of progressive authentication using the Android pattern lock. Our key idea is to use one pattern for two access levels to the device: an abridged pattern is used to access generic applications, and a second, extended and higher-complexity pattern is used less frequently to access more sensitive applications. We conducted a user study with 89 participants and a follow-up survey of those participants to investigate the usability of such a pattern scheme. Data from our prototype showed that, for unlocking low-security applications, the median unlock times for users of the multiple-pattern scheme and the conventional pattern scheme were 2824 ms and 5589 ms respectively, and the distributions in the two groups differed significantly (Mann-Whitney U test, p-value less than 0.05, two-tailed). From our user survey, we did not find statistically significant differences between the two groups in their qualitative responses regarding usability and security (t-test, p-value greater than 0.05, two-tailed), but the groups did not differ by more than one satisfaction rating at 90% confidence.
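The reported comparison can be reproduced on per-user unlock times with SciPy's two-sided Mann-Whitney U test; the sample values below are placeholders, not the study's data.

```python
# Two-sided Mann-Whitney U test on unlock-time samples (milliseconds).
from scipy.stats import mannwhitneyu

multi_pattern_ms = [2610, 2824, 3105, 2950]   # placeholder unlock times
conventional_ms = [5320, 5589, 6011, 5480]
stat, p_value = mannwhitneyu(multi_pattern_ms, conventional_ms, alternative="two-sided")
print(stat, p_value)
```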