Biblio
With the increasing popularity of cloud storage services, many individuals and enterprises are starting to move their local data to the cloud. To ensure privacy and data security, some cloud service users may want to encrypt their data before outsourcing it. However, this impedes efficient data utilization based on plaintext search. In this paper, we study how to construct a secure index that supports both efficient index updating and similarity search. Using the secure index, users are able to efficiently perform similarity searches that tolerate input mistakes and to update the index when new data become available. We formally prove the security of our proposal and also perform experiments on real-world data to show its efficiency.
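The abstract does not detail the index construction; purely as an illustration of the kind of fuzzy matching such a similarity search must tolerate, the sketch below (hypothetical names and threshold) compares keywords by the Jaccard similarity of their character-bigram sets.

```python
def bigrams(word):
    """Character bigrams of a keyword, e.g. 'cloud' -> {'cl', 'lo', 'ou', 'ud'}."""
    return {word[i:i + 2] for i in range(len(word) - 1)}

def jaccard(a, b):
    """Jaccard similarity of two sets."""
    return len(a & b) / len(a | b) if a | b else 1.0

def similar(query, keyword, threshold=0.5):
    """Tolerate input mistakes: 'securty' still matches 'security' above the threshold."""
    return jaccard(bigrams(query), bigrams(keyword)) >= threshold

assert similar("securty", "security")   # Jaccard = 5/8 = 0.625
```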
Finding security vulnerabilities in source code as early as possible is becoming more and more essential. In this respect, vulnerability prediction models have the potential to support security assurance activities by identifying the code locations that deserve the most attention. In this paper, we investigate whether prediction models behave like milk (i.e., they turn with time) or like wine (i.e., they improve with time) when used to predict future vulnerabilities. Our findings indicate that the recall values are largely in favor of predictors based on older versions. However, the better recall comes at the price of much higher file inspection ratio values.
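The two quantities being traded off can be made concrete; a minimal sketch of recall and file inspection ratio as commonly defined for file-level vulnerability prediction, with hypothetical labels and predictions.

```python
def recall_and_inspection_ratio(labels, predictions):
    """labels/predictions: 1 = vulnerable, 0 = clean, one entry per file."""
    true_pos = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)
    actual_pos = sum(labels)
    flagged = sum(predictions)
    recall = true_pos / actual_pos if actual_pos else 0.0
    inspection_ratio = flagged / len(labels)   # fraction of files to inspect
    return recall, inspection_ratio

# Hypothetical example: a predictor trained on an older version flags more files,
# catching every vulnerability (higher recall) at a higher inspection cost.
labels      = [1, 0, 1, 0, 0, 1, 0, 0]
predictions = [1, 1, 1, 0, 1, 1, 0, 1]
print(recall_and_inspection_ratio(labels, predictions))  # (1.0, 0.625)
```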
Memory disclosure vulnerabilities enable an adversary to successfully mount arbitrary code execution attacks against applications via so-called just-in-time code reuse attacks, even when those applications are fortified with fine-grained address space layout randomization. This attack paradigm requires the adversary to first read the contents of randomized application code, then construct a code reuse payload using that knowledge. In this paper, we show that the recently proposed Execute-no-Read (XnR) technique fails to prevent just-in-time code reuse attacks. Next, we introduce the design and implementation of a novel memory permission primitive, dubbed No-Execute-After-Read (NEAR), that foregoes the problems of XnR and provides strong security guarantees against just-in-time attacks in commodity binaries. Specifically, NEAR allows all code to be disclosed, but prevents any disclosed code from subsequently being executed, thus thwarting just-in-time code reuse. At the same time, commodity binaries with mixed code and data regions still operate correctly, as legitimate data is still readable. To demonstrate the practicality and portability of our approach we implemented prototypes for both Linux and Android on the ARMv8 architecture, as well as a prototype that protects unmodified Microsoft Windows executables and dynamically linked libraries. In addition, our evaluation on the SPEC2006 benchmark demonstrates that our prototype has negligible runtime overhead, making it suitable for practical deployment.
In threshold schemes one represents each sensitive variable by a number n of shares such that their (usually bitwise) sum equals that variable. These shares are initially generated in such a way that any subset of n-1 shares gives no information about the sensitive variable. Functions (S-boxes, mixing layers, round functions, etc.) are computed on the shares of the inputs, resulting in the output as a number of shares. An essential property of a threshold implementation of a function is that each output share is computed from at most n-1 input shares. This is called incompleteness and guarantees that the computation cannot leak information about sensitive variables. The resulting output is then typically subject to some further computation, again in the form of separate, incomplete computations on shares. For these subsequent computations not to leak information about the sensitive variables, the output of the previous stage must still be uniform. Hence, in an iterative cryptographic primitive such as a block cipher, we need a threshold implementation of the round function that yields a uniformly shared output if its input is uniformly shared. This property of the threshold implementation is called uniformity. Threshold schemes form a good protection mechanism against differential power analysis (DPA). In particular, using them allows building cryptographic hardware that is guaranteed to be unattackable with first-order DPA, assuming certain leakage models of the cryptographic hardware at hand and for a plausible definition of "first order". Constructing an incomplete threshold implementation of a non-linear function is rather straightforward. To offer resistance against first-order DPA, the number of shares equals the algebraic degree of the function plus one. However, constructing one that is at the same time incomplete and uniform may present a challenge. For instance, for the Keccak non-linear layer, incomplete 3-share threshold implementations are easy to generate, but no uniform one is known. Exhaustive investigations have been performed on all small S-boxes (3 to 5 bits), and there are many S-boxes for which it is not known how to build uniform threshold implementations with d+1 shares, where d is their algebraic degree. Uniformity of a threshold implementation is essential in its information-theoretic proof of resistance against first-order DPA. However, given a non-uniform threshold implementation, it is not immediate how to exploit its non-uniformity in an attack. In my talk I discuss the local and global effects of non-uniformity in iterated functions and their significance for the resistance against DPA. I treat methods to quantitatively limit the amount of non-uniformity and to keep it away from where it may be harmful. These techniques are relatively cheap and can reduce non-uniformity to such a low level that it would require an astronomical number of samples to measure it.
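As a concrete instance of incompleteness, the classic three-share threshold implementation of a two-input AND (degree 2, hence 2 + 1 = 3 shares) computes each output share without ever touching one index of the input shares, and the output shares XOR back to a·b; a minimal bit-level sketch:

```python
import random

def share3(x):
    """Split bit x into three shares whose XOR equals x."""
    x1, x2 = random.randint(0, 1), random.randint(0, 1)
    return x1, x2, x ^ x1 ^ x2

def ti_and(a, b):
    """Incomplete 3-share threshold implementation of c = a AND b.
    Output share i never uses input shares with index i."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    c1 = (a2 & b2) ^ (a2 & b3) ^ (a3 & b2)   # independent of share 1
    c2 = (a3 & b3) ^ (a1 & b3) ^ (a3 & b1)   # independent of share 2
    c3 = (a1 & b1) ^ (a1 & b2) ^ (a2 & b1)   # independent of share 3
    return c1, c2, c3

# Correctness check: the shared output always recombines to a AND b.
for a in (0, 1):
    for b in (0, 1):
        c = ti_and(share3(a), share3(b))
        assert c[0] ^ c[1] ^ c[2] == a & b
```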
In this article, we describe a neighbour disjoint multipath (NDM) scheme that is shown to be more resilient to node or link failures than the two well-known node-disjoint and edge-disjoint multipath techniques. A centralised NDM was first conceptualised in our initial published work, utilising the spatial diversity among multiple paths to ensure robustness against localised poor channel quality or node failures. Here, we further introduce a distributed version of our NDM algorithm adapted to the characteristics of low-power and lossy networks (LLNs). We implement our distributed NDM algorithm in Contiki OS on top of LOADng, a lightweight on-demand ad hoc distance vector routing protocol. We compare this implementation's performance with the standard IPv6 Routing Protocol for Low-power and Lossy Networks (RPL), and also with basic LOADng, running in the Cooja simulator. Standard performance metrics such as packet delivery ratio, end-to-end latency, overhead, and average routing table size are used for the comparison. The results and observations are provided for a few different application traffic patterns, which serve to quantify the improvements in robustness arising from NDM. The results are confirmed by experiments on a public sensor network testbed with over 100 nodes.
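The defining property can be checked directly: two paths are neighbour disjoint if no intermediate node of one coincides with, or is adjacent to, an intermediate node of the other. A small sketch of that check (networkx and the grid topology are illustrative assumptions; the paper's distributed algorithm is not reproduced here):

```python
import networkx as nx

def neighbour_disjoint(graph, path_a, path_b):
    """True if the intermediate nodes of the two paths are neither shared
    nor adjacent in the graph (source and destination are excluded)."""
    inner_a, inner_b = set(path_a[1:-1]), set(path_b[1:-1])
    if inner_a & inner_b:
        return False                              # not even node disjoint
    for u in inner_a:
        if inner_b & set(graph.neighbors(u)):
            return False                          # neighbouring intermediate nodes
    return True

# Hypothetical 4x4 grid topology with two candidate paths from (0,0) to (3,3).
g = nx.grid_2d_graph(4, 4)
p1 = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
p2 = [(0, 0), (0, 1), (0, 2), (0, 3), (1, 3), (2, 3), (3, 3)]
print(neighbour_disjoint(g, p1, p2))   # True: the inner nodes are far apart
```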
We present NonDex, a tool for detecting and debugging wrong assumptions on Java APIs. Some APIs have underdetermined specifications to allow implementations to achieve different goals, e.g., to optimize performance. When clients of such APIs assume stronger-than-specified guarantees, the resulting client code can fail. For example, HashSet’s iteration order is underdetermined, and code assuming some implementation-specific iteration order can fail. NonDex helps to proactively detect and debug such wrong assumptions. NonDex performs detection by randomly exploring different behaviors of underdetermined APIs during test execution. When a test fails during exploration, NonDex searches for the invocation instance of the API that caused the failure. NonDex is open source, well-integrated with Maven, and also runs from the command line. During our experiments with the NonDex Maven plugin, we detected 21 new bugs in eight Java projects from GitHub, and, using the debugging feature of NonDex, we identified the underlying wrong assumptions for these 21 new bugs and 54 previously detected bugs. We opened 13 pull requests; developers already accepted 12, and one project changed the continuous-integration configuration to run NonDex on every push. The demo video is at: https://youtu.be/h3a9ONkC59c
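NonDex itself instruments Java APIs; the same idea can be illustrated in a few lines of Python by randomly permuting an unordered collection's iteration order, which exposes client code that wrongly assumes a fixed order (the helper below is hypothetical, not part of NonDex):

```python
import random

def iteration_orders(items, runs=5, seed=0):
    """Simulate an underdetermined API by shuffling iteration order per run."""
    rng = random.Random(seed)
    for _ in range(runs):
        order = list(items)
        rng.shuffle(order)
        yield order

tags = {"alpha", "beta", "gamma"}
# Brittle client assumption: "the first element is always 'alpha'".
failures = sum(1 for order in iteration_orders(tags) if order[0] != "alpha")
print(f"assumption violated in {failures}/5 randomized runs")
```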
Black holes, gray holes, and wormholes are devastating to the correct operation of any network. These attacks (among others) are based on the premise that packets will travel through compromised nodes, and methods exist to coax routing into these traps. Detection of these attacks is mainly centered around finding the subversion in action. In networks, bottleneck nodes -- those that sit on many potential routes between sender and receiver -- are an optimal location for compromise. Finding naturally occurring path bottlenecks, however, does not entail network subversion, and as such they are more difficult to detect. The dynamic nature of mobile ad hoc networks (MANETs) makes ubiquitous routing algorithms even more susceptible to this class of attacks. Exploiting perceived bottlenecks in an OLSR-based MANET can capture between 50% and 75% of the data. In this paper we propose a method of subtly expanding perceived bottlenecks into complete bottlenecks, raising the capture rate up to 99%, albeit at high cost. We further tune the method to reduce cost and measure the corresponding capture rate.
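Bottleneck nodes of the kind described, those lying on many potential sender-receiver routes, correspond closely to nodes with high betweenness centrality; a short sketch (networkx assumed, topology hypothetical) of ranking candidate compromise locations:

```python
import networkx as nx

# Hypothetical topology: two clusters joined through a single relay node "r",
# which therefore lies on every inter-cluster route.
g = nx.Graph()
g.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3"),
                  ("b1", "b2"), ("b2", "b3"), ("b1", "b3"),
                  ("a3", "r"), ("r", "b1")])

centrality = nx.betweenness_centrality(g)
ranked = sorted(centrality, key=centrality.get, reverse=True)
print(ranked[:3])   # "r" ranks first: the natural bottleneck an attacker would target
```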
We introduce the Nondeterministic Strong Exponential Time Hypothesis (NSETH) as a natural extension of the Strong Exponential Time Hypothesis (SETH). We show that both refuting and proving NSETH would have interesting consequences. In particular we show that disproving NSETH would give new nontrivial circuit lower bounds. On the other hand, NSETH implies non-reducibility results, i.e. the absence of (deterministic) fine-grained reductions from SAT to a number of problems. As a consequence we conclude that unless this hypothesis fails, problems such as 3-SUM, APSP and model checking of a large class of first-order graph properties cannot be shown to be SETH-hard using deterministic or zero-error probabilistic reductions.
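For reference, a hedged paraphrase of the two hypotheses in the usual notation, with n the number of variables:

```latex
% SETH: brute-force search is essentially optimal for k-SAT as k grows.
\textbf{SETH:} \quad \forall \varepsilon > 0 \;\exists k :\;
  k\text{-SAT on } n \text{ variables has no } O\!\left(2^{(1-\varepsilon)n}\right)
  \text{ deterministic algorithm.}

% NSETH: the same holds even nondeterministically for refuting satisfiability,
% i.e. there are no unsatisfiability certificates checkable in time 2^{(1-eps)n}.
\textbf{NSETH:} \quad \forall \varepsilon > 0 \;\exists k :\;
  k\text{-UNSAT on } n \text{ variables has no } O\!\left(2^{(1-\varepsilon)n}\right)
  \text{ nondeterministic algorithm.}
```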
With the development of the Internet, network intrusions have become more and more common. Quickly identifying and preventing network attacks is getting increasingly more important and difficult. Machine learning techniques have already proven to be robust methods for detecting malicious activities and network threats. Ensemble-based and semi-supervised learning methods are among the areas that receive the most attention in machine learning today; however, relatively little attention has been given to combining them. To overcome this limitation, this paper proposes a novel network anomaly detection method that combines a tri-training approach with AdaBoost algorithms. The bootstrap samples of tri-training are replaced by three different AdaBoost algorithms to create diversity. We run 30 iterations for every simulation to obtain the average results. Simulations indicate that our proposed semi-supervised AdaBoost algorithm is reproducible and consistent over a different number of runs. It outperforms other state-of-the-art learning algorithms, even with only a small portion of labeled data in the training phase. Specifically, it has a very short execution time and a good balance between the detection rate and the false-alarm rate.
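A minimal sketch of the described combination, three AdaBoost learners in a tri-training loop in which a sample is pseudo-labelled for one learner when the other two agree (scikit-learn assumed; dataset, rounds, and parameters are placeholders, not the paper's configuration):

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def tri_train(X_lab, y_lab, X_unlab, rounds=3):
    """Tri-training with three differently seeded AdaBoost base learners.
    X_lab, X_unlab are numpy arrays of features; y_lab holds 0/1 labels."""
    clfs = [AdaBoostClassifier(n_estimators=50, random_state=s).fit(X_lab, y_lab)
            for s in (0, 1, 2)]
    for _ in range(rounds):
        for i in range(3):
            j, k = [m for m in range(3) if m != i]
            pj, pk = clfs[j].predict(X_unlab), clfs[k].predict(X_unlab)
            agree = pj == pk                       # the other two learners agree
            if agree.any():                        # pseudo-label those samples for learner i
                X_aug = np.vstack([X_lab, X_unlab[agree]])
                y_aug = np.concatenate([y_lab, pj[agree]])
                clfs[i] = AdaBoostClassifier(n_estimators=50,
                                             random_state=i).fit(X_aug, y_aug)
    return clfs

def predict(clfs, X):
    """Majority vote of the three learners (0/1 labels assumed)."""
    votes = np.stack([c.predict(X) for c in clfs])
    return (votes.sum(axis=0) >= 2).astype(int)
```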
In this paper, we analyze the performance and cost trade-off of selecting between two node representations when implementing the Aho-Corasick algorithm. This algorithm can be used for pattern matching in network-based intrusion detection systems such as Snort. Our analysis uses the Snort 2.9.7 rule set, which contains almost 26k patterns. Our methodology consists of code profiling and analysis, followed by the selection of a parameter to maximize a metric that combines clock cycle count and memory usage. The parameter determines which of the two node types is selected for each trie node. We show that it is possible to select the parameter to optimize the metric, which results in an improvement of up to 12× compared with the single node-type case.
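The trade-off under study can be illustrated with the two classic node layouts, a dense 256-entry table (one array lookup, more memory) versus a sparse sorted list (binary search, memory proportional to fan-out), with the per-node choice driven by a branching-factor threshold standing in for the paper's optimized parameter:

```python
import bisect

class DenseNode:
    """Full 256-entry table: O(1) transition lookup, 256 slots of memory."""
    def __init__(self, transitions):
        self.table = [None] * 256
        for byte, child in transitions.items():
            self.table[byte] = child
    def next(self, byte):
        return self.table[byte]

class SparseNode:
    """Sorted (byte, child) list: O(log k) transition lookup, memory ~ k."""
    def __init__(self, transitions):
        self.keys = sorted(transitions)
        self.children = [transitions[k] for k in self.keys]
    def next(self, byte):
        i = bisect.bisect_left(self.keys, byte)
        return self.children[i] if i < len(self.keys) and self.keys[i] == byte else None

def make_node(transitions, threshold=8):
    """Pick the representation per trie node; 'threshold' stands in for the tuned parameter."""
    return DenseNode(transitions) if len(transitions) > threshold else SparseNode(transitions)
```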
Large-scale applications in data centers are composed of computers connected by a network. Traditional network switches cannot be flexibly controlled, so application developers cannot optimize the behavior of network elements to improve application performance. On the other hand, Deeply Programmable Network (DPN) switches can fully analyze packet payloads and can be deeply programmed. In this paper, we focus on processing part of an application's functions in network elements to improve application performance based on Deep Packet Inspection (DPI), i.e., analyzing packet payloads, using DPN switches. We take several applications as targets and implement some of their functions in network switches. We then compare performance with and without our method and show that our method can significantly increase application performance.
We propose a modification to the framework of Universally Composable (UC) security [3]. Our new notion involves comparing the real protocol execution with an ideal execution involving ideal functionalities (just as in UC-security), but allowing the environment and adversary access to some super-polynomial computational power. We argue the meaningfulness of the new notion, which in particular subsumes many of the traditional notions of security. We generalize the Universal Composition theorem of [3] to the new setting. Then under new computational assumptions, we realize secure multi-party computation (for static adversaries) without a common reference string or any other set-up assumptions, in the new framework. This is known to be impossible under the UC framework.
The threat that malicious insiders pose towards organisations is a significant problem. In this paper, we investigate the task of detecting such insiders through a novel method of modelling a user's normal behaviour in order to detect anomalies in that behaviour which may be indicative of an attack. Specifically, we make use of Hidden Markov Models to learn what constitutes normal behaviour, and then use them to detect significant deviations from that behaviour. Our results show that this approach is indeed successful at detecting insider threats, and in particular is able to accurately learn a user's behaviour. These initial tests improve on existing research and may provide a useful approach in addressing this part of the insider-threat challenge.
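A minimal sketch of the modelling step under generic assumptions: fit an HMM to feature sequences of a user's normal activity and flag test sequences whose per-observation log-likelihood falls below a threshold (hmmlearn assumed; features, model size, and threshold are placeholders):

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fit_normal_behaviour(X_normal, n_states=4):
    """X_normal: array of per-event feature vectors drawn from a user's normal activity."""
    return GaussianHMM(n_components=n_states, covariance_type="diag",
                       n_iter=100, random_state=0).fit(X_normal)

def is_anomalous(model, X_seq, threshold=-10.0):
    """Flag a sequence whose average log-likelihood under the normal model is low."""
    return model.score(X_seq) / len(X_seq) < threshold
```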
This paper establishes a new framework for electrical cyber-physical systems (ECPSs). The communication network is designed according to the characteristics of the power grid. The interdependent relationship between communication networks and power grids is described by data-uploading channels and command-downloading channels. Control strategies (such as load shedding and relay protection) are extended to this new framework for analyzing the performance of ECPSs under several attack scenarios. The fragility of ECPSs under cyber attacks (DoS attacks and false data injection attacks) and the effectiveness of relay protection policies are verified by experimental results.
Globalization of semiconductor design, manufacturing, packaging, and testing has led to several security issues such as overproduction of chips, shipping of faulty or partially functional chips, intellectual property infringement, cloning, counterfeit chips, and insertion of hardware Trojans in the design house or at the foundry. Adversaries extract chips from obsolete PCBs and release used parts into the supply chain as new chips. Faulty or partially functioning chips can enter the supply chain from untrusted Assembly, Packaging and Test (APT) centers. These counterfeit parts are not reliable and cause catastrophic consequences in critical applications. To mitigate counterfeits entering the supply chain, to protect the Intellectual Property (IP) rights of owners, and to meter chips, Secure Split Test (SST) is a promising solution. CSST (Connecticut SST) is an improvement to SST that simplifies the communication required between the APT center and the design house. CSST addresses scan tests, but it does not address the functional testing of chips. Functional testing during production testing is critical in weeding out faulty chips. In this paper, we present a method called PUF-SST (Physical Unclonable Function – SST) to perform both scan tests and functional tests without compromising the security features described in CSST.
In order to enhance supply chain security at airports, the German federal ministry of education and research has initiated the project ESECLOG (enhanced security in the air cargo chain), which has the goal of improving threat detection accuracy using one-sided access methods. In this paper, we present a new X-ray backscatter technology for non-intrusive imaging of suspicious objects (mainly low-Z explosives) in luggage and parcels with only single-sided access. A key element in this technology is the X-ray backscatter camera embedded with a special twisted-slit collimator. The developed technology efficiently resolves the problem of imaging the complex interior of an object by fixing the source and object positions and changing only the scanning direction of the X-ray backscatter camera. Experiments were carried out on luggage and parcels packed with mock-up dangerous materials, including liquid and solid explosive simulants. In addition, the quality of the X-ray backscatter image was enhanced by employing high-resolution digital detector arrays. Experimental results are discussed, and the efficiency of the present technique in detecting suspicious objects in luggage and parcels is demonstrated. Finally, important applications of the proposed backscatter imaging technology to aviation security are presented.
Blind objective metrics that automatically quantify the perceived image quality degradation introduced by blur are highly beneficial for current digital imaging systems. In this paper, we present a perceptual no-reference blur assessment metric developed in the frequency domain. Since blurring especially affects edges and fine image details, which represent the high-frequency components of an image, the main idea is to perceptually analyse the impact of blur distortion on high frequencies using the Discrete Cosine Transform (DCT) and the Just Noticeable Blur (JNB) concept, which relies on the Human Visual System. Comprehensive testing demonstrates the proposed Perceptual Blind Blur Quality Metric's (PBBQM) good consistency with subjective quality scores as well as satisfactory performance in comparison with both representative non-perceptual and perceptual state-of-the-art blind blur quality measures.
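The underlying intuition, that blur suppresses high-frequency DCT energy, can be captured in a few lines; the following is a simplified proxy under stated assumptions (scipy), not the PBBQM metric itself:

```python
import numpy as np
from scipy.fft import dctn

def high_freq_energy_ratio(image, cutoff=8):
    """Fraction of DCT energy outside the low-frequency top-left 'cutoff' block.
    Sharper images keep more energy in high frequencies; blur drives this ratio down."""
    coeffs = dctn(image.astype(float), norm="ortho")
    total = np.sum(coeffs ** 2)
    low = np.sum(coeffs[:cutoff, :cutoff] ** 2)
    return (total - low) / total if total > 0 else 0.0
```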
This paper proposes an improved mesh simplification algorithm based on quadric error metrics (QEM) to efficiently process the huge data volumes in 3D image processing. The method makes full use of the geometric information around vertices to prevent model edges from being simplified away and to preserve detail. Meanwhile, the difference between the simplified triangles and equilateral triangles is added as an error weight to reduce the occurrence of narrow triangles and thus avoid visual artifacts. Experiments show that our algorithm has clear advantages in time cost and better preserves the visual characteristics of the model, making it suitable for most real-time interactive image processing problems.
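For context, the quadric error at the core of QEM: each incident face contributes a plane quadric K_p = p p^T for its plane ax + by + cz + d = 0, and the cost of placing a vertex at position v is the quadratic form evaluated at v in homogeneous coordinates. A small numpy sketch (the edge-preserving and triangle-shape weights the paper adds are not shown):

```python
import numpy as np

def plane_quadric(a, b, c, d):
    """Fundamental error quadric K_p = p p^T for the plane ax + by + cz + d = 0
    (the normal (a, b, c) is assumed unit length)."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def quadric_error(Q, v):
    """Cost of placing a vertex at position v under the accumulated quadric Q."""
    vh = np.append(np.asarray(v, dtype=float), 1.0)   # homogeneous coordinates
    return float(vh @ Q @ vh)

# A vertex shared by the planes z = 0 and x = 0: moving it off either plane costs.
Q = plane_quadric(0, 0, 1, 0) + plane_quadric(1, 0, 0, 0)
print(quadric_error(Q, (0.0, 2.0, 0.0)))   # 0.0: still on both planes
print(quadric_error(Q, (0.5, 2.0, 0.3)))   # 0.34: sum of squared plane distances
```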
With the recent developments in the field of visual sensor technology, multiple imaging sensors are used in several applications, such as surveillance, medical imaging, and machine vision, in order to improve their capabilities. The goal of any efficient image fusion algorithm is to combine the visual information obtained from a number of disparate imaging sensors into a single fused image without introducing distortion or loss of information. Existing fusion algorithms employ either the mean or the choose-max fusion rule for selecting the best features for fusion. The choose-max rule distorts constant background information, whereas the mean rule blurs the edges. In this paper, two Non-Subsampled Contourlet Transform (NSCT) based feature-level fusion schemes are proposed and compared. In the first method, fuzzy logic is applied to determine the weight assigned to each segmented region using the computed salient region feature values. The second method employs the Golden Section Algorithm (GSA) to find the optimal fusion weight of each region based on its Petrovic metric. The regions are merged adaptively using the determined weights. Experiments show that the proposed feature-level fusion methods provide better visual quality, with clearer edge information and higher objective quality metrics, than individual multi-resolution-based methods such as the Dual Tree Complex Wavelet Transform and the NSCT.
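The second scheme's optimizer can be sketched generically: golden-section search over a scalar fusion weight in [0, 1] that maximizes a quality score (the objective below is a stand-in for the per-region Petrovic metric, which is not reproduced here):

```python
import math

def golden_section_max(f, lo=0.0, hi=1.0, tol=1e-4):
    """Maximize a unimodal f(w) on [lo, hi] by golden-section search."""
    inv_phi = (math.sqrt(5) - 1) / 2
    c, d = hi - inv_phi * (hi - lo), lo + inv_phi * (hi - lo)
    while hi - lo > tol:
        if f(c) > f(d):
            hi, d = d, c                      # maximum lies in [lo, d]
            c = hi - inv_phi * (hi - lo)
        else:
            lo, c = c, d                      # maximum lies in [c, hi]
            d = lo + inv_phi * (hi - lo)
    return (lo + hi) / 2

# Placeholder objective standing in for the per-region quality score.
best_w = golden_section_max(lambda w: -(w - 0.62) ** 2)
print(round(best_w, 3))   # ~0.62: weight used to merge that region's coefficients
```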
Next-generation cellular networks will provide users with better experiences by densely deploying smaller cells, which results in a more complicated interference environment. For interference coordination, uplink power control is particularly challenging due to the random locations of uplink transmitters and the dense deployment. In this paper, we address the uplink fractional power control (FPC) optimization problem from a network optimization perspective. The relations between FPC parameters and network KPIs (Key Performance Indicators) are investigated. Rather than considering any single KPI as in conventional approaches, a multi-KPI optimization problem is formulated and solved. By relaxing the discrete optimization problem to a continuous one, the gradients of multiple KPIs with respect to the FPC parameters are derived. The gradients enable efficient search for optimized FPC parameters, which is particularly desirable for dense deployments with a large number of cells. Simulation results show that the proposed scheme greatly outperforms the traditional one in terms of network mean load, call drop & block ratio, and convergence speed.
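The quantity being tuned follows the standard open-loop fractional power control rule, roughly P = min(P_max, P0 + 10·log10(M) + α·PL) in dBm, with α and P0 among the FPC parameters; a small sketch under generic parameter values (not the paper's optimized settings):

```python
import math

def fpc_tx_power_dbm(path_loss_db, alpha=0.8, p0_dbm=-85.0,
                     n_prb=10, p_max_dbm=23.0):
    """Open-loop fractional power control: compensate only a fraction 'alpha'
    of the path loss, capped at the device's maximum transmit power."""
    p = p0_dbm + 10 * math.log10(n_prb) + alpha * path_loss_db
    return min(p_max_dbm, p)

# alpha = 1.0 fully compensates path loss (cell-edge friendly, more interference);
# smaller alpha trades edge throughput for lower interference to neighbouring cells.
for pl in (90, 110, 130):
    print(pl, round(fpc_tx_power_dbm(pl), 1))   # 130 dB case hits the 23 dBm cap
```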
Genes, proteins, and other metabolites present in the cellular environment form a virtual network that represents the regulatory relationships among its constituents. This network is called a Gene Regulatory Network (GRN). Computational reconstruction of GRNs reveals normal metabolic pathways as well as disease motifs. The availability of microarray gene expression data from normal and diseased tissues makes the job easier for computational biologists. Here, reconstruction of the GRN is based on neural modeling. We use discrete and continuous versions of a meta-heuristic named the Firefly algorithm for structure and parameter learning of GRNs, respectively. The discrete version for this problem is proposed by us and is applied to explore the discrete search space of GRN structures. To evaluate the performance of the algorithm, we use a widely used synthetic GRN data set. The algorithm shows an accuracy rate above 50% in finding GRNs. The accuracy of the Firefly algorithm in structure and parameter optimization of GRNs is promising.
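For the continuous (parameter-learning) part, the core of the standard Firefly algorithm is the attraction move below, in which a dimmer firefly moves toward a brighter one with an attractiveness that decays with distance; a minimal numpy sketch under generic assumptions (the discrete structure-learning variant is not shown):

```python
import numpy as np

def firefly_move(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
    """Standard firefly update: attraction toward the brighter firefly x_j decays
    with squared distance, plus a small random-walk perturbation."""
    rng = rng or np.random.default_rng(0)
    r2 = np.sum((x_i - x_j) ** 2)
    attraction = beta0 * np.exp(-gamma * r2)
    return x_i + attraction * (x_j - x_i) + alpha * (rng.random(x_i.shape) - 0.5)
```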
Wireless Mesh Networks (WMNs) are considered most adequate for deployment in the Neighborhood Area Network (NAN) domain of the smart grid infrastructure because features such as self-organization, scalability, and cost-efficiency complement the NAN requirements. To enhance the security of WMNs, the key refreshment strategy for the Simultaneous Authentication of Equals (SAE) or Efficient Mesh Security Association (EMSA) protocols is an efficient way to make the network more resilient against cyber-attacks. However, a security vulnerability is discovered in the EMSA protocol when using the key refreshment strategy: the first message of the Mesh Key Holder Security Handshake (MKHSH) can be forged and replayed in subsequent key refreshment cycles, leading to a Denial of Service (DoS) attack. In this paper, a simple one-way hash function based scheme is proposed to prevent the unprotected message from being replayed, together with an enhancement to the key refreshment scheme that improves the resilience of the MKHSH. Protocol Composition Logic (PCL) is used to verify the logical correctness of the proposed scheme, while the Process Analysis Toolkit (PAT) is used to evaluate its security against malicious attacks.
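The abstract does not give the scheme's details; purely as one illustration of how a one-way hash function can make an unprotected message replay-evident, the sketch below uses a hash chain whose values are revealed backwards, so a replayed old value no longer verifies against the receiver's current anchor (a hypothetical analogue, not the proposed protocol):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build_chain(seed: bytes, length: int):
    """values[k] = h^k(seed); the sender reveals them from the end backwards."""
    values = [seed]
    for _ in range(length):
        values.append(h(values[-1]))
    return values

chain = build_chain(b"secret-seed", 10)
anchor = chain[-1]                       # receiver stores only the chain head h^10(seed)

def accept(revealed: bytes, stored: bytes) -> bool:
    """A fresh message must reveal the preimage of the stored value; a replayed
    old value no longer hashes to what the receiver currently holds."""
    return h(revealed) == stored

assert accept(chain[-2], anchor)         # cycle 1: fresh value accepted
anchor = chain[-2]                       # receiver advances its anchor
assert not accept(chain[-1], anchor)     # replay of the earlier value is rejected
```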
Once data is released to the Internet, there is little hope of successfully deleting it, as it may have been duplicated, reposted, and archived in multiple places. This poses a significant threat to users' privacy and their right to permanently erase their own data. One approach to control the implications for privacy is to assign a lifetime value to the published data and ensure that the data is no longer accessible after that point in time. However, such an approach suffers from the inability to successfully predict the right time at which the data should vanish. Consequently, the author of the data can only estimate the correct time, which unfortunately can cause the premature or belated deletion of data. This paper tackles the problem of prefixed lifetimes in data deletion from a different angle and argues that alternative approaches are a desideratum for research. In our approach, we consider different criteria for when data should be deleted, such as keeping data available as long as there is sufficient interest in it or deleting it early in cases of excessive access. To assist the self-destruction of data, we propose a protocol and develop a prototype, called Neuralyzer, which leverages the caching mechanisms of the Domain Name System (DNS) to ensure the successful deletion of data. Our experimental results demonstrate that our approach can completely delete published data while at the same time achieving flexible expiration times varying from a few days to several months depending on the users' interest.
Device-free indoor localization of people and objects not equipped with radios is playing a critical role in many emerging applications. This paper presents a novel channel state information (CSI) pre-processing scheme that enables accurate device-free localization indoors. The basic idea is simple: CSI is sensitive to a target's location, and by modelling the CSI measurements of multiple wireless links as a set of power-fading-based equations, the target location can be determined. However, due to rich multipath in indoor environments, the received signal strength (RSS) or even the fine-grained CSI cannot be easily modelled. We observe that even in a rich multipath environment, not all subcarriers are equally affected by multipath reflections. Our pre-processing scheme tries to identify the subcarriers not affected by multipath. Thus, CSI on the "clean" subcarriers can be modelled and utilized for accurate localization. Extensive experiments demonstrate the effectiveness of the proposed pre-processing scheme.
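The key idea, keeping only subcarriers that behave like the simple power-fading model, can be illustrated with a crude heuristic: across repeated measurements, retain the subcarriers whose normalized amplitude varies least (a hypothetical stand-in for the paper's actual selection criterion):

```python
import numpy as np

def select_clean_subcarriers(csi_amplitude, keep_ratio=0.3):
    """csi_amplitude: (n_measurements, n_subcarriers) array of |CSI| values.
    Return indices of the subcarriers with the most stable amplitude, as a
    crude proxy for 'not affected by multipath'."""
    variability = np.std(csi_amplitude, axis=0) / (np.mean(csi_amplitude, axis=0) + 1e-9)
    n_keep = max(1, int(keep_ratio * csi_amplitude.shape[1]))
    return np.argsort(variability)[:n_keep]
```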