Bibliography
This paper presents a novel, efficient audio signal recognition algorithm with limited computational complexity. Because the recognition system is intended for real-world environments with high levels of background noise, conventional speech recognition techniques are not directly applicable: they perform poorly under such conditions. We therefore introduce a new audio recognition algorithm optimized for mechanical sounds such as car horns and telephone rings. It is a hybrid time-frequency approach that uses acoustic fingerprints to recognize audio signal patterns. The limited computational complexity is achieved through efficient use of the time domain and the frequency domain in two separate processing phases, detection and recognition respectively, and the transition between these phases is carried out through a finite state machine (FSM) model. Simulation results show that the algorithm effectively recognizes audio signals in noisy environments.
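As a rough illustration of the two-phase design described above, a detection/recognition FSM might be organized as in the sketch below. This is a hypothetical reading, not the paper's implementation; the energy threshold, frame source, and `match_fingerprint` matcher are placeholder assumptions.

```python
# Minimal sketch of a two-phase detect/recognize FSM (assumed structure,
# not the paper's actual implementation).

IDLE, DETECT, RECOGNIZE = range(3)

def energy(frame):
    # Cheap time-domain energy measure used in the detection phase.
    return sum(s * s for s in frame) / len(frame)

def process(frames, threshold, match_fingerprint):
    """Run frames through the detection -> recognition FSM.

    match_fingerprint is a hypothetical frequency-domain matcher that
    returns a label or None.
    """
    state = IDLE
    for frame in frames:
        if state == IDLE:
            if energy(frame) > threshold:   # cheap time-domain trigger
                state = DETECT
        elif state == DETECT:
            # Confirm the event persists before paying for the FFT phase.
            state = RECOGNIZE if energy(frame) > threshold else IDLE
        elif state == RECOGNIZE:
            label = match_fingerprint(frame)  # frequency-domain matching
            if label is not None:
                yield label
            state = IDLE
```

The point of the FSM is that the expensive frequency-domain matcher only runs once the cheap time-domain detector has fired twice in a row.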
Security companies have recently realised that mining massive amounts of security data can help generate actionable intelligence and improve their understanding of Internet attacks. In particular, attack attribution and situational understanding are considered critical aspects of effectively dealing with emerging, increasingly sophisticated Internet attacks. This requires highly scalable analysis tools to help analysts classify, correlate and prioritise security events according to their likely impact and threat level. However, this security data mining process typically involves a considerable number of features interacting in non-obvious ways, which makes it inherently complex. To deal with this challenge, we introduce MR-TRIAGE, a set of distributed algorithms built on MapReduce that can perform scalable multi-criteria data clustering on large security data sets and identify complex relationships hidden in massive datasets. The MR-TRIAGE workflow consists of a scalable data summarisation step followed by scalable graph clustering algorithms into which we integrate multi-criteria evaluation techniques. The theoretical computational complexity of the proposed parallel algorithms is discussed and analysed. The experimental results demonstrate that the algorithms scale well and efficiently process large security datasets on commodity hardware. Our approach can effectively cluster any type of security events (e.g., spam emails or spear-phishing attacks) that share at least some commonalities among a number of predefined features.
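As a rough, single-machine illustration of the multi-criteria clustering idea (not MR-TRIAGE itself), the sketch below links events whose aggregated per-feature similarity exceeds a threshold and reads clusters off as connected components; the feature similarity functions, weights, and threshold are hypothetical.

```python
# Toy multi-criteria event clustering (illustrative only; MR-TRIAGE runs
# a distributed MapReduce version of this idea at scale).
from itertools import combinations

def aggregate(sims, weights):
    # Weighted aggregation of per-feature similarity scores in [0, 1].
    return sum(w * s for w, s in zip(weights, sims))

def cluster(events, feature_sims, weights, threshold):
    """events: list of dicts; feature_sims: list of per-feature
    similarity functions f(a, b) -> [0, 1] (hypothetical)."""
    parent = list(range(len(events)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i, j in combinations(range(len(events)), 2):
        sims = [f(events[i], events[j]) for f in feature_sims]
        if aggregate(sims, weights) >= threshold:
            parent[find(i)] = find(j)  # union: same cluster

    clusters = {}
    for i in range(len(events)):
        clusters.setdefault(find(i), []).append(events[i])
    return list(clusters.values())
```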
In the era of big data, many users and companies have started to move their data to cloud storage to simplify data management and reduce data maintenance costs. However, security and privacy have become major concerns because third-party cloud service providers are not always trustworthy. Although data contents can be protected by encryption, the access patterns, which contain important information, are still exposed to clouds or malicious attackers. In this paper, we apply the ORAM algorithm to enable privacy-preserving access to big data deployed in distributed file systems built upon hundreds or thousands of servers in a single cloud site or across multiple geo-distributed ones. Since the ORAM algorithm leads to serious access-load imbalance among storage servers, we study a data placement problem to achieve a load-balanced storage system with improved availability and responsiveness. Due to the NP-hardness of this problem, we propose a low-complexity algorithm that can handle the large problem sizes associated with big data. Extensive simulations show that our proposed algorithm finds results close to the optimal solution and significantly outperforms a random data placement algorithm.
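A minimal sketch of a greedy load-balancing placement heuristic in the spirit of the problem described (the paper's actual algorithm and cost model are not reproduced here); per-block access rates are assumed known, e.g., from the ORAM access schedule.

```python
import heapq

def place_blocks(block_loads, num_servers):
    """Greedy placement: assign each block (heaviest first) to the
    currently least-loaded server. A classic heuristic, used here only
    to illustrate the load-balancing objective, not the paper's method."""
    heap = [(0.0, s) for s in range(num_servers)]  # (load, server)
    heapq.heapify(heap)
    assignment = {}
    for block, load in sorted(block_loads.items(), key=lambda kv: -kv[1]):
        server_load, server = heapq.heappop(heap)
        assignment[block] = server
        heapq.heappush(heap, (server_load + load, server))
    return assignment

# Example: ORAM-induced access rates per block (hypothetical numbers).
rates = {"b0": 9.0, "b1": 5.0, "b2": 4.0, "b3": 3.0, "b4": 1.0}
print(place_blocks(rates, num_servers=2))
```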
Modern cyber systems, and their integration with physical infrastructure, have a profound effect on productivity and quality of life. Their involvement in our daily lives elevates the need for means to ensure their resilience against attacks and failures. One major threat is software monoculture. Recent research has demonstrated the danger of software monoculture and proposed diversity as a way to reduce the attack surface. In this paper, we propose ChameleonSoft, a multidimensional software diversity mechanism that, in effect, induces spatiotemporal software behavior encryption and a moving target defense. ChameleonSoft introduces a loosely coupled, online-programmable software-execution foundation separating logic, state and physical resources. The elastic construction of this foundation enables ChameleonSoft to define running software as a set of behaviorally-mutated, functionally-equivalent code variants. ChameleonSoft intelligently shuffles these variants at runtime while changing their physical location, inducing enough untraceable confusion and diffusion to effectively encrypt the execution behavior of the running software. ChameleonSoft is also equipped with an autonomic failure recovery mechanism for enhanced resilience. In order to test the applicability of the proposed approach, we present a prototype of the ChameleonSoft Behavior Encryption (CBE) and recovery mechanisms. Further, using analysis and simulation, we study the performance and security aspects of the proposed system. This study assesses the provisioned level of security by measuring the avalanche-effect percentage and the induced confusion and diffusion levels in order to evaluate the strength of the CBE mechanism. We also compute the computational cost of provisioning security and enhancing system resilience.
To improve the overall performance of range-image denoising, an impulsive noise (IN) denoising method with variable windows is proposed in this paper. Based on several discriminant criteria, principles for detecting dropout INs and outlier INs are provided. Subsequently, a nearest non-IN neighbor search and an Index Distance Weighted Mean filter are combined for IN denoising. As key factors in the adaptability of the proposed method, the sizes of the two windows used for outlier IN detection and IN denoising are investigated. Starting from a theoretical model of invader occlusion, a variable window is presented that adapts the window size to the local environment of each point, together with practical criteria for determining the adaptive window size. Experiments on real range images of multi-line surfaces are conducted, with evaluations in terms of computational complexity and quality assessment, including comparisons against several other popular methods. The results indicate that the proposed method detects impulsive noise with high accuracy and, with the help of the variable window, removes it with strong adaptability.
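The sketch below illustrates one plausible reading of an index-distance-weighted mean filter over a point's non-noise neighbors; the inverse-distance weighting and the 1-D window handling are assumptions for illustration, not the paper's exact 2-D formulation.

```python
# Toy index-distance-weighted mean filter for a 1-D range scan
# (illustrative; the paper works on 2-D range images with variable windows).

def idw_mean(values, noisy, center, window):
    """Replace values[center] by a mean of non-noisy neighbors weighted
    by inverse index distance. noisy is a boolean mask from IN detection."""
    num = den = 0.0
    lo, hi = max(0, center - window), min(len(values), center + window + 1)
    for i in range(lo, hi):
        if i == center or noisy[i]:
            continue                       # skip the point itself and other INs
        w = 1.0 / abs(i - center)          # inverse index-distance weight
        num += w * values[i]
        den += w
    return num / den if den > 0 else values[center]

scan = [1.0, 1.1, 9.9, 1.2, 1.3]           # 9.9 is an outlier IN
mask = [False, False, True, False, False]
print(idw_mean(scan, mask, center=2, window=2))  # -> roughly 1.15
```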
A new class of affine-projection-like (APL) adaptive-filtering algorithms is proposed. The new algorithms are obtained by eliminating the constraint of forcing the a posteriori error vector to zero in the affine-projection algorithm proposed by Ozeki and Umeda. In this way, direct or indirect inversion of the input signal matrix is not required and, consequently, the amount of computation required per iteration can be reduced. In addition, as demonstrated by extensive simulation results, the proposed algorithms offer reduced steady-state misalignment in system-identification, channel-equalization, and acoustic-echo-cancelation applications. A mean-square-error analysis of the proposed APL algorithms is also carried out, and its accuracy is verified using simulation results in a system-identification application.
The secure hash algorithm (SHA)-3 was selected in 2012 and will be used to provide security to any application that requires hashing, pseudo-random number generation, or integrity checking. The algorithm was selected based on benchmarks such as security, performance, and complexity. In this paper, in order to provide reliable architectures for this algorithm, an efficient concurrent error detection scheme for the selected SHA-3 algorithm, i.e., Keccak, is proposed. To the best of our knowledge, effective countermeasures for potential reliability issues in hardware implementations of this algorithm have not been presented to date. In proposing the error detection approach, our aim is to achieve acceptable complexity and performance overheads while maintaining high error coverage. In this regard, we present a low-complexity scheme based on recomputing with rotated operands, a step forward toward reducing the hardware overhead of the proposed error detection approach. Moreover, we perform injection-based fault simulations and show that error coverage close to 100% is achieved. Furthermore, we have designed the proposed scheme, and ASIC analysis shows that acceptable complexity and performance overheads are reached. By utilizing the proposed high-performance concurrent error detection scheme, more reliable and robust hardware implementations of the newly standardized SHA-3 are realized.
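The recomputing-with-rotated-operands idea can be conveyed with a toy check: for bitwise round functions that commute with lane rotation, recomputing on rotated inputs and un-rotating the result should reproduce the original output, so a mismatch signals a transient fault. The sketch below is a generic software illustration on a chi-like bitwise step, not the paper's Keccak datapath.

```python
# Toy RERO-style concurrent error detection (illustrative only).
WIDTH = 64
MASK = (1 << WIDTH) - 1

def rotl(x, r):
    return ((x << r) | (x >> (WIDTH - r))) & MASK

def rotr(x, r):
    return rotl(x, WIDTH - r)

def chi_like(a, b):
    # A Keccak-chi-like bitwise step; bitwise ops commute with rotation.
    return (a ^ (~a & b)) & MASK

def rero_check(a, b, r=1):
    """Compute normally, recompute with operands rotated by r, and compare.
    A transient fault in one of the two passes makes the results disagree."""
    normal = chi_like(a, b)
    redundant = rotr(chi_like(rotl(a, r), rotl(b, r)), r)
    return normal == redundant  # False => fault detected

print(rero_check(0x0123456789ABCDEF, 0xFEDCBA9876543210))  # True if fault-free
```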
Similarity search plays an important role in many applications involving high-dimensional data. Due to the well-known curse of dimensionality, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods use randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map the high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method employing ℓ2,1-norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code via the hash functions and then explores the buckets whose hash codes are similar to the query hash code. Extensive experimental results on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.
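The query procedure described above, map to a binary code and probe nearby buckets, can be sketched generically; this is standard hash-lookup machinery with a hypothetical `hash_code` function standing in for RHLM's learned hash functions.

```python
# Generic binary-hash bucket probing (illustrative; RHLM's learned hash
# functions are replaced here by a hypothetical `hash_code`).
from collections import defaultdict
from itertools import combinations

def build_index(points, hash_code):
    index = defaultdict(list)
    for pid, x in points.items():
        index[hash_code(x)].append(pid)
    return index

def probe(index, code, nbits, radius=1):
    """Return candidates from the query bucket and all buckets within
    the given Hamming radius of the query code."""
    candidates = list(index.get(code, []))
    for r in range(1, radius + 1):
        for flips in combinations(range(nbits), r):
            neighbor = code
            for b in flips:
                neighbor ^= 1 << b       # flip bit b
            candidates += index.get(neighbor, [])
    return candidates
```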
Wireless security has been an active research area over the last decade. Many studies of wireless security use cryptographic tools, but traditional cryptographic tools are normally based on computational assumptions, which may turn out to be invalid in the future. Consequently, it is very desirable to build cryptographic tools that do not rely on computational assumptions. In this paper, we focus on a crucial cryptographic tool, namely 1-out-of-2 oblivious transfer. This tool plays a central role in cryptography because a cryptographic protocol can be built for any polynomial-time computable function using it. We present a novel 1-out-of-2 oblivious transfer protocol based on wireless channel characteristics that does not rely on any computational assumption. We also illustrate the potential breadth of its applications by giving two examples, one on private communications and the other on privacy-preserving password verification. We have fully implemented the protocol on wireless devices and conducted experiments in real environments to evaluate it. Our experimental results demonstrate that it has reasonable efficiency.
Deadlock freedom is a key challenge in the design of communication networks. Wormhole switching is a popular switching technique that is nonetheless prone to deadlock. Deadlock analysis of routing functions is a manual and complex task. We propose an algorithm that automatically proves routing functions deadlock-free or outputs a minimal counterexample explaining the source of the deadlock. Our algorithm is the first to automatically check a necessary and sufficient condition for deadlock-free routing. We illustrate its efficiency on a complex adaptive routing function for torus topologies, and the results are encouraging. Deciding deadlock freedom is co-NP-complete for wormhole networks; nevertheless, our tool proves a 13 × 13 torus deadlock-free within seconds. Finding minimal deadlocks is more difficult: our tool needs four minutes to find a minimal deadlock in an 11 × 11 torus, while it needs nine hours for a 12 × 12 network.
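A classical building block behind such tools is the channel dependency graph: for deterministic routing, Dally and Seitz showed that deadlock freedom is equivalent to this graph being acyclic (for adaptive routing, as analyzed here, the necessary and sufficient condition is more subtle). A minimal cycle check over an explicit dependency graph might look like this sketch:

```python
# Cycle detection on a channel dependency graph (sketch). Nodes are
# channels; an edge (c1, c2) means a packet holding c1 may request c2.

def has_cycle(deps):
    """deps: dict channel -> iterable of next channels. Iterative DFS
    with colors: 0 = unvisited, 1 = on stack, 2 = done."""
    color = {c: 0 for c in deps}
    for start in deps:
        if color[start]:
            continue
        stack = [(start, iter(deps[start]))]
        color[start] = 1
        while stack:
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                color[node] = 2
                stack.pop()
            elif color.get(nxt, 0) == 1:
                return True              # back edge: cyclic dependency
            elif color.get(nxt, 0) == 0:
                color[nxt] = 1
                stack.append((nxt, iter(deps.get(nxt, ()))))
    return False

ring = {"c0": ["c1"], "c1": ["c2"], "c2": ["c0"]}  # unrestricted ring: deadlock
print(has_cycle(ring))  # True
```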
By enabling a direct comparison of different security solutions with respect to their relative effectiveness, a network security metric may provide quantifiable evidence to assist security practitioners in securing computer networks. However, research on security metrics has been hindered by difficulties in handling zero-day attacks exploiting unknown vulnerabilities. In fact, the security risk of unknown vulnerabilities has been considered unmeasurable due to the unpredictable nature of software flaws. This poses a major difficulty for security metrics, because a more secure configuration would be of little value if it were equally susceptible to zero-day attacks. In this paper, we propose a novel security metric, k-zero day safety, to address this issue. Instead of attempting to rank unknown vulnerabilities, our metric counts how many such vulnerabilities would be required to compromise network assets; a larger count implies more security, because the likelihood of having more unknown vulnerabilities available, applicable, and exploitable all at the same time is significantly lower. We formally define the metric, analyze the complexity of computing it, devise heuristic algorithms for intractable cases, and finally demonstrate through case studies that applying the metric to existing network security practices may generate actionable knowledge.
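One simplified way to picture the counting idea is as a search over an attack graph whose edges are labeled by the zero-day vulnerability they would exploit; the metric for an asset is the minimum number of distinct zero-days on any path to it. The sketch below is an exact but exponential-state search for small graphs (consistent with the intractability the paper notes, which its heuristics address); the graph encoding is hypothetical and much simpler than the paper's formal model.

```python
# Exact (exponential-state) search for a k-zero-day-style count on a
# small attack graph (sketch; the paper gives heuristics for large cases).
import heapq

def k_zero_day(graph, source, asset):
    """graph: dict node -> list of (next_node, zero_day_id).
    Returns the minimum number of DISTINCT zero-days needed to reach asset."""
    # State = (count of distinct vulns used, node, frozenset of vulns).
    heap = [(0, source, frozenset())]
    best = {}
    while heap:
        k, node, used = heapq.heappop(heap)
        if node == asset:
            return k                      # first pop of asset has minimal k
        if best.get((node, used), 1 << 30) <= k:
            continue
        best[(node, used)] = k
        for nxt, vuln in graph.get(node, []):
            nused = used | {vuln}
            heapq.heappush(heap, (len(nused), nxt, nused))
    return None  # asset unreachable

g = {"internet": [("web", "v1")], "web": [("db", "v2"), ("db", "v1")]}
print(k_zero_day(g, "internet", "db"))  # 1: v1 can be reused on both steps
```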
More and more systems use mobile devices to perform sensing tasks, but this increases the risk of leaking private personal data. Data hiding is one of the important techniques for information security. Even though many data hiding algorithms aim to provide more hiding capacity or higher PSNR, few can control PSNR effectively while ensuring hiding capacity. In this paper we propose PSNR-Controllable Data Hiding (PCDH), a novel encoding scheme for data hiding with controllable PSNR based on LSB substitution. In PCDH, we use a remainder algorithm to compute the hidden information and hide the secret information in the last x LSBs of every pixel. Theoretical proof shows that this method bounds the deviation of the stego image from the cover image and controls PSNR by adjusting parameters in the remainder calculation. We then design encoding and decoding algorithms with low computational complexity. Experimental results show that PCDH can keep the PSNR in a given range while ensuring high hiding capacity; in addition, it resists some steganalysis methods well. Compared to other algorithms, PCDH achieves a better trade-off among PSNR, hiding capacity, and computational complexity.
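For orientation, here is a minimal sketch of the underlying machinery: plain x-LSB substitution plus the PSNR computation one would control, assuming 8-bit grayscale pixels. PCDH's remainder-based encoding itself is not reproduced here.

```python
# Plain x-LSB embedding and PSNR measurement (sketch; PCDH adds a
# remainder-based encoding on top to make PSNR controllable).
import math

def embed(pixels, bits, x):
    """Overwrite the last x LSBs of each pixel with secret bits."""
    out, i = [], 0
    for p in pixels:
        chunk = 0
        for _ in range(x):
            chunk = (chunk << 1) | (bits[i] if i < len(bits) else 0)
            i += 1
        out.append((p & ~((1 << x) - 1)) | chunk)  # clear x LSBs, insert bits
    return out

def psnr(cover, stego):
    mse = sum((c - s) ** 2 for c, s in zip(cover, stego)) / len(cover)
    return float("inf") if mse == 0 else 10 * math.log10(255 ** 2 / mse)

cover = [120, 121, 122, 123]
stego = embed(cover, [1, 0, 1, 1, 0, 0, 1, 0], x=2)
print(stego, psnr(cover, stego))  # ~43 dB for this tiny example
```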
In recent years, we have seen a number of successful attacks against high-profile targets, some of which have even caused severe physical damage. These examples have shown us that resourceful and determined attackers can penetrate virtually any system, even those secured by an "air gap." Consequently, in order to minimize the impact of stealthy attacks, defenders have to focus not only on strengthening the first lines of defense but also on deploying effective intrusion-detection systems. Intrusion-detection systems can play a key role in protecting sensitive computer systems because they give defenders a chance to detect and mitigate attacks before they cause substantial losses. However, an over-sensitive intrusion-detection system that produces a large number of false alarms imposes prohibitively high operational costs on a defender, since alarms need to be manually investigated. Thus, defenders have to strike the right balance between maximizing security and minimizing costs. Optimizing the sensitivity of intrusion-detection systems is especially challenging when multiple interdependent computer systems must be defended against a strategic attacker, who can choose which systems to target in order to maximize losses and minimize the probability of detection. We model this scenario as an attacker-defender security game and study the problem of finding optimal intrusion-detection thresholds.
We introduce the Nondeterministic Strong Exponential Time Hypothesis (NSETH) as a natural extension of the Strong Exponential Time Hypothesis (SETH). We show that both refuting and proving NSETH would have interesting consequences. In particular, we show that disproving NSETH would give new nontrivial circuit lower bounds. On the other hand, NSETH implies non-reducibility results, i.e., the absence of (deterministic) fine-grained reductions from SAT to a number of problems. As a consequence, we conclude that unless this hypothesis fails, problems such as 3-SUM, APSP and model checking of a large class of first-order graph properties cannot be shown to be SETH-hard using deterministic or zero-error probabilistic reductions.
The security game is a basic model for resource allocation in adversarial environments. There are two players, a defender and an attacker: the defender wants to allocate her limited resources to defend critical targets, while the attacker seeks his most favorable target to attack. In the past decade, there has been a surge of research interest in analyzing and solving security games motivated by applications from various domains. Remarkably, these models and their game-theoretic solutions have led to real-world deployments in use by major security agencies at LAX airport, the US Coast Guard and the Federal Air Marshal Service, as well as non-governmental organizations. Across all this research and these applications, equilibrium computation serves as a foundation. This paper examines security games from a theoretical perspective and provides a unified view of various security game models. In particular, each security game can be characterized by a set system E that consists of the defender's pure strategies; the defender's best-response problem can then be viewed as a combinatorial optimization problem over E. Our framework captures most of the basic security game models in the literature, including all the deployed systems, and the set systems E arising from various domains encode standard combinatorial problems such as bipartite matching, maximum coverage, min-cost flow, and packing. Our main result shows that equilibrium computation in security games is essentially a combinatorial problem. In particular, we prove that, for any set system E, the following problems can be reduced to each other in polynomial time: (0) combinatorial optimization over E; (1) computing the minimax equilibrium for zero-sum security games over E; (2) computing the strong Stackelberg equilibrium for security games over E; (3) computing the best or worst (for the defender) Nash equilibrium for security games over E. Therefore, the hardness [polynomial solvability] of any of these problems implies the hardness [polynomial solvability] of all the others. Here, by "games over E" we mean the class of security games with arbitrary payoff structures but a fixed set E of defender pure strategies. This shows that the complexity of a security game is essentially determined by the set system E. We view drawing these connections as an important conceptual contribution of this paper.
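For item (1), the minimax equilibrium of a zero-sum security game over a small explicit set system can be computed with a standard linear program. The sketch below uses scipy and a hypothetical payoff matrix U[i][j] (defender pure strategy i, attacked target j) giving the defender's payoff; it is an illustration of the textbook LP, not the paper's reductions.

```python
# Minimax defender strategy for a small zero-sum security game (sketch).
# U[i][j]: defender's payoff when playing pure strategy i and target j
# is attacked. Variables: mixed strategy x over strategies, game value v.
import numpy as np
from scipy.optimize import linprog

U = np.array([[1.0, 0.0, 0.0],   # e.g., pure strategy covering target 0
              [0.0, 1.0, 0.0],   # covering target 1
              [0.0, 0.0, 1.0]])  # covering target 2  (hypothetical)

m, n = U.shape
# maximize v  s.t.  sum_i x_i U[i][j] >= v  for all j,  sum_i x_i = 1, x >= 0
c = np.zeros(m + 1); c[-1] = -1.0            # linprog minimizes, so use -v
A_ub = np.hstack([-U.T, np.ones((n, 1))])    # v - x^T U[:, j] <= 0
b_ub = np.zeros(n)
A_eq = np.ones((1, m + 1)); A_eq[0, -1] = 0.0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, 1)] * m + [(None, None)])
x, v = res.x[:m], res.x[-1]
print("defender mix:", x.round(3), "value:", round(v, 3))  # uniform, 1/3
```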
Social animals as found in fish schools, bird flocks, bee hives, and ant colonies are able to solve highly complex problems in nature. This includes foraging for food, constructing astonishingly complex nests, and evading or defending against predators. Remarkably, these animals in many cases use very simple, decentralized communication mechanisms that do not require a single leader. This makes the animals perform surprisingly well, even in dynamically changing environments. The collective intelligence of such animals is known as swarm intelligence and it has inspired popular and very powerful optimization paradigms, including ant colony optimization (ACO) and particle swarm optimization (PSO). The reasons behind their success are often elusive. We are just beginning to understand when and why swarm intelligence algorithms perform well, and how to use swarm intelligence most effectively. Understanding the fundamental working principles that determine their efficiency is a major challenge. This tutorial will give a comprehensive overview of recent theoretical results on swarm intelligence algorithms, with an emphasis on their efficiency (runtime/computational complexity). In particular, the tutorial will show how techniques for the analysis of evolutionary algorithms can be used to analyze swarm intelligence algorithms and how the performance of swarm intelligence algorithms compares to that of evolutionary algorithms. The results shed light on the working principles of swarm intelligence algorithms, identify the impact of parameters and other design choices on performance, and thus help to use swarm intelligence more effectively. The tutorial will be divided into a first, larger part on ACO and a second, smaller part on PSO. For ACO we will consider simple variants of the MAX-MIN ant system. Investigations of example functions in pseudo-Boolean optimization demonstrate that the choices of the pheromone update strategy and the evaporation rate have a drastic impact on the running time. We further consider the performance of ACO on illustrative problems from combinatorial optimization: constructing minimum spanning trees, solving shortest path problems with and without noise, and finding short tours for the TSP. For particle swarm optimization, the tutorial will cover results on PSO for pseudo-Boolean optimization as well as a discussion of theoretical results in continuous spaces.
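To give a flavor of the analyses mentioned above, a minimal MAX-MIN ant system for pseudo-Boolean maximization, with the pheromone bounds and evaporation rate ρ that the tutorial identifies as critical design choices, might look like the sketch below. This is a simplified MMAS variant written for illustration, not a specific algorithm from the tutorial.

```python
# Simplified MMAS for maximizing a pseudo-Boolean function (sketch).
import random

def mmas(f, n, rho=0.1, iters=2000, tau_min=None, tau_max=None):
    tau_min = tau_min if tau_min is not None else 1.0 / n
    tau_max = tau_max if tau_max is not None else 1.0 - 1.0 / n
    tau = [0.5] * n                       # pheromone per bit position
    best, best_val = None, float("-inf")
    for _ in range(iters):
        x = [1 if random.random() < tau[i] else 0 for i in range(n)]
        if f(x) >= best_val:
            best, best_val = x, f(x)
        for i in range(n):                # reinforce the best-so-far solution
            target = 1.0 if best[i] else 0.0
            tau[i] = (1 - rho) * tau[i] + rho * target
            tau[i] = min(tau_max, max(tau_min, tau[i]))  # MAX-MIN bounds
    return best, best_val

onemax = lambda x: sum(x)                 # classic pseudo-Boolean test function
print(mmas(onemax, n=20))
```

Larger ρ makes the pheromones track the best-so-far solution faster but risks premature convergence, which is exactly the kind of trade-off the runtime analyses quantify.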
Resiliency is a relatively new topic in the context of access control. Informally, it refers to the extent to which a multi-user computer system, subject to an authorization policy, is able to continue functioning if a number of authorized users are unavailable. Several interesting problems connected to resiliency were introduced by Li, Wang and Tripunitara [13], many of which were found to be intractable. In this paper, we show that these resiliency problems have unexpected connections with the workflow satisfiability problem (WSP). In particular, we show that an instance of the resiliency checking problem (RCP) may be reduced to an instance of WSP. We then demonstrate that recent advances in our understanding of WSP enable us to develop fixed-parameter tractable algorithms for RCP. Moreover, these algorithms are likely to be useful in practice, given recent experimental work demonstrating the advantages of bespoke algorithms for solving WSP. We also generalize RCP in several different ways, showing in each case how to adapt the reduction to WSP. Li et al. also showed that the coexistence of resiliency policies and static separation-of-duty policies gives rise to further interesting questions; we show how our reduction of RCP to WSP may be extended to solve these problems as well, and establish that they too are fixed-parameter tractable.
Radio Frequency Identification (RFID) systems are widely used today because of their low cost, ease of use, and wireless operation. Because RFID systems use wireless communication, they can face challenging security problems. Several lightweight encryption algorithms have been proposed to address these problems; the RBS block cipher is one of them. In the design of RBS, conventional block cipher elements such as S-boxes and P-boxes are not used; instead, RBS is based on inserting redundant bits between altered plaintext bits using an encryption key Kenc. In this paper, identifying the lack of proper diffusion as the main defect of RBS, we propose a chosen-ciphertext attack against this algorithm. The data complexity of the attack is N plaintext-ciphertext pairs and its time complexity is N decryptions, where N is the size of the encryption key Kenc.
This paper considers physical layer security for cluster-based cooperative wireless sensor networks (WSNs), where each node is equipped with a single antenna and sensor nodes cooperate within each cluster of the network to form a virtual multi-input multi-output (MIMO) communication architecture. We propose a joint cooperative beamforming and jamming scheme to enhance the security of the WSN, in which some of the sensor nodes in Alice's cluster transmit beamforming signals to Bob while some of the sensor nodes in Bob's cluster jam Eve with artificial noise. Optimizing the beamforming and jamming vectors to minimize total energy consumption subject to quality-of-service (QoS) constraints is an NP-hard problem. Fortunately, through reformulation, the problem is shown to be a quadratically constrained quadratic program (QCQP), which can be solved with the SCIP (Solving Constraint Integer Programs) algorithm. Finally, we present simulation results for the proposed scheme.
Feature selection is an important step in data analysis to address the curse of dimensionality. Such dimensionality reduction techniques are particularly important when classification is required and the model's cost grows polynomially with the number of features (applications include genomics, the life sciences, and cyber-security). Feature selection is the process of finding the minimum subset of features that allows for the maximum predictive power. Many of the state-of-the-art information-theoretic feature selection approaches use a greedy forward search; however, there are concerns about the efficiency and optimality of this search. A unified framework was recently presented for information-theoretic feature selection that tied together much of the work of the past twenty years. That work showed that joint mutual information maximization (JMI) is generally the best option; however, the complexity of the greedy search for JMI scales quadratically, making it infeasible on high-dimensional datasets. In this contribution, we propose a fast, information-theoretic approximation of JMI. Our approach takes advantage of decomposing the calculations within JMI to speed up a typical greedy search. We benchmarked the proposed approach against JMI on several UCI datasets and demonstrate that it returns feature sets that are highly consistent with JMI while decreasing the run time required to perform feature selection.
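For reference, greedy forward selection under the JMI criterion scores each candidate feature by summing its joint mutual information with each already-selected feature and the label. The sketch below spells this out for discrete data with a naive plug-in MI estimator; it is the quadratic-cost baseline being approximated, not the paper's fast method.

```python
# Greedy JMI forward selection for discrete features (naive baseline).
import math
from collections import Counter

def mi(xs, ys):
    """Plug-in mutual information I(X; Y) in nats for discrete sequences."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def jmi_select(X, y, k):
    """X: list of discrete feature columns, y: labels. Select k features.
    JMI score of candidate f given selected S: sum_{s in S} I((f, s); Y)."""
    remaining = set(range(len(X)))
    first = max(remaining, key=lambda j: mi(X[j], y))   # seed: max relevance
    selected = [first]
    remaining.discard(first)
    while len(selected) < k and remaining:
        best = max(remaining, key=lambda j: sum(
            mi(list(zip(X[j], X[s])), y) for s in selected))
        selected.append(best)
        remaining.discard(best)
    return selected
```

Each step re-scores every remaining feature against every selected one, which is where the quadratic scaling in the number of selected features comes from.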
A wireless sensor network (WSN) is composed of sensor nodes and a base station. In WSNs, constructing an efficient key-sharing scheme to ensure secure communication is important. In this paper, we propose a new key-sharing scheme for groups, which shares a group key in a single broadcast, independently of the number of nodes. The scheme is based on geometric characteristics and provides information-theoretic security against analysis of the transmitted data. We compared our scheme with conventional schemes in terms of communication traffic, computational complexity, flexibility, and security, and the results show that our scheme is suitable for an Internet-of-Things (IoT) network.
In the era of Big Data, software systems are affected by growing complexity in both functional and non-functional requirements. As more and more people use software applications over the web, recognizing whether traffic is malicious or legitimate becomes a challenge. The traffic load on security controllers, as well as the complexity of the security rules used to detect attacks, can grow to levels where current solutions no longer suffice. In this work, we propose a hierarchical distributed architecture for security control that partitions responsibility and workload among many security controllers. In addition, our architecture proposes a simpler way of defining security rules, allowing security to be enforced at an operational level rather than a development level.
We propose secure RAID, i.e., low-complexity schemes to store information in a distributed manner that is resilient to node failures and resistant to node eavesdropping. We generalize the concept of systematic encoding to secure RAID and show that systematic schemes have significant advantages in the efficiencies of encoding, decoding and random access. For the practical high rate regime, we construct three XOR-based systematic secure RAID schemes with optimal encoding and decoding complexities, from the EVENODD codes and B codes, which are array codes widely used in the RAID architecture. These schemes optimally tolerate two node failures and two eavesdropping nodes. For more general parameters, we construct efficient systematic secure RAID schemes from Reed-Solomon codes. Our results suggest that building “keyless”, information-theoretic security into the RAID architecture is practical.
The volume of digital data is increasing rapidly, and its security is at risk both in transit on a network and at rest. The execution time of full disk encryption on large servers is significant because of the computational complexity of disk encryption, so reducing this execution time matters from the application point of view. In this work, the full disk encryption algorithm EME2 AES (Encrypt Mix Encrypt V2 Advanced Encryption Standard) is analyzed. Its execution time is reduced by a multicore-compatible parallel implementation that makes use of the available cores. The parallel implementation is executed on a multicore machine with 8 cores, and the speedup over sequential execution is measured. Results show that the multicore implementation of EME2 AES using OpenMP is up to 2.85 times faster than sequential execution for the chosen infrastructure and data range.
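The parallelization pattern here is the standard one for full-disk encryption: sectors are independent encryption units, so they can be dispatched to parallel workers. The sketch below shows that pattern only; the paper uses OpenMP in C, and `toy_encrypt` is a placeholder stand-in, not EME2 AES or any real cipher.

```python
# Sector-parallel encryption pattern (sketch). Each disk sector is an
# independent unit, so sectors map cleanly onto worker processes.
# `toy_encrypt` is a placeholder, NOT EME2 AES or any real cipher.
from concurrent.futures import ProcessPoolExecutor

SECTOR = 4096  # bytes per sector (typical)

def toy_encrypt(args):
    index, sector = args
    # Placeholder "cipher": XOR with a sector-index-derived byte.
    k = index & 0xFF
    return bytes(b ^ k for b in sector)

def encrypt_disk(image, workers=8):
    sectors = [(i, image[off:off + SECTOR])
               for i, off in enumerate(range(0, len(image), SECTOR))]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return b"".join(pool.map(toy_encrypt, sectors))

if __name__ == "__main__":
    data = bytes(range(256)) * 64          # 16 KiB fake disk image
    print(len(encrypt_disk(data)))
```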