Biblio
Biometrics is attracting increasing attention in privacy- and security-sensitive applications, such as access control and remote financial transactions. However, advanced forgery and spoofing techniques threaten the reliability of conventional biometric modalities. This has motivated our investigation of a novel yet promising modality: the transient evoked otoacoustic emission (TEOAE), an acoustic response generated by the cochlea after a click stimulus. Unlike conventional modalities that are easily accessible or captured, TEOAE is naturally immune to replay and falsification attacks, being a physiological outcome of the human auditory system. In this paper, we resort to wavelet analysis to derive the time-frequency representation of this nonstationary signal, which reveals individual uniqueness and long-term reproducibility. A machine learning technique, linear discriminant analysis, is subsequently utilized to reduce intrasubject variability and capture discriminative intersubject features. Considering practical application, we also introduce a complete framework for the biometric system in both verification and identification modes. Comparative experiments on a TEOAE data set collected in a biometric setting show the merits of the proposed method. Performance is further improved by fusing information from both ears.
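As a rough illustration of the pipeline this abstract describes (wavelet time-frequency features fed to LDA), here is a minimal sketch on synthetic stand-in signals; the paper's actual wavelet family, scales, and TEOAE data are not given, so every parameter below is an assumption.

```python
# Minimal sketch, assuming synthetic stand-in TEOAE-like signals; the paper's
# wavelet choice, scales, and dataset are unknown and invented here.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_subjects, n_recordings, n_samples = 5, 8, 256

t = np.linspace(0, 0.02, n_samples)
X, y = [], []
for s in range(n_subjects):
    f0 = 1000 + 300 * s  # hypothetical subject-specific dominant frequency
    for _ in range(n_recordings):
        sig = np.exp(-200 * t) * np.sin(2 * np.pi * f0 * t) \
              + 0.1 * rng.standard_normal(n_samples)
        coeffs, _ = pywt.cwt(sig, scales=np.arange(1, 31), wavelet="morl")
        X.append(np.abs(coeffs).ravel())  # time-frequency magnitude as features
        y.append(s)

lda = LinearDiscriminantAnalysis()  # projects to at most n_subjects-1 dims
lda.fit(np.array(X), np.array(y))
print("training accuracy:", lda.score(np.array(X), np.array(y)))
```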
In this paper, an edit detection method for forensic audio analysis is proposed. It develops and improves a previous method through changes in the signal processing chain and a novel detection criterion. As with the original method, electrical network frequency (ENF) analysis is central to the novel edit detector, since it allows monitoring anomalous variations of the ENF related to audio edit events. Working in an unsupervised manner, the edit detector compares the extent of ENF variations, centered at the nominal frequency, against a variable threshold that defines the upper limit for normal variations observed in unedited signals. ENF variations caused by edits are likely to exceed this threshold, providing a mechanism for their detection. The proposed method is evaluated in both qualitative and quantitative terms on two distinct annotated databases. Results are reported for the originally noisy database signals as well as for versions further degraded under controlled conditions. A comparative performance evaluation in terms of equal error rate (EER) reveals that, for one of the tested databases, an improvement from 7% EER (original method) to 4% EER (proposed method) is achieved. When the signals are amplitude-clipped or corrupted by broadband background noise, the performance figures of the novel method follow the same profile as those of the original method.
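The core idea, flagging frames whose ENF deviates from its nominal value by more than a threshold, can be sketched as follows. This is not the paper's signal-processing chain or variable-threshold rule; the frame length, band, and threshold are assumptions for illustration.

```python
# Hedged sketch: per-frame ENF estimate via STFT peak near the nominal
# frequency; frames whose deviation exceeds a fixed threshold are flagged.
import numpy as np
from scipy.signal import stft

def enf_edit_flags(x, fs, nominal=50.0, band=1.0, thresh_hz=0.1):
    f, t, Z = stft(x, fs=fs, nperseg=2 * fs)        # 2 s frames -> 0.5 Hz bins
    mask = (f >= nominal - band) & (f <= nominal + band)
    peak = f[mask][np.argmax(np.abs(Z[mask, :]), axis=0)]  # ENF per frame
    return t, np.abs(peak - nominal) > thresh_hz

fs, n = 400, 400 * 60
t = np.arange(n) / fs
# First half: clean 50 Hz hum; second half: 50.4 Hz, mimicking spliced audio.
x = np.concatenate([np.sin(2 * np.pi * 50.0 * t[: n // 2]),
                    np.sin(2 * np.pi * 50.4 * t[n // 2 :])])
times, flags = enf_edit_flags(x, fs)
print("anomalous frames near t =", times[flags])
```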
Port hopping is a typical moving target defense that constantly changes service port numbers to thwart reconnaissance attacks. It is effective in hiding service identities and confusing potential attackers, but how effective port hopping is, and under what circumstances it is a viable proactive defense, remains unknown, because existing works are limited and usually discuss only a few parameters with mainly empirical studies. This paper introduces an urn model and quantifies the likelihood of attacker success in terms of the port pool size, the number of probes, the number of vulnerable services, and the hopping frequency. Theoretical analysis shows that port hopping is an effective and promising proactive defense technology for thwarting network attacks.
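One common urn-model instantiation makes the quantification concrete: without hopping, the attacker's probes are draws without replacement from the port pool; with per-probe re-hopping, every probe becomes an independent draw. The paper's exact parameterization (e.g., how hopping frequency enters) may differ from this sketch.

```python
# Illustrative urn-model comparison; parameter values are invented.
from math import comb

def p_hit_static(pool, vuln, probes):
    # P(at least one vulnerable port found in `probes` distinct draws)
    return 1 - comb(pool - vuln, probes) / comb(pool, probes)

def p_hit_hopping(pool, vuln, probes):
    # Ports reshuffle between probes, so each probe is an independent draw.
    return 1 - (1 - vuln / pool) ** probes

pool, vuln = 65536, 3
for probes in (100, 1000, 10000):
    print(probes, round(p_hit_static(pool, vuln, probes), 4),
          round(p_hit_hopping(pool, vuln, probes), 4))
```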
Insect-scale Flapping-Wing Micro-Air Vehicles (FW-MAVs) can be particularly sensitive to control deficits caused by ongoing wing damage and degradation. Since any such degradation could occur during flight, and likely in ways difficult to predict a priori, any automated correction would also need to be applied in-flight. Previous work has demonstrated effective recovery of correct flight behavior via online (in-service), evolutionary-algorithm-based learning of new wing-level oscillation patterns. In those works, Evolutionary Algorithms (EAs) were used to continuously adapt wing motion patterns to restore the force generation expected by the flight controller. Given the requirements for online learning and fast recovery of correct flight behavior, the choice of EA is critical. The work described in this paper replaces the previously used oscillator learning algorithms with an Evolution Strategy (ES), an EA variant not previously tested for this application. This paper demonstrates that this approach is both more effective and faster than previously employed methods, and concludes with a discussion of future applications of the technique within this problem domain.
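For readers unfamiliar with Evolution Strategies, a minimal (1+λ)-ES loop of the kind that could re-learn oscillator parameters looks like the sketch below. The fitness function is a hypothetical stand-in for the simulator's force-error measure; the paper's actual oscillator encoding and evaluation are not reproduced.

```python
# Minimal (1+lambda) ES sketch; the target vector and fitness are invented
# stand-ins for the flight simulator's force-error evaluation.
import numpy as np

rng = np.random.default_rng(1)
target = np.array([1.2, 0.3, 0.8])   # hypothetical amplitude/bias/phase goal

def fitness(params):                 # lower is better: distance to target
    return float(np.sum((params - target) ** 2))

parent, sigma, lam = np.zeros(3), 0.5, 8
for gen in range(200):
    children = parent + sigma * rng.standard_normal((lam, 3))
    best = min(children, key=fitness)
    if fitness(best) < fitness(parent):
        parent, sigma = best, sigma * 1.1   # grow step size on success
    else:
        sigma *= 0.9                        # shrink on failure
print("recovered params:", np.round(parent, 3),
      "error:", round(fitness(parent), 6))
```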
One of the challenges in a distributed data infrastructure is how users authenticate to the infrastructure and how their authorisations are tracked. Each user community comes with its own established practices, all different, and users are put off if they need to use new, difficult tools. From the perspective of the infrastructure project, the level of assurance must be high enough, and it should not be necessary to reimplement an authentication and authorisation infrastructure (AAI). In the EUDAT project, we chose to implement a mostly loosely coupled approach based on the outcomes of the Contrail and Unicore projects. We have preferred a practical approach, combining the outcomes of several projects that have each contributed parts of the puzzle. The present paper describes our experiences integrating these parts. Eventually, we aim to have a full framework that will enable us to easily integrate new user communities and new services.
Evolutionary Computation has been suggested as a means of providing ongoing adaptation of robot controllers. Most often, using Evolutionary Computation to that end focuses on recovering acceptable robot performance, with less attention given to diagnosing the nature of the failure that necessitated the adaptation. In previous work, we introduced the concept of Evolutionary Model Consistency Checking, in which candidate robot controller evaluations are dual-purposed for both evolving control solutions and extracting robot fault diagnoses. That earlier work could detect only single-wing damage faults in a simulated Flapping-Wing Micro Air Vehicle. We now extend the method to enable detection and diagnosis of both single- and dual-wing faults. This paper explains those extensions, demonstrates their efficacy via simulation studies, and discusses the possibility of augmenting EC adaptation by exploiting extracted fault diagnoses to speed EC search.
Ongoing effective control of insect-scale Flapping-Wing Micro Air Vehicles would benefit significantly from active in-flight control adaptation. Previous work demonstrated that, in simulated vehicles with wing membrane damage, in-flight recovery of effective vehicle attitude and position control precision is possible via an in-flight adaptive learning oscillator. Most of the recent approaches to this problem have employed an island-of-fitness compact genetic algorithm (ICGA) for oscillator learning. The work presented in this paper details a domain-specific search-space reduction approach implemented with the existing ICGA and its effect on in-flight learning time. Further, we demonstrate that the proposed search-space reduction methodology is effective in producing an error-correcting oscillator configuration rapidly, online, while the vehicle is in normal service. The paper presents specific simulation results demonstrating the value of the search-space reduction and discusses future applications of the technique in this problem domain.
The huge amount of user log data collected by search engine providers creates new opportunities to understand user loyalty and defection behavior at an unprecedented scale. However, it also poses a great challenge: analyzing such complex, large-scale data and gleaning insights from it. In this paper, we introduce LoyalTracker, a visual analytics system that tracks user loyalty and switching behavior towards multiple search engines from vast amounts of user log data. We propose a new interactive visualization technique (the flow view) based on a flow metaphor, which conveys a concise visual summary of the loyalty dynamics of thousands of users over time. Two other visualization techniques, a density map and a word cloud, are integrated to enable analysts to gain further insights into the patterns identified by the flow view. Case studies and interviews with domain experts demonstrate the usefulness of our technique in understanding user loyalty and switching behavior in search engines.
Cloud computing is widely deployed to handle challenges such as big data processing and storage. Due to the outsourcing and sharing features of cloud computing, security is one of the main concerns that hinder end users from shifting their businesses to the cloud. Many cryptographic techniques have been proposed to alleviate data security issues in cloud computing, but most of these works focus on solving a specific security problem such as data sharing, comparison, or searching. Meanwhile, little effort has been devoted to program security and to formalizing the security requirements in the context of cloud computing. We propose a formal definition of the security of cloud computing that captures the essence of the security requirements of both data and programs. Analysis of some existing technologies under the proposed definition shows the effectiveness of the definition. We also give a simple look-up-table-based solution for secure cloud computing that satisfies the given definition. Since an FPGA uses look-up tables as its main computation component, it is a suitable hardware platform for the proposed secure cloud computing scheme. We therefore use FPGAs to implement the proposed solution for the k-means clustering algorithm, demonstrating the effectiveness of the proposed solution.
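To give a feel for why look-up tables suit such schemes, here is a toy of LUT evaluation with a blinded (permuted) input index. This is emphatically not the paper's protocol, just a sketch of the general idea that a server can evaluate a function by table lookup without seeing the raw input.

```python
# Toy: the client blinds its input with a secret permutation; the server
# stores only the permuted table and performs a plain LUT lookup.
# Invented construction for illustration, NOT the paper's scheme.
import random

rng = random.Random(7)
f = lambda x: (3 * x + 1) % 16           # the function, over 4-bit inputs

pi = list(range(16)); rng.shuffle(pi)    # secret input permutation (client)
inv = {v: k for k, v in enumerate(pi)}   # inverse permutation
table = [f(inv[j]) for j in range(16)]   # server-side permuted table

x = 5
blinded = pi[x]                          # client blinds its input
print(table[blinded] == f(x))            # server's lookup yields f(x) -> True
```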
In this paper, we present an open cloud DRM service provider to protect the copyright of digital content. The proposed architecture enables service providers to use an on-the-fly DRM technique with digital signatures and symmetric-key encryption. Unlike other similar works, our system does not keep the encrypted digital content but lets the content creators store it in their own cloud storage. Moreover, the keys used for symmetric encryption are managed in a highly secure way by means of a key fission engine and a key fusion engine. The ideas behind the two engines are taken from work on secure network coding and secret sharing. Although the use of secret sharing and secure network coding for the storage of digital content has been proposed in other works, this paper is the first to employ those ideas only for key management while letting the content be stored in the owner's cloud storage. In addition, we implement an Android SDK for e-Book readers compatible with our proposed open cloud DRM service provider. The experimental results demonstrate that our proposal is feasible for the real e-Book market, especially for individual businesses.
Multi-touch attribution, which distributes credit to all related advertisements based on their corresponding contributions, has recently become an important research topic in digital advertising. Traditionally, rule-based attribution models have been used in practice. The drawback of such rule-based models is that the rules are not derived from the data but rest only on simple intuition. With the ever-enhanced capability to track advertisements and users' interactions with them, data-driven multi-touch attribution models, which attempt to infer the contributions from user interaction data, have become an important research direction. We propose a new data-driven attribution model based on survival theory. By adopting a probabilistic framework, one key advantage of the proposed model is that it can remove the presentation biases inherent in most other attribution models. In addition to modeling attribution, the proposed model can also predict a user's conversion probability. We validate the proposed method on a real-world data set obtained from an operational commercial advertising monitoring company. Experimental results show that the proposed method is quite promising in both conversion prediction and attribution.
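To make the survival-theoretic framing concrete, consider a toy additive-hazard model: each ad exposure contributes a channel-specific hazard that decays with time, the cumulative hazard gives a conversion probability, and each touch's share of the hazard becomes its attribution credit. The paper's model is richer; all rates and decay values below are invented.

```python
# Toy survival-based attribution under an additive-hazard assumption;
# channel rates and decay are hypothetical illustration values.
import math

hazard = {"search": 0.8, "display": 0.2, "social": 0.4}  # per-channel rates
decay = 0.1                                              # per-hour decay

def conversion_prob_and_credit(touches, t_convert):
    # touches: list of (channel, exposure_time_in_hours), channels distinct
    contrib = {ch: hazard[ch] * math.exp(-decay * (t_convert - t))
               for ch, t in touches}
    total = sum(contrib.values())
    p_convert = 1 - math.exp(-total)   # survival: S(t) = exp(-cum. hazard)
    credit = {ch: c / total for ch, c in contrib.items()}  # hazard share
    return p_convert, credit

p, credit = conversion_prob_and_credit([("display", 0.0), ("search", 5.0)], 6.0)
print(round(p, 3), credit)
```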
We consider the problem of communicating information over a network secretly and reliably in the presence of a hidden adversary who can eavesdrop and inject malicious errors. We provide polynomial-time distributed network codes that are information-theoretically rate-optimal for this scenario, improving on the rates achievable in prior work by Ngai et al. Our main contribution shows that as long as the sum of the number of links the adversary can jam (denoted ZO) and the number of links he can eavesdrop on (denoted ZI) is less than the network capacity (denoted C), i.e., ZO + ZI < C, our codes can communicate (with vanishingly small error probability) a single bit correctly and without leaking any information to the adversary. We then use this scheme as a module to design codes that allow communication at the source rate of C - ZO when there are no security requirements, and codes that allow communication at the source rate of C - ZO - ZI while keeping the communicated message provably secret from the adversary. Interior nodes are oblivious to the presence of adversaries and perform random linear network coding; only the source and destination need to be modified. We also prove that the obtained rate region is information-theoretically optimal. In proving our results, we correct an error in prior work by a subset of the authors of this paper.
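For concreteness, a tiny worked instance of the stated rate region, with invented numbers:

```python
# Worked instance of the abstract's rate region (numbers are illustrative).
C, ZO, ZI = 5, 1, 2           # capacity, jammed links, eavesdropped links
assert ZO + ZI < C            # condition for the one-bit primitive to work
print("reliable-only rate:        ", C - ZO)        # -> 4
print("reliable-and-secret rate:  ", C - ZO - ZI)   # -> 2
```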
Networked control systems consist of distributed sensors and actuators that communicate via a wireless network. The use of an open wireless medium and unattended deployment leaves these systems vulnerable to intelligent adversaries whose goal is to disrupt the system performance. In this paper, we study the wormhole attack on a networked control system, in which an adversary establishes a link between two geographically distant regions of the network by using either high-gain antennas, as in the out-of-band wormhole, or colluding network nodes, as in the in-band wormhole. Wormholes allow the adversary to violate the timing constraints of real-time control systems by first creating low-latency links, which attract network traffic, and then delaying or dropping packets. Since the wormhole attack reroutes and replays valid messages, it cannot be detected using cryptographic mechanisms alone. We study the impact of the wormhole attack on the network flows and delays and introduce a passivity-based control-theoretic framework for modeling and mitigating the wormhole attack. We develop this framework for both the in-band and out-of-band wormhole attacks as well as for complex, hitherto-unreported wormhole attacks consisting of arbitrary combinations of in-band and out-of-band wormholes. By integrating existing mitigation strategies into our framework, we analyze the throughput, delay, and stability properties of the overall system. Through a simulation study, we show that, by selectively dropping control packets, the wormhole attack can cause disturbances in the physical plant of a networked control system, and we demonstrate that appropriate selection of detection parameters mitigates the disturbances due to the wormhole while satisfying the delay constraints of the physical system.
Secure group communication systems have become increasingly important for many emerging network applications. An efficient and robust group key management approach is indispensable to a secure group communication system. Motivated by the theory of hyper-spheres, this paper presents a new group key management approach with a group controller (GC). In our design, a hyper-sphere is constructed for a group, and each member in the group corresponds to a point on the hyper-sphere, called the member's private point. The GC computes the central point of the hyper-sphere, whose "distance" from each member's private point is, intuitively, identical. The central point is published so that each member can compute a common group key, using a function that takes the member's private point and the central point of the hyper-sphere as input. This approach is provably secure under the pseudo-random function (PRF) assumption. Compared with similar schemes, both theoretical analysis and experiments show that our scheme (1) significantly reduces the memory and computation load of each group member; (2) can efficiently handle massive membership changes with only two re-keying messages, namely the central point of the hyper-sphere and a random number; and (3) is efficient and very scalable for large groups.
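A purely geometric toy conveys the intuition (the actual scheme works over a finite field with a PRF, not Euclidean space): every member holds a private point on a common hyper-sphere, the controller publishes the centre, and each member derives the same key from its distance to the centre, which is the shared radius.

```python
# Euclidean toy of the hyper-sphere intuition; NOT the paper's
# finite-field/PRF construction.
import hashlib
import numpy as np

rng = np.random.default_rng(2)
dim, n_members, radius = 8, 4, 5.0

centre = rng.standard_normal(dim)
# Controller hands each member a random point exactly `radius` from `centre`.
dirs = rng.standard_normal((n_members, dim))
points = centre + radius * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)

def member_key(private_point, published_centre):
    dist = np.linalg.norm(private_point - published_centre)  # equals radius
    return hashlib.sha256(f"{dist:.6f}".encode()).hexdigest()[:16]

keys = {i: member_key(points[i], centre) for i in range(n_members)}
print(keys)  # every member derives the identical key
```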
In this paper, we propose a remote password authentication scheme based on 3-D geometry combined with a biometric value of the user. The scheme is simple and practical, and a legitimate user can freely choose and change his password using a smart card that stores some auxiliary information. The security of the system depends on points on the diagonal of a cuboid in a 3-D environment. Incorporating a biometric value makes the points more secure, because the characteristics of body parts cannot be copied or stolen.
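The stated idea can be sketched geometrically: a secret scalar derived from the password and the biometric value selects a point on the space diagonal of a cuboid, and verification checks that the point lies on that diagonal. This is a toy under assumed details, not the paper's scheme; dimensions and derivation are invented.

```python
# Geometric toy of "points on the diagonal of a cuboid" (illustrative only).
import hashlib

A, B, C = 40.0, 25.0, 60.0                    # cuboid dimensions (assumed)

def secret_point(password: str, biometric: str):
    h = hashlib.sha256((password + biometric).encode()).digest()
    t = int.from_bytes(h[:8], "big") / 2**64  # scalar in [0, 1)
    return (t * A, t * B, t * C)              # point on diagonal (0,0,0)-(A,B,C)

def on_diagonal(p, tol=1e-9):
    return abs(p[0] / A - p[1] / B) < tol and abs(p[1] / B - p[2] / C) < tol

p = secret_point("hunter2", "fingerprint-template-bytes")
print(p, on_diagonal(p))                      # -> True
```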
Modern society increasingly relies on electrical service, which also brings risks of catastrophic consequences, e.g., large-scale blackouts. The current literature reveals the vulnerability of power grids under the assumption that substations/transmission lines are removed or attacked synchronously. In reality, however, such removals can very plausibly be conducted sequentially. Motivated by this observation, we identify a new attack scenario, called the sequential attack, in which substations/transmission lines are removed sequentially rather than synchronously. In particular, we find that the sequential attack can discover many combinations of substations whose failures cause large blackouts; such combinations are ignored by the synchronous attack. In addition, we propose a new construct, called the sequential attack graph (SAG), and a practical attack strategy based on the SAG. In simulations, we adopt three test benchmarks and five comparison schemes. The simulation results and complexity analysis show that the proposed scheme achieves strong performance at low complexity.
Enhanced situational awareness is integral to risk management and response evaluation. Dynamic systems that incorporate both hard and soft data sources allow for comprehensive situational frameworks which can supplement physical models with conceptual notions of risk. The processing of widely available semi-structured textual data sources can produce soft information that is readily consumable by such a framework. In this paper, we augment the situational awareness capabilities of a recently proposed risk management framework (RMF) with the incorporation of soft data. We illustrate the beneficial role of the hard-soft data fusion in the characterization and evaluation of potential vessels in distress within Maritime Domain Awareness (MDA) scenarios. Risk features pertaining to maritime vessels are defined a priori and then quantified in real time using both hard (e.g., Automatic Identification System, Douglas Sea Scale) as well as soft (e.g., historical records of worldwide maritime incidents) data sources. A risk-aware metric to quantify the effectiveness of the hard-soft fusion process is also proposed. Though illustrated with MDA scenarios, the proposed hard-soft fusion methodology within the RMF can be readily applied to other domains.
Moving target defense is an area of network security research in which machines are moved logically around a network in order to avoid detection. This is done by leveraging the immense size of the IPv6 address space and the statistical improbability of two machines selecting the same IPv6 address. This defensive technique forces a malicious actor to focus on the reconnaissance phase of an attack rather than only on finding holes in a machine's static defenses. Our current implementation of an IPv6 moving target defense, MT6D, works well but is limited to peer-to-peer scenarios. As we push our research forward into client-server networks, we must determine the limits on the client-server ratio. With our current implementation, a simple UDP echo server that binds large numbers of IPv6 addresses to the Ethernet interface, we find limits both in the number of addresses that can successfully be bound to an interface and in the speed at which UDP requests can be handled across a large number of bound addresses.
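The mechanics of the experiment can be sketched as follows: add many IPv6 addresses to an interface and open one UDP echo socket per address. This is Linux-only, needs root, and the interface name, prefix, port, and count are assumptions for illustration; real interfaces may also hit per-process descriptor limits and duplicate address detection delays.

```python
# Sketch of binding many IPv6 addresses and echoing UDP on each (assumed
# interface "eth0" and ULA prefix; run as root on Linux).
import socket
import subprocess

IFACE, PREFIX, PORT, COUNT = "eth0", "fd00:dead:beef::", 9000, 16

sockets = []
for i in range(1, COUNT + 1):
    addr = f"{PREFIX}{i:x}"
    # `nodad` skips duplicate address detection so the bind can follow at once.
    subprocess.run(["ip", "-6", "addr", "add", f"{addr}/64",
                    "dev", IFACE, "nodad"], check=True)
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    s.bind((addr, PORT))          # fails once kernel/process limits are hit
    s.settimeout(0.01)
    sockets.append(s)

print(f"bound {len(sockets)} addresses; echoing on port {PORT}")
while True:
    for s in sockets:
        try:
            data, peer = s.recvfrom(2048)
            s.sendto(data, peer)  # simple UDP echo
        except socket.timeout:
            pass
```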
For the first time in the history of humanity, more than half of the population now lives in big cities. This scenario has raised concerns about the systems that provide basic services to citizens. Moreover, those systems now have the responsibility to empower citizens with information that can aid daily decisions related to education, transport, health, and more. This environment creates a set of services that, once interconnected, enable a whole new range of solutions, often referred to as a System of Systems. In a smart city, new information security challenges arise; these concerns go beyond privacy issues, encompassing situations where the entire environment could be affected by problems other than a mere breach of data confidentiality. This paper discusses and proposes nine security issues that can arise in a smart city environment and that extend beyond violations of citizens' privacy.
Mobile users access location services from a location-based server, and while doing so, the user's privacy is at risk: the server has access to all details about the user, for example, recently visited places and the type of information accessed. We present a synergetic technique to safeguard the location privacy of users accessing location-based services via mobile devices. Mobile devices can form ad-hoc networks to hide a user's identity and position. The user who requires the service is the query originator, and the node that requests the service on behalf of the query originator is the query sender. The query originator selects the query sender uniformly at random, which provides anonymity in the network. The location revealed to the location service provider is a rectangle instead of an exact coordinate. In this paper, we simulate the mobile network and present results for cloaking-area sizes and for performance as user density varies.
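The rectangle-instead-of-coordinate step can be sketched directly: report a cloaking rectangle that contains the true position at a random offset, so the rectangle's geometry leaks nothing about the exact coordinate. Sizes and anchoring below are illustrative; the paper's peer-forwarding anonymity step is separate.

```python
# Minimal spatial-cloaking sketch: the true point sits uniformly at random
# inside the reported rectangle. Rectangle size is an assumed parameter.
import random

def cloak(lat, lon, width_deg, height_deg, rng=random.Random(42)):
    dx, dy = rng.random() * width_deg, rng.random() * height_deg
    # (min_lat, min_lon, max_lat, max_lon) containing the true (lat, lon)
    return (lat - dy, lon - dx, lat - dy + height_deg, lon - dx + width_deg)

print(cloak(18.5204, 73.8567, 0.02, 0.02))  # ~2 km box around a true position
```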
As any veteran of the editor wars can attest, Unix users can be fiercely and irrationally attached to the commands they use and the manner in which they use them. In this work, we investigate the problem of identifying users out of a large set of candidates (25-97) through their command-line histories. Using standard algorithms and feature sets inspired by the natural language authorship attribution literature, we demonstrate conclusively that individual users can be identified with a high degree of accuracy through their command-line behavior. Further, we report on the best-performing feature combinations, from the many thousands that are possible, in terms of both accuracy and generality. We validate our work by experimenting on three user corpora comprising data gathered over three decades at three distinct locations: the Greenberg user profile corpus (168 users), the Schonlau masquerading corpus (50 users), and the Cal Poly command history corpus (97 users). The first two are well-known corpora published in 1991 and 2001, respectively. The last was developed by the authors in a year-long study in 2014 and represents the most recent corpus of its kind. For a 50-user configuration, we find feature sets that can successfully identify users with over 90% accuracy on the Cal Poly, Greenberg, and one variant of the Schonlau corpus, and over 87% on the other Schonlau variant.
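The general setup, command histories as documents, token n-gram counts as features, and a linear classifier per user, can be sketched as below. The tiny invented histories stand in for the Greenberg/Schonlau/Cal Poly corpora, and the feature choice is one plausible combination, not the paper's best-performing set.

```python
# Hedged authorship-attribution sketch on invented command histories.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

histories = {
    "alice": ["ls -la", "cd src", "vim main.c", "gcc -o main main.c", "ls"],
    "bob":   ["emacs notes.org", "git status", "git commit -am wip", "git push"],
}
docs = [" ".join(cmds) for cmds in histories.values()]
labels = list(histories)

clf = make_pipeline(
    CountVectorizer(ngram_range=(1, 2), token_pattern=r"\S+"),  # uni+bigrams
    LogisticRegression(max_iter=1000),
)
clf.fit(docs, labels)
print(clf.predict(["git log git push"]))  # -> ['bob']
```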