Bibliography
Biometrics is attracting increasing attention in privacy- and security-sensitive applications, such as access control and remote financial transactions. However, advanced forgery and spoofing techniques threaten the reliability of conventional biometric modalities. This has motivated our investigation of a novel yet promising modality: the transient evoked otoacoustic emission (TEOAE), an acoustic response generated by the cochlea after a click stimulus. Unlike conventional modalities, which are easily accessible or captured, TEOAE is naturally immune to replay and falsification attacks because it is a physiological response of the human auditory system. In this paper, we resort to wavelet analysis to derive a time-frequency representation of this nonstationary signal, which reveals individual uniqueness and long-term reproducibility. A machine learning technique, linear discriminant analysis, is subsequently applied to reduce intrasubject variability and capture features that differentiate subjects. With practical applications in mind, we also introduce a complete framework for the biometric system in both verification and identification modes. Comparative experiments on a TEOAE data set collected in a biometric setting demonstrate the merits of the proposed method. Performance is further improved by fusing information from both ears.
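A minimal sketch of the pipeline this abstract describes, under stated assumptions: a continuous wavelet transform supplies the time-frequency representation, and linear discriminant analysis learns a subject-discriminative projection. The synthetic recordings, scale range, and component count below are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def teoae_features(signal, scales=np.arange(1, 31)):
    """Time-frequency representation via a continuous wavelet transform,
    flattened into a feature vector."""
    coefs, _ = pywt.cwt(signal, scales, "morl")   # shape: (n_scales, n_samples)
    return np.abs(coefs).ravel()

# Synthetic stand-in for TEOAE recordings: 20 subjects, 5 recordings each.
rng = np.random.default_rng(0)
n_subjects, n_rec, n_samples = 20, 5, 256
X, y = [], []
for s in range(n_subjects):
    template = rng.standard_normal(n_samples)     # subject-specific "emission"
    for _ in range(n_rec):
        rec = template + 0.3 * rng.standard_normal(n_samples)  # intrasubject noise
        X.append(teoae_features(rec))
        y.append(s)

# LDA shrinks intrasubject variability while separating subjects, as described.
lda = LinearDiscriminantAnalysis(n_components=10).fit(X, y)
Z = lda.transform(X)
print(Z.shape)   # (100, 10) discriminant features for matching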
Mutation analysis generates tests that distinguish variations, or mutants, of an artifact from the original. It is widely considered a powerful approach to testing, and hence is often used to evaluate other test criteria in terms of mutation score, the fraction of mutants killed by a test set. But mutation analysis is also known to produce large numbers of redundant mutants, which can inflate the mutation score. While approaches broadly characterized as reduced mutation try to eliminate redundant mutants, the literature lacks a theoretical result that articulates just how many mutants are needed in any given situation. Hence, there is at present no way to characterize the contribution of, for example, a particular reduced-mutation approach with respect to any theoretical minimal set of mutants. This paper's contribution is to provide such a theoretical foundation for mutant set minimization. The central theoretical result shows how to efficiently minimize mutant sets with respect to a set of test cases. We evaluate our method on a widely used benchmark.
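A hedged sketch of minimization against a fixed test set, using the standard subsumption idea this line of work rests on: if every test that kills mutant a also kills mutant b, then b is redundant. The kill matrix below is invented for illustration; the paper's own algorithm is not reproduced here.

```python
def minimize_mutants(kill):
    """kill: dict mutant -> frozenset of tests that kill it.
    Returns a minimal mutant set whose killing implies killing all the rest."""
    # Collapse duplicates first: mutants with identical kill sets are equivalent.
    unique = {}
    for m, tests in kill.items():
        unique.setdefault(tests, m)
    kept = {m: t for t, m in unique.items()}
    minimal = set(kept)
    for a in kept:
        for b in kept:
            if a != b and b in minimal and kept[a] < kept[b]:
                minimal.discard(b)   # a's kill set is a strict subset: a subsumes b
    return minimal

kill = {
    "m1": frozenset({"t1"}),          # killed only by t1
    "m2": frozenset({"t1", "t2"}),    # subsumed by m1: anything killing m1 kills m2
    "m3": frozenset({"t3"}),
    "m4": frozenset({"t1", "t3"}),    # subsumed by both m1 and m3
}
print(minimize_mutants(kill))         # {'m1', 'm3'} (set order may vary)
```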
The Advanced Encryption Standard (AES) has been studied extensively, and decrypting it without the key is considered computationally infeasible. However, its vulnerability to fault analysis attacks has been pointed out in recent years. To assess the vulnerability of future electronic devices that incorporate cryptographic circuits, fault analysis attacks must be thoroughly studied. The present study proposes a new fault analysis attack that exploits the tendency of operation errors caused by a glitch, and verifies the validity of the proposed method through evaluation experiments on an FPGA.
In this paper we propose a methodology and a prototype tool to evaluate web application security mechanisms. The methodology is based on the idea that injecting realistic vulnerabilities into a web application and attacking them automatically can support the assessment of existing security mechanisms and tools in custom setup scenarios. To provide true-to-life results, the proposed vulnerability and attack injection methodology relies on the study of a large number of vulnerabilities in real web applications. In addition to the generic methodology, the paper describes the implementation of the Vulnerability & Attack Injector Tool (VAIT), which automates the entire process. We used this tool to run a set of experiments demonstrating the feasibility and effectiveness of the proposed methodology. The experiments include evaluating the coverage and false positives of an intrusion detection system for SQL injection attacks, and assessing the effectiveness of two top commercial web application vulnerability scanners. Results show that the injection of vulnerabilities and attacks is indeed an effective way to evaluate security mechanisms and to point out not only their weaknesses but also ways to improve them.
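A minimal illustration of the injection idea (not the authors' VAIT implementation): remove a sanitization call from application source, leaving a realistic SQL injection flaw, then fire attack payloads at the affected input. The code pattern and payloads are assumptions for the sketch.

```python
import re

def inject_sqli(source):
    """Drop mysqli_real_escape_string() wrappers, leaving raw user input --
    the classic pattern behind realistic injected SQLi vulnerabilities."""
    return re.sub(r"mysqli_real_escape_string\(\s*\$\w+\s*,\s*(\$\w+)\s*\)",
                  r"\1", source)

original = "$q = \"SELECT * FROM users WHERE name='\" . mysqli_real_escape_string($db, $name) . \"'\";"
vulnerable = inject_sqli(original)
print(vulnerable)   # $q = "SELECT * FROM users WHERE name='" . $name . "'";

# Attack phase: payloads that should now reach the database unescaped, so a
# detector or scanner under evaluation can be checked against known ground truth.
payloads = ["' OR '1'='1", "'; DROP TABLE users; --"]
```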
Demand response (DR) is a promising technology for meeting the world's ever-increasing energy demands without a corresponding increase in energy generation, and for providing a sustainable way to integrate renewables into the power grid. As a result, interest in automated DR is growing globally and has led to the development of OpenADR, an internationally recognized standard. In this paper, we propose security-enhancement mechanisms that provide DR participants with verifiable information they can use to make informed decisions about the validity of received DR event information.
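The abstract does not specify the mechanism, so the following is only one plausible realization of "verifiable DR event information": the utility signs each event payload, and participants verify the signature before acting on it. Key type, field names, and values are assumptions.

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Utility side: sign a canonicalized DR event payload.
utility_key = Ed25519PrivateKey.generate()
event = json.dumps({"eventID": "DR-42", "signal": "MODERATE",
                    "start": "2015-07-01T14:00Z"}, sort_keys=True).encode()
signature = utility_key.sign(event)

# Participant side: verify origin and integrity before shedding load;
# verify() raises InvalidSignature if the event was tampered with or forged.
utility_key.public_key().verify(signature, event)
print("event verified")
```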
In this paper, we propose a decomposition-based multiobjective evolutionary algorithm that extracts information from an external archive to guide the evolutionary search for continuous optimization problems. The proposed algorithm identifies promising regions (subproblems) by learning from the external archive and uses them to guide the search. To demonstrate the performance of the algorithm, we conduct experiments comparing it with other decomposition-based approaches. The results confirm that our proposed algorithm is very competitive.
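A compact, illustrative sketch of decomposition-based multiobjective search with an external archive, assuming a Tchebycheff scalarization and a ZDT1-like test problem; the paper's actual learning mechanism and operators are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, dim = 20, 10
w = np.linspace(0.01, 0.99, n_sub)
weights = np.stack([w, 1 - w], axis=1)      # one weight vector per subproblem

def objectives(x):                          # bi-objective test problem (assumed)
    f1 = x[0]
    g = 1 + 9 * x[1:].mean()
    return np.array([f1, g * (1 - np.sqrt(f1 / g))])

def tchebycheff(f, w, z):                   # scalarization used by decomposition methods
    return np.max(w * np.abs(f - z))

pop = rng.random((n_sub, dim))
F = np.array([objectives(x) for x in pop])
z = F.min(axis=0)                           # ideal point
archive = list(zip(pop, F))                 # external archive of solutions

for gen in range(200):
    for i in range(n_sub):
        # Guide search with a parent drawn from the archive (the "learning"
        # role the abstract assigns to it), plus a small mutation step.
        ax, _ = archive[rng.integers(len(archive))]
        child = np.clip(0.5 * (pop[i] + ax) + 0.1 * rng.standard_normal(dim), 0, 1)
        fc = objectives(child)
        z = np.minimum(z, fc)
        if tchebycheff(fc, weights[i], z) < tchebycheff(F[i], weights[i], z):
            pop[i], F[i] = child, fc
    # Keep the archive non-dominated (naive O(n^2) filter for the sketch).
    archive = [(x, f) for x, f in zip(pop, F)
               if not any((g <= f).all() and (g < f).any() for g in F)]
print(F.min(axis=0))                        # best objective values found
```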
In Eurocrypt 2011, Obana proposed a (k, n) secret-sharing scheme that can identify up to ⌊(k−2)/2⌋ cheaters, which meets the upper bound on the number of identifiable cheaters. When the number of cheaters t satisfies t ≤ ⌊(k−1)/3⌋, this scheme is extremely efficient, since the size of share |Vi| can be written as |Vi| = |S|/ε, which almost meets its lower bound, where |S| denotes the size of the secret and ε denotes the successful cheating probability; when t is close to ⌊(k−2)/2⌋, the size of share is upper bounded by |Vi| = n·(t+1)·2|S|/ε. A new (k, n) secret-sharing scheme capable of identifying ⌊(k−2)/2⌋ cheaters is presented in this study. Considering the general case in which k shareholders are involved in secret reconstruction, the size of share in the proposed scheme is |Vi| = 2|S|/ε, independent of the parameters t and n, whereas the size of share in Obana's scheme is |Vi| = n·(t+1)·2|S|/ε under the same condition. With respect to share size, the proposed scheme is therefore more efficient than the previous one when t is close to ⌊(k−2)/2⌋.
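A worked comparison using the formulas as quoted above, under the assumption that |Vi| counts the number of possible shares, so a share's bit-length is log2(|Vi|). The parameter choices (128-bit secret, ε = 2^-64, k = 11, n = 20) are illustrative.

```python
import math

S_bits, eps_bits = 128, 64          # |S| = 2^128, eps = 2^-64
k, n = 11, 20
t = (k - 2) // 2                    # worst case considered: t = 4 cheaters

proposed = math.log2(2) + S_bits + eps_bits                 # |Vi| = 2|S|/eps
obana    = math.log2(n * (t + 1) * 2) + S_bits + eps_bits   # |Vi| = n(t+1)2|S|/eps

print(f"proposed scheme share: {proposed:.1f} bits")   # 193.0 bits
print(f"Obana's scheme share:  {obana:.1f} bits")      # ~199.6 bits
```

The gap is the log2(n·(t+1)) term, which is exactly the dependence on t and n that the proposed scheme removes.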
Experience influences the actions people take to protect themselves against phishing. One way to measure experience is through mental models: internal representations of a concept or system that develop with experience. By having people rate pairs of concepts on the strength of their relationship, networks can be created through Pathfinder, giving an in-depth view of how information is organized. Researchers had novice and expert computer users rate three sets of terms related to phishing, divided into three categories: prevention of phishing, trends and characteristics of phishing attacks, and consequences of phishing. Results indicated that expert mental models were more complex, with more links between concepts: experts had sixteen, thirteen, and fifteen links in the networks describing prevention, trends, and consequences of phishing, respectively, whereas novices had only eleven, nine, and nine. These preliminary results provide quantifiable network displays of novice and expert mental models that cannot be obtained through interviews. This information could provide a basis for future research on how mental models can be used to determine phishing vulnerability and the effectiveness of phishing training.
We investigate the coverage efficiency of a sensor network consisting of sensors with circular sensing footprints of different radii. The objective is to completely cover a region in an efficient manner through a controlled (or deterministic) deployment of such sensors. In particular, it is shown that when sensing nodes of two different radii are used for complete coverage, the coverage density is increased, and the sensing cost is significantly reduced as compared to the homogeneous case, in which all nodes have the same sensing radius. Configurations of heterogeneous disks of multiple radii to achieve efficient circle coverings are presented and analyzed.
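A small sketch (assumed geometry, not the paper's configurations) of the efficiency measure this abstract uses: Monte Carlo verification of complete coverage of a unit square, together with the coverage density (total disk area over region area). The homogeneous 3x3 grid below is a baseline; the paper's two-radius configurations aim to cover the same region at lower density.

```python
import numpy as np

rng = np.random.default_rng(2)

def coverage_stats(centers, radii, n_pts=200_000):
    """Fraction of sampled points covered, and density = disk area / region area."""
    pts = rng.random((n_pts, 2))                        # sample the unit square
    d2 = ((pts[:, None, :] - centers[None]) ** 2).sum(-1)
    covered = (d2 <= radii ** 2).any(axis=1).mean()
    density = np.pi * (radii ** 2).sum()                # region area = 1
    return covered, density

# Homogeneous baseline: 3x3 grid of equal disks, each covering its grid cell.
g = np.linspace(1 / 6, 5 / 6, 3)
centers = np.array([(x, y) for x in g for y in g])
radii = np.full(9, np.sqrt(2) / 6 + 1e-9)               # half-diagonal of each cell
print(coverage_stats(centers, radii))                   # (~1.0, ~1.571)
```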
A precise characterization is given for the class of security policies enforceable with mechanisms that work by monitoring system execution, and automata are introduced for specifying exactly that class of security policies. Techniques to enforce security policies specified by such automata are also discussed.
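A minimal execution-monitoring sketch in the spirit of these security automata: the monitor tracks automaton state over observed actions and halts the target before any step that would violate the policy. The policy shown ("no send after reading a secret") is the standard illustration; the action names are assumptions.

```python
class SecurityAutomaton:
    def __init__(self):
        self.state = "clean"
        # transitions[state][action] -> next state; a missing entry is a violation
        self.transitions = {
            "clean":   {"read_secret": "tainted", "send": "clean", "other": "clean"},
            "tainted": {"read_secret": "tainted", "other": "tainted"},  # no "send"
        }

    def step(self, action):
        nxt = self.transitions[self.state].get(action)
        if nxt is None:
            raise RuntimeError(f"policy violation: {action!r} in state {self.state}")
        self.state = nxt

monitor = SecurityAutomaton()
try:
    for action in ["send", "read_secret", "send"]:   # third action is rejected
        monitor.step(action)
except RuntimeError as e:
    print("halted:", e)   # halted: policy violation: 'send' in state tainted
```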
This article was identified by the SoS Best Scientific Cybersecurity Paper Competition Distinguished Experts as a Science of Security Significant Paper. The Science of Security Paper Competition was developed to recognize and honor recently published papers that advance the science of cybersecurity. During the development of the competition, members of the Distinguished Experts group suggested that listing papers that made outstanding contributions, empirical or theoretical, to the science of cybersecurity in earlier years would also benefit the research community.
Typing is a human activity that can be affected by a number of situational and task-specific factors. Changes in typing behavior resulting from the manipulation of such factors can be predictably observed through key-level input analytics. Here we present a study designed to explore these relationships. Participants play a typing game in which letter composition, word length, and the number of words appearing together are varied across levels. Inter-keystroke timings and other higher-order statistics (such as bursts and pauses), as well as typing strategies, are analyzed from game logs to find the set of measures that best quantifies the effect of the different experimental factors. Beyond task-specific factors, we also study the effects of habituation by recording changes in performance with practice. Currently a work in progress, this research aims to develop a predictive model of human typing. We believe this insight can lead to novel security proofs for interactive systems that can be deployed on existing infrastructure with minimal overhead. Possible applications of such predictive capabilities include anomalous behavior detection, authentication using typing signatures, and bot detection using word challenges.
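A hedged sketch of the key-level analytics described above: inter-keystroke intervals plus simple burst and pause statistics from a synthetic log of key-down timestamps. The 500 ms pause threshold and the timestamps are assumed values.

```python
import numpy as np

timestamps = np.array([0, 120, 230, 360, 1100, 1210, 1330, 2400, 2510])  # ms
iki = np.diff(timestamps)                  # inter-keystroke intervals

PAUSE_MS = 500                             # assumed pause threshold
pauses = iki[iki >= PAUSE_MS]

burst_lengths, run = [], 0                 # count fast intervals per burst
for gap in iki:
    if gap < PAUSE_MS:
        run += 1
    elif run:
        burst_lengths.append(run)
        run = 0
if run:
    burst_lengths.append(run)

print("mean IKI (ms):", iki.mean())        # 313.75
print("pauses (ms):", pauses)              # [ 740 1070]
print("burst lengths:", burst_lengths)     # [3, 2, 1]
```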
In this study, we present a control-theoretic technique for modeling routing in wireless multihop networks. We model ad hoc wireless networks as stochastic dynamical systems in which, as a base case, a centralized controller pre-computes optimal paths to the destination. This approach is useful because it can yield bounds on the reliability of end-to-end packet transmissions. We compare the resulting reliability with that achieved by some widely used routing techniques for multihop networks.
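A small sketch of the base case described above: a centralized computation of the most-reliable path to a destination via dynamic programming on link delivery probabilities (equivalently, a shortest path under -log p). The topology and probabilities are invented for illustration; the paper's stochastic-dynamical-system model is not reproduced here.

```python
import heapq
import math

links = {                       # node -> list of (neighbor, delivery probability)
    "a": [("b", 0.9), ("c", 0.7)],
    "b": [("d", 0.8)],
    "c": [("d", 0.99)],
    "d": [],
}

def most_reliable_path(src, dst):
    """Dijkstra on -log(p): maximizes the product of link probabilities."""
    best = {src: 1.0}
    heap = [(0.0, src, [src])]
    while heap:
        cost, u, path = heapq.heappop(heap)
        if u == dst:
            return math.exp(-cost), path
        for v, p in links[u]:
            r = best[u] * p
            if r > best.get(v, 0.0):
                best[v] = r
                heapq.heappush(heap, (-math.log(r), v, path + [v]))
    return 0.0, None

print(most_reliable_path("a", "d"))   # (0.72, ['a', 'b', 'd'])
```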
Trust is a necessary component of cybersecurity. A system commonly has to decide whether to trust the credential of an entity from another domain, issued by a third party. More generally, connected and interacting systems in cyberspace rely heavily on one another with respect to security, privacy, and performance; in these interactions, one entity or system must trust others, and this "trust" frequently becomes a vulnerability of that system. To mitigate this vulnerability, we are developing a computational theory of trust as part of our efforts toward a Science of Security. Previously, we developed a formal-semantics-based calculus of trust [3, 2], in which trust can be computed from a trustor's direct observations of the trustee's performance, or from a trust network. In this paper, we construct a framework for trust reasoning based on observed evidence, taking privacy in cloud computing as a driving application case [5].
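A simplified illustration of the two trust sources mentioned above, not the authors' formal-semantics calculus: direct trust estimated beta-style from observed interactions, and indirect trust propagated through a trust network by multiplicative discounting along a path. Both rules are common simplifications and are assumptions here.

```python
def direct_trust(successes, failures):
    """Expected trustworthiness from direct observations (Beta(s+1, f+1) mean)."""
    return (successes + 1) / (successes + failures + 2)

def path_trust(edge_trusts):
    """Trust in a target reached via intermediaries: discount multiplicatively."""
    t = 1.0
    for e in edge_trusts:
        t *= e
    return t

# Alice observed Bob succeed 8 times and fail 2 times:
t_ab = direct_trust(8, 2)                  # 0.75
# Bob vouches for Carol with trust 0.9; Alice's derived trust in Carol:
print(path_trust([t_ab, 0.9]))             # 0.675
```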
We argue that emergent behavior is inherent to cybersecurity.