Effectiveness and Work Factor Metrics
It is difficult to measure the relative strengths and weaknesses of modern information systems when the safety, security, and reliability of those systems must be protected. Developers often apply security mechanisms to systems without any way to evaluate the impact of those mechanisms on the overall system, and few efforts are directed at actually measuring the quantifiable impact of information assurance technology on the potential adversary. The research cited here describes analytic tools, methods, and processes for measuring and evaluating software, networks, and authentication.
- Frank L. Greitzer, Thomas A. Ferryman "Methods and Metrics for Evaluating Analytic Insider Threat Tools" SPW '13 Proceedings of the 2013 IEEE Security and Privacy Workshops, May 2013. (Pages 90-97). (ID#:14-1384) Available at: http://dl.acm.org/citation.cfm?id=2510662.2511480&coll=DL&dl=GUIDE&CFID=339335517&CFTOKEN=38610778 or http://dx.doi.org/10.1109/SPW.2013.34 The insider threat is a prime security concern for government and industry organizations. As insider threat programs come into operational practice, there is a continuing need to assess the effectiveness of tools, methods, and data sources, which enables continual process improvement. This is particularly challenging in operational environments, where the actual number of malicious insiders in a study sample is not known. The present paper addresses the design of evaluation strategies and associated measures of effectiveness; several quantitative/statistical significance test approaches are described with examples, and a new measure, the Enrichment Ratio, is proposed and described as a means of assessing the impact of proposed tools on the organization's operations. Keywords: insider threat, evaluation, validation, metrics, assessment
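The abstract does not spell out how the proposed Enrichment Ratio is computed. A common reading of "enrichment" is the prevalence of true malicious insiders among the cases a tool flags, divided by their base-rate prevalence in the population as a whole; the minimal sketch below assumes that formulation, and the paper's exact definition may differ.

```python
def enrichment_ratio(flagged, malicious, population_size):
    """Ratio of malicious prevalence among flagged cases to the base-rate
    prevalence in the whole population (assumed formulation; the paper's
    exact definition may differ)."""
    flagged, malicious = set(flagged), set(malicious)
    hit_rate = len(flagged & malicious) / len(flagged)  # precision of the tool
    base_rate = len(malicious) / population_size        # prior prevalence
    return hit_rate / base_rate

# Toy example: 1000 employees, 5 true insiders, tool flags 20 people, 3 of them insiders.
flagged = range(0, 20)
malicious = {2, 7, 15, 500, 777}
print(enrichment_ratio(flagged, malicious, 1000))  # (3/20) / (5/1000) = 30.0
```

Under this reading, a ratio of 30 means an analyst triaging the flagged list encounters true insiders 30 times more often than by random sampling, which is a natural way to express a tool's operational impact.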
- Inuma, M.; Otsuka, A., "Relations Among Security Metrics for Template Protection Algorithms," Biometrics: Theory, Applications and Systems (BTAS), 2013 IEEE Sixth International Conference on, pp. 1-8, Sept. 29-Oct. 2, 2013. (ID#:14-1385) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6712714&isnumber=6712682 This paper gives formal definitions for the security metrics proposed by Simoens et al. [11] and Nagar et al. [1], and analyzes the relations among them. [11] defined comprehensive metrics for all biometric template protection algorithms, with the security-related metrics defined as measurements of performance in various settings; [1] defined similar but distinct performance-based metrics focused on the two-factor authentication scenario. One problem with performance-based metrics is their ambiguous relation to the security goals often discussed in the context of biometric cryptosystems [7]. The paper complements the previous work by Simoens et al. [11] in two respects: (1) it gives formal definitions for the metrics of [11] in order to make them applicable to biometric cryptosystems, and (2) it covers all security metrics for every variation of the two-factor authentication scenario, namely where either or both of the key and the protected template are given to the adversary. Keywords: biometrics (access control); cryptography; biometric cryptosystems; biometric template protection algorithm; performance-based metrics; security metrics; two-factor authentication scenario; Accuracy; Authentication; Databases; Feature extraction; Games; Measurement
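The metrics in question are performance-based: each is essentially an adversarial acceptance rate measured under a particular leakage setting (key leaked, protected template leaked, or both). The toy Monte Carlo sketch below illustrates that pattern with an invented XOR-masking scheme; it is not a construction from the paper, only a way to see how the same matcher yields very different security metrics under different leakage assumptions.

```python
import random

N_BITS, THRESHOLD, TRIALS = 64, 16, 10_000

def enroll(feature, key):
    # Toy "protection": XOR the biometric feature with a user-held key
    # (illustrative only; not a scheme from the paper).
    return feature ^ key

def accepts(template, probe, key):
    # Server accepts if the recovered feature is within THRESHOLD bits of the probe.
    return bin((template ^ key) ^ probe).count("1") <= THRESHOLD

def attack_success_rate(key_leaked: bool) -> float:
    wins = 0
    for _ in range(TRIALS):
        feature, key = random.getrandbits(N_BITS), random.getrandbits(N_BITS)
        template = enroll(feature, key)   # adversary sees the protected template
        guess_key = key if key_leaked else random.getrandbits(N_BITS)
        probe = template ^ guess_key      # adversary's best reconstruction
        wins += accepts(template, probe, key)
    return wins / TRIALS

print("template only: ", attack_success_rate(False))  # ~0: key still protects
print("template + key:", attack_success_rate(True))   # 1.0: full compromise
```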
- Srivastava, S.; Kumar, R., "Indirect Method to Measure Software Quality Using the CK-OO Suite," Intelligent Systems and Signal Processing (ISSP), 2013 International Conference on, pp. 47-51, 1-2 March 2013. (ID#:14-1386) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6526872&isnumber=6526858 In this paper, we consider empirical evidence in support of a set of object-oriented software metrics. In particular, we look at the object-oriented design metrics of Chidamber and Kemerer (CK) and their applicability in different application domains. Many early quality models defined a set of factors that influence quality, and relationships between those factors, with little scope for measurement; yet measurement plays an important role in every phase of the software development process. The work therefore emphasizes quantitative measurement of quality attributes such as reusability, maintainability, testability, reliability, and efficiency. With the widespread use of object-oriented technologies, CK metrics have proved very useful, so we use them to measure these quality attributes, deriving linearly related equations from the CK metric values. Different concepts of software quality characteristics are also reviewed and discussed. We briefly describe the metrics and present our empirical findings, arising from our analysis of systems taken from a number of different application domains, including an empirical study in the object-oriented language C++. Our investigation leads us to conclude that a subset of the metrics can be of great value to software developers, maintainers, and project managers. Keywords: C++ language; object-oriented methods; program testing; software maintenance; software metrics; software quality; software reliability; software reusability; C++; CK metrics; CK-OO suite; maintainability; object-oriented design metrics; object-oriented language; object-oriented technologies; object-oriented software metrics; reliability; reusability; software development process; software quality; testability; S/W quality; S/W measurement
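The coefficients of the paper's linearly related equations are not given in the abstract; the sketch below shows only the shape of such a model, with invented CK metric values and placeholder weights standing in for the fitted ones.

```python
# CK metrics for one class: Weighted Methods per Class, Depth of Inheritance
# Tree, Number Of Children, Coupling Between Objects, Response For a Class,
# Lack of Cohesion Of Methods. Values below are invented.
ck = {"WMC": 12, "DIT": 3, "NOC": 2, "CBO": 7, "RFC": 25, "LCOM": 4}

# Hypothetical coefficients (the paper's fitted values are not in the abstract);
# negative weights encode "higher coupling/complexity hurts the attribute".
MODELS = {
    "maintainability": {"WMC": -0.4, "CBO": -0.6, "LCOM": -0.3, "bias": 100.0},
    "reusability":     {"CBO": -0.8, "DIT": 0.5, "NOC": 0.7, "bias": 80.0},
    "testability":     {"RFC": -0.5, "WMC": -0.3, "bias": 90.0},
}

def score(attribute: str, metrics: dict) -> float:
    """Evaluate one linear quality equation over a class's CK metric values."""
    model = MODELS[attribute]
    return model["bias"] + sum(w * metrics.get(m, 0.0)
                               for m, w in model.items() if m != "bias")

for attr in MODELS:
    print(f"{attr:16s} {score(attr, ck):6.1f}")
```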
- Zhu, Qi; Deng, Peng; Di Natale, Marco; Zeng, Haibo, "Robust and Extensible Task Implementations of Synchronous Finite State Machines," Design, Automation & Test in Europe Conference & Exhibition (DATE), 2013, pp. 1319-1324, 18-22 March 2013. (ID#:14-1387) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6513718&isnumber=6513446 Model-based design using synchronous reactive (SR) models is widespread for the development of embedded control software. SR models ease verification and validation, and enable the automatic generation of implementations. In SR models, synchronous finite state machines (FSMs) are commonly used to capture changes of the system state under trigger events. The implementation of a synchronous FSM may be improved by using multiple software tasks instead of the traditional single-task solution. In this work, we propose methods to quantitatively analyze task implementations with respect to a breakdown factor that measures the timing robustness, and an action extensibility metric that measures the capability to accommodate upgrades. We propose an algorithm to generate a correct and efficient task implementation of synchronous FSMs for these two metrics, while guaranteeing the schedulability constraints.
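A breakdown factor is conventionally the largest uniform scaling of task execution times under which the task set remains schedulable. The sketch below finds it by binary search under the Liu-Layland rate-monotonic utilization bound; that test is a simplifying assumption here, since the paper's analysis is tied to its specific FSM task model and schedulability constraints.

```python
def rm_schedulable(tasks, scale=1.0):
    """Liu-Layland sufficient test: total utilization <= n * (2^(1/n) - 1)."""
    n = len(tasks)
    util = sum(scale * c / t for c, t in tasks)
    return util <= n * (2 ** (1 / n) - 1)

def breakdown_factor(tasks, lo=0.0, hi=16.0, iters=40):
    """Largest uniform scaling of execution times that stays schedulable."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if rm_schedulable(tasks, mid) else (lo, mid)
    return lo

# (execution time, period) pairs for a hypothetical 3-task FSM implementation.
tasks = [(1.0, 10.0), (2.0, 20.0), (3.0, 40.0)]
print(f"breakdown factor = {breakdown_factor(tasks):.3f}")  # ~2.84 here
```

A larger breakdown factor means the implementation tolerates more growth in execution times before a deadline is missed, which is exactly the timing-robustness comparison the paper makes between single-task and multi-task implementations.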
- Barrows, Clayton; Blumsack, Seth; Bent, Russell, "Using Network Metrics to Achieve Computationally Efficient Optimal Transmission Switching," System Sciences (HICSS), 2013 46th Hawaii International Conference on, pp. 2187-2196, 7-10 Jan. 2013. (ID#:14-1388) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6480106&isnumber=6479821 Recent studies have shown that dynamic removal of transmission lines from operation ("transmission switching") can reduce costs associated with power system operation. Smart Grid systems introduce flexibility into the transmission network topology and enable co-optimization of generation and network topology. The optimal transmission switching (OTS) problem has been posed on small test systems, but problem complexity and large system sizes make OTS intractable. Our previous work suggests that most economic benefits of OTS arise through switching a small number of lines, so pre-screening has the potential to produce good solutions in less time. We explore the use of topological and electrical graph metrics to increase solution speed via solution space reduction. We find that screening based on line outage distribution factors outperforms other methods. When compared to un-screened OTS on the RTS-96 and IEEE 118-Bus networks, the sensitivity-based screen generates near optimal solutions in a fraction of the time. Keywords: Generators; Load flow; Network topology; Power system reliability; Power transmission lines; Security; Switches; Optimization; Power Systems; Smart Grid; Transmission Switching
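A line outage distribution factor (LODF) gives the fraction of a removed line's pre-outage flow that shifts onto each remaining line, and under the DC power-flow approximation it follows from power transfer distribution factors as LODF[l,k] = PTDF[l,k] / (1 - PTDF[k,k]). The screening sketch below ranks candidate lines by the aggregate flow redistribution their removal would cause; the PTDF values and the exact ranking rule are invented for illustration and are not the paper's screen or its test data.

```python
import numpy as np

# Hypothetical 3-line PTDF matrix: ptdf[l, k] is the flow change on line l per
# unit of power shifted along line k's virtual transaction (toy numbers).
ptdf = np.array([
    [0.60, 0.25, 0.10],
    [0.20, 0.55, 0.15],
    [0.10, 0.20, 0.50],
])

def lodf(ptdf):
    """LODF[l, k] = PTDF[l, k] / (1 - PTDF[k, k]): fraction of line k's
    pre-outage flow that shifts onto line l when k is switched out."""
    m = ptdf / (1.0 - np.diag(ptdf))  # divides column k by (1 - PTDF[k, k])
    np.fill_diagonal(m, -1.0)         # the switched line loses all its flow
    return m

def screen(ptdf, flows, keep=2):
    """Rank candidate lines by the total flow redistribution their removal
    would cause (one plausible sensitivity screen, not the paper's exact one)."""
    m = np.abs(lodf(ptdf))
    np.fill_diagonal(m, 0.0)                # ignore the line's own lost flow
    impact = m.sum(axis=0) * np.abs(flows)  # per-candidate aggregate impact
    return np.argsort(impact)[::-1][:keep]

flows = np.array([100.0, 80.0, 60.0])  # MW, pre-switching flows on each line
print("candidate lines to consider switching:", screen(ptdf, flows))
```

Only the shortlisted lines are then passed to the full OTS optimization, which is how screening shrinks the solution space.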
- Gul Calikli, Ayse Basar Bener. "Influence of Confirmation Biases of Developers on Software Quality: An Empirical Study," Software Quality Journal, Volume 21 Issue 2, June 2013 (Pages 377-416). (ID#:14-1389) Available at: http://dl.acm.org/citation.cfm?id=2458014.2458019&coll=DL&dl=GUIDE&CFID=339335517&CFTOKEN=38610778 or http://dx.doi.org/10.1007/s11219-012-9180-0 The thought processes of people have a significant impact on software quality, as software is designed, developed and tested by people. Cognitive biases, which are defined as patterned deviations of human thought from the laws of logic and mathematics, are a likely cause of software defects. However, there is little empirical evidence to date to substantiate this assertion. In this research, we focus on a specific cognitive bias, confirmation bias, which is defined as the tendency of people to seek evidence that verifies a hypothesis rather than seeking evidence to falsify a hypothesis. Due to this confirmation bias, developers tend to perform unit tests to make their program work rather than to break their code. Therefore, confirmation bias is believed to be one of the factors that lead to an increased software defect density. In this research, we present a metric scheme that explores the impact of developers' confirmation bias on software defect density. In order to estimate the effectiveness of our metric scheme in the quantification of confirmation bias within the context of software development, we performed an empirical study that addressed the prediction of the defective parts of software. In our empirical study, we used confirmation bias metrics on five datasets obtained from two companies. Our results provide empirical evidence that human thought processes and cognitive aspects deserve further investigation to improve decision making in software development for effective process management and resource allocation. Keywords: Confirmation bias, Defect prediction, Human factors, Software psychology, work metrics
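In a defect-prediction setting, per-developer confirmation bias scores become per-module features alongside conventional code metrics. A hedged sketch of that pipeline follows; the feature names, data values, and choice of classifier are all invented for illustration, not taken from the paper's study.

```python
# Sketch: confirmation-bias scores as features in a defect predictor.
from sklearn.naive_bayes import GaussianNB

# Rows are software modules; columns are hypothetical features:
# [owner_bias_score, lines_of_code, recent_churn].
X = [[0.8, 1200, 45], [0.3, 300, 5], [0.7, 800, 30],
     [0.2, 150, 2], [0.9, 2000, 60], [0.4, 400, 8]]
y = [1, 0, 1, 0, 1, 0]  # 1 = module turned out defective

clf = GaussianNB().fit(X, y)
print(clf.predict([[0.85, 1000, 40]]))  # high-bias, high-churn module flagged
```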
- Alistair Moffat, Paul Thomas, Falk Scholer. "Users Versus Models: What Observation Tells Us About Effectiveness Metrics," CIKM '13: Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, October 2013. (ID#:14-1390) Available at: http://dl.acm.org/citation.cfm?id=2505515.2507665&coll=DL&dl=GUIDE&CFID=339335517&CFTOKEN=38610778 or http://doi.acm.org/10.1145/2505515.2507665 Retrieval system effectiveness can be measured in two quite different ways: by monitoring the behavior of users and gathering data about the ease and accuracy with which they accomplish certain specified information-seeking tasks; or by using numeric effectiveness metrics to score system runs in reference to a set of relevance judgments. In the second approach, the effectiveness metric is chosen in the belief that user task performance, if it were to be measured by the first approach, should be linked to the score provided by the metric. This work explores that link, by analyzing the assumptions and implications of a number of effectiveness metrics, and exploring how these relate to observable user behaviors. Data recorded as part of a user study included user self-assessment of search task difficulty; gaze position; and click activity. Our results show that user behavior is influenced by a blend of many factors, including the extent to which relevant documents are encountered, the stage of the search process, and task difficulty. These insights can be used to guide development of batch effectiveness metrics. Keywords: evaluation, retrieval experiment, system measurement
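Rank-biased precision (RBP), an earlier metric co-authored by Moffat, is a convenient example of an effectiveness metric built on an explicit user model: the user inspects result i+1 after result i with persistence probability p, giving RBP = (1-p) * sum_i r_i * p^(i-1). The abstract does not say which metrics the paper analyzes, so treat RBP here only as a representative sketch of the metric-as-user-model idea.

```python
def rbp(relevance, p=0.8):
    """Rank-biased precision: RBP = (1-p) * sum_i r_i * p^(i-1), where r_i is
    the relevance of the result at rank i and p is the user's persistence."""
    return (1 - p) * sum(r * p ** i for i, r in enumerate(relevance))

run = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]  # binary relevance of a ranked run
for p in (0.5, 0.8, 0.95):            # p encodes how patient the user is
    print(f"p={p:4}: RBP={rbp(run, p):.3f}")
```

Varying p changes which ranks dominate the score, which is precisely the kind of modeling assumption the paper tests against observed gaze and click behavior.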
- Nathaniel Husted, Steven Myers, Abhi Shelat, Paul Grubbs. "GPU and CPU Parallelization of Honest-But-Curious Secure Two-Party Computation," ACSAC '13: Proceedings of the 29th Annual Computer Security Applications Conference, December 2013 (Pages 169-178). (ID#:14-1391) Available at: http://dl.acm.org/citation.cfm?id=2523649.2523681&coll=DL&dl=GUIDE&CFID=339335517&CFTOKEN=38610778 or http://doi.acm.org/10.1145/2523649.2523681 Recent work demonstrates the feasibility and practical use of secure two-party computation [5, 9, 15, 23]. In this work, we present the first Graphics Processing Unit (GPU)-optimized implementation of an optimized Yao's garbled-circuit protocol for two-party secure computation in the honest-but-curious and 1-bit-leaked malicious models. We implement nearly all of the modern protocol advancements, such as Free-XOR, pipelining, and OT extension. Our implementation is the first allowing entire circuits to be generated concurrently, and makes use of a modification of the XOR technique so that circuit generation is optimized for implementation on the SIMD architectures of GPUs. In our best cases we generate about 75 million gates per second, and we exceed state-of-the-art performance metrics on modern CPU systems by a factor of about 200, and on GPU systems by a factor of about 2.3. While many recent works on garbled circuits exploit the embarrassingly parallel nature of many tasks that are part of a secure computation protocol, we show that there are still various forms and levels of parallelization that may yet improve the performance of these protocols. In particular, we highlight that implementations on the SIMD architecture of modern GPUs require significantly different approaches than the general-purpose MIMD architecture of multi-core CPUs, which again differ from the needs of parallelizing on compute clusters. Additionally, modifications to the security models for many common protocols have large effects on reasonable parallel architectures for implementation.
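Free-XOR, one of the optimizations the implementation includes, fixes a global secret offset R so that each wire's 1-label is its 0-label XORed with R; XOR gates then need no garbled table at all, since XORing the two input labels yields the correct output label. A minimal sketch of just that invariant (labels only, with none of the encryption or circuit machinery):

```python
import secrets

LABEL_BITS = 128
# Global offset shared by all wires; the low bit is set, as in the usual
# point-and-permute convention.
R = secrets.randbits(LABEL_BITS) | 1

def wire():
    """A wire's two labels: label(1) = label(0) XOR R (the Free-XOR invariant)."""
    zero = secrets.randbits(LABEL_BITS)
    return zero, zero ^ R

a0, a1 = wire()
b0, b1 = wire()
c0 = a0 ^ b0  # garbler derives the output wire's 0-label; no garbled table needed
c1 = c0 ^ R

# The evaluator holds one label per input wire and simply XORs them:
for av, bv in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    lc = (a0, a1)[av] ^ (b0, b1)[bv]
    assert lc == (c0, c1)[av ^ bv]  # the result encodes exactly a XOR b
print("Free-XOR invariant holds for all four input combinations")
```

Because every XOR gate reduces to a label XOR, the technique maps naturally onto the SIMD lanes of a GPU, which is part of why the paper's modified XOR handling pays off there.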
- Jeremiah Blocki, Saranga Komanduri, Ariel Procaccia, Or Sheffet. "Optimizing Password Composition Policies," EC '13: Proceedings of the Fourteenth ACM Conference on Electronic Commerce, June 2013 (Pages 105-122). (ID#:14-1392) Available at: http://dl.acm.org/citation.cfm?id=2492002.2482552&coll=DL&dl=GUIDE&CFID=339335517&CFTOKEN=38610778 or http://doi.acm.org/10.1145/2482540.2482552 A password composition policy restricts the space of allowable passwords to eliminate weak passwords that are vulnerable to statistical guessing attacks. Usability studies have demonstrated that existing password composition policies can sometimes result in weaker password distributions; hence a more principled approach is needed. We introduce the first theoretical model for optimizing password composition policies. We study the computational and sample complexity of this problem under different assumptions on the structure of policies and on users' preferences over passwords. Our main positive result is an algorithm that, with high probability, constructs almost optimal policies (which are specified as a union of subsets of allowed passwords), and requires only a small number of samples of users' preferred passwords. We complement our theoretical results with simulations using a real-world dataset of 32 million passwords. Keywords: computational complexity, password composition policy, sampling, security
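The flavor of the sampling-based approach can be sketched directly: treat each candidate policy as a predicate over passwords, estimate from sampled user preferences the probability of the single most popular password each policy allows, and pick the policy that minimizes it. The policies, samples, and resampling assumption below are invented for illustration and are far simpler than the paper's model.

```python
from collections import Counter

# Candidate composition policies as predicates over passwords (illustrative).
POLICIES = {
    "len>=8":       lambda pw: len(pw) >= 8,
    "len>=8+digit": lambda pw: len(pw) >= 8 and any(c.isdigit() for c in pw),
    "no-rule":      lambda pw: True,
}

def worst_guess_prob(samples, allowed):
    """Probability of the most popular allowed password, assuming users whose
    preferred password is banned re-sample from the allowed distribution
    (a modeling assumption; the paper studies several preference models)."""
    counts = Counter(pw for pw in samples if allowed(pw))
    total = sum(counts.values())
    return counts.most_common(1)[0][1] / total if total else 1.0

samples = ["password", "password", "12345678", "letmein1", "tr0ub4dor&3",
           "password1", "password1", "correcthorse", "qwerty12", "s3cretpw"]

for name, rule in POLICIES.items():
    print(f"{name:14s} p(top guess) = {worst_guess_prob(samples, rule):.2f}")
best = min(POLICIES, key=lambda name: worst_guess_prob(samples, POLICIES[name]))
print("chosen policy:", best)
```

On this toy sample the digit requirement actually concentrates users onto "password1", echoing the paper's observation that composition rules can make the resulting distribution weaker rather than stronger.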
- Georgios Kontaxis, Elias Athanasopoulos, Georgios Portokalidis, Angelos D. Keromytis. "SAuth: Protecting User Accounts from Password Database Leaks," CCS '13: Proceedings of the 2013 ACM SIGSAC Conference on Computer & Communications Security, November 2013 (Pages 187-198). (ID#:14-1393) Available at: http://dl.acm.org/citation.cfm?id=2508859.2516746&coll=DL&dl=GUIDE&CFID=339335517&CFTOKEN=38610778 or http://doi.acm.org/10.1145/2508859.2516746 Password-based authentication is the dominant form of access control in web services. Unfortunately, it proves to be more and more inadequate every year. Even if users choose long and complex passwords, vulnerabilities in the way they are managed by a service may leak them to an attacker. Recent incidents in popular services such as LinkedIn and Twitter demonstrate the impact that such an event could have. The use of one-way hash functions to mitigate the problem is countered by the evolution of hardware which enables powerful password-cracking platforms. In this paper we propose SAuth, a protocol which employs authentication synergy among different services. Users wishing to access their account on service S will also have to authenticate for their account on service V, which acts as a vouching party. Both services S and V are regular sites visited by the user every day (e.g., Twitter, Facebook, Gmail). Should an attacker acquire the password for service S, he will be unable to log in unless he also compromises the password for service V and possibly more vouching services. SAuth is an extension and not a replacement of existing authentication methods. It operates one layer above without ties to a specific method, thus enabling different services to employ heterogeneous systems. Finally we employ password decoys to protect users that share a password across services. Keywords: Security and privacy, Security services, Authentication, decoys, password leak, synergy
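The core of the synergy idea, where service S completes a login only after vouching service V has also authenticated the user, can be caricatured in a few lines. Everything below is an invented simplification, not SAuth's actual wire protocol: the token format, S directly checking V's MAC key (a real deployment would verify a signature), and plain SHA-256 password hashing (real services should use a slow, salted hash).

```python
import hmac, hashlib, secrets

class Service:
    def __init__(self, name):
        self.name, self.passwords, self.key = name, {}, secrets.token_bytes(32)

    def register(self, user, pw):
        self.passwords[user] = hashlib.sha256(pw.encode()).hexdigest()

    def check_password(self, user, pw):
        return self.passwords.get(user) == hashlib.sha256(pw.encode()).hexdigest()

    def vouch(self, user, pw):
        """Vouching party V: returns a MAC'd token only on a correct password."""
        if self.check_password(user, pw):
            return hmac.new(self.key, f"vouch:{user}".encode(), "sha256").hexdigest()

    def login(self, user, pw, voucher, token):
        """Service S: requires the local password AND a valid voucher from V."""
        expected = hmac.new(voucher.key, f"vouch:{user}".encode(), "sha256").hexdigest()
        return self.check_password(user, pw) and hmac.compare_digest(token or "", expected)

S, V = Service("S"), Service("V")
S.register("alice", "pw-for-S"); V.register("alice", "pw-for-V")

print(S.login("alice", "pw-for-S", V, V.vouch("alice", "pw-for-V")))  # True
print(S.login("alice", "pw-for-S", V, V.vouch("alice", "wrong")))     # False
```

Even with S's password database fully leaked, the second check fails unless V is compromised as well, which is the property the protocol is after.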
- Majdi Abdellatief, Abu Bakar Md Sultan, Abdul Azim Abdul Ghani, Marzanah A. Jabar. "A Mapping Study to Investigate Component-Based Software System Metrics," Journal of Systems and Software, Volume 86 Issue 3, March 2013 (Pages 587-603). (ID#:14-1394) Available at: http://dl.acm.org/citation.cfm?id=2430750.2430927&coll=DL&dl=GUIDE&CFID=339335517&CFTOKEN=38610778 or http://dx.doi.org/10.1016/j.jss.2012.10.001 A component-based software system (CBSS) is a software system that is developed by integrating components that have been deployed independently. In the last few years, many researchers have proposed metrics to evaluate CBSS attributes. However, the practical use of these metrics can be difficult. For example, some of the metrics have concepts that either overlap or are not well defined, which could hinder their implementation. The aim of this study is to understand, classify and analyze existing research in component-based metrics, focusing on approaches and elements that are used to evaluate the quality of CBSS and its components from a component consumer's point of view. This paper presents a systematic mapping study of several metrics that were proposed to measure the quality of CBSS and its components. We found 17 proposals that could be applied to evaluate CBSSs, while 14 proposals could be applied to evaluate individual components in isolation. Various elements of the software components that were measured are reviewed and discussed. Only a few of the proposed metrics are soundly defined. The quality assessment of the primary studies detected many limitations and suggested guidelines for possibilities for improving and increasing the acceptance of metrics. However, it remains a challenge to characterize and evaluate a CBSS and its components quantitatively. For this reason, much effort must be made to achieve a better evaluation approach in the future. Keywords: Component-based software system, Software components, Software metrics, Software quality, Systematic mapping study
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests for removal of links or modification of specific citations via email to SoS.Project (at) SecureDataBank.net, and please include the ID# of the specific citation in your correspondence.