Biblio

Found 207 results

Filters: Keyword is CMU
2016-12-06
Hanan Hibshi, Travis Breaux, Maria Riaz, Laurie Williams.  2016.  A grounded analysis of experts’ decision-making during security assessments. Journal of Cybersecurity Advance Access.

Security analysis requires specialized knowledge to align threats and vulnerabilities in information technology. To identify mitigations, analysts need to understand how threats, vulnerabilities, and mitigations are composed together to yield security requirements. Despite abundant guidance in the form of checklists and controls about how to secure systems, evidence suggests that security experts do not apply these checklists. Instead, they rely on their prior knowledge and experience to identify security vulnerabilities. To better understand the different effects of checklists, design analysis, and expertise, we conducted a series of interviews to capture and encode the decision-making process of security experts and novices during three security analysis exercises. Participants were asked to analyze three kinds of artifacts: source code, data flow diagrams, and network diagrams, for vulnerabilities, and then to apply a requirements checklist to demonstrate their ability to mitigate vulnerabilities. We framed our study using Situation Awareness, which is a theory about human perception that was used to elicit interviewee responses. The responses were then analyzed using coding theory and grounded analysis. Our results include decision-making patterns that characterize how analysts perceive, comprehend, and project future threats against a system, and how these patterns relate to selecting security mitigations. Based on this analysis, we discovered new theory to measure how security experts and novices apply attack models and how structured and unstructured analysis enables increasing security requirements coverage. We highlight the role of expertise level and requirements composition in affecting security decision-making and we discuss how our method produced new hypotheses about security analysis and decision-making.

2017-01-10
Jonathan Aldrich, Alex Potanin.  2016.  Naturally Embedded DSLs. Systems, Programming, Languages and Applications: Software for Humanity (SPLASH).

Domain-specific languages can be embedded in a variety of ways within a host language. The choice of embedding approach entails significant tradeoffs in the usability of the embedded DSL. We argue that embedding DSLs naturally within the host language results in the best experience for end users of the DSL. A naturally embedded DSL is one that uses natural syntax, static semantics, and dynamic semantics for the DSL, all of which may differ from the host language. Furthermore, it must be possible to use DSLs together naturally, meaning that different DSLs cannot conflict, and the programmer can easily tell which code is written in which language.

2016-12-07
Jaspreet Bhatia, Travis Breaux, Liora Friedberg, Hanan Hibshi, Daniel Smullen.  2016.  Privacy Risk in Cybersecurity Data Sharing. WISCS '16 Proceedings of the 2016 ACM on Workshop on Information Sharing and Collaborative Security.

As information systems become increasingly interdependent, there is an increased need to share cybersecurity data across government agencies and companies, and within and across industrial sectors. This sharing includes threat, vulnerability and incident reporting data, among other data. For cyberattacks that include sociotechnical vectors, such as phishing or watering hole attacks, this increased sharing could expose customer and employee personal data to increased privacy risk. In the US, privacy risk arises when the government voluntarily receives data from companies without meaningful consent from individuals, or without a lawful procedure that protects an individual's right to due process. In this paper, we describe a study to examine the trade-off between the need for potentially sensitive data, which we call incident data usage, and the perceived privacy risk of sharing that data with the government. The study is comprised of two parts: a data usage estimate built from a survey of 76 security professionals with mean eight years' experience; and a privacy risk estimate that measures privacy risk using an ordinal likelihood scale and nominal data types in factorial vignettes. The privacy risk estimate also factors in data purposes with different levels of societal benefit, including terrorism, imminent threat of death, economic harm, and loss of intellectual property. The results show which data types are high-usage, low-risk versus those that are low-usage, high-risk. We discuss the implications of these results and recommend future work to improve privacy when data must be shared despite the increased risk to privacy.

Cyrus Omar, Jonathan Aldrich.  2016.  Programmable semantic fragments: the design and implementation of typy. GPCE 2016 Proceedings of the 2016 ACM SIGPLAN International Conference on Generative Programming: Concepts and Experiences.

This paper introduces typy, a statically typed programming language embedded by reflection into Python. typy features a fragmentary semantics, i.e. it delegates semantic control over each term, drawn from Python's fixed concrete and abstract syntax, to some contextually relevant user-defined semantic fragment. The delegated fragment programmatically 1) typechecks the term (following a bidirectional protocol); and 2) assigns dynamic meaning to the term by computing a translation to Python.

We argue that this design is expressive with examples of fragments that express the static and dynamic semantics of 1) functional records; 2) labeled sums (with nested pattern matching a la ML); 3) a variation on JavaScript's prototypal object system; and 4) typed foreign interfaces to Python and OpenCL. These semantic structures are, or would need to be, defined primitively in conventionally structured languages.

We further argue that this design is compositionally well-behaved. It avoids the expression problem and the problems of grammar composition because the syntax is fixed. Moreover, programs are semantically stable under fragment composition (i.e., defining a new fragment will not change the meaning of existing program components).
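
The paper's typy implementation is not reproduced here; purely as a hypothetical sketch of what fragment delegation with a bidirectional protocol can look like, the Python below uses made-up names (Fragment, check, synthesize, translate), not typy's actual API:

```python
# Hypothetical sketch of fragment-delegated bidirectional typechecking;
# the names below are illustrative, not typy's actual API.
from dataclasses import dataclass


@dataclass
class Type:
    fragment: "Fragment"   # the semantic fragment that owns this type
    payload: object        # fragment-specific type data


class Fragment:
    def check(self, term, ty, ctx):
        """Analytic direction: does `term` have type `ty` in context `ctx`?"""
        raise NotImplementedError

    def synthesize(self, term, ctx):
        """Synthetic direction: compute a type for `term` in context `ctx`."""
        raise NotImplementedError

    def translate(self, term, ty, ctx):
        """Assign dynamic meaning by elaborating `term` to plain Python."""
        raise NotImplementedError


def check(term, ty, ctx):
    # Semantic control over the term is delegated to the fragment
    # that owns the expected type.
    return ty.fragment.check(term, ty, ctx)


def synthesize(term, ctx):
    # A contextually relevant fragment (e.g., named by an annotation)
    # is asked to synthesize a type; `fragment_for` is a hypothetical hook.
    fragment = ctx.fragment_for(term)
    return fragment.synthesize(term, ctx)
```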

2016-12-08
Supat Rattanasuksun, Tingting Yu, Witawas Srisa-an, Gregg Rothermel.  2016.  RRF: A Race Reproduction Framework for Use in Debugging Process-Level Races. 27th International Symposium on Software Reliability Engineering (ISSRE).

Process-level races are endemic in modern systems. These races are difficult to debug because they are sensitive to execution events such as interrupts and scheduling. Unless a process interleaving that can result in the race can be found, it cannot be reproduced and cannot be corrected. In practice, however, the number of interleavings that can occur among processes is large, and the patterns of interleavings can be complex. Thus, approaches for reproducing process-level races to date are often ineffective. In this paper, we present RRF, a race reproduction framework that can help software engineers reproduce reported process-level races, enabling them to potentially debug these races. RRF performs a hybrid analysis by leveraging existing static program analysis tools, dynamic kernel event reporting tools, and yield points to provide the observability and controllability needed to reproduce races. We conducted an empirical study to evaluate RRF; our results show that RRF can be effective for reproducing races.

2017-01-09
Alireza Sadeghi, Hamid Bagheri, Joshua Garcia, Sam Malek.  2016.  A Taxonomy and Qualitative Comparison of Program Analysis Techniques for Security Assessment of Android Software. IEEE Transactions on Software Engineering. 99

In parallel with the meteoric rise of mobile software, we are witnessing an alarming escalation in the number and sophistication of the security threats targeted at mobile platforms, particularly Android, as the dominant platform. While existing research has made significant progress towards detection and mitigation of Android security, gaps and challenges remain. This paper contributes a comprehensive taxonomy to classify and characterize the state-of-the-art research in this area. We have carefully followed the systematic literature review process, and analyzed the results of more than 300 research papers, resulting in the most comprehensive and elaborate investigation of the literature in this area of research. The systematic analysis of the research literature has revealed patterns, trends, and gaps in

2016-12-08
Lichao Sun, Zhiqiang Li, Qiben Yan, Witawas Srisa-an, Yu Pan.  2016.  SigPID: Significant Permission Identification for Android Malware Detection. 11th International Conference on Malicious and Unwanted Software (MALCON 2016).

A recent report indicates that a newly developed malicious app for Android is introduced every 11 seconds. To combat this alarming rate of malware creation, we need a scalable malware detection approach that is effective and efficient. In this paper, we introduce SIGPID, a malware detection system based on permission analysis to cope with the rapid increase in the number of Android malware. Instead of analyzing all 135 Android permissions, our approach applies 3-level pruning by mining the permission data to identify only significant permissions that can be effective in distinguishing benign and malicious apps. SIGPID then utilizes classification algorithms to classify different families of malware and benign apps. Our evaluation finds that only 22 out of 135 permissions are significant. We then compare the performance of our approach, using only 22 permissions, against a baseline approach that analyzes all permissions. The results indicate that when Support Vector Machine (SVM) is used as the classifier, we can achieve over 90% precision, recall, accuracy, and F-measure, which are about the same as those produced by the baseline approach while incurring analysis times that are 4 to 32 times smaller than those of using all 135 permissions. When we compare the detection effectiveness of SIGPID to those of other approaches, SIGPID can detect 93.62% of malware in the data set, and 91.4% of unknown malware.
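
The abstract does not spell out the pruning details; as a hedged sketch of the general pipeline (significance-based permission selection followed by SVM classification), the scikit-learn code below uses a mutual-information ranking and synthetic data as illustrative stand-ins for SIGPID's actual 3-level pruning and real app corpus:

```python
# Hedged sketch: select "significant" permissions, then classify with an SVM.
# The mutual-information ranking and k=22 are illustrative stand-ins for
# SIGPID's 3-level pruning; the data here is randomly generated.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# X: one row per app, one 0/1 column per Android permission; y: 1 = malware.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 135))
y = rng.integers(0, 2, size=1000)

pipeline = make_pipeline(
    SelectKBest(mutual_info_classif, k=22),  # keep the 22 most informative permissions
    SVC(kernel="linear"),                    # SVM over the reduced feature set
)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test)))
```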

2016-12-06
Bradley Schmerl, Jeffrey Gennari, Alireza Sadeghi, Hamid Bagheri, Sam Malek, Javier Camara, David Garlan.  2016.  Architecture Modeling and Analysis of Security in Android Systems. 10th European Conference on Software Architecture (ECSA 2016).

Software architecture modeling is important for analyzing system quality attributes, particularly security. However, such analyses often assume that the architecture is completely known in advance. In many modern domains, especially those that use plugin-based frameworks, it is not possible to have such a complete model because the software system continuously changes. The Android mobile operating system is one such framework, where users can install and uninstall apps at run time. We need ways to model and analyze such architectures that strike a balance between supporting the dynamism of the underlying platforms and enabling analysis, particularly throughout a system’s lifetime. In this paper, we describe a formal architecture style that captures the modifiable architectures of Android systems, and that supports security analysis as a system evolves. We illustrate the use of the style with two security analyses: a predicate-based approach defined over architectural structure that can detect some common security vulnerabilities, and inter-app permission leakage determined by model checking. We also show how the evolving architecture of an Android device can be obtained by analysis of the apps on a device, and provide some performance evaluation that indicates that the architecture can be amenable for use throughout the system’s lifetime.

2017-01-09
Jafar Al-Kofahi, Tien Nguyen, Christian Kästner.  2016.  Escaping AutoHell: a vision for automated analysis and migration of autotools build systems. RELENG 2016 Proceedings of the 4th International Workshop on Release Engineering.

GNU Autotools is a widely used build tool in the open source community. As open source projects grow more complex, maintaining their build systems becomes more challenging, due to the lack of tool support. In this paper, we propose a platform to build support tools for GNU Autotools build systems. The platform provides an abstraction of the build system to be used in different analysis techniques.

2016-12-08
Christopher Bogart, Christian Kästner, James Herbsleb, Ferdian Thung.  2016.  How to break an API: cost negotiation and community values in three software ecosystems. FSE 2016 Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering.

Change introduces conflict into software ecosystems: breaking changes may ripple through the ecosystem and trigger rework for users of a package, but often developers can invest additional effort or accept opportunity costs to alleviate or delay downstream costs. We performed a multiple case study of three software ecosystems with different tooling and philosophies toward change, Eclipse, R/CRAN, and Node.js/npm, to understand how developers make decisions about change and change-related costs and what practices, tooling, and policies are used. We found that all three ecosystems differ substantially in their practices and expectations toward change and that those differences can be explained largely by different community values in each ecosystem. Our results illustrate that there is a large design space in how to build an ecosystem, its policies and its supporting infrastructure; and there is value in making community values and accepted tradeoffs explicit and transparent in order to resolve conflicts and negotiate change-related costs.

2016-12-07
Mitra Bokaei Hosseini, Sudarshan Wadkar, Travis Breaux, Jianwei Niu.  2016.  Lexical Similarity of Information Type Hypernyms, Meronyms and Synonyms in Privacy Policies. Association for the Advancement of Artificial Intelligence.

Privacy policies are used to communicate company data practices to consumers and must be accurate and comprehensive. Each policy author is free to use their own nomenclature when describing data practices, which leads to different ways in which similar information types are described across policies. A formal ontology can help policy authors, users and regulators consistently check how data practice descriptions relate to other interpretations of information types. In this paper, we describe an empirical method for manually constructing an information type ontology from privacy policies. The method consists of seven heuristics that explain how to infer hypernym, meronym and synonym relationships from information type phrases, which we discovered using grounded analysis of five privacy policies. The method was evaluated on 50 mobile privacy policies which produced an ontology consisting of 355 unique information type names. Based on the manual results, we describe an automated technique consisting of 14 reusable semantic rules to extract hypernymy, meronymy, and synonymy relations from information type phrases. The technique was evaluated on the manually constructed ontology to yield .95 precision and .51 recall.
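
As a rough illustration of the kind of lexical heuristic the paper describes (not one of its seven heuristics or 14 semantic rules verbatim), a modifier-plus-head information type phrase can be treated as a hyponym of the shorter phrase obtained by dropping its leftmost modifier:

```python
# Hedged sketch of one lexical heuristic: "mobile device identifier" is
# inferred to be a hyponym of "device identifier" (its hypernym), obtained by
# dropping the leftmost modifier. Illustrative only, not the paper's rules.
from typing import Optional


def hypernym_of(phrase: str) -> Optional[str]:
    tokens = phrase.lower().split()
    if len(tokens) < 2:
        return None              # a single-word phrase yields no hypernym here
    return " ".join(tokens[1:])  # drop the leftmost modifier


for p in ["mobile device identifier", "email address", "address"]:
    print(p, "->", hypernym_of(p))
```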

2016-12-06
Hamid Bagheri, Sam Malek.  2016.  Titanium: Efficient Analysis of Evolving Alloy Specifications. FSE 2016: ACM SIGSOFT International Symposium on the Foundations of Software Engineering.

The Alloy specification language, and the corresponding Alloy Analyzer, have received much attention in the last two decades with applications in many areas of software engineering. Increasingly, formal analyses enabled by Alloy are desired for use in an on-line mode, where the specifications are automatically kept in sync with the running, possibly changing, software system. However, given Alloy Analyzer’s reliance on computationally expensive SAT solvers, an important challenge is the time it takes for such analyses to execute at runtime. The fact that in an on-line mode, the analyses are often repeated on slightly revised versions of a given specification, presents us with an opportunity to tackle this challenge. We present Titanium, an extension of Alloy for formal analysis of evolving specifications. By leveraging the results from previous analyses, Titanium narrows the state space of the revised specification, thereby greatly reducing the required computational effort. We describe the semantic basis of Titanium in terms of models specified in relational logic. We show how the approach can be realized atop an existing relational logic model finder. Our experimental results show Titanium achieves a significant speed-up over Alloy Analyzer when applied to the analysis of evolving specifications.

2016-12-08
Hanan Hibshi, Travis Breaux, Christian Wagner.  2016.  Improving Security Requirements Adequacy: An Interval Type 2 Fuzzy Logic Security Assessment System. 2016 IEEE Symposium Series on Computational Intelligence.

Organizations rely on security experts to improve the security of their systems. These professionals use background knowledge and experience to align known threats and vulnerabilities before selecting mitigation options. The substantial depth of expertise in any one area (e.g., databases, networks, operating systems) precludes the possibility that an expert would have complete knowledge about all threats and vulnerabilities. To begin addressing this problem of distributed knowledge, we investigate the challenge of developing a security requirements rule base that mimics human expert reasoning to enable new decision-support systems. In this paper, we show how to collect relevant information from cyber security experts to enable the generation of: (1) interval type-2 fuzzy sets that capture intra- and inter-expert uncertainty around vulnerability levels; and (2) fuzzy logic rules underpinning the decision-making process within the requirements analysis. The proposed method relies on comparative ratings of security requirements in the context of concrete vignettes, providing a novel, interdisciplinary approach to knowledge generation for fuzzy logic systems. The proposed approach is tested by evaluating 52 scenarios with 13 experts to compare their assessments to those of the fuzzy logic decision support system. The initial results show that the system provides reliable assessments to the security analysts, in particular, generating more conservative assessments in 19% of the test scenarios compared to the experts’ ratings. 
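
The core idea of an interval type-2 fuzzy set is that each input maps to an interval of membership grades rather than a single grade, which is one way to capture the intra- and inter-expert uncertainty the abstract mentions. The sketch below is a minimal illustration under assumed triangular shapes and breakpoints; it is not the paper's elicited sets or rule base:

```python
# Hedged sketch of an interval type-2 membership evaluation: each rating maps
# to an interval [lower, upper] of membership grades in a "high vulnerability"
# set. Shapes and breakpoints are illustrative, not the paper's elicited sets.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Type-1 triangular membership with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


def it2_membership(x: float) -> tuple[float, float]:
    upper = triangular(x, 2.0, 6.0, 10.0)        # upper membership function (wider)
    lower = 0.6 * triangular(x, 3.0, 6.0, 9.0)   # lower membership function (narrower, scaled)
    return lower, upper


for rating in [4.0, 6.0, 8.0]:
    lo, hi = it2_membership(rating)
    print(f"rating={rating}: 'high vulnerability' membership in [{lo:.2f}, {hi:.2f}]")
```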

2016-04-25
Junjie Qian, Witawas Srisa-an, Hong Jiang, Sharad Seth, Du Li, Pan Yi.  2016.  Exploiting FIFO Scheduler to Improve Parallel Garbage Collection Performance. VEE '16 12th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments.

Recent studies have found that parallel garbage collection performs worse with more CPUs and more collector threads. As part of this work, we further investigate this phenomenon and find that poor scalability is worst in highly scalable Java applications. Our investigation to find the causes clearly reveals that efficient multi-threading in an application can prolong the average object lifespan, which results in less effective garbage collection. We also find that prolonging lifespan is the direct result of Linux's Completely Fair Scheduler due to its round-robin like behavior that can increase the heap contention between the application threads. Instead, if we use pseudo first-in-first-out to schedule application threads in large multicore systems, the garbage collection scalability is significantly improved while the time spent in garbage collection is reduced by as much as 21%. The average execution time of the 24 Java applications used in our study is also reduced by 11%. Based on this observation, we propose two approaches to optimally select scheduling policies based on application scalability profile. Our first approach uses the profile information from one execution to tune the subsequent executions. Our second approach dynamically collects profile information and performs policy selection during execution.
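
The pseudo first-in-first-out policy described above corresponds to Linux's SCHED_FIFO scheduling class; as a minimal sketch (not the paper's implementation), a process can switch itself away from the default CFS policy like this, assuming Linux and root or CAP_SYS_NICE:

```python
# Minimal sketch, not the paper's implementation: move the calling process from
# the default CFS policy (SCHED_OTHER) to SCHED_FIFO on Linux.
# Requires root or CAP_SYS_NICE; priority 1 is the lowest real-time priority.
import os


def use_fifo_scheduling(priority: int = 1) -> None:
    param = os.sched_param(priority)
    os.sched_setscheduler(0, os.SCHED_FIFO, param)  # 0 = calling process


if __name__ == "__main__":
    use_fifo_scheduling()
    print("policy:", os.sched_getscheduler(0))  # numeric value of SCHED_FIFO
```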

Bradley Schmerl, Jeffrey Gennari, Javier Camara, David Garlan.  2016.  Raindroid - A System for Run-time Mitigation of Android Intent Vulnerabilities. HotSos '16 Proceedings of the Symposium and Bootcamp on the Science of Security.

Modern frameworks are required to be extendable as well as secure. However, these two qualities are often at odds. In this poster we describe an approach that uses a combination of static analysis and run-time management, based on software architecture models, that can improve security while maintaining framework extendability. We implement a prototype of the approach for the Android platform. Static analysis identifies the architecture and communication patterns among the collection of apps on an Android device and which communications might be vulnerable to attack. Run-time mechanisms monitor these potentially vulnerable communication patterns, and adapt the system to either deny them, request explicit approval from the user, or allow them.

Marwan Abi-Antoun, Ebrahim Khalaj, Radu Vanciu, Ahmad Moghimi.  2016.  Abstract Runtime Structure for Reasoning about Security. HotSos '16 Proceedings of the Symposium and Bootcamp on the Science of Security.

We propose an interactive approach where analysts reason about the security of a system using an abstraction of its runtime structure, as opposed to looking at the code. They interactively refine a hierarchical object graph, set security properties on abstract objects or edges, query the graph, and investigate the results by studying highlighted objects or edges or tracing to the code. Behind the scenes, an inference analysis and an extraction analysis maintain the soundness of the graph with respect to the code.

Hemank Lamba, Thomas J. Glazier, Bradley Schmerl, Javier Camara, David Garlan, Jurgen Pfeffer.  2016.  A Model-based Approach to Anomaly Detection in Software Architectures. Symposium and Bootcamp on the Science of Security (HotSoS).

In an organization, the interactions users have with software leave patterns or traces of the parts of the systems accessed. These interactions can be associated with the underlying software architecture. The first step in detecting problems like insider threat is to detect those traces that are anomalous. Here, we propose a method to find anomalous users leveraging these interaction traces, categorized by user roles. We propose a model-based approach to cluster user sequences and find outliers. We show that the approach works on a simulation of a large scale system based on an Amazon Web application style.

Momin Malik, Jurgen Pfeffer, Gabriel Ferreira, Christian Kästner.  2016.  Visualizing the variational callgraph of the Linux Kernel: An approach for reasoning about dependencies. HotSos '16 Proceedings of the Symposium and Bootcamp on the Science of Security.

Software developers use #ifdef statements to support code configurability, allowing software product diversification. But because functions can be in many execution paths that depend on complex combinations of configuration options, the introduction of an #ifdef for a given purpose (such as adding a new feature to a program) can enable unintended function calls, which can be a source of vulnerabilities. Part of the difficulty lies in maintaining mental models of all dependencies. We propose analytic visualizations of the variational callgraph to capture dependencies across configurations, and we create visualizations to demonstrate how this would help developers visually reason through the implications of diversification, for example by visually doing change impact analysis.
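
The abstract does not define a data structure, but one common way to represent a variational callgraph is a directed graph whose edges carry presence conditions (the configuration options that must be enabled for the call to exist). The sketch below, with made-up function and option names, shows how enabling one option can reveal an unintended call path:

```python
# Hedged sketch: a variational callgraph as a directed graph with edges
# annotated by presence conditions. Function and option names are made up.
import networkx as nx

G = nx.DiGraph()
G.add_edge("parse_request", "handle_auth", condition=frozenset())
G.add_edge("parse_request", "log_debug", condition=frozenset({"CONFIG_DEBUG"}))
G.add_edge("handle_auth", "legacy_md5", condition=frozenset({"CONFIG_LEGACY_CRYPTO"}))


def calls_under(graph, enabled):
    """Edges whose presence condition is satisfied by the enabled options."""
    return [(u, v) for u, v, c in graph.edges(data="condition") if c <= enabled]


# Enabling CONFIG_LEGACY_CRYPTO for one feature also enables the legacy_md5 path.
print(calls_under(G, enabled={"CONFIG_LEGACY_CRYPTO"}))
```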

Eric Yuan, Sam Malek.  2016.  Mining Software Component Interactions to Detect Security Threats at the Architectural Level. 13th Working IEEE/IFIP Conference on Software Architecture (WICSA 2016).

Conventional security mechanisms at network, host, and source code levels are no longer sufficient in detecting and responding to increasingly dynamic and sophisticated cyber threats today. Detecting anomalous behavior at the architectural level can help better explain the intent of the threat and strengthen overall system security posture. To that end, we present a framework that mines software component interactions from system execution history and applies a detection algorithm to identify anomalous behavior. The framework uses unsupervised learning at runtime, can perform fast anomaly detection “on the fly”, and can quickly adapt to system load fluctuations and user behavior shifts. Our evaluation of the approach against a real Emergency Deployment System has demonstrated very promising results, showing the framework can effectively detect covert attacks, including insider threats, that may be easily missed by traditional intrusion detection methods. 

2016-12-07
Kathleen Carley.  2015.  Crisis Mapping: Big Data from a Dynamic Network Analytic Perspective.

Big data holds the promise of enabling analysts to predict, mitigate and respond rapidly to natural disasters and human crises. Rapid response is critical for saving lives and mitigating damage.  Rapid response requires understanding the socio-cultural terrain; i.e., understanding who needs what or can provide what, where, how, why and when. Today, new technologies are forging a path making use of sensor and social media data to understand the socio-cultural terrain and provide faster response during crises.  A review of this area, the state of the art, and known challenges are discussed.  The basic argument is that this technology is still in its infancy.  Critical scientific challenges, social challenges, and legal challenges will need to be addressed before the promise of big data for Crisis Mapping is fulfilled.

2016-12-05
Arbob Ahmad, Robert Harper.  2015.  An Epistemic Formulation of Information Flow Analysis.

Most accounts of information flow security in programming languages emphasize non-interference to characterize security: in a secure program, changes to high-security inputs do not alter the values of low-security outputs. The definition of non-interference is incompatible with declassification, which allows some low-security outputs to be influenced by high-security inputs. We propose an alternative account of information flow based on an epistemic logic of computational effects. Rather than view a program as a function from inputs to outputs, we instead embrace the principle that information flow security is concerned with the effects a program has on its execution environment. These effects are modelled using a substructural epistemic logic that tracks the flow of knowledge gained by principals and communication channels during execution. Confidentiality is expressed by proving necessary conditions for a principal to know a sensitive fact at the end of an execution. In the simplest case the necessary condition is falsehood, which means that a principal cannot know a secret as a result of a well-typed execution of a program. In the presence of declassification, a necessary condition for disclosure is the existence of a proof of authorization in a formal authorization logic, expressing that sensitive data is disclosed only when explicitly authorized. Rather than being taken as the primary result, the classical non-interference property arises in the proof of adequacy of the epistemic theory of disclosure, ensuring that it accurately models program behavior. It is suggested that an epistemic account of information flow security is both more natural and more expressive than classical accounts based only on non-interference.
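
For contrast with the epistemic account, the classical non-interference property the abstract refers to can be stated roughly as follows (a standard textbook formulation, not quoted from this paper), where $=_{L}$ denotes agreement on all low-security values:

```latex
% Classical non-interference (standard formulation, not from the paper):
% if two input states agree on all low-security inputs, then running the
% program P yields outputs that agree on all low-security values as well,
% regardless of the high-security inputs.
\[
  \forall s_1, s_2.\quad
  s_1 =_{L} s_2
  \;\Longrightarrow\;
  P(s_1) =_{L} P(s_2)
\]
```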

2016-12-06
Limin Jia, Shayak Sen, Deepak Garg, Anupam Datta.  2015.  System M: A Program Logic for Code Sandboxing and Identification.

Security-sensitive applications that execute untrusted code often check the code’s integrity by comparing its syntax to a known good value or sandbox the code to contain its effects. System M is a new program logic for reasoning about such security-sensitive applications. System M extends Hoare Type Theory (HTT) to trace safety properties and, additionally, contains two new reasoning principles. First, its type system internalizes logical equality, facilitating reasoning about applications that check code integrity. Second, a confinement rule assigns an effect type to a computation based solely on knowledge of the computation’s sandbox. We prove the soundness of System M relative to a step-indexed trace-based semantic model. We illustrate both new reasoning principles of System M by verifying the main integrity property of the design of Memoir, a previously proposed trusted computing system for ensuring state continuity of isolated security-sensitive applications.

Ju-Sung Lee, Jurgen Pfeffer.  2015.  Estimating Centrality Statistics for Large Scale and Sampled Networks: Some Approaches and Complications. 2015 48th Hawaii International Conference on System Sciences.

The study of large, “big data” networks is becoming increasingly common and relevant to our understanding of human systems. Many of the studied networks are drawn from social media and other web-based sources. As such, in-depth analysis of these dynamic structures, e.g. in the context of cybersecurity, remains especially challenging. Due to the time and resources incurred in computing network measures for large networks, it is practical to approximate these whenever possible. We present some approximation techniques exploiting any tractable relationship between the measures and network characteristics such as size and density. We find there exist distinct functional relationships between network statistics of complex “slow” measures and “fast” measures, such as the linkage between betweenness centrality and network density. We also track how these relationships scale with network size. Specifically, we explore the efficacy of both linear modeling (i.e., correlations and least squares regression) and non-linear modeling in estimating the network measures of interest. We find that sparse, but not severely sparse, networks which admit sufficient entropy incur the most variance in the network statistics and, hence, more error in the estimation. We review our approaches with three prominent network topologies: random (aka Erdős-Rényi), Watts-Strogatz small-world, and scale-free networks. Finally, we assess how well the estimation approaches perform for sub-sampled networks.
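
As a hedged sketch of the general idea rather than the paper's models, a "slow" measure such as betweenness centrality can be approximated by sampling pivots and then related to a "fast" statistic such as density; the networkx parameters below are illustrative:

```python
# Hedged sketch (not the paper's estimators): approximate betweenness
# centrality from sampled pivots and report a cheap statistic (density)
# that an estimate could be conditioned on. Graph parameters are illustrative.
import networkx as nx

G = nx.watts_strogatz_graph(n=1000, k=10, p=0.05, seed=1)

exact = nx.betweenness_centrality(G)                   # full Brandes computation
approx = nx.betweenness_centrality(G, k=100, seed=1)   # estimate from 100 sampled pivots

density = nx.density(G)
top = max(exact, key=exact.get)
print(f"density={density:.4f}  exact[{top}]={exact[top]:.4f}  approx[{top}]={approx[top]:.4f}")
```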

2016-12-05
Claus Hunsen, Bo Zhang, Janet Siegmund, Christian Kästner, Olaf Leßenich, Martin Becker, Sven Apel.  2015.  Preprocessor-based variability in open-source and industrial software systems: An empirical study. Empirical Software Engineering. 20:1-34.

Almost every sufficiently complex software system today is configurable. Conditional compilation is a simple variability-implementation mechanism that is widely used in open-source projects and industry. Especially, the C preprocessor (CPP) is very popular in practice, but it is also gaining (again) interest in academia. Although there have been several attempts to understand and improve CPP, there is a lack of understanding of how it is used in open-source and industrial systems and whether different usage patterns have emerged. The background is that much research on configurable systems and product lines concentrates on open-source systems, simply because they are available for study in the first place. This leads to the potentially problematic situation that it is unclear whether the results obtained from these studies are transferable to industrial systems. We aim at lowering this gap by comparing the use of CPP in open-source projects and industry—especially from the embedded-systems domain—based on a substantial set of subject systems and well-known variability metrics, including size, scattering, and tangling metrics. A key result of our empirical study is that, regarding almost all aspects we studied, the analyzed open-source systems and the considered embedded systems from industry are similar regarding most metrics, including systems that have been developed in industry and made open source at some point. So, our study indicates that, regarding CPP as variability-implementation mechanism, insights, methods, and tools developed based on studies of open-source systems are transferable to industrial systems—at least, with respect to the metrics we considered.

2016-12-06
Ju-Sung Lee, Jurgen Pfeffer.  2015.  Robustness of Network Metrics in the Context of Digital Communication Data. HICSS '15 Proceedings of the 2015 48th Hawaii International Conference on System Sciences.

Social media data and other web-based network data are large and dynamic, rendering the identification of structural changes in such systems a hard problem. Typically, online data is constantly streaming and results in data that is incomplete, necessitating an understanding of the robustness of network metrics on partial or sampled network data. In this paper, we examine the effects of sampling on key network centrality metrics using two empirical communication datasets. Correlations between network metrics of original and sampled nodes offer a measure of sampling accuracy. The relationship between sampling and accuracy is convergent and amenable to nonlinear analysis. Naturally, larger edge samples induce sampled graphs that are more representative of the original graph. However, this effect is attenuated when larger sets of nodes are recovered in the samples. Also, we find that the graph structure plays a prominent role in sampling accuracy. Centralized graphs, in which fewer nodes enjoy higher centrality scores, offer more representative samples.