Biblio

Filters: Keyword is NCSU
2017-04-11
Christopher Theisen, Brendan Murphy, Kim Herzig, Laurie Williams.  Submitted.  Risk-Based Attack Surface Approximation: How Much Data is Enough? International Conference on Software Engineering (ICSE) Software Engineering in Practice (SEIP) 2017.

Proactive security reviews and test efforts are a necessary component of the software development lifecycle. Resource limitations often preclude reviewing the entire code base. Making informed decisions on what code to review can improve a team’s ability to find and remove vulnerabilities. Risk-based attack surface approximation (RASA) is a technique that uses crash dump stack traces to predict what code may contain exploitable vulnerabilities. The goal of this research is to help software development teams prioritize security efforts through the efficient development of a risk-based attack surface approximation. We explore the use of RASA using Mozilla Firefox and Microsoft Windows stack traces from crash dumps. We create RASA at the file level for Firefox, in which the 15.8% of the files that were part of the approximation contained 73.6% of the vulnerabilities seen for the product. We also explore the effect of random sampling of crashes on the approximation, as it may be impractical for organizations to store and process every crash received. We find that 10-fold random sampling of crashes at a rate of 10% resulted in 3% fewer vulnerabilities identified than using the entire set of stack traces for Mozilla Firefox. Sampling crashes in Windows 8.1 at a rate of 40% resulted in insignificant differences in vulnerability and file coverage compared to a rate of 100%.
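A minimal sketch of the sampling analysis described above, assuming toy crash and vulnerability data (the file names, counts, and rates below are illustrative, not the paper's dataset):

```python
import random

def attack_surface(stack_traces):
    """Files appearing on any crash stack trace form the approximated attack surface."""
    return {frame for trace in stack_traces for frame in trace}

def coverage_under_sampling(stack_traces, vulnerable_files, rate, folds=10, seed=0):
    """Average fraction of known-vulnerable files still covered when only a random
    sample of crashes (e.g., 10%) is retained, repeated over several folds."""
    rng = random.Random(seed)
    fractions = []
    for _ in range(folds):
        sample = rng.sample(stack_traces, int(len(stack_traces) * rate))
        surface = attack_surface(sample)
        fractions.append(len(surface & vulnerable_files) / len(vulnerable_files))
    return sum(fractions) / folds

# Toy data: each crash is the list of files on its stack trace (hypothetical names).
crashes = [["netwerk/http.cpp", "dom/node.cpp"],
           ["js/jit.cpp", "dom/node.cpp"],
           ["layout/frame.cpp", "js/jit.cpp"]]
vulns = {"dom/node.cpp", "layout/frame.cpp"}
print(coverage_under_sampling(crashes, vulns, rate=0.5))
```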

2021-12-28
Munindar P. Singh.  2022.  Consent as a Foundation for Responsible Autonomy. Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI). 36
This paper focuses on a dynamic aspect of responsible autonomy, namely, making intelligent agents responsible at run time. That is, it considers settings where decision making by agents impinges upon the outcomes perceived by other agents. For an agent to act responsibly, it must accommodate the desires and other attitudes of its users and, through other agents, of their users. The contribution of this paper is twofold. First, it provides a conceptual analysis of consent, its benefits and misuses, and how understanding consent can help achieve responsible autonomy. Second, it outlines challenges for AI (in particular, for agents and multiagent systems) that merit investigation to form a basis for modeling consent in multiagent systems and applying consent to achieve responsible autonomy.
Blue Sky Track
2022-07-01
Samin Yaseer Mahmud, William Enck.  2022.  Study of Security Weaknesses in Android Payment Service Provider SDKs. Proceedings of the Symposium and Bootcamp on the Science of Security (HotSoS) Poster Session.

Payment Service Providers (PSPs) enable application developers to effortlessly integrate complex payment processing code using software development toolkits (SDKs). While providing SDKs reduces the risk of application developers introducing payment vulnerabilities, vulnerabilities in the SDKs themselves can impact thousands of applications. In this work, we propose a static analysis tool for assessing PSP SDKs using OWASP’s MASVS industry standard for mobile application security. A key challenge in this work was adapting both the MASVS and program analysis tools, which are designed to analyze whole applications, to study only a specific SDK. Our preliminary findings show that a number of payment processing libraries fail to meet MASVS security requirements, with evidence of persisting sensitive data insecurely, using outdated cryptography, and improperly configuring TLS. As such, our investigation demonstrates the value of applying security analysis at SDK granularity to prevent widespread deployment of vulnerable code.

2022-09-28
Samin Yaseer Mahmud, K. Virgil English, Seaver Thorn, William Enck, Adam Oest, Muhammad Saad.  2022.  Analysis of Payment Service Provider SDKs in Android. Annual Computer Security Applications Conference (ACSAC).

Payment Service Providers (PSPs) provide software development toolkits (SDKs) for integrating complex payment processing code into applications. Security weaknesses in payment SDKs can impact thousands of applications. In this work, we propose AARDroid for statically assessing payment SDKs against OWASP’s MASVS industry standard for mobile application security. In creating AARDroid, we adapted application-level requirements and program analysis tools for SDK-specific analysis, tailoring dataflow analysis for SDKs using domain-specific ontologies to infer the security semantics of application programming interfaces (APIs). We apply AARDroid to 50 payment SDKs and discover security weaknesses including saving unencrypted credit card information to files, use of insecure cryptographic primitives, insecure input methods for credit card information, and insecure use of WebViews. These results demonstrate the value of applying security analysis at the SDK granularity to prevent the widespread deployment of insecure code.
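In the same spirit, a small (hypothetical) ontology fragment shows how API signatures could be mapped to security semantics to seed an SDK-level dataflow analysis; the categories and rule below are assumptions for illustration, not AARDroid's actual ontology:

```python
# Hypothetical ontology: API signatures mapped to security semantics that label
# dataflow sources and sinks for an SDK-level analysis. Categories are invented.
ONTOLOGY = {
    "android.widget.EditText#getText": "CARD_INPUT_SOURCE",       # user-entered card data
    "java.io.FileOutputStream#write": "PERSISTENT_STORAGE_SINK",  # plaintext file write
    "android.webkit.WebView#loadUrl": "WEBVIEW_SINK",             # content loaded into a WebView
    "javax.crypto.Cipher#getInstance": "CRYPTO_PRIMITIVE",        # algorithm choice to audit
}

def classify_flow(source_api: str, sink_api: str) -> str:
    """Flag a dataflow as a weakness if card data reaches an insecure sink."""
    src, dst = ONTOLOGY.get(source_api), ONTOLOGY.get(sink_api)
    if src == "CARD_INPUT_SOURCE" and dst in {"PERSISTENT_STORAGE_SINK", "WEBVIEW_SINK"}:
        return "finding: sensitive data reaches an insecure sink"
    return "no finding"

print(classify_flow("android.widget.EditText#getText", "java.io.FileOutputStream#write"))
```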

2020-10-23
Weicheng Wang, Fabrizio Cicala, Syed Rafiul Hussain, Elisa Bertino, Ninghui Li.  2020.  Analyzing the Attack Landscape of Zigbee-Enabled IoT Systems and Reinstating Users' Privacy. 13th ACM Conference on Security and Privacy in Wireless and Mobile Networks. :133–143.

Zigbee network security relies on symmetric cryptography based on a pre-shared secret. In the current Zigbee protocol, the network coordinator creates a network key while establishing a network. The coordinator then shares the network key securely, encrypted under the pre-shared secret, with devices joining the network to ensure the security of future communications among devices through the network key. The pre-shared secret, therefore, needs to be installed in millions or more devices prior to deployment, and thus will inevitably be leaked, enabling attackers to compromise the confidentiality and integrity of the network. To improve the security of Zigbee networks, we propose a new certificate-less Zigbee joining protocol that leverages low-cost public-key primitives. The new protocol has two components. The first is to integrate Elliptic Curve Diffie-Hellman key exchange into the existing association request/response messages, and to use this key both for link-to-link communication and for encryption of the network key to enhance privacy of user devices. The second is to improve the security of the installation code, a new joining method introduced in Zigbee 3.0 for enhanced security, by using public key encryption. We analyze the security of our proposed protocol using the formal verification methods provided by ProVerif, and evaluate the efficiency and effectiveness of our solution with a prototype built with an open-source software and hardware stack. The new protocol does not introduce extra messages and the overhead is as low as 3.8% on average for the join procedure.
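As a rough illustration of the first component, the following sketch shows an ECDH exchange of the kind folded into the association request/response, using the Python cryptography package; the curve, key length, and key-derivation parameters are assumptions for illustration, not the paper's exact design:

```python
# Sketch of an ECDH exchange carried inside the association request/response.
# Curve and KDF parameters are illustrative assumptions.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# The joining device and the coordinator each generate an ephemeral key pair and
# exchange public keys inside the association request/response messages.
device_priv = ec.generate_private_key(ec.SECP256R1())
coord_priv = ec.generate_private_key(ec.SECP256R1())

# Both sides derive the same shared secret from their own private key and the peer's public key.
device_secret = device_priv.exchange(ec.ECDH(), coord_priv.public_key())
coord_secret = coord_priv.exchange(ec.ECDH(), device_priv.public_key())
assert device_secret == coord_secret

# The shared secret is expanded into a link key used to encrypt the network key in transit.
link_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                info=b"zigbee-link-key").derive(device_secret)
print(link_key.hex())
```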

2019-03-20
Shubham Goyal, Nirav Ajmeri, Munindar P. Singh.  2019.  Applying Norms and Sanctions to Promote Cybersecurity Hygiene. Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS). :1–3.

Many cybersecurity breaches occur due to users not following security regulations, chief among them regulations pertaining to what might be termed hygiene---including applying software patches to operating systems, updating software applications, and maintaining strong passwords. 

We capture cybersecurity expectations on users as norms. We empirically investigate the effectiveness of sanctioning mechanisms in promoting compliance with those norms, as well as the detrimental effect of sanctions on the ability of users to complete their work. We do so by developing a game that emulates the decision making of workers in a research lab. 

We find that relative to group sanctions, individual sanctions are more effective in achieving compliance and less detrimental to the ability of users to complete their work.
Our findings have implications for workforce training in cybersecurity.

Extended abstract

2019-07-10
Olufogorehan Tunde-Onadele, Jingzhu He, Ting Dai, Xiaohui Gu.  2019.  A Study on Container Vulnerability Detection. IEEE International Conference on Cloud Engineering (IC2E).
2019-10-10
Nuthan Munaiah, Akond Rahman, Justin Pelletier, Laurie Williams, Andrew Meneely.  2019.  Characterizing Attacker Behavior in a Cybersecurity Penetration Testing Competition. 13th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM).

Inculcating an attacker mindset (i.e., learning to think like an attacker) is an essential skill for engineers and administrators to improve the overall security of software. Describing the approach that adversaries use to discover and exploit vulnerabilities to infiltrate software systems can help inform such an attacker mindset. Aims: Our goal is to assist developers and administrators in inculcating an attacker mindset by proposing an approach to codify attacker behavior in a cybersecurity penetration testing competition. Method: We use an existing multimodal dataset of events captured during the 2018 National Collegiate Penetration Testing Competition (CPTC'18) to characterize the approach a team of attackers used to discover and exploit vulnerabilities. Results: We collected 44 events to characterize the approach that one of the participating teams took to discover and exploit seven vulnerabilities. We used the MITRE ATT&CK™ framework to codify the approach in terms of tactics and techniques. Conclusions: We show that characterizing an attacker campaign as a chronological sequence of MITRE ATT&CK™ tactics and techniques is feasible. We hope that such a characterization can inform the attacker mindset of engineers and administrators in their pursuit of engineering secure software systems.
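A minimal sketch of such a codification, representing a campaign as a chronological sequence of (tactic, technique) pairs; the technique IDs are real ATT&CK entries, but the event sequence is invented for illustration rather than taken from the CPTC'18 data:

```python
from datetime import datetime

# An attacker campaign codified as a chronological sequence of MITRE ATT&CK
# (tactic, technique) pairs. The technique IDs are real ATT&CK entries; the
# timestamps and ordering are invented for illustration, not CPTC'18 events.
campaign = [
    (datetime(2018, 11, 3, 9, 12), "Discovery", "T1046 Network Service Discovery"),
    (datetime(2018, 11, 3, 9, 47), "Initial Access", "T1190 Exploit Public-Facing Application"),
    (datetime(2018, 11, 3, 10, 5), "Credential Access", "T1110 Brute Force"),
    (datetime(2018, 11, 3, 10, 31), "Execution", "T1059 Command and Scripting Interpreter"),
]

for ts, tactic, technique in sorted(campaign):
    print(f"{ts:%H:%M}  {tactic:<18} {technique}")
```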

2018-07-09
Christopher Theisen, Hyunwoo Sohn, Dawson Tripp, Laurie Williams.  2018.  BP: Profiling Vulnerabilities on the Attack Surface. IEEE SecDev.

Security practitioners use the attack surface of software systems to prioritize areas of systems to test and analyze. To date, approaches for predicting which code artifacts are vulnerable have utilized a binary classification of code as vulnerable or not vulnerable. To better understand the strengths and weaknesses of vulnerability prediction approaches, vulnerability datasets with classification and severity data are needed. The goal of this paper is to help researchers and practitioners make security effort prioritization decisions by evaluating which classifications and severities of vulnerabilities are on an attack surface approximated using crash dump stack traces. In this work, we use crash dump stack traces to approximate the attack surface of Mozilla Firefox. We then generate a dataset of 271 vulnerable files in Firefox, classified using the Common Weakness Enumeration (CWE) system. We use these files as an oracle for the evaluation of the attack surface generated using crash data. In the Firefox vulnerability dataset, 14 different classifications of vulnerabilities appeared at least once. In our study, 85.3%
of vulnerable files were on the attack surface generated using crash data. We found no difference between the severity of vulnerabilities found on the attack surface generated using crash data and vulnerabilities not occurring on the attack surface. Additionally, we discuss lessons learned during the development of this vulnerability dataset.
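A minimal sketch of the two checks described above, coverage of vulnerable files by the crash-derived attack surface and a comparison of severities on versus off the surface, using invented toy data:

```python
from scipy.stats import mannwhitneyu

# Toy data (invented): the crash-derived attack surface and a severity score per vulnerable file.
attack_surface = {"dom/document.cpp", "js/ion.cpp", "media/decoder.cpp"}
vuln_severity = {
    "dom/document.cpp": 8.1,
    "js/ion.cpp": 6.5,
    "media/decoder.cpp": 9.0,
    "layout/frame.cpp": 7.2,
    "editor/selection.cpp": 5.4,
}

on_surface = [s for f, s in vuln_severity.items() if f in attack_surface]
off_surface = [s for f, s in vuln_severity.items() if f not in attack_surface]
print(f"coverage: {len(on_surface) / len(vuln_severity):.1%}")

# Nonparametric comparison of severities on vs. off the attack surface.
stat, p = mannwhitneyu(on_surface, off_surface)
print(f"Mann-Whitney U p-value: {p:.3f}")
```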

2017-04-10
Nirav Ajmeri, Chung-Wei Hang, Simon D. Parsons, Munindar P. Singh.  2017.  Aragorn: Eliciting and Maintaining Secure Service Policies. IEEE Computer. 50:1–8.

Services today are configured through policies that capture expected behaviors. However, because of subtle and changing stakeholder requirements, producing and maintaining policies is nontrivial. Policy errors are surprisingly common and cause avoidable security vulnerabilities.

We propose Aragorn, an approach that applies formal argumentation to produce policies that balance stakeholder concerns. We demonstrate empirically that, compared to the traditional approach for specifying policies, Aragorn performs (1) better on coverage, correctness, and quality; (2) equally well on learnability, effort/coverage, and difficulty; and (3) slightly worse on time and effort needed. Thus, Aragorn demonstrates the potential for capturing policy rationales as arguments.

To appear

Nirav Ajmeri, Hui Guo, Pradeep K. Murukannaiah, Munindar P. Singh.  2017.  Arnor: Modeling Social Intelligence via Norms to Engineer Privacy-Aware Personal Agents. :1–9.

We seek to address the challenge of engineering socially intelligent personal agents that are privacy-aware. We propose Arnor, a method, including a metamodel, based on social constructs. Arnor incorporates social norms and goes beyond existing agent-oriented software engineering (AOSE) methods by systematically capturing how a personal agent’s actions influence the social experience it delivers. We conduct two empirical studies to evaluate Arnor. First, via a multiphase developer study, we show that Arnor simplifies application development. Second, via simulation experiments, we show that Arnor provides an improved privacy-preserving social experience to end users compared to personal agents engineered using a traditional AOSE method.

2017-01-05
Aiping Xiong, Robert W. Proctor, Weining Yang, Ninghui Li.  2017.  Is Domain Highlighting Actually Helpful in Identifying Phishing Webpages? Human Factors: The Journal of the Human Factors and Ergonomics Society.

Objective: To evaluate the effectiveness of domain highlighting in helping users identify whether webpages are legitimate or spurious.

Background: As a component of the URL, a domain name can be overlooked. Consequently, browsers highlight the domain name to help users identify which website they are visiting. Nevertheless, few studies have assessed the effectiveness of domain highlighting, and the only formal study confounded highlighting with instructions to look at the address bar. 

Method: We conducted two phishing detection experiments. Experiment 1 was run online: Participants judged the legitimacy of webpages in two phases. In phase one, participants were to judge the legitimacy based on any information on the webpage, whereas in phase two they were to focus on the address bar. Whether the domain was highlighted was also varied.  Experiment 2 was conducted similarly but with participants in a laboratory setting, which allowed tracking of fixations.

Results: Participants differentiated the legitimate and fraudulent webpages better than chance. There was some benefit of attending to the address bar, but domain highlighting did not provide effective protection against phishing attacks. Analysis of eye-gaze fixation measures was in agreement with the task performance, but heat-map results revealed that participants’ visual attention was attracted by the highlighted domains.

Conclusion: Failure to detect many fraudulent webpages even when the domain was highlighted implies that users lacked knowledge of webpage security cues or how to use those cues.

2017-06-23
Munindar P. Singh, Amit K. Chopra.  2017.  The Internet of Things and Multiagent Systems: Decentralized Intelligence in Distributed Computing. Proceedings of the 37th IEEE International Conference on Distributed Computing Systems (ICDCS). :1738–1747.

Traditionally, distributed computing concentrates on computation understood at the level of information exchange and sets aside human and organizational concerns as largely to be handled in an ad hoc manner.  Increasingly, however, distributed applications involve multiple loci of autonomy.  Research in multiagent systems (MAS) addresses autonomy by drawing on concepts and techniques from artificial intelligence.  However, MAS research generally lacks an adequate understanding of modern distributed computing.

In this Blue Sky paper, we envision decentralized multiagent systems as a way to place decentralized intelligence in distributed computing, specifically, by supporting computation at the level of social meanings.  We motivate our proposals for research in the context of the Internet of Things (IoT), which has become a major thrust in distributed computing.  From the IoT's representative applications, we abstract out the major challenges of relevance to decentralized intelligence.  These include the heterogeneity of IoT components; asynchronous and delay-tolerant communication and decoupled enactment; and multiple stakeholders with subtle requirements for governance, incorporating resource usage, cooperation, and privacy.  The IoT yields high-impact problems that require solutions that go beyond traditional ways of thinking.

We conclude with highlights of some possible research directions in decentralized MAS, including programming models; interaction-oriented software engineering; and what we term enlightened governance.

Blue Sky Thinking Track

Thomas Christopher King, Akın Günay, Amit K. Chopra, Munindar P. Singh.  2017.  Tosca: Operationalizing Commitments Over Information Protocols. Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI). :1–9.

The notion of commitment is widely studied as a high-level abstraction for modeling multiagent interaction.  An important challenge is supporting flexible decentralized enactments of commitment specifications.  In this paper, we combine recent advances on specifying commitments and information protocols.  Specifically, we contribute Tosca, a technique for automatically synthesizing information protocols from commitment specifications. Our main result is that the synthesized protocols support commitment alignment, which is the idea that agents must make compatible inferences about their commitments despite decentralization.

2017-01-09
Ricard López Fogués, Pradeep K. Murukannaiah, Jose M. Such, Munindar P. Singh.  2017.  Understanding Sharing Policies in Multiparty Scenarios: Incorporating Context, Preferences, and Arguments into Decision Making. ACM Transactions on Computer-Human Interaction.

Social network services enable users to conveniently share personal information.  Often, the information shared concerns other people, especially other members of the social network service.  In such situations, two or more people can have conflicting privacy preferences; thus, an appropriate sharing policy may not be apparent. We identify such situations as multiuser privacy scenarios. Current approaches propose finding a sharing policy through preference aggregation.  However, studies suggest that users feel more confident in their decisions regarding sharing when they know the reasons behind each other's preferences.  The goals of this paper are (1) understanding how people decide the appropriate sharing policy in multiuser scenarios where arguments are employed, and (2) developing a computational model to predict an appropriate sharing policy for a given scenario. We report on a study that involved a survey of 988 Amazon MTurk users about a variety of multiuser scenarios and the optimal sharing policy for each scenario.  Our evaluation of the participants' responses reveals that contextual factors, user preferences, and arguments influence the optimal sharing policy in a multiuser scenario.  We develop and evaluate an inference model that predicts the optimal sharing policy given the three types of features.  We analyze the predictions of our inference model to uncover potential scenario types that lead to incorrect predictions, and to enhance our understanding of when multiuser scenarios are more or less prone to dispute.
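A minimal sketch of an inference model over the three feature types identified above (context, preferences, arguments); the feature names, values, policies, and choice of learner are assumptions for illustration, not the paper's actual model:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

# Invented scenario features of the three types (context, preferences, arguments)
# and invented sharing-policy labels; a decision tree stands in for the paper's model.
scenarios = [
    {"context": "party", "relationship": "friends", "pref_conflict": "yes", "argument": "sensitive"},
    {"context": "work", "relationship": "colleagues", "pref_conflict": "no", "argument": "none"},
    {"context": "party", "relationship": "family", "pref_conflict": "yes", "argument": "consequence"},
    {"context": "vacation", "relationship": "friends", "pref_conflict": "no", "argument": "none"},
]
policies = ["no-share", "share-all", "share-common-friends", "share-all"]

vec = DictVectorizer()
X = vec.fit_transform(scenarios)
model = DecisionTreeClassifier(random_state=0).fit(X, policies)

new_scenario = {"context": "work", "relationship": "friends", "pref_conflict": "yes", "argument": "sensitive"}
print(model.predict(vec.transform([new_scenario]))[0])
```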

 

To appear

2017-07-06
Morgan Burcham, Mahran Al-Zyoud, Jeffrey C. Carver, Mohammed Alsaleh, Hongying Du, Fida Gilani, Jun Jiang, Akond Rahman, Özgür Kafalı, Ehab Al-Shaer et al.  2017.  Characterizing Scientific Reporting in Security Literature: An Analysis of ACM CCS and IEEE S&P Papers. Proceedings of the Hot Topics in Science of Security: Symposium and Bootcamp. :13–23.

Scientific advancement is fueled by solid fundamental research, followed by replication, meta-analysis, and theory building. To support such advancement, researchers and government agencies have been working towards a "science of security". As in other sciences, security science requires high-quality fundamental research addressing important problems and reporting approaches that capture the information necessary for replication, meta-analysis, and theory building. The goal of this paper is to aid security researchers in establishing a baseline of the state of scientific reporting in security through an analysis of indicators of scientific research as reported in top security conferences, specifically the 2015 ACM CCS and 2016 IEEE S&P proceedings. To conduct this analysis, we employed a series of rubrics to analyze the completeness of information reported in papers relative to the type of evaluation used (e.g. empirical study, proof, discussion). Our findings indicated some important information is often missing from papers, including explicit documentation of research objectives and the threats to validity. Our findings show a relatively small number of replications reported in the literature. We hope that this initial analysis will serve as a baseline against which we can measure the advancement of the science of security.

2017-04-11
Akond Rahman, Priysha Pradhan, Asif Partho, Laurie Williams.  2017.  Predicting Android Application Security and Privacy Risk With Static Code Metrics. 4th IEEE/ACM International Conference on Mobile Software Engineering and Systems.

Android applications pose security and privacy risks for end-users. These risks are often quantified by performing dynamic analysis and permission analysis of Android applications after release. Predicting the security and privacy risk associated with Android applications at early stages of application development, e.g., when the developers are writing the application's code, might help Android application developers release applications with less security and privacy risk to end-users. The goal of this paper is to aid Android application developers in assessing the security and privacy risk associated with Android applications by using static code metrics as predictors. In our paper, we consider the security and privacy risk of an Android application as how susceptible the application is to leaking private information of end-users and to releasing vulnerabilities. We investigate how effectively static code metrics extracted from the source code of Android applications can be used to predict the security and privacy risk of Android applications. We collected 21 static code metrics from 1,407 Android applications and used the collected static code metrics to predict the security and privacy risk of the applications. As the oracle of security and privacy risk, we used Androrisk, a tool that quantifies the security and privacy risk of an Android application using analysis of Android permissions and dynamic analysis. To accomplish our goal, we used statistical learners such as the radial-based support vector machine (r-SVM). For r-SVM, we observe a precision of 0.83. Findings from our paper suggest that, with proper selection of static code metrics, r-SVM can be used effectively to predict the security and privacy risk of Android applications.
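A minimal sketch of this setup with scikit-learn, using randomly generated stand-in data in place of the 21 static code metrics and Androrisk-derived labels described above:

```python
import numpy as np
from sklearn.metrics import precision_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Random stand-in data: one row per app, one column per static code metric, and a
# binary "risky" label playing the role of the Androrisk-derived oracle.
rng = np.random.default_rng(0)
X = rng.normal(size=(1407, 21))
y = (X[:, 0] + X[:, 3] + rng.normal(size=1407) > 0).astype(int)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # radial-basis-function SVM
pred = cross_val_predict(model, X, y, cv=10)
print(f"precision: {precision_score(y, pred):.2f}")
```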

[Anonymous].  2017.  Which Factors Influence Practitioners’ Usage of Build Automation Tools? 3rd International Workshop on Rapid Continuous Software Engineering (RCoSE) 2017.

Even though build automation tools help to reduce errors and enable rapid releases of software changes, use of build automation tools is not widespread amongst software practitioners. Software practitioners perceive build automation tools as complex, which can hinder the adoption of these tools. How well founded such a perception is can be determined by systematically exploring the adoption factors that influence usage of build automation tools. The goal of this paper is to aid software practitioners in increasing their usage of build automation tools by identifying the adoption factors that influence usage of these tools. We conducted a survey to empirically identify the adoption factors that influence usage of build automation tools. We obtained survey responses from 268 software professionals who work at NestedApps or Red Hat, as well as contributors to open source software. We observe that adoption factors related to complexity do not have the strongest influence on usage of build automation tools. Instead, we observe compatibility-related adoption factors, such as adjustment with existing tools and adjustment with practitioners' existing workflows, to have a more important influence on usage of build automation tools. Findings from our paper suggest that usage of build automation tools might increase if build automation tools fit well with practitioners' existing workflow and tool usage, and if usage of build automation tools is made more visible among practitioners' peers.

2017-10-09
Karthik Sheshadri, Nirav Ajmeri, Jessica Staddon.  2017.  No (Privacy) News is Good News: An Analysis of New York Times and Guardian Privacy News from 2010 to 2016. Proceedings of the 15th Annual Conference on Privacy, Security and Trust (PST). :1-12.
2017-03-31
Rui Shu, Xiaohui Gu, William Enck.  2017.  A Study of Security Vulnerabilities on Docker Hub. Proceedings of the ACM Conference on Data and Application Security and Privacy (CODASPY).

Docker containers have recently become a popular approach to provision multiple applications over shared physical hosts in a more lightweight fashion than traditional virtual machines. This popularity has led to the creation of the Docker Hub registry, which distributes a large number of official and community images. In this paper, we study the state of security vulnerabilities in Docker Hub images. We create a scalable Docker image vulnerability analysis (DIVA) framework that automatically discovers, downloads, and analyzes both official and community images on Docker Hub. Using our framework, we have studied 356,218 images and made the following findings: (1) both official and community images contain more than 180 vulnerabilities on average when considering all versions; (2) many images have not been updated for hundreds of days; and (3) vulnerabilities commonly propagate from parent images to child images. These findings demonstrate a strong need for more automated and systematic methods of applying security updates to Docker images, and our current Docker image analysis framework provides a good foundation for such automated security updates.
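A minimal sketch of the aggregation step such a framework might perform; list_images() and its contents are placeholders standing in for DIVA's discovery, download, and scanning stages, not real APIs or data:

```python
from statistics import mean

# Placeholder for DIVA's discovery/download/scan stages: each record is an image,
# its parent image, and the set of vulnerability identifiers found in it (invented).
def list_images():
    return [
        {"name": "library/base:1.0", "parent": None, "cves": {"CVE-A", "CVE-B"}},
        {"name": "acme/webapp:1.0", "parent": "library/base:1.0", "cves": {"CVE-A", "CVE-B", "CVE-C"}},
        {"name": "acme/worker:2.1", "parent": "library/base:1.0", "cves": {"CVE-B"}},
    ]

images = list_images()
by_name = {img["name"]: img for img in images}

print("mean vulnerabilities per image:", mean(len(img["cves"]) for img in images))

# Vulnerabilities shared with the parent image indicate propagation down the image lineage.
for img in images:
    parent = by_name.get(img["parent"])
    if parent:
        inherited = img["cves"] & parent["cves"]
        print(f"{img['name']} inherits {len(inherited)} vulnerabilities from {parent['name']}")
```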

2016-03-29
Luis G. Nardin, Tina Balke-Visser, Nirav Ajmeri, Anup K. Kalia, Jaime S. Sichman, Munindar P. Singh.  2016.  Classifying Sanctions and Designing a Conceptual Sanctioning Process for Socio-Technical Systems. The Knowledge Engineering Review. 31:1–25.

We understand a socio-technical system (STS) as a cyber-physical system in which two or more autonomous parties interact via or about technical elements, including the parties’ resources and actions. As information technology begins to pervade every corner of human life, STSs are becoming ever more common, and the challenge of governing STSs is becoming increasingly important. We advocate a normative basis for governance, wherein norms represent the standards of correct behaviour that each party in an STS expects from others. A major benefit of focussing on norms is that they provide a socially realistic view of interaction among autonomous parties that abstracts low-level implementation details. Overlaid on norms is the notion of a sanction as a negative or positive reaction to potentially any violation of or compliance with an expectation. Although norms have been well studied as regards governance for STSs, sanctions have not. Our understanding and usage of norms is inadequate for the purposes of governance unless we incorporate a comprehensive representation of sanctions.

2016-04-10
2015-12-16
Amit K. Chopra, Munindar P. Singh.  2016.  From Social Machines to Social Protocols: Software Engineering Foundations for Sociotechnical Systems. Proceedings of the 25th International World Wide Web Conference.

The overarching vision of social machines is to facilitate social processes by having computers provide administrative support. We conceive of a social machine as a sociotechnical system (STS): a software-supported system in which autonomous principals such as humans and organizations interact to exchange information and services.  Existing approaches for social machines emphasize the technical aspects and inadequately support the meanings of social processes, leaving them informally realized in human interactions. We posit that a fundamental rethinking is needed to incorporate accountability, essential for addressing the openness of the Web and the autonomy of its principals.

We introduce Interaction-Oriented Software Engineering (IOSE) as a paradigm expressly suited to capturing the social basis of STSs. Motivated by promoting openness and autonomy, IOSE focuses not on implementation but on social protocols, specifying how social relationships, characterizing the accountability of the concerned parties, progress as they interact.  Motivated by providing computational support, IOSE adopts the accountability representation to capture the meaning of a social machine's states and transitions.

We demonstrate IOSE via examples drawn from healthcare.  We reinterpret the classical software engineering (SE) principles for the STS setting and show how IOSE is better suited than traditional software engineering for supporting social processes.  The contribution of this paper is a new paradigm for STSs, evaluated via conceptual analysis.