Bibliography
Many cybersecurity breaches occur due to users not following security regulations, chief among them regulations pertaining to what might be termed hygiene---including applying software patches to operating systems, updating software applications, and maintaining strong passwords.
We capture cybersecurity expectations of users as norms. We empirically investigate the effectiveness of sanctioning mechanisms in promoting compliance with those norms, as well as the detrimental effect of sanctions on users' ability to complete their work. We do so by developing a game that emulates the decision making of workers in a research lab.
We find that, relative to group sanctions, individual sanctions are more effective in achieving compliance and less detrimental to users' ability to complete their work.
Our findings have implications for workforce training in cybersecurity.
Extended abstract
Services today are configured through policies that capture expected behaviors. However, because of subtle and changing stakeholder requirements, producing and maintaining policies is nontrivial. Policy errors are surprisingly common and cause avoidable security vulnerabilities.
We propose Aragorn, an approach that applies formal argumentation to produce policies that balance stakeholder concerns. We demonstrate empirically that, compared to the traditional approach for specifying policies, Aragorn performs (1) better on coverage, correctness, and quality; (2) equally well on learnability and difficulty; and (3) slightly worse on time and effort needed. Thus, Aragorn demonstrates the potential for capturing policy rationales as arguments.
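The abstract does not spell out Aragorn's formal machinery; as a rough illustration of the kind of abstract argumentation such an approach builds on, the following Python sketch computes the grounded extension of a Dung-style argumentation framework. The policy arguments and attacks are invented for illustration, not taken from Aragorn.

# Minimal sketch: grounded extension of a Dung-style abstract
# argumentation framework. Illustrative only; the arguments and
# attacks below are invented examples, not Aragorn's own model.

def grounded_extension(arguments, attacks):
    """Iteratively accept arguments whose attackers are all defeated."""
    accepted = set()
    defeated = set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:  # every attacker already defeated
                accepted.add(a)
                # everything the newly accepted argument attacks is defeated
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

# Toy policy arguments: "allow" is attacked by "deny", which is
# attacked by the unattacked "exception", so "allow" is reinstated.
args = {"allow", "deny", "exception"}
atts = {("deny", "allow"), ("exception", "deny")}
print(grounded_extension(args, atts))  # {'exception', 'allow'}

Here the unattacked "exception" argument defeats "deny", reinstating "allow"; a policy-authoring approach can use reasoning of this shape to decide which stakeholder concerns prevail.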
To appear
We seek to address the challenge of engineering socially intelligent personal agents that are privacy-aware. We propose Arnor, a method for engineering such agents, including a metamodel based on social constructs. Arnor incorporates social norms and goes beyond existing agent-oriented software engineering (AOSE) methods by systematically capturing how a personal agent’s actions influence the social experience it delivers. We conduct two empirical studies to evaluate Arnor. First, via a multiphase developer study, we show that Arnor simplifies application development. Second, via simulation experiments, we show that Arnor provides a better privacy-preserving social experience to end users than do personal agents engineered using a traditional AOSE method.
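The abstract does not detail Arnor's metamodel; as a purely hypothetical sketch of how social constructs of this kind might be represented (all class and field names below are invented, not Arnor's), consider:

# Hypothetical sketch of social constructs of the kind an AOSE
# metamodel might capture; names and fields are invented for
# illustration and are not Arnor's actual metamodel.
from dataclasses import dataclass, field

@dataclass
class Norm:
    subject: str     # party the norm is directed from
    target: str      # party the norm is directed to
    antecedent: str  # condition under which the norm applies
    consequent: str  # behavior the norm expects

@dataclass
class Agent:
    name: str
    norms: list[Norm] = field(default_factory=list)

    def violates(self, norm: Norm, action: str) -> bool:
        # Placeholder check: an action violates a norm if it negates
        # the consequent (real normative reasoning is richer).
        return action == f"not {norm.consequent}"

alice = Agent("alice", [Norm("alice", "bob", "shares photo", "notify bob")])
print(alice.violates(alice.norms[0], "not notify bob"))  # True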
We understand a socio-technical system (STS) as a cyber-physical system in which two or more autonomous parties interact via or about technical elements, including the parties’ resources and actions. As information technology begins to pervade every corner of human life, STSs are becoming ever more common, and the challenge of governing STSs is becoming increasingly important. We advocate a normative basis for governance, wherein norms represent the standards of correct behaviour that each party in an STS expects from others. A major benefit of focussing on norms is that they provide a socially realistic view of interaction among autonomous parties that abstracts low-level implementation details. Overlaid on norms is the notion of a sanction: a negative or positive reaction to a violation of, or compliance with, an expectation. Although norms have been well studied as regards governance for STSs, sanctions have not. Our understanding and usage of norms are inadequate for the purposes of governance unless we incorporate a comprehensive representation of sanctions.
Secure collaboration requires the collaborating parties to apply the right policies for their interaction. We adopt a notion of conditional, directed norms as a way to capture the standards of correctness for a collaboration. How can we handle conflicting norms? We describe an approach based on knowledge of what norm dominates what norm in what situation. Our approach adapts answer-set programming to compute stable sets of norms with respect to their computed conflicts and dominance. It assesses agent compliance with respect to those stable sets. We demonstrate our approach on a healthcare scenario.
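The paper's encoding uses answer-set programming; a rough Python analogue of the underlying generate-and-test idea (the norms, conflicts, and dominance relation below are invented) might look like:

# Rough Python analogue of the generate-and-test idea: enumerate
# conflict-free norm sets, largest first, in which every excluded
# norm is dominated by some included norm. All relations are
# invented examples, not the paper's ASP encoding.
from itertools import combinations

norms = ["confidentiality", "disclosure", "audit"]
conflicts = {("confidentiality", "disclosure")}  # unordered conflict pairs
dominates = {("disclosure", "confidentiality")}  # disclosure wins here

def conflict_free(ns):
    return not any((a, b) in conflicts or (b, a) in conflicts
                   for a, b in combinations(ns, 2))

def stable_sets():
    """Conflict-free sets where each excluded norm loses to an included one."""
    results = []
    for r in range(len(norms), -1, -1):
        for subset in combinations(norms, r):
            if not conflict_free(subset):
                continue
            excluded = set(norms) - set(subset)
            if all(any((inc, exc) in dominates for inc in subset)
                   for exc in excluded):
                results.append(set(subset))
    return results

print(stable_sets())  # [{'disclosure', 'audit'}]

Compliance assessment would then be relative to a computed stable set: an agent dropping "confidentiality" in this situation is compliant because "disclosure" dominates it.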
To interact effectively, agents must enter into commitments. What should an agent do when these commitments conflict? We describe Coco, an approach for reasoning about which specific commitments apply to specific parties in light of general types of commitments, specific circumstances, and dominance relations among specific commitments. Coco adapts answer-set programming to identify a maximal set of nondominated commitments. It provides a modeling language and tool geared to support practical applications.
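Coco's actual encoding likewise uses answer-set programming; a toy Python rendering of "keep the nondominated commitments" (the commitments and dominance relation are invented) is:

# Toy rendering of selecting nondominated commitments. Coco's real
# encoding is in ASP; the commitments and the dominance relation
# below are invented for illustration.

commitments = {"treat_patient", "protect_privacy", "file_report"}
# (a, b) means commitment a dominates commitment b in this situation
dominance = {("treat_patient", "protect_privacy")}

def nondominated(cs, dom):
    """Commitments not dominated by any other applicable commitment."""
    return {c for c in cs
            if not any((d, c) in dom for d in cs if d != c)}

print(nondominated(commitments, dominance))
# {'treat_patient', 'file_report'}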
Privacy remains a major challenge today partly because it brings together social and technical considerations. Yet, current software engineering focuses only on the technical aspects. In contrast, our approach, Revani, understands privacy from the standpoint of sociotechnical systems (STSs), with particular attention on the social elements of STSs. We specify STSs via a combination of technical mechanisms and social norms founded on accountability.
Revani provides a way to formally represent mechanisms and norms, and applies model checking to verify whether specified mechanisms and norms would satisfy the requirements of the stakeholders. Additionally, Revani provides a set of design patterns and a revision tool to update an STS specification as necessary. We demonstrate Revani on a healthcare emergency use case pertaining to disasters.
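The abstract does not show Revani's verification step; as a minimal sketch of the explicit-state model-checking idea, the following searches a tiny invented transition system for reachable states that violate a (likewise invented) notification norm:

# Minimal explicit-state model-checking sketch: breadth-first search
# for reachable "bad" states in which a norm is violated. The toy
# transition system and norm are invented; Revani's verification of
# mechanisms and norms is richer than this reachability check.
from collections import deque

transitions = {
    "idle":           ["emergency"],
    "emergency":      ["access_granted", "access_denied"],
    "access_granted": ["notified", "unnotified"],
    "access_denied":  ["idle"],
    "notified":       ["idle"],
    "unnotified":     [],  # access granted but patient never notified
}

def violates_norm(state):
    # Invented norm: emergency access must be followed by notification.
    return state == "unnotified"

def check(initial):
    """Return a violating reachable state, or None if the norm holds."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if violates_norm(state):
            return state
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

print(check("idle"))  # 'unnotified' -> the specification needs revision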
Norms are a promising basis for governance in secure, collaborative environments---systems in which multiple principals interact. Yet, many aspects of norm governance remain poorly understood, inhibiting adoption in real-life collaborative systems. This work focuses on the combined effects of sanction and the observability of the sanctioner in a secure, collaborative environment. We introduce ENGMAS (Exploratory Norm-Governed MultiAgent Simulation), a multiagent simulation of students performing research within a university lab setting. ENGMAS enables us to explore the combined effects of sanction (group or individual) and the sanctioner's variable observability on system resilience and liveness. The simulation consists of agents maintaining "compliance" with enforced security norms while also remaining "motivated" as researchers. The results show that, with lower observability, agents tend not to comply with security policies and eventually have to leave the organization. Group sanction gives agents a stronger motive to comply with security policies and is more cost-effective than individual sanction in terms of sanction costs.
Norms are a promising basis for governance in secure, collaborative environments---systems in which multiple principals interact. Yet, many aspects of norm governance remain poorly understood, inhibiting adoption in real-life collaborative systems. This work focuses on the combined effects of sanction and the observability of the sanctioner in a secure, collaborative environment. We present CARLOS, a multiagent simulation of graduate students performing research within a university lab setting, to explore these phenomena. The simulation consists of agents maintaining "compliance" with enforced security norms while remaining "motivated" as researchers. We hypothesize that (1) delayed observability of the environment would lead to greater motivation of agents to complete research tasks than immediate observability and (2) sanctioning a group for a violation would lead to greater compliance with security norms than sanctioning an individual. We find that only the latter hypothesis is supported. Group sanction is an interesting topic for future research as a means of norm governance that yields significant compliance with enforced security policy at lower cost. Our ultimate contribution is to apply social simulation as a way to explore environmental properties and policies and to evaluate key transitions in outcome, as a basis for guiding further and more demanding empirical research.
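Neither abstract reproduces the simulation model itself; the following toy sketch, with invented parameters and update rules, illustrates how group versus individual sanction can be compared on average compliance and motivation in this style of simulation:

# Toy sketch in the spirit of ENGMAS/CARLOS: agents trade off
# compliance against motivation under individual vs. group sanction.
# All parameters and update rules are invented for illustration and
# are not the papers' actual model.
import random

random.seed(1)  # reproducible toy run

class Agent:
    def __init__(self):
        self.compliance = 0.5  # probability of following the norm
        self.motivation = 1.0  # depleted by sanctions

def step(agents, group_sanction):
    violators = [a for a in agents if random.random() > a.compliance]
    if not violators:
        return
    # Group sanction punishes everyone; individual punishes violators only.
    targets = agents if group_sanction else violators
    for a in targets:
        a.motivation -= 0.1                             # sanction cost
        a.compliance = min(1.0, a.compliance + 0.05)    # deterrence effect

def run(group_sanction, rounds=50, n=10):
    agents = [Agent() for _ in range(n)]
    for _ in range(rounds):
        step(agents, group_sanction)
    return (sum(a.compliance for a in agents) / n,
            sum(a.motivation for a in agents) / n)

for mode in (False, True):
    compliance, motivation = run(group_sanction=mode)
    print("group" if mode else "individual",
          round(compliance, 2), round(motivation, 2))

In this invented dynamics, a group sanction punishes (and deters) every agent whenever anyone violates, so average compliance rises faster while motivation depletes faster; that compliance-versus-motivation trade-off is roughly what the two studies probe empirically.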