Biblio

Filters: Author is Nirav Ajmeri
Shubham Goyal, Nirav Ajmeri, Munindar P. Singh.  2019.  Applying Norms and Sanctions to Promote Cybersecurity Hygiene. Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS). :1–3.

Many cybersecurity breaches occur due to users not following security regulations, chief among them regulations pertaining to what might be termed hygiene, including applying software patches to operating systems, updating software applications, and maintaining strong passwords.

We capture cybersecurity expectations on users as norms. We empirically investigate the effectiveness of sanctioning mechanisms in promoting compliance with those norms, as well as the detrimental effect of sanctions on users' ability to complete their work. We do so by developing a game that emulates the decision making of workers in a research lab.

We find that relative to group sanctions, individual sanctions are more effective in achieving compliance and less detrimental to the ability of users to complete their work.
Our findings have implications for workforce training in cybersecurity.

Extended abstract
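
To make the individual-versus-group comparison concrete, here is a minimal Python sketch of a sanction simulation. It is our own illustration, not the authors' game, and every parameter (propensities, detection rate, costs) is an assumption chosen only to show the mechanics.

```python
import random

# Illustrative sketch (not the authors' game): agents divide time between
# work and security hygiene; a detected violation triggers a sanction on
# the violator alone or on the whole group. Sanctions raise future
# compliance but disrupt work. All parameters are assumptions.

def simulate(regime="individual", n_agents=30, steps=200, seed=0):
    rng = random.Random(seed)
    propensity = [0.4] * n_agents      # probability of complying each step
    work_done = 0.0
    compliant_steps = 0
    for _ in range(steps):
        violators = []
        for i in range(n_agents):
            if rng.random() < propensity[i]:
                compliant_steps += 1
                work_done += 0.8       # hygiene consumes some work time
            else:
                violators.append(i)
                work_done += 1.0
        for i in violators:
            if rng.random() < 0.5:     # violation detected by sanctioner
                targets = [i] if regime == "individual" else range(n_agents)
                for j in targets:
                    propensity[j] = min(1.0, propensity[j] + 0.05)
                    work_done -= 0.3   # sanction disrupts work
    return compliant_steps / (n_agents * steps), work_done

for regime in ("individual", "group"):
    compliance, work = simulate(regime)
    print(f"{regime:>10}: compliance={compliance:.2f}, work={work:.0f}")
```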

Nirav Ajmeri, Chung-Wei Hang, Simon D. Parsons, Munindar P. Singh.  2017.  Aragorn: Eliciting and Maintaining Secure Service Policies. IEEE Computer. 50:1–8.

Services today are configured through policies that capture expected behaviors. However, because of subtle and changing stakeholder requirements, producing and maintaining policies is nontrivial. Policy errors are surprisingly common and cause avoidable security vulnerabilities.

We propose Aragorn, an approach that applies formal argumentation to produce policies that balance stakeholder concerns. We demonstrate empirically that, compared to the traditional approach for specifying policies, Aragorn performs (1) better on coverage, correctness, and quality; (2) equally well on learnability and difficulty; and (3) slightly worse on time and effort needed. Thus, Aragorn demonstrates the potential for capturing policy rationales as arguments.
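
As a toy rendering of the formal argumentation Aragorn builds on, the Python sketch below computes the grounded extension of a Dung-style argumentation framework, i.e., the arguments that can be defended against every attack. The framework and the example policy debate are our assumptions; Aragorn's actual machinery is richer.

```python
# Toy Dung-style argumentation framework: compute the grounded extension,
# the least fixed point of the "defended arguments" operator.

def grounded_extension(arguments, attacks):
    """attacks: a set of (attacker, target) pairs."""
    defended = set()
    while True:
        # a is acceptable if each of its attackers b is counter-attacked
        # by some already-defended argument d.
        new = {a for a in arguments
               if all(any((d, b) in attacks for d in defended)
                      for b in arguments if (b, a) in attacks)}
        if new == defended:
            return defended
        defended = new

# Hypothetical policy debate: a1 "open SSH to all hosts" is attacked by
# a2 "exposes brute-force risk", which is attacked by a3 "key-only login".
arguments = {"a1", "a2", "a3"}
attacks = {("a2", "a1"), ("a3", "a2")}
print(grounded_extension(arguments, attacks))  # {'a1', 'a3'}: a3 reinstates a1
```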

To appear

Nirav Ajmeri, Hui Guo, Pradeep K. Murukannaiah, Munindar P. Singh.  2017.  Arnor: Modeling Social Intelligence via Norms to Engineer Privacy-Aware Personal Agents. :1–9.

We seek to address the challenge of engineering socially intelligent personal agents that are privacy-aware. We propose Arnor, a method that includes a metamodel based on social constructs. Arnor incorporates social norms and goes beyond existing agent-oriented software engineering (AOSE) methods by systematically capturing how a personal agent’s actions influence the social experience it delivers. We conduct two empirical studies to evaluate Arnor. First, via a multiphase developer study, we show that Arnor simplifies application development. Second, via simulation experiments, we show that Arnor delivers a better privacy-preserving social experience to end users than personal agents engineered using a traditional AOSE method.

Ozgur Kafali, Nirav Ajmeri, Munindar P. Singh.  2017.  Kont: Computing Tradeoffs in Normative Multiagent Systems. Proceedings of the 31st Conference on Artificial Intelligence (AAAI).
Karthik Sheshadari, Nirav Ajmeri, Jessica Staddon.  2017.  No (Privacy) News is Good News: An Analysis of New York Times and Guardian Privacy News from 2010 to 2016. Proceedings of the 15th Annual Conference on Privacy, Security and Trust (PST). :1–12.
Luis G. Nardin, Tina Balke-Visser, Nirav Ajmeri, Anup K. Kalia, Jaime S. Sichman, Munindar P. Singh.  2016.  Classifying Sanctions and Designing a Conceptual Sanctioning Process for Socio-Technical Systems. The Knowledge Engineering Review. 31:1–25.

We understand a socio-technical system (STS) as a cyber-physical system in which two or more autonomous parties interact via or about technical elements, including the parties’ resources and actions. As information technology begins to pervade every corner of human life, STSs are becoming ever more common, and the challenge of governing STSs is becoming increasingly important. We advocate a normative basis for governance, wherein norms represent the standards of correct behaviour that each party in an STS expects from others. A major benefit of focussing on norms is that they provide a socially realistic view of interaction among autonomous parties that abstracts low-level implementation details. Overlaid on norms is the notion of a sanction as a negative or positive reaction to potentially any violation of or compliance with an expectation. Although norms have been well studied as regards governance for STSs, sanctions have not. Our understanding and usage of norms is inadequate for the purposes of governance unless we incorporate a comprehensive representation of sanctions.
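
The definition of a sanction in this abstract lends itself to a compact data structure. The Python sketch below is purely illustrative; the field names are our assumptions, not the paper's typology.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative encoding of the abstract's definition: a sanction is a
# positive or negative reaction to a violation of, or compliance with,
# a normative expectation. Field names are our assumptions.

class Polarity(Enum):
    POSITIVE = "positive"   # reward
    NEGATIVE = "negative"   # punishment

class Trigger(Enum):
    VIOLATION = "violation"
    COMPLIANCE = "compliance"

@dataclass(frozen=True)
class Sanction:
    sanctioner: str
    target: str
    polarity: Polarity
    trigger: Trigger
    norm: str               # the expectation being reacted to

fine = Sanction("lab_director", "student", Polarity.NEGATIVE,
                Trigger.VIOLATION, "apply security patches weekly")
print(fine)
```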

Jiaming Jiang, Nirav Ajmeri, Rada Y. Chirkova, Jon Doyle, Munindar P. Singh.  2016.  Expressing and Reasoning about Conflicting Norms in Cybersecurity: Poster. Proceedings of the International Symposium and Bootcamp on the Science of Security (HotSoS). :1–2.

Secure collaboration requires the collaborating parties to apply the right policies for their interaction. We adopt a notion of conditional, directed norms as a way to capture the standards of correctness for a collaboration. How can we handle conflicting norms? We describe an approach based on knowledge of what norm dominates what norm in what situation. Our approach adapts answer-set programming to compute stable sets of norms with respect to their computed conflicts and dominance. It assesses agent compliance with respect to those stable sets. We demonstrate our approach on a healthcare scenario.
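
The stable-set idea can be pictured procedurally. The Python sketch below is our approximation of the general approach; the paper itself encodes conflicts and dominance in answer-set programming, and the norm names here are hypothetical.

```python
# Dominance-based conflict resolution among norms: from each conflicting
# pair, discard any norm dominated by the other in the current situation.
# A procedural stand-in for the paper's answer-set programs.

def stable_norms(norms, conflicts, dominates):
    """norms: norm ids; conflicts: frozenset pairs of conflicting norms;
    dominates: (stronger, weaker) pairs holding in the current situation."""
    active = set(norms)
    for pair in conflicts:
        a, b = tuple(pair)
        if (a, b) in dominates:
            active.discard(b)
        elif (b, a) in dominates:
            active.discard(a)
    return active

# Hypothetical healthcare scenario: in an emergency, the norm to share
# records with responders dominates the default confidentiality norm.
norms = {"keep_records_confidential", "share_records_with_responders"}
conflicts = {frozenset(norms)}
dominates = {("share_records_with_responders", "keep_records_confidential")}
print(stable_norms(norms, conflicts, dominates))
```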

Nirav Ajmeri, Jiaming Jiang, Rada Y. Chirkova, Jon Doyle, Munindar P. Singh.  2016.  Coco: Runtime Reasoning about Conflicting Commitments. Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI). :1–7.

To interact effectively, agents must enter into commitments. What should an agent do when these commitments conflict? We describe Coco, an approach for reasoning about which specific commitments apply to specific parties in light of general types of commitments, specific circumstances, and dominance relations among specific commitments. Coco adapts answer-set programming to identify a maximal set of nondominated commitments. It provides a modeling language and tool geared to support practical applications.

Ozgur Kafali, Nirav Ajmeri, Munindar P. Singh.  2016.  Formal Understanding of Tradeoffs among Liveness and Safety Requirements. Proceedings of the 3rd International Workshop on Artificial Intelligence for Requirements Engineering (AIRE). :17–18.
Ozgur Kafali, Nirav Ajmeri, Munindar P. Singh.  2016.  Normative Requirements in Sociotechnical Systems. Proceedings of the 9th International Workshop on Requirements Engineering and Law (RELAW). :259–260.
Ozgur Kafali, Nirav Ajmeri, Munindar P. Singh.  2016.  Revani: Revising and Verifying Normative Specifications for Privacy. IEEE Intelligent Systems.

Privacy remains a major challenge today partly because it brings together social and technical considerations. Yet, current software engineering focuses only on the technical aspects. In contrast, our approach, Revani, understands privacy from the standpoint of sociotechnical systems (STSs), with particular attention on the social elements of STSs. We specify STSs via a combination of technical mechanisms and social norms founded on accountability.

Revani provides a way to formally represent mechanisms and norms, and applies model checking to verify whether specified mechanisms and norms would satisfy the requirements of the stakeholders. Additionally, Revani provides a set of design patterns and a revision tool to update an STS specification as necessary. We demonstrate the working of Revani on a healthcare emergency use case pertaining to disasters.
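
At toy scale, the verification step can be pictured as exhaustive state-space search. The Python sketch below is our illustration, not Revani's formal representation: a guarded share mechanism is checked against a norm that records be shared only after consent or in an emergency.

```python
from collections import deque

# Illustrative model check: explore every reachable state of a tiny
# sociotechnical model and confirm a normative property holds throughout.
# States are (consented, emergency, shared); actions are our assumptions.

ACTIONS = {
    "consent":   lambda s: (True, s[1], s[2]),
    "emergency": lambda s: (s[0], True, s[2]),
    # The mechanism guards sharing: it happens only given consent/emergency.
    "share":     lambda s: (s[0], s[1], True) if (s[0] or s[1]) else s,
}

def violates(state):
    consented, emergency, shared = state
    return shared and not (consented or emergency)

def check(initial):
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if violates(state):
            return f"violation in state {state}"
        for act in ACTIONS.values():
            succ = act(state)
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return "property holds in all reachable states"

print(check((False, False, False)))
```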

Hongying Du, Bennett Y. Narron, Nirav Ajmeri, Emily Berglund, Jon Doyle, Munindar P. Singh.  2015.  ENGMAS – Understanding Sanction under Variable Observability in a Secure Environment. Proceedings of the 2nd International Workshop on Agents and CyberSecurity (ACySE). :1–8.

Norms are a promising basis for governance in secure, collaborative environments: systems in which multiple principals interact. Yet, many aspects of norm governance remain poorly understood, inhibiting adoption in real-life collaborative systems. This work focuses on the combined effects of sanction and observability of the sanctioner in a secure, collaborative environment. We introduce ENGMAS (Exploratory Norm-Governed MultiAgent Simulation), a multiagent simulation of students performing research within a university lab setting. ENGMAS enables us to explore the combined effects of sanction (group or individual) and the sanctioner's variable observability on system resilience and liveness. The simulation consists of agents maintaining "compliance" with enforced security norms while also remaining "motivated" as researchers. The results show that, with lower observability, agents tend not to comply with security policies and eventually have to leave the organization. Group sanction gives the agents greater motivation to comply with security policies and is cost-effective compared to individual sanction in terms of sanction costs.

Hongying Du, Bennett Y. Narron, Nirav Ajmeri, Emily Berglund, Jon Doyle, Munindar P. Singh.  2015.  Understanding Sanction under Variable Observability in a Secure, Collaborative Environment. Proceedings of the International Symposium and Bootcamp on the Science of Security (HotSoS). :1–10.

Norms are a promising basis for governance in secure, collaborative environments: systems in which multiple principals interact. Yet, many aspects of norm governance remain poorly understood, inhibiting adoption in real-life collaborative systems. This work focuses on the combined effects of sanction and the observability of the sanctioner in a secure, collaborative environment. We present CARLOS, a multiagent simulation of graduate students performing research within a university lab setting, to explore these phenomena. The simulation consists of agents maintaining "compliance" with enforced security norms while remaining "motivated" as researchers. We hypothesize that (1) delayed observability of the environment would lead to greater motivation of agents to complete research tasks than immediate observability and (2) sanctioning a group for a violation would lead to greater compliance with security norms than sanctioning an individual. We find that only the latter hypothesis is supported. Group sanction is an interesting topic for future research as a means of norm governance that yields significant compliance with enforced security policy at a lower cost. Our ultimate contribution is to apply social simulation as a way to explore environmental properties and policies and to evaluate key transitions in outcome, as a basis for guiding further and more demanding empirical research.