Biblio

Filters: Author is Munindar P. Singh
2017-01-09
Ricard López Fogués, Pradeep K. Murukannaiah, Jose M. Such, Munindar P. Singh.  2017.  Understanding Sharing Policies in Multiparty Scenarios: Incorporating Context, Preferences, and Arguments into Decision Making. ACM Transactions on Computer-Human Interaction.

Social network services enable users to conveniently share personal information.  Often, the information shared concerns other people, especially other members of the social network service.  In such situations, two or more people can have conflicting privacy preferences; thus, an appropriate sharing policy may not be apparent. We identify such situations as multiuser privacy scenarios. Current approaches propose finding a sharing policy through preference aggregation.  However, studies suggest that users feel more confident in their decisions regarding sharing when they know the reasons behind each other's preferences.  The goals of this paper are (1) understanding how people decide the appropriate sharing policy in multiuser scenarios where arguments are employed, and (2) developing a computational model to predict an appropriate sharing policy for a given scenario. We report on a study that involved a survey of 988 Amazon MTurk users about a variety of multiuser scenarios and the optimal sharing policy for each scenario.  Our evaluation of the participants' responses reveals that contextual factors, user preferences, and arguments influence the optimal sharing policy in a multiuser scenario.  We develop and evaluate an inference model that predicts the optimal sharing policy given the three types of features.  We analyze the predictions of our inference model to uncover potential scenario types that lead to incorrect predictions, and to enhance our understanding of when multiuser scenarios are more or less prone to dispute.

 

To appear
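
As a rough, hypothetical sketch of the prediction task this paper describes (not the authors' actual inference model), the Python fragment below encodes a scenario's contextual factors, user preferences, and arguments as categorical features and trains an off-the-shelf classifier to predict a sharing policy. All feature names, values, and labels are invented for illustration; in the paper's terms, each training row would come from a surveyed scenario paired with its participant-chosen optimal policy.

    # Hypothetical sketch of the prediction task (invented features and labels;
    # not the paper's actual model): encode context, preferences, and arguments
    # as categorical features, then train a standard classifier.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy scenarios: context (relationship, sensitivity), each party's sharing
    # preference, and the kind of argument offered in support of it.
    scenarios = [
        {"relationship": "friends", "sensitivity": "high",
         "uploader_pref": "share_all", "subject_pref": "share_none",
         "argument": "consequence"},
        {"relationship": "colleagues", "sensitivity": "low",
         "uploader_pref": "share_friends", "subject_pref": "share_friends",
         "argument": "norm"},
    ]
    policies = ["share_none", "share_friends"]  # optimal policy per scenario

    vec = DictVectorizer()  # one-hot encodes the string-valued features
    model = LogisticRegression().fit(vec.fit_transform(scenarios), policies)

    unseen = {"relationship": "friends", "sensitivity": "high",
              "uploader_pref": "share_all", "subject_pref": "share_none",
              "argument": "norm"}
    print(model.predict(vec.transform([unseen])))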

2016-06-20
Mehdi Mashayekhi, Hongying Du, George F. List, Munindar P. Singh.  2016.  Silk: A Simulation Study of Regulating Open Normative Multiagent Systems. Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI). :1–7.

In a multiagent system, a (social) norm describes what the agents may expect from each other.  Norms promote autonomy (an agent need not comply with a norm) and heterogeneity (a norm describes interactions at a high level independent of implementation details). Researchers have studied norm emergence through social learning where the agents interact repeatedly in a graph structure.

In contrast, we consider norm emergence in an open system, where membership can change and no predetermined graph structure exists.  We propose Silk, a mechanism wherein a generator monitors interactions among member agents and recommends norms to help resolve conflicts.  Each member decides whether to accept or reject a recommended norm.  Upon exiting the system, a member passes its experience along to incoming members of the same type.  Thus, members develop norms in a hybrid manner to resolve conflicts.

We evaluate Silk via simulation in the traffic domain.  Our results show that social norms promoting conflict resolution emerge in both moderate and selfish societies via our hybrid mechanism.
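
As a minimal sketch of the hybrid mechanism described above (all details and parameters invented; not Silk's actual algorithm), the fragment below has a generator observe pairwise interactions, recommend a norm when a conflict arises, let each member accept or reject it based on its own experience, and have departing members pass that experience to incoming members of the same type.

    # Minimal sketch of a Silk-style hybrid loop (all details invented): a
    # generator observes conflicts and recommends norms; members accept or
    # reject; departing members pass experience to same-type newcomers.
    import random

    class Member:
        def __init__(self, mtype, experience=None):
            self.mtype = mtype
            self.experience = dict(experience or {})  # norm -> past usefulness

        def considers(self, norm):
            # Accept a recommendation if past experience with it is positive,
            # else explore with a small probability.
            return self.experience.get(norm, 0.0) > 0.0 or random.random() < 0.3

    def interact(members, candidate_norms):
        a, b = random.sample(members, 2)
        if random.random() < 0.5:                  # generator detects a conflict
            norm = random.choice(candidate_norms)  # ... and recommends a norm
            for m in (a, b):
                if m.considers(norm):
                    m.experience[norm] = m.experience.get(norm, 0.0) + 1.0

    def turnover(members):
        # Open membership: a member leaves and hands its experience to an
        # incoming member of the same type.
        leaving = members.pop(random.randrange(len(members)))
        members.append(Member(leaving.mtype, leaving.experience))

    members = [Member(random.choice(["moderate", "selfish"])) for _ in range(10)]
    for step in range(1000):
        interact(members, ["yield_right", "keep_lane"])
        if step % 100 == 99:
            turnover(members)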

Nirav Ajmeri, Jiaming Jiang, Rada Y. Chirkova, Jon Doyle, Munindar P. Singh.  2016.  Coco: Runtime Reasoning about Conflicting Commitments. Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI). :1–7.

To interact effectively, agents must enter into commitments. What should an agent do when these commitments conflict? We describe Coco, an approach for reasoning about which specific commitments apply to specific parties in light of general types of commitments, specific circumstances, and dominance relations among specific commitments. Coco adapts answer-set programming to identify a maximal set of nondominated commitments. It provides a modeling language and tool geared to support practical applications.
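
Coco itself adapts answer-set programming; as a plain-Python approximation of the core selection step, the hypothetical sketch below filters candidate commitments down to the nondominated ones given an explicit dominance relation. An ASP encoding would express dominance as rules and let the solver compute the answer sets directly; the list comprehension here only mimics the final filtering.

    # Illustrative selection step (not Coco's ASP encoding): keep every
    # commitment that no other applicable commitment dominates.
    def nondominated(commitments, dominates):
        """commitments: ids; dominates: set of (stronger, weaker) pairs."""
        cs = list(commitments)
        return [c for c in cs
                if not any((d, c) in dominates for d in cs if d != c)]

    # Example: a specific confidentiality commitment overrides a general
    # disclosure commitment under the current circumstances.
    cs = ["disclose_general", "withhold_specific", "notify_audit"]
    dom = {("withhold_specific", "disclose_general")}
    print(nondominated(cs, dom))  # ['withhold_specific', 'notify_audit']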

2015-04-07
Munindar P. Singh.  2015.  Norms as a Basis for Governing Sociotechnical Systems: Extended Abstract. Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI). :1–5.

We understand a sociotechnical system as a microsociety in which autonomous parties interact with and about technical objects.  We define governance as the administration of such a system by its participants. We develop an approach for governance based on a computational representation of norms.  Our approach has the benefit of capturing stakeholder needs precisely while yielding adaptive resource allocation in the face of changes both in stakeholder needs and the environment. In current work, we are extending this approach to tackle some challenges in cybersecurity.

Extended abstract appearing in the IJCAI Journal Abstracts Track

2015-04-04
Hongying Du, Bennett Y. Narron, Nirav Ajmeri, Emily Berglund, Jon Doyle, Munindar P. Singh.  2015.  ENGMAS – Understanding Sanction under Variable Observability in a Secure Environment. Proceedings of the 2nd International Workshop on Agents and CyberSecurity (ACySE). :1–8.

Norms are a promising basis for governance in secure, collaborative environments: systems in which multiple principals interact. Yet, many aspects of norm governance remain poorly understood, inhibiting adoption in real-life collaborative systems. This work focuses on the combined effects of sanction and the observability of the sanctioner in a secure, collaborative environment. We introduce ENGMAS (Exploratory Norm-Governed MultiAgent Simulation), a multiagent simulation of students performing research within a university lab setting. ENGMAS enables us to explore the combined effects of sanction (group or individual) and the sanctioner's variable observability on system resilience and liveness. The simulation consists of agents maintaining "compliance" with enforced security norms while also remaining "motivated" as researchers. The results show that, with lower observability, agents tend not to comply with security policies and eventually have to leave the organization. Group sanction gives agents more motive to comply with security policies and is cost-effective compared to individual sanction in terms of sanction costs.
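
As a hedged toy illustration of the two levers the study varies (the actual ENGMAS model is far richer; all parameters here are invented), the sketch below contrasts individual and group sanction when violations are observed only with some probability.

    # Toy contrast of the two levers ENGMAS varies (all parameters invented):
    # observability p_obs and sanction scope (individual vs. group).
    import random

    def run(p_obs, group_sanction, n=20, steps=500):
        compliance = [0.5] * n          # each agent's propensity to comply
        for _ in range(steps):
            for agent in range(n):
                if random.random() > compliance[agent]:      # a violation
                    if random.random() < p_obs:              # ... is observed
                        targets = range(n) if group_sanction else [agent]
                        for t in targets:  # sanctioned agents comply more
                            compliance[t] = min(1.0, compliance[t] + 0.05)
        return sum(compliance) / n

    for p_obs in (0.2, 0.8):
        print(p_obs, "individual:", round(run(p_obs, False), 2),
              "group:", round(run(p_obs, True), 2))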

Hongying Du, Bennett Y. Narron, Nirav Ajmeri, Emily Berglund, Jon Doyle, Munindar P. Singh.  2015.  Understanding Sanction under Variable Observability in a Secure, Collaborative Environment. Proceedings of the International Symposium and Bootcamp on the Science of Security (HotSoS). :1–10.

Norms are a promising basis for governance in secure, collaborative environments: systems in which multiple principals interact. Yet, many aspects of norm governance remain poorly understood, inhibiting adoption in real-life collaborative systems. This work focuses on the combined effects of sanction and the observability of the sanctioner in a secure, collaborative environment. We present CARLOS, a multiagent simulation of graduate students performing research within a university lab setting, to explore these phenomena. The simulation consists of agents maintaining "compliance" with enforced security norms while remaining "motivated" as researchers. We hypothesize that (1) delayed observability of the environment would lead to greater motivation of agents to complete research tasks than immediate observability would, and (2) sanctioning a group for a violation would lead to greater compliance with security norms than sanctioning an individual would. We find that only the latter hypothesis is supported. Group sanction is an interesting direction for future research, as a means of norm governance that yields significant compliance with enforced security policy at a lower cost. Our ultimate contribution is to apply social simulation as a way to explore environmental properties and policies and to evaluate key transitions in outcome, as a basis for guiding further and more demanding empirical research.

Munindar P. Singh.  2015.  Cybersecurity as an Application Domain for Multiagent Systems. Proceedings of the 14th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS).

The science of cybersecurity has recently been garnering much attention among researchers and practitioners dissatisfied with the ad hoc nature of much of the existing work on cybersecurity. Cybersecurity offers a great opportunity for multiagent systems research.  We motivate cybersecurity as an application area for multiagent systems with an emphasis on normative multiagent systems. First, we describe ways in which multiagent systems could help advance our understanding of cybersecurity and provide a set of principles that could serve as a foundation for a new science of cybersecurity. Second, we argue how paying close attention to the challenges of cybersecurity could expose the limitations of current research in multiagent systems, especially with respect to dealing with considerations of autonomy and interdependence.
