Trust, Recommendation Systems, and Collaboration - UMD - July 2015

Public Audience
Purpose: To highlight project progress. Information is presented at a high level that is accessible to the interested public. All information contained in the report (regions 1-3) is a Government Deliverable/CDRL.

PI(s): John S. Baras, Jennifer Golbeck
Researchers: Peixin Gao (graduate student), Xiangyang Liu (graduate student)

HARD PROBLEM(S) ADDRESSED

#2 (Policy-Governed Secure Collaboration); #1 (Scalability and Composability); #5 (Understanding and Accounting for Human Behavior)

ACCOMPLISHMENT HIGHLIGHTS

Our goal is to develop a transformational framework for a science of trust, and its impact on local policies for collaboration, in networked multi-agent systems. The framework takes human behavior into account from the start by treating humans as integrated components of these networks, interacting dynamically with other elements. The new analytical framework will be integrated with, and validated against, empirical analyses of experimental data on trust, recommendation, and reputation from several datasets available to us, in order to capture fundamental trends and patterns of human behavior, including trust and mistrust propagation, confidence in trust, phase transitions in the dynamic graph models involved in the new framework, and the stability or instability of collaborations. A key challenge addressed is the impact of the dynamics of trust and mistrust (or of positive and negative recommendations) on the capabilities of agents to collaborate and to execute tasks in a distributed manner.

Trust, as a concept, has been developed and used in several settings and in various forms, including social and economic networks as well as information and communication networks. An important challenge is the diversity of descriptions and uses of trust that have appeared in prior work. Another is the relative scarcity of quantitative and formal methods for modeling and evaluating trust: methods have ranged from simple empirical models based on statistical experiments, to simple scalar weights, to more sophisticated policy-based methods. Furthermore, very few works attempt to link empirical data on trust (in particular, data on human behavior) to formal and quantitative models.

Our new framework is based on our recently developed foundational model for networked multi-agent systems, in which we consider three interacting dynamic graphs on the same underlying set of nodes: a social/agent network, which is relational; an information network, which is also relational; and a communication network, which is physical. These graphs are directed, and their links and nodes are annotated with dynamically changing “weights” representing trust metrics, whose formal definition and mathematical representation can take one of several forms: weights can be scalars, vectors, or even policies (i.e., rules). Such models, in much simpler mathematical form, have been used in social- and economic-network studies under the name of value-directed graphs. The model we are developing is far more sophisticated, and thus much more expressive, because we investigate dynamic graphs (more precisely, multi-graphs) at each of the three layers, as well as dynamic characterizations of trust regardless of the mathematical representation used (from weights to policies). We will incorporate complex human behavior in various forms within such models.
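The three-layer structure above can be sketched in code. The following is a minimal illustrative sketch, not the project's actual implementation: all class and method names are hypothetical. It shows three directed graphs over one shared node set, with link annotations that may be a scalar, a vector (e.g., a trust/confidence pair), or a policy expressed as a rule (here, a callable).

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Tuple

# Hypothetical sketch of the three-layer model: three directed graphs
# (social, information, communication) over one shared node set, with
# dynamically updatable trust annotations on links.

@dataclass
class Layer:
    name: str
    # (src, dst) -> annotation: a scalar, a vector, or a policy (callable)
    edges: Dict[Tuple[str, str], Any] = field(default_factory=dict)

    def annotate(self, src: str, dst: str, weight: Any) -> None:
        self.edges[(src, dst)] = weight  # directed link; overwrite = dynamics

class TrustNetwork:
    def __init__(self, nodes):
        self.nodes = set(nodes)
        self.social = Layer("social")                 # relational
        self.information = Layer("information")       # relational
        self.communication = Layer("communication")   # physical

net = TrustNetwork({"a", "b", "c"})
net.social.annotate("a", "b", 0.8)                    # scalar trust weight
net.information.annotate("a", "b", (0.8, 0.1))        # (trust, confidence) vector
net.communication.annotate("a", "b", lambda req: req == "read")  # policy as a rule
```

The point of the sketch is only that all three representations of a trust metric (scalar, vector, policy) fit the same annotated-graph structure.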

Within this new framework that we are developing, we are specifically focusing on investigating the following fundamental problems: (a) Theories and principles governing the spreading dynamics of trust and mistrust among members of a network; (b) Design and analysis of recommendation systems, their dynamics and integrity; (c) Development of a framework for understanding the composition of trust across the networks at the different layers of our basic model; (d) Analysis of the effects of trust on collaboration in networked multi-agent systems, using game-theoretic and economic principles. Practical applications are also pursued to demonstrate these results in a variety of settings.
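To make problem (a) concrete, here is a toy illustration of spreading dynamics, assuming a simple signed opinion-dynamics model (not the project's actual formulation): each agent repeatedly replaces its opinion with a weighted average of its neighbors' opinions, where a negative weight encodes mistrust.

```python
# Toy spreading-dynamics sketch: signed weighted averaging of neighbor
# opinions. Positive weights model trust, negative weights model mistrust.
# All names and the update rule are illustrative assumptions.

def step(opinions, trust):
    # trust: node -> list of (neighbor, signed weight)
    new = {}
    for node, links in trust.items():
        total = sum(abs(w) for _, w in links)  # normalize by total influence
        new[node] = sum(w * opinions[nbr] for nbr, w in links) / total
    return new

opinions = {"a": 1.0, "b": 0.0, "c": -1.0}
trust = {
    "a": [("b", 0.5), ("c", 0.5)],
    "b": [("a", 0.8), ("c", -0.2)],   # b mistrusts c
    "c": [("a", 1.0)],
}
for _ in range(20):
    opinions = step(opinions, trust)
```

Even this toy version exhibits the kind of question the research studies: with mistrust (negative) links present, the iteration need not settle into consensus, and characterizing when it stabilizes is exactly a spreading-dynamics question.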

In these investigations we principally use the following analytical methods and appropriate extensions: (i) Multiple partially ordered semirings; (ii) Constrained-coalitional games on dynamic networks; (iii) Embeddings of complex annotated graphs in nonlinear parametric spaces for the development of scalable and fast algorithms (e.g. hyperbolic networks and hyperbolic embeddings); (iv) Sophisticated statistical analysis of experimental data on trust and associated human behavioral patterns.
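Method (i) can be illustrated with one concrete semiring instance. The sketch below assumes trust values in [0, 1], combined along a path by multiplication (the "series" operator) and across alternative paths by max (the "parallel" operator); this max-times pair is just one example choice, and the framework admits many other semirings, including partially ordered ones.

```python
# Illustrative semiring-based trust evaluation (one example instance).
# Assumption: trust values in [0, 1]; multiplication combines weights
# along a single path, max selects over alternative paths.

def path_trust(weights):
    t = 1.0
    for w in weights:
        t *= w          # series combination along one path
    return t

def aggregate_trust(paths):
    # parallel combination across alternative source-to-target paths
    return max((path_trust(p) for p in paths), default=0.0)

# Two directed paths from a source to a target:
paths = [[0.9, 0.8], [0.6]]
aggregate_trust(paths)  # max(0.9 * 0.8, 0.6) = 0.72
```

Swapping in a different (⊕, ⊗) pair, e.g., max-min for a "weakest link" notion of path trust, changes the semantics without changing the algorithmic structure, which is what makes the semiring formulation attractive for scalable trust computation.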

COMMUNITY INTERACTION

John Baras participated heavily in the NIST-organized public working group on Cyber-Physical Systems (CPS), in particular in the subgroup working on security problems and formulations for CPS.