Biblio

Filters: Keyword is Logic
Jiang, Hongpu, Yuan, Yuyu, Guo, Ting, Zhao, Pengqian.  2021.  Measuring Trust and Automatic Verification in Multi-Agent Systems. 2021 8th International Conference on Dependable Systems and Their Applications (DSA). :271–277.
Because resources and services are scarce, agents often compete with one another, and excessive competition leads to a social dilemma. To help break such dilemmas, we present a trust-based logic framework called Trust Computation Logic (TCL) for measuring trust, finding the best partners to collaborate with, and automatically verifying trust in Multi-Agent Systems (MASs). TCL begins by defining a trust state for Multi-Agent Systems, based on comparing observed behavior against a trust behavior library. A set of reasoning postulates, together with formal proofs, is put forward to support the measurement process. We also introduce symbolic model checking algorithms to formally and automatically verify the system. Finally, the trust measure method is evaluated and experimental results are reported using DeepMind's Sequential Social Dilemma (SSD) multi-agent game-theoretic environments.
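The abstract describes trust as a comparison between observed behavior and a trust behavior library. The sketch below illustrates that general idea only; the names (TrustLibrary, trust_score) and the scoring rule are assumptions for illustration, not the paper's definitions.

```python
# Hypothetical sketch: score an agent's trustworthiness by comparing its
# observed (situation, action) pairs against a trust behavior library.
# All names and the scoring rule are illustrative, not taken from the paper.
from dataclasses import dataclass, field


@dataclass
class TrustLibrary:
    # Maps a situation label to the action a trustworthy agent is expected to take.
    expected: dict[str, str] = field(default_factory=dict)


def trust_score(library: TrustLibrary, observations: list[tuple[str, str]]) -> float:
    """Fraction of observed (situation, action) pairs matching the library.

    Returns a value in [0, 1]; 1.0 means every observed action agreed with
    the trusted behavior recorded for that situation.
    """
    if not observations:
        return 0.0  # no evidence, no basis for trust
    matches = sum(
        1
        for situation, action in observations
        if library.expected.get(situation) == action
    )
    return matches / len(observations)


# Usage: an agent that behaves as expected in 2 of 3 observed situations.
lib = TrustLibrary(expected={"harvest": "share", "defend": "assist", "trade": "fair_price"})
obs = [("harvest", "share"), ("defend", "assist"), ("trade", "undercut")]
print(trust_score(lib, obs))  # 0.666...
```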
Bichhawat, Abhishek, Fredrikson, Matt, Yang, Jean.  2021.  Automating Audit with Policy Inference. 2021 IEEE 34th Computer Security Foundations Symposium (CSF). :1–16.
The risk posed by high-profile data breaches has raised the stakes for adhering to data access policies for many organizations, but the complexity of both the policies themselves and the applications that must obey them raises significant challenges. To mitigate this risk, fine-grained audit of access to private data has become common practice, but this is a costly, time-consuming, and error-prone process. We propose an approach for automating much of the work required for fine-grained audit of private data access. Starting from the assumption that the auditor does not have an explicit, formal description of the correct policy, but is able to decide whether a given policy fragment is partially correct, our approach gradually infers a policy from audit log entries. When the auditor determines that a proposed policy fragment is appropriate, it is added to the system's mechanized policy, and future log entries to which the fragment applies can be dealt with automatically. We prove that for a general class of attribute-based data policies, this inference process satisfies a monotonicity property which implies that eventually, the mechanized policy will comprise the full set of access rules, and no further manual audit is necessary. Finally, we evaluate this approach using a case study involving synthetic electronic medical records and the HIPAA rule, and show that the inferred mechanized policy quickly converges to the full, stable rule, significantly reducing the amount of effort needed to ensure compliance in a practical setting.
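A minimal sketch of the audit loop the abstract describes: entries covered by the mechanized policy are handled automatically, uncovered entries yield a candidate attribute-based fragment for the auditor, and approved fragments are added so the policy only grows. The data model, fragment shape, and helper names (AccessEntry, Fragment, auditor_approves) are assumptions for illustration, not the paper's formalism.

```python
# Hypothetical sketch of attribute-based policy inference from an audit log.
# Names and the fragment representation are illustrative, not from the paper.
from dataclasses import dataclass


@dataclass(frozen=True)
class AccessEntry:
    role: str        # attribute of the accessor, e.g. "nurse"
    data_type: str   # attribute of the accessed data, e.g. "medication"
    purpose: str     # declared purpose of access, e.g. "treatment"


# A policy fragment here is a set of required (attribute, value) pairs.
Fragment = frozenset


def covers(policy: set[Fragment], entry: AccessEntry) -> bool:
    attrs = {("role", entry.role), ("data_type", entry.data_type), ("purpose", entry.purpose)}
    return any(frag <= attrs for frag in policy)


def audit(log: list[AccessEntry], policy: set[Fragment], auditor_approves) -> set[Fragment]:
    for entry in log:
        if covers(policy, entry):
            continue  # handled automatically by the mechanized policy
        # Propose a fragment generalized from this entry's attributes.
        candidate = Fragment({("role", entry.role), ("purpose", entry.purpose)})
        if auditor_approves(candidate):
            policy.add(candidate)  # the mechanized policy grows monotonically
        else:
            print(f"flagged for manual review: {entry}")
    return policy


# Usage: the auditor approves fragments whose purpose is "treatment".
log = [AccessEntry("nurse", "medication", "treatment"),
       AccessEntry("clerk", "diagnosis", "billing")]
policy = audit(log, set(), lambda frag: ("purpose", "treatment") in frag)
print(policy)
```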
Zhang, Xin, Si, Xujie, Naik, Mayur.  2017.  Combining the Logical and the Probabilistic in Program Analysis. Proceedings of the 1st ACM SIGPLAN International Workshop on Machine Learning and Programming Languages. :27–34.

Conventional program analyses have made great strides by leveraging logical reasoning. However, they cannot handle uncertain knowledge, and they lack the ability to learn and adapt. This in turn hinders the accuracy, scalability, and usability of program analysis tools in practice. We seek to address these limitations by proposing a methodology and framework for incorporating probabilistic reasoning directly into existing program analyses that are based on logical reasoning. We demonstrate that the combined approach can benefit a number of important applications of program analysis and thereby facilitate more widespread adoption of this technology.
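To make the combination concrete, the sketch below shows one simple way logical inference rules can carry probabilities so that derived alarms are ranked by confidence rather than reported uniformly. The rules, facts, and probabilities are made up for illustration and are not the paper's framework.

```python
# Hypothetical sketch: a logical analysis as inference rules, with a
# probability on each rule so derived alarms carry a confidence score.
# Rules, facts, and probabilities below are illustrative only.

# Rule: (head, body_facts, probability that the rule is sound).
RULES = [
    ("may_alias_p_q", ("p_assigned_q",), 1.0),                   # certain rule
    ("may_alias_p_q", ("p_passed_to_f", "f_returns_q"), 0.7),    # heuristic rule
    ("alarm_use_after_free", ("may_alias_p_q", "q_freed"), 0.9),
]


def derive(base_facts: dict[str, float]) -> dict[str, float]:
    """Fixpoint derivation: a fact's confidence is the best derivation found,
    where a derivation's confidence = rule probability * min over body facts."""
    facts = dict(base_facts)
    changed = True
    while changed:
        changed = False
        for head, body, prob in RULES:
            if all(b in facts for b in body):
                conf = prob * min(facts[b] for b in body)
                if conf > facts.get(head, 0.0):
                    facts[head] = conf
                    changed = True
    return facts


# Usage: the alarm is derived with confidence 0.63 via the heuristic aliasing rule.
print(derive({"p_passed_to_f": 1.0, "f_returns_q": 1.0, "q_freed": 1.0}))
```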