Distributed applications that require enforcement of fundamental authorization policies play an increasingly important role in internet and telecommunications infrastructure. Traditionally, controls are imposed before shared resources are accessed to ensure that authorization policies are respected. Recently, there has been great interest in accountability mechanisms that rely on after-the-fact verification. In this approach, audit logs record vital system information, and an auditor uses these logs to identify dishonest principals and assign blame when a security policy has been violated. Accountability is an important tool for achieving practical security and should be viewed as a first-class design goal of services in federated distributed systems.
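To make the audit-based approach concrete, the following is a minimal illustrative sketch of an auditor that scans a log for unauthorized accesses and assigns blame. The log format, the policy, and all names here are hypothetical examples invented for illustration, not artifacts of this project.

```typescript
// Illustrative sketch only: a toy auditor that scans an audit log for
// policy violations and assigns blame to the responsible principals.

type Principal = string;

interface LogEntry {
  principal: Principal;   // who performed the action
  action: string;         // what was done (e.g. "read", "write")
  resource: string;       // the resource acted upon
  authorized: boolean;    // whether a valid credential was presented
}

// A security policy checked after the fact: every recorded access
// must have been authorized.
function violates(entry: LogEntry): boolean {
  return !entry.authorized;
}

// After-the-fact verification: return the set of principals to blame.
function audit(log: LogEntry[]): Set<Principal> {
  const blamed = new Set<Principal>();
  for (const entry of log) {
    if (violates(entry)) {
      blamed.add(entry.principal);
    }
  }
  return blamed;
}

// Example: bob's unauthorized write is caught during the audit.
const log: LogEntry[] = [
  { principal: "alice", action: "read",  resource: "/db/records", authorized: true },
  { principal: "bob",   action: "write", resource: "/db/records", authorized: false },
];
console.log(audit(log)); // -> Set containing "bob"
```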
The goals of this project are to provide a theoretical basis for the design and analysis of accountability mechanisms and to use that theory to develop language-based techniques for statically validating auditors and accountability appliances. To this end, the proposal investigates operational foundations for accountability (via game-based models) and logical foundations (via game logics).
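As a rough illustration of what a game-based operational model might look like, the sketch below casts accountability as a finite two-player game between an Auditor and an Adversary and decides, by backward induction, whether the Auditor has a winning strategy. The game tree and the winning condition are invented for illustration and should not be read as the project's actual model.

```typescript
// Illustrative sketch only: accountability as a two-player game
// between an Auditor and an Adversary over a finite game tree.

type Player = "Auditor" | "Adversary";

interface GameNode {
  player: Player;          // whose turn it is at this node
  children: GameNode[];    // available moves
  auditorWins?: boolean;   // set at leaves: was blame assigned correctly?
}

// Backward induction: the Auditor wins at an internal node if it is the
// Auditor's move and SOME child is winning, or the Adversary's move and
// EVERY child is winning (the Adversary cannot escape detection).
function auditorHasWinningStrategy(node: GameNode): boolean {
  if (node.children.length === 0) return node.auditorWins === true;
  const results = node.children.map(auditorHasWinningStrategy);
  return node.player === "Auditor"
    ? results.some(Boolean)
    : results.every(Boolean);
}

// Example: the Adversary moves first (cheat or comply); the Auditor
// then chooses whether to inspect the relevant log entries.
const game: GameNode = {
  player: "Adversary",
  children: [
    { player: "Auditor", children: [                             // Adversary cheats
      { player: "Auditor", children: [], auditorWins: true  },   //   inspect: detected
      { player: "Auditor", children: [], auditorWins: false },   //   skip: missed
    ]},
    { player: "Auditor", children: [                             // Adversary complies
      { player: "Auditor", children: [], auditorWins: true },    //   nothing to blame
    ]},
  ],
};
console.log(auditorHasWinningStrategy(game)); // true: always inspect
```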
The project will bring our understanding of accountability closer to the level of before-the-fact access-control mechanisms, which benefit from well-understood operational models and logics and therefore support language-based methods that statically validate implementations against interfaces specifying security guarantees.
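To illustrate what statically validating an implementation against a security-specifying interface can mean, here is a small sketch in which a branded Evidence type makes it a compile-time error for an auditor to assign blame without having verified a log entry first. The encoding is a standard typing trick chosen for illustration; it is not the validation technique the project proposes.

```typescript
// Illustrative sketch only: a security guarantee encoded in an interface
// so the type checker validates implementations statically.

type Principal = string;

// In a real module the `verified` symbol would stay unexported, so
// Evidence values could only be produced by checkEntry below.
const verified = Symbol("verified");
type Evidence = { [verified]: true; principal: Principal };

// The interface a valid auditor must satisfy: blame requires evidence.
interface Blame { culprit: Principal; evidence: Evidence; }
interface Auditor {
  audit(log: { principal: Principal; authorized: boolean }[]): Blame[];
}

// Verification is the sole source of Evidence: an unauthorized entry
// yields evidence against its principal; an authorized one yields none.
function checkEntry(e: { principal: Principal; authorized: boolean }): Evidence | null {
  return e.authorized ? null : { [verified]: true, principal: e.principal };
}

// This implementation type-checks: every Blame is backed by Evidence.
const logAuditor: Auditor = {
  audit(log) {
    const blames: Blame[] = [];
    for (const entry of log) {
      const ev = checkEntry(entry);
      if (ev) blames.push({ culprit: ev.principal, evidence: ev });
    }
    return blames;
  },
};

// An implementation that blamed without evidence, e.g.
//   blames.push({ culprit: entry.principal });
// would be rejected at compile time for failing to match the interface.
```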
Accountability supplements purely technology-based approaches to security with insights derived from the interplay between people and technology. This project aims to develop new models, logics, algorithms, and theories for analyzing and reasoning about accountability-based approaches to trustworthiness.