Biblio
Companies like Netflix increasingly use the cloud to deploy their business processes. Those processes often involve partnerships with other companies and can be modeled as workflows in which the data owner interacts with contractors to carry out a sequence of tasks on the data to be secured. In practice, access control is an essential building block for deploying these secured workflows. This component is generally managed by administrators using high-level policies meant to represent the requirements and restrictions placed on the workflow. Handling access control with a high-level scheme has the benefit of separating the problem of specification, i.e., defining the desired behavior of the system, from the problem of implementation, i.e., enforcing this desired behavior. However, translating such high-level policies into a deployed implementation can be error-prone. Even though semi-automatic and automatic tools have been proposed to assist this translation, policy verification remains highly challenging in practice. In this paper, our aim is to define and propose structures that assist in checking and correcting potential errors introduced in the field by faulty translations or corrupted deployments. In particular, we investigate structures with formal foundations able to naturally model policies. Metagraphs, a generalized graph-theoretic structure, fulfill those requirements: they make it possible to compare high-level policies to their implementation. In practice, we consider Rego, a language used by companies like Netflix and Plex for their release processes, as a valuable representative of the most common policy languages. We propose a suite of tools that transform and check policies as metagraphs, and use them in a global framework to show how policy verification can be achieved with such structures. Finally, we evaluate the performance of our verification method.
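To make the central structure concrete: a metagraph generalizes a graph by letting each edge connect a set of elements to another set, so a whole policy rule (attributes on one side, permitted actions on the other) becomes a single edge. The following is a minimal illustrative sketch in Python, not the paper's tooling; the class names, the attribute encoding, and the naive coverage check are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Edge:
    """A metagraph edge links a set of elements (the invertex)
    to another set of elements (the outvertex)."""
    invertex: frozenset
    outvertex: frozenset

@dataclass
class Metagraph:
    edges: set = field(default_factory=set)

    def add_edge(self, sources, targets):
        self.edges.add(Edge(frozenset(sources), frozenset(targets)))

    def covers(self, other: "Metagraph") -> bool:
        """Naive containment check: every edge of `other` (e.g. the
        high-level policy) also appears here (e.g. in the metagraph
        extracted from the deployed implementation)."""
        return other.edges <= self.edges

# One policy rule, "contractors holding the task-42 token may write
# the dataset", expressed as a single edge between two sets.
policy = Metagraph()
policy.add_edge({"role:contractor", "token:task42"}, {"write:dataset"})

deployed = Metagraph()
deployed.add_edge({"role:contractor", "token:task42"}, {"write:dataset"})

print(deployed.covers(policy))  # True -> no missing rule detected
```

A check of this flavor, comparing the metagraph derived from the high-level policy against the one extracted from the deployed Rego code, is the kind of verification the abstract alludes to; the paper's actual comparison is more involved than simple edge containment.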
Performance analysis of newly designed solutions is essential for efficient Internet of Things (IoT) and Wireless Sensor Network (WSN) deployments. Simulation and experimental evaluation are vital steps in the development process of protocols and applications for wireless technologies. Nowadays, new solutions can be tested at very large scale on both simulators and testbeds. In this paper, we first discuss the importance of repeatable experimental setups for reproducible performance evaluation results. To this aim, we present FIT IoT-LAB, a very large-scale experimental testbed consisting of 2769 low-power wireless devices and 127 mobile robots. We then demonstrate, through a number of experiments conducted on the FIT IoT-LAB testbed, how to run meaningful experiments under real-world conditions. Finally, we discuss to what extent results obtained from such experiments can be considered scientific, i.e., reproducible by the community.
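A side note on the reproducibility argument: repeating a testbed run requires pinning every parameter of the setup. The sketch below is purely illustrative and is not IoT-LAB's tooling; the manifest fields, node names, and parameter values are assumptions showing one way to snapshot a run's configuration so others can repeat it.

```python
import hashlib
import json
from pathlib import Path

def experiment_manifest(firmware: Path, nodes: list[str], **params) -> dict:
    """Record everything needed to repeat a testbed run: the exact
    firmware image (by hash), the node list, and all radio/protocol
    parameters used for the experiment."""
    return {
        "firmware_sha256": hashlib.sha256(firmware.read_bytes()).hexdigest(),
        "nodes": sorted(nodes),
        "params": params,
    }

# Placeholder firmware file so the demo is self-contained.
Path("firmware.elf").write_bytes(b"\x7fELF placeholder")

manifest = experiment_manifest(
    Path("firmware.elf"),
    nodes=["m3-1", "m3-2", "m3-3"],   # hypothetical node IDs
    radio_channel=26,                 # IEEE 802.15.4 channel
    tx_power_dbm=0,
    duration_min=20,
)
print(json.dumps(manifest, indent=2))
```

Publishing such a manifest alongside the measured results is one concrete way to make an experiment "reproducible by the community" in the sense the abstract discusses.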