Biblio
We propose a novel attestation architecture for the Internet of Things (IoT). Our distributed attestation network (DAN) utilizes blockchain technology to store and share device information. We present the design of this new attestation architecture as well as a prototype system that emulates an IoT deployment using a network of Raspberry Pi devices, Infineon TPMs, and a Hyperledger Fabric blockchain.
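As a rough illustration only (not the paper's implementation), the following Python sketch shows the kind of check such an architecture implies: a verifier compares a device's reported measurement digest against a known-good value and appends the attestation result to an append-only log standing in for the blockchain. The names DEVICE_GOLDEN_VALUES and attest, and the use of SHA-256 digests in place of real TPM quotes and Hyperledger Fabric transactions, are assumptions made for the sketch.

import hashlib
import time

# Hypothetical table of known-good measurement digests per device.
DEVICE_GOLDEN_VALUES = {
    "rpi-01": hashlib.sha256(b"firmware-v1.2").hexdigest(),
}

# Append-only list standing in for the blockchain ledger.
ledger = []

def attest(device_id: str, reported_digest: str) -> bool:
    """Verify a device's reported measurement and record the result."""
    expected = DEVICE_GOLDEN_VALUES.get(device_id)
    ok = expected is not None and reported_digest == expected
    ledger.append({
        "device": device_id,
        "digest": reported_digest,
        "trusted": ok,
        "timestamp": time.time(),
    })
    return ok

if __name__ == "__main__":
    good = hashlib.sha256(b"firmware-v1.2").hexdigest()
    print(attest("rpi-01", good))        # True: measurement matches
    print(attest("rpi-01", "deadbeef"))  # False: recorded as untrusted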
The Zero Trust Model ensures that each node is responsible for approving a transaction before it is committed. Data owners can track their data while it is shared among the various data custodians, ensuring data security. The consensus algorithm enables users to trust the network: malicious nodes fail to get approval from all nodes, causing the transaction to be aborted. The use case chosen to demonstrate the proposed consensus algorithm is a college placement system. The algorithm has been extended to implement a diversified, decentralized, automated placement system, wherein the data owner, i.e., the student, maintains an immutable certificate vault and the student's data is validated by a verifier network, i.e., the academic and placement departments. Data transfers from the student to companies are recorded as transactions in the distributed ledger, or blockchain, allowing the data to be tracked by the student.
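A minimal Python sketch of the unanimous-approval rule described above, with assumed names (commit_if_unanimous, the two department verifiers): a transaction commits only if every verifier node approves it; a single rejection aborts it.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Transaction:
    owner: str        # e.g. the student who owns the certificate
    payload: dict
    approvals: List[str] = field(default_factory=list)

def commit_if_unanimous(tx: Transaction,
                        verifiers: Dict[str, Callable[[Transaction], bool]],
                        ledger: list) -> bool:
    """Commit the transaction only if every verifier node approves it."""
    for name, approve in verifiers.items():
        if not approve(tx):
            return False          # a single rejection aborts the transaction
        tx.approvals.append(name)
    ledger.append(tx)             # immutable record the data owner can track
    return True

if __name__ == "__main__":
    ledger: list = []
    verifiers = {
        # Hypothetical verifier network: academic and placement departments.
        "academic_dept": lambda tx: tx.payload.get("gpa", 0) >= 6.0,
        "placement_dept": lambda tx: "company" in tx.payload,
    }
    tx = Transaction(owner="student-42",
                     payload={"gpa": 7.8, "company": "AcmeCorp"})
    print(commit_if_unanimous(tx, verifiers, ledger))  # True -> committed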
Software-defined networks (SDNs) represent a new centralized network architecture that facilitates the deployment of services, applications, and policies from the upper layers (the management and control planes) to the lower layers (the data plane and the end-user layer). SDNs offer several advantages in terms of agility and flexibility, especially for mobile operators and internet service providers. However, the implementation of these networks faces several technical challenges and security issues. In this paper we focus on SDN security issues and propose the implementation of a centralized security layer named AM-SecP. The proposed layer is linked vertically to all SDN layers, which eases packet inspection and intrusion detection. The purpose of this architecture is to detect and stop malware infections, denial-of-service attacks, and tunneling attacks without encumbering the network with expensive operations and high computational cost. An implementation of the proposed framework is also presented to demonstrate its feasibility and robustness.
Big data processing systems are becoming increasingly present in cloud workloads. Consequently, they are starting to incorporate more sophisticated mechanisms from traditional database and distributed systems. We focus in this work on the use of caching policies, which for big data raise important new challenges. Not only must they respond to new variants of the trade-off between hit rate, response time, and the space consumed by the cache, but they must do so at possibly higher volume and velocity than web and database workloads. Previous caching policies have not been tested experimentally with big data workloads. We address these challenges in this work. We propose the Read Density family of policies, a principled approach to quantifying the utility of cached objects through a family of utility functions that depend on the frequency of reads of an object. We further design the Approximate Histogram, a policy-based technique built on an array of counters that promises runtime- and space-efficient computation of the metric required by the cache policy. We evaluate the caching policies from the Read Density family through trace-based simulation and compare them with over ten state-of-the-art alternatives. We use two workload traces representative of big data processing, collected from commercial Spark and MapReduce deployments. While we achieve performance comparable to the state of the art with fewer parameters, meaningful performance improvements for big data workloads remain elusive.
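The following Python sketch, with assumed names and sizing, illustrates the two ideas in combination: an array-of-counters frequency estimate (a count-min-style stand-in for the Approximate Histogram) and an eviction policy that removes the object with the lowest read-frequency utility. It is not the paper's Read Density implementation, only the general shape of such a policy.

import hashlib
from typing import Dict

class ApproximateCounter:
    """Array-of-counters frequency estimate (count-min style), standing in
    for the paper's Approximate Histogram. Sizing is illustrative."""
    def __init__(self, rows: int = 4, width: int = 1024):
        self.counters = [[0] * width for _ in range(rows)]
        self.width = width

    def _buckets(self, key: str):
        for row in range(len(self.counters)):
            digest = hashlib.blake2b(f"{row}:{key}".encode(), digest_size=8)
            yield row, int.from_bytes(digest.digest(), "big") % self.width

    def record_read(self, key: str) -> None:
        for row, col in self._buckets(key):
            self.counters[row][col] += 1

    def estimate(self, key: str) -> int:
        return min(self.counters[row][col] for row, col in self._buckets(key))

class FrequencyCache:
    """Evicts the cached object with the lowest read-frequency utility."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store: Dict[str, bytes] = {}
        self.freq = ApproximateCounter()

    def get(self, key: str):
        self.freq.record_read(key)          # reads drive the utility metric
        return self.store.get(key)

    def put(self, key: str, value: bytes) -> None:
        if key not in self.store and len(self.store) >= self.capacity:
            victim = min(self.store, key=self.freq.estimate)
            del self.store[victim]          # evict the least-read object
        self.store[key] = value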
In recent years, the increasing concerns around centralized cloud web services (e.g., privacy, governance, surveillance, security) have triggered the emergence of new distributed technologies, such as IPFS or the Blockchain. These innovations have tackled technical challenges that were unresolved until their appearance. Existing models of peer-to-peer systems need a revision to cover the spectrum of systems that can now be implemented as peer-to-peer systems. This work presents a framework to build such systems. It uses an agent-oriented approach in an open environment where agents have only partial information about the system's data. The proposal covers data access, data discovery, and data trust in peer-to-peer systems where different actors may interact. Moreover, the framework proposes a distributed architecture for these open systems and provides guidelines to decide in which cases Blockchain technology may be required, or when other technologies are sufficient.
As a frequent participant in the eSociety, Willeke is often preoccupied with paperwork because there is no easy-to-use, affordable way to act as a qualified person in the digital world. Confidential interactions take place over insecure channels such as e-mail and post. This situation poses risks and costs for service providers, civilians, and governments, while goals regarding confidentiality and privacy are not always met. The objective of this paper is to demonstrate an alternative architecture in which identifying persons, exchanging information, authorizing external parties, and signing documents become more user-friendly and secure. As a starting point, each person has a personal data space, provided by a qualified trust service provider that also issues an electronic ID with a high level of assurance. Three main building blocks are required: (1) secure exchange between the personal data spaces of the parties involved, (2) coordination functionalities provided by a token-based infrastructure, and (3) governance over this infrastructure. Following the design science research approach, we developed prototypes of the building blocks that we will pilot in practice. Policy makers and practitioners who want to enable Willeke to get rid of her paperwork can find guidance throughout this paper and are welcome to join the pilots in the Netherlands.
Formal verification of infinite-state systems, and distributed systems in particular, is a long-standing research goal. In the deductive verification approach, the programmer provides inductive invariants and pre/post specifications of procedures, reducing the verification problem to checking the validity of logical verification conditions. This check is often performed by automated theorem provers and SMT solvers, substantially increasing productivity in the verification of complex systems. However, the unpredictability of automated provers presents a major hurdle to the usability of these tools. This problem is particularly acute in the case of provers that handle undecidable logics, for example, first-order logic with quantifiers and theories such as arithmetic. The resulting extreme sensitivity to minor changes has a strong negative impact on the convergence of the overall proof effort.
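As an illustration of the kind of logical verification condition such tools discharge (not an example from this paper), the Python sketch below uses the z3 SMT solver to check that a candidate invariant for a toy counter program is inductive, by asking whether the negation of each verification condition is unsatisfiable. The program and invariant are invented for the sketch.

# pip install z3-solver
from z3 import Int, Implies, And, Not, Solver, unsat

x, x_next = Int("x"), Int("x_next")

inv = x >= 0                       # candidate inductive invariant
init = x == 0                      # initial condition
trans = x_next == x + 1            # transition relation: increment x
inv_next = x_next >= 0

# VC 1: the invariant holds initially.  VC 2: it is preserved by the transition.
vcs = [Implies(init, inv), Implies(And(inv, trans), inv_next)]

for i, vc in enumerate(vcs, start=1):
    s = Solver()
    s.add(Not(vc))                 # validity of vc == unsatisfiability of its negation
    print(f"VC {i}:", "valid" if s.check() == unsat else "not valid")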
This paper presents DeDoS, a novel platform for mitigating asymmetric DoS attacks. These attacks are particularly challenging since even attackers with limited resources can exhaust the resources of well-provisioned servers. DeDoS offers a framework to deploy code in a highly modular fashion. If part of the application stack is experiencing a DoS attack, DeDoS can massively replicate only the affected component, potentially across many machines. This allows scaling of the impacted resource separately from the rest of the application stack, so that resources can be precisely added where needed to combat the attack. Our evaluation results show that DeDoS incurs reasonable overheads in normal operations, and that it significantly outperforms standard replication techniques when defending against a range of asymmetric attacks.
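A toy Python sketch of the scaling decision described above, with hypothetical component names and capacities: a monitor replicates only the component whose queue exceeds its aggregate capacity, leaving the rest of the application stack at its original scale.

from dataclasses import dataclass
from typing import List

@dataclass
class Component:
    name: str
    replicas: int
    queue_length: int
    capacity_per_replica: int      # requests a single replica can absorb

def rebalance(stack: List[Component], max_replicas: int = 64) -> None:
    """Replicate only components whose load exceeds their current capacity."""
    for c in stack:
        while (c.queue_length > c.replicas * c.capacity_per_replica
               and c.replicas < max_replicas):
            c.replicas += 1        # in a real system: spawn on another machine
            print(f"scaling {c.name} -> {c.replicas} replicas")

if __name__ == "__main__":
    stack = [
        Component("tls_handshake", replicas=1, queue_length=5000,
                  capacity_per_replica=200),   # under asymmetric attack
        Component("http_parser", replicas=1, queue_length=50,
                  capacity_per_replica=500),   # unaffected, stays at one replica
    ]
    rebalance(stack)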
Among the various challenges faced by P2P file sharing systems like BitTorrent, the most common attack on the basic foundation of such systems is free-riding. Free-riders are users in the file sharing network who avoid contributing any resources but consume resources from the P2P network unethically, whereas white-washers are a more specific category of free-riders who frequently leave the system voluntarily and reappear under different identities to escape the penalties imposed by the network. BitTorrent, being a collaborative distributed platform, requires techniques for discouraging and punishing such behavior. In this paper, we propose that ``instead of punishing, we may focus more on rewarding the honest peers''. This approach can be seen as an alternative to other peer-rewarding mechanisms built for the BitTorrent platform, such as tit-for-tat [10] and reciprocity-based schemes. The prime objective of BitTrusty is to provide incentives to cooperative peers by rewarding them with cryptocoins based on blockchain. We have anticipated three ways of achieving the above-defined objective. We are further investigating how to integrate these two distributed-systems technologies, viz. P2P file sharing systems and blockchain; with this new paradigm, interesting research areas can be further developed, both in the field of P2P cryptocurrency networks and when these networks are combined with other distributed scenarios.
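A minimal Python sketch of the incentive bookkeeping implied above, with assumed names and an illustrative reward rate (not BitTrusty's actual scheme): cooperative peers accumulate coin rewards proportional to their verified upload contributions, recorded in a toy hash-chained ledger, while free-riders earn nothing.

import hashlib
import json
from collections import defaultdict

REWARD_PER_MB = 0.01               # illustrative reward rate, not from the paper

chain = []                         # toy append-only reward ledger
balances = defaultdict(float)

def record_contribution(peer_id: str, uploaded_mb: float) -> dict:
    """Credit a peer for verified upload contribution and append a block."""
    reward = uploaded_mb * REWARD_PER_MB
    balances[peer_id] += reward
    block = {
        "peer": peer_id,
        "uploaded_mb": uploaded_mb,
        "reward": reward,
        # Chain each block to the previous one, blockchain-style.
        "prev": hashlib.sha256(json.dumps(chain[-1], sort_keys=True).encode())
                .hexdigest() if chain else None,
    }
    chain.append(block)
    return block

if __name__ == "__main__":
    record_contribution("peer-A", 512)    # seeder: earns coins
    record_contribution("peer-B", 0)      # free-rider: earns nothing
    print(dict(balances))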
We consider the distributed statistical learning problem over decentralized systems that are prone to adversarial attacks. This setup arises in many practical applications, including Google's Federated Learning. Formally, we focus on a decentralized system that consists of a parameter server and m working machines; each working machine keeps N/m data samples, where N is the total number of samples. In each iteration, up to q of the m working machines suffer Byzantine faults; a faulty machine in the given iteration behaves arbitrarily badly against the system and has complete knowledge of the system. Additionally, the sets of faulty machines may be different across iterations. Our goal is to design robust algorithms such that the system can learn the underlying true parameter, which is of dimension d, despite the interruption of the Byzantine attacks. In this paper, based on the geometric median of means of the gradients, we propose a simple variant of the classical gradient descent method. We show that our method can tolerate q Byzantine failures as long as $2(1+\epsilon)q \le m$ for an arbitrarily small but fixed constant $\epsilon > 0$. The parameter estimate converges in $O(\log N)$ rounds with an estimation error on the order of $\max\{\sqrt{dq/N}, \sqrt{d/N}\}$, which is larger than the minimax-optimal error rate $\sqrt{d/N}$ in the centralized and failure-free setting by at most a factor of $\sqrt{q}$. The total computational complexity of our algorithm is $O((Nd/m)\log N)$ at each working machine and $O(md + kd\log^3 N)$ at the central server, and the total communication cost is $O(md\log N)$. We further provide an application of our general results to the linear regression problem. A key challenge in the above problem is that Byzantine failures create arbitrary and unspecified dependency among the iterations and the aggregated gradients. To handle this issue in the analysis, we prove that the aggregated gradient, as a function of the model parameter, converges uniformly to the true gradient function.
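The aggregation rule at the core of the method can be sketched as follows in Python/NumPy; the group count and the use of Weiszfeld's iteration for the geometric median are illustrative choices, not the paper's exact constants or analysis.

import numpy as np

def geometric_median(points: np.ndarray, iters: int = 100, eps: float = 1e-8):
    """Weiszfeld iteration for the geometric median of row vectors."""
    z = points.mean(axis=0)
    for _ in range(iters):
        d = np.maximum(np.linalg.norm(points - z, axis=1), eps)  # avoid /0
        w = 1.0 / d
        z_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(z_new - z) < eps:
            break
        z = z_new
    return z

def robust_aggregate(gradients: np.ndarray, k: int):
    """Geometric median of means: average gradients within k groups,
    then take the geometric median of the group means."""
    groups = np.array_split(gradients, k)
    means = np.stack([g.mean(axis=0) for g in groups])
    return geometric_median(means)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_grad = np.array([1.0, -2.0, 0.5])
    honest = true_grad + 0.1 * rng.standard_normal((18, 3))
    byzantine = 100.0 * rng.standard_normal((2, 3))   # q = 2 faulty machines
    grads = np.vstack([honest, byzantine])
    print(robust_aggregate(grads, k=5))   # close to true_grad despite outliers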
Today's emerging Industrial Internet of Things (IIoT) scenarios are characterized by the exchange of data between services across enterprises. Traditional access and usage control mechanisms are only able to determine whether data may be used by a subject, but lack an understanding of how it may be used. The ability to control how data is processed is, however, crucial for enterprises to guarantee (and provide evidence of) compliant processing of critical data, as well as for users who need to control whether their private data may be analyzed or linked with additional information - a major concern in IoT applications processing personal information. In this paper, we introduce LUCON, a data-centric security policy framework for distributed systems that considers data flows by controlling how messages may be routed across services and how they are combined and processed. LUCON policies prevent information leaks, bind data usage to obligations, and enforce data flows across services. Policy enforcement is based on a dynamic taint analysis at runtime and an upfront static verification of message routes against policies. We discuss the semantics of these two complementary enforcement models and illustrate how LUCON policies are compiled from a simple policy language into a first-order logic representation. We demonstrate the practical application of LUCON in a real-world IoT middleware and discuss its integration into Apache Camel. Finally, we evaluate the runtime impact of LUCON and discuss performance and scalability aspects.
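A minimal Python sketch of the data-flow idea, with invented service names, labels, and policy entries: messages carry taint labels, services attach labels to the data they emit, and a route is rejected as soon as a labeled message would reach a service the policy forbids.

from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Message:
    payload: dict
    taints: Set[str] = field(default_factory=set)

# Hypothetical policy: (service, taint) pairs that must never meet.
FORBIDDEN = {
    ("cloud_analytics", "personal_data"),   # personal data must not leave premises
}

# Labels each service attaches to messages it emits (illustrative).
SERVICE_TAINTS = {
    "sensor_gateway": {"personal_data"},
}

def route(msg: Message, path: List[str]) -> bool:
    """Simulate routing a message along a service path under taint policies."""
    for service in path:
        if any((service, t) in FORBIDDEN for t in msg.taints):
            print(f"policy violation: {service} must not receive {msg.taints}")
            return False
        msg.taints |= SERVICE_TAINTS.get(service, set())  # propagate taint
    return True

if __name__ == "__main__":
    ok = route(Message({"temp": 21}), ["sensor_gateway", "cloud_analytics"])
    print("route allowed:", ok)   # False: tainted data reaches a forbidden sink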
This paper introduces the first state-based formalization of isolation guarantees. Our approach is premised on a simple observation: applications view storage systems as black-boxes that transition through a series of states, a subset of which are observed by applications. Defining isolation guarantees in terms of these states frees definitions from implementation-specific assumptions. It makes immediately clear what anomalies, if any, applications can expect to observe, thus bridging the gap that exists today between how isolation guarantees are defined and how they are perceived. The clarity that results from definitions based on client-observable states brings forth several benefits. First, it allows us to easily compare the guarantees of distinct, but semantically close, isolation guarantees. We find that several well-known guarantees, previously thought to be distinct, are in fact equivalent, and that many previously incomparable flavors of snapshot isolation can be organized in a clean hierarchy. Second, freeing definitions from implementation-specific artefacts can suggest more efficient implementations of the same isolation guarantee. We show how a client-centric implementation of parallel snapshot isolation can be more resilient to slowdown cascades, a common phenomenon in large-scale datacenters.
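A simplified Python sketch of the state-based viewpoint, with invented names: the storage system is modeled as a sequence of client-observable states, and a transaction's reads satisfy a (much simplified) snapshot-style check if a single state explains all of them.

from typing import Dict, List

State = Dict[str, int]          # a complete key -> value snapshot of the store

def reads_from_single_state(states: List[State], reads: Dict[str, int]) -> bool:
    """True if some client-observable state explains *all* of the reads,
    a simplified stand-in for a snapshot-style isolation check."""
    return any(all(s.get(k) == v for k, v in reads.items()) for s in states)

if __name__ == "__main__":
    # Client-observable states after each committed transaction.
    states = [
        {"x": 0, "y": 0},
        {"x": 1, "y": 0},
        {"x": 1, "y": 1},
    ]
    print(reads_from_single_state(states, {"x": 1, "y": 0}))  # True
    print(reads_from_single_state(states, {"x": 0, "y": 1}))  # False: fractured read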
We introduce Active Dependency Mapping (ADM), a method for establishing dependency relations among a set of interdependent services. The approach is to artificially degrade network performance to infer which assets on the network support a particular process. Artificial degradation of the network environment could be transparent to users; run continuously, it could identify dependencies that are rare or occur only at certain timescales. A useful byproduct of this dependency analysis is a quantitative assessment of the resilience and robustness of the system. This technique is intriguing for hardening both enterprise networks and cyber-physical systems. We present a proof-of-concept experiment executed on a real-world set of interrelated software services. We assess the efficacy of the approach, discuss current limitations, and suggest options for future development of ADM.
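A toy Python simulation of the inference step, with invented service names, latencies, and thresholds: degrade the network path to each candidate service in turn, measure the target process's response time, and flag services whose degradation noticeably slows the target as dependencies.

from typing import List, Set

def measure_target_latency(degraded: Set[str]) -> float:
    """Toy stand-in for probing the target process; in ADM this would be a real
    measurement taken while network paths to `degraded` services are throttled."""
    base = 0.10                                    # seconds, illustrative
    dependencies = {"auth_service", "db_primary"}  # hidden ground truth
    return base + sum(0.5 for s in degraded if s in dependencies)

def infer_dependencies(candidates: List[str], threshold: float = 0.2) -> Set[str]:
    baseline = measure_target_latency(set())
    inferred = set()
    for service in candidates:
        latency = measure_target_latency({service})   # degrade one service at a time
        if latency - baseline > threshold:
            inferred.add(service)
    return inferred

if __name__ == "__main__":
    candidates = ["auth_service", "db_primary", "metrics_sink", "mail_relay"]
    print(infer_dependencies(candidates))   # {'auth_service', 'db_primary'}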