Bibliography
The blockchain technology revolution and the use of blockchains in various applications have led many companies and developers to build and customize fit-for-purpose consensus algorithms. The chosen consensus algorithm determines a system's security and performance; hence the reliability and security of these algorithms must be assured and tested, which requires understanding all the security assumptions that make such algorithms correct and Byzantine fault-tolerant. This paper studies the "security ingredients" that enable a given consensus algorithm to achieve safety, liveness, and Byzantine fault tolerance (BFT) in both permissioned and permissionless blockchain systems. The key contributions of this paper are the organization of these requirements and a new taxonomy of security requirements. The CAP Theorem is used to explain important tradeoffs between consistency and availability in consensus algorithm design, tradeoffs whose weight depends on the specific application of a given algorithm. This topic has been explored previously by De Angelis; this paper expands that treatment of the consistency-vs.-availability dilemma and combines it with Buterin's Trilemma to complete the overall exposition of tradeoffs.
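As a concrete reference point for these tradeoffs, stated here only as standard background rather than a result of the surveyed paper: Byzantine agreement among n replicas that tolerates f Byzantine faults under partial synchrony requires

    n \ge 3f + 1, \qquad q \ge \left\lceil \frac{n + f + 1}{2} \right\rceil

where q is the quorum size needed so that any two quorums intersect in at least f + 1 replicas; and, per the CAP Theorem, a replicated system cannot guarantee both consistency and availability while tolerating network partitions.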
A novel deep neural network is proposed for accurate and robust crowd counting. Crowd counting is a complex task, as it depends strongly on the characteristics of the deployed cameras and, above all, on the scene perspective. It is essential in security applications where Internet of Things (IoT) cameras are deployed to help with crowd management tasks. The complexity of a scene varies greatly, and a medium- to large-scale security system based on IoT cameras must cater for changes in perspective and in how people appear from different vantage points. To address this, our deep architecture extracts multi-scale features with a pyramid contextual module that provides long-range contextual information and enlarges the receptive field. Experiments on three major crowd counting datasets demonstrate that our method surpasses the performance of state-of-the-art methods.
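The abstract does not specify the module's internals; the following is a minimal sketch of a PSPNet-style pyramid contextual module in PyTorch, assuming pooling at several scales followed by fusion (the class name, channel sizes, and pooling scales are illustrative assumptions, not the paper's architecture).

    # Minimal sketch of a pyramid contextual module (assumed PSPNet-style pooling).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PyramidContextModule(nn.Module):
        def __init__(self, in_ch, out_ch, scales=(1, 2, 3, 6)):
            super().__init__()
            # One branch per pooling scale: pool -> 1x1 conv -> upsample.
            self.branches = nn.ModuleList(
                nn.Sequential(
                    nn.AdaptiveAvgPool2d(s),
                    nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
                    nn.ReLU(inplace=True),
                )
                for s in scales
            )
            # Fuse the original features with all pooled branches.
            self.fuse = nn.Conv2d(in_ch + out_ch * len(scales), out_ch,
                                  kernel_size=3, padding=1)

        def forward(self, x):
            h, w = x.shape[2:]
            feats = [x]
            for branch in self.branches:
                y = branch(x)
                # Upsample each pooled map back to the input resolution.
                feats.append(F.interpolate(y, size=(h, w), mode="bilinear",
                                           align_corners=False))
            return self.fuse(torch.cat(feats, dim=1))

    # Example: 512-channel backbone features at 1/8 resolution.
    ctx = PyramidContextModule(in_ch=512, out_ch=128)
    out = ctx(torch.randn(1, 512, 48, 64))  # -> (1, 128, 48, 64)

The multi-scale pooling enlarges the effective receptive field, which is the property the abstract attributes to the module.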
In recent years, websites that incorporate user reviews, such as Amazon, IMDB and YELP, have become exceedingly popular. Review information is an important factor affecting users' purchasing behavior and has become increasingly influential; accordingly, the reliability of review information is an important issue. This paper proposes a method to more accurately detect the appearance period of spam reviews and to identify the spam reviews by verifying the consistency of review information across multiple review sites. Evaluation experiments were conducted to show the accuracy of the detection results and to compare the newly proposed method with our previously proposed method.
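The abstract does not describe the detection algorithm itself; the following is a purely hypothetical illustration of the cross-site consistency idea, flagging time windows in which sites disagree sharply about the same product (the function name, window size, and threshold are invented for the sketch).

    # Hypothetical illustration only: cross-check average ratings per time window
    # across review sites and flag windows of sharp disagreement as candidate
    # spam-review appearance periods.
    from collections import defaultdict
    from statistics import mean

    def suspicious_windows(reviews, window_days=7, threshold=1.5):
        """reviews: iterable of (site, day, rating) tuples for one product."""
        buckets = defaultdict(lambda: defaultdict(list))  # window -> site -> ratings
        for site, day, rating in reviews:
            buckets[day // window_days][site].append(rating)

        flagged = []
        for window, by_site in sorted(buckets.items()):
            if len(by_site) < 2:
                continue  # need at least two sites to cross-check
            means = [mean(r) for r in by_site.values()]
            if max(means) - min(means) > threshold:
                flagged.append(window)
        return flagged

    reviews = [("siteA", 1, 5), ("siteA", 3, 5), ("siteB", 2, 2), ("siteB", 4, 2.5)]
    print(suspicious_windows(reviews))  # [0]: the sites disagree sharply in week 0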
This paper introduces the first state-based formalization of isolation guarantees. Our approach is premised on a simple observation: applications view storage systems as black-boxes that transition through a series of states, a subset of which are observed by applications. Defining isolation guarantees in terms of these states frees definitions from implementation-specific assumptions. It makes immediately clear what anomalies, if any, applications can expect to observe, thus bridging the gap that exists today between how isolation guarantees are defined and how they are perceived. The clarity that results from definitions based on client-observable states brings forth several benefits. First, it allows us to easily compare distinct, but semantically close, isolation guarantees. We find that several well-known guarantees, previously thought to be distinct, are in fact equivalent, and that many previously incomparable flavors of snapshot isolation can be organized in a clean hierarchy. Second, freeing definitions from implementation-specific artefacts can suggest more efficient implementations of the same isolation guarantee. We show how a client-centric implementation of parallel snapshot isolation can be more resilient to slowdown cascades, a common phenomenon in large-scale datacenters.
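As a simplified illustration of the state-based viewpoint (not the paper's full formalism): model the store as a sequence of client-observable states and ask whether all of a transaction's reads are consistent with a single state in that sequence, which is the flavor of condition such definitions use without reference to implementation details.

    # Simplified illustration: a transaction's reads are checked against a
    # sequence of client-observable storage states.
    def reads_from_single_state(states, reads):
        """states: list of dicts (observable storage states, in order).
        reads: dict of key -> value observed by the transaction."""
        return any(all(state.get(k) == v for k, v in reads.items())
                   for state in states)

    states = [{"x": 0, "y": 0}, {"x": 1, "y": 0}, {"x": 1, "y": 1}]
    print(reads_from_single_state(states, {"x": 1, "y": 0}))  # True: matches the second state
    print(reads_from_single_state(states, {"x": 0, "y": 1}))  # False: a "fractured" read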
Debugging large networks is always a challenging task. Software-Defined Networking (SDN) offers a centralized control platform where operators can statically verify network policies, instead of checking configuration files device by device. While such static verification is useful, it is still not enough: due to data plane faults, packets may not be forwarded according to control plane policies, resulting in network faults at runtime. To address this issue, we present VeriDP, a tool that continuously monitors what we call control-data plane consistency, defined as the consistency between control plane policies and data plane forwarding behaviors. We prototype VeriDP with small modifications of both hardware and software SDN switches, and show that it can achieve a verification speed of 3 μs per packet, with a false negative rate as low as 0.1%, for the Stanford backbone and Internet2 topologies. In addition, when verification fails, VeriDP can localize faulty switches with a probability as high as 96% for fat tree topologies.
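A hedged sketch of the consistency check only (VeriDP itself performs tagging and verification in the switches; the dictionary-based policy, function names, and digest scheme below are illustrative assumptions): the control plane predicts the path a packet should take, the data plane reports the path actually taken, and a verifier compares compact digests of the two.

    # Illustrative control-data plane consistency check.
    import hashlib

    def path_digest(path):
        """Hash a switch-level path into a fixed-size tag."""
        return hashlib.sha256("|".join(path).encode()).hexdigest()[:16]

    def expected_path(policy, src_switch, dst_prefix):
        """Walk control-plane forwarding entries to predict the path."""
        path, sw = [], src_switch
        while sw is not None:
            path.append(sw)
            sw = policy.get(sw, {}).get(dst_prefix)  # next hop, or None at the edge
        return path

    # Illustrative policy: per-switch next hop for a destination prefix.
    policy = {"s1": {"10.0.0.0/24": "s2"},
              "s2": {"10.0.0.0/24": "s3"},
              "s3": {"10.0.0.0/24": None}}

    reported = ["s1", "s2", "s3"]  # path observed along the data plane
    consistent = path_digest(expected_path(policy, "s1", "10.0.0.0/24")) == path_digest(reported)
    print("consistent" if consistent else "possible data-plane fault")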
A key use of software-defined networking is to enable scale-out of network data plane elements. Naively scaling networking elements, however, can cause incorrect behavior. For example, we show that an intrusion detection system (IDS) which operates correctly as a single network element can erroneously and permanently block hosts when it is replicated.
In this paper, we provide a system, COCONUT, for seamless scale-out of network forwarding elements; that is, an SDN application programmer can program to what functionally appears to be a single forwarding element, but which may be replicated behind the scenes. To do this, we identify the key property for seamless scale-out, weak causality, and guarantee it through a practical and scalable implementation of vector clocks in the data plane. We prove that COCONUT enables seamless scale-out of networking elements, i.e., the user-perceived behavior of any COCONUT element implemented with a distributed set of concurrent replicas is provably indistinguishable from its singleton implementation. Finally, we build a prototype of COCONUT and experimentally demonstrate its correct behavior. We also show that its abstraction enables a more efficient implementation of seamless scale-out compared to a naive baseline.
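For reference, a minimal vector-clock sketch of the weak-causality idea (illustrative only; COCONUT implements a practical, scalable variant in the data plane rather than in software like this):

    # Minimal vector-clock operations over n replicas.
    def vc_new(n):
        return [0] * n

    def vc_tick(vc, i):
        vc = list(vc)
        vc[i] += 1
        return vc

    def vc_merge(a, b):
        return [max(x, y) for x, y in zip(a, b)]

    def happened_before(a, b):
        """True if the event with clock a causally precedes the event with clock b."""
        return all(x <= y for x, y in zip(a, b)) and a != b

    # Two replicas of a forwarding element:
    r0 = vc_tick(vc_new(2), 0)                 # replica 0 observes a rule update
    r1 = vc_merge(vc_tick(vc_new(2), 1), r0)   # replica 1 learns of it before acting
    print(happened_before(r0, r1))             # True: replica 1's action respects causality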
Modern applications often operate on data in multiple administrative domains. In this federated setting, participants may not fully trust each other. These distributed applications use transactions as a core mechanism for ensuring reliability and consistency with persistent data. However, the coordination mechanisms needed for transactions can both leak confidential information and allow unauthorized influence. By implementing a simple attack, we show these side channels can be exploited. However, our focus is on preventing such attacks. We explore secure scheduling of atomic, serializable transactions in a federated setting. While we prove that no protocol can guarantee security and liveness in all settings, we establish conditions for sets of transactions that can safely complete under secure scheduling. Based on these conditions, we introduce staged commit, a secure scheduling protocol for federated transactions. This protocol avoids insecure information channels by dividing transactions into distinct stages. We implement a compiler that statically checks code to ensure it meets our conditions, and a system that schedules these transactions using the staged commit protocol. Experiments on this implementation demonstrate that realistic federated transactions can be scheduled securely, atomically, and efficiently.
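The following is a hypothetical sketch of the staging idea only, not the paper's protocol or its security conditions: group a federated transaction's operations into ordered stages (here, simply by administrative domain) and run them stage by stage, so that a later stage's abort does not feed back into a stage that has already committed. All names and the grouping rule are invented for the sketch.

    # Hypothetical staging of a federated transaction by administrative domain.
    from itertools import groupby

    def stage_by_domain(ops):
        """ops: list of (domain, operation) pairs, in program order."""
        return [list(group) for _, group in groupby(ops, key=lambda op: op[0])]

    def run_staged(ops, execute):
        for stage in stage_by_domain(ops):
            for domain, op in stage:
                if not execute(domain, op):
                    return "aborted in the stage of domain " + domain
            # The stage commits here, before any later domain runs.
        return "committed"

    ops = [("bank_a", "debit"), ("bank_a", "log"), ("bank_b", "credit")]
    print(run_staged(ops, lambda domain, op: True))  # "committed"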
It is expected that DRAM memory will be augmented, and perhaps eventually replaced, by one of several up-and-coming memory technologies. These are all non-volatile, in that they retain their contents without power. This allows primary memory to be used as a fast disk replacement. It also enables more aggressive programming models that directly leverage persistence of primary memory. However, it is challenging to maintain consistency of memory in such an environment. There is no consensus on the right programming model for doing so, and subtle differences can have large, and sometimes surprising, effects on the implementation and its performance. The existing literature describes multiple programming systems that provide point solutions for selective persistence of user data structures. Real progress in this area requires a choice of programming model, which we cannot reasonably make without a real understanding of the design space. Point solutions are insufficient. We systematically explore what we consider to be the most promising part of the space, precisely defining semantics and identifying implementation costs. This allows us to be much more explicit and precise about semantic and implementation trade-offs that were usually glossed over in prior work. It also exposes some promising new design alternatives.
As it becomes increasingly common for transaction processing systems to operate on datasets that fit within the main memory of a single machine or a cluster of commodity machines, traditional mechanisms for guaranteeing transaction durability, which typically involve synchronous log flushes, incur increasingly unappealing costs to otherwise lightweight transactions. Many applications have turned to periodically checkpointing full database state. However, existing checkpointing methods, even those which avoid freezing the storage layer, often come with significant costs to operation throughput, end-to-end latency, and total memory usage. This paper presents Checkpointing Asynchronously using Logical Consistency (CALC), a lightweight, asynchronous technique for capturing database snapshots that does not require a physical point of consistency to create a checkpoint, and avoids conspicuous latency spikes incurred by other database snapshotting schemes. Our experiments show that CALC can capture frequent checkpoints across a variety of transactional workloads with extremely small cost to transactional throughput and low additional memory usage compared to other state-of-the-art checkpointing systems.
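A hedged sketch of the logical-consistency idea (greatly simplified; CALC itself integrates with the transaction engine and coordinates a virtual point of consistency across workers, and the class and method names below are invented for illustration): writes issued during a checkpoint lazily preserve the pre-checkpoint value of each record, so the checkpointer can assemble a consistent snapshot without pausing writers.

    # Simplified live/stable record versions for asynchronous snapshotting.
    class Store:
        def __init__(self):
            self.live = {}          # current values seen by transactions
            self.stable = {}        # values frozen for the in-progress checkpoint
            self.checkpointing = False

        def begin_checkpoint(self):
            self.checkpointing = True
            self.stable = {}        # stable copies are captured lazily on first write

        def write(self, key, value):
            if self.checkpointing and key not in self.stable:
                # First write since the checkpoint began: preserve the old value.
                self.stable[key] = self.live.get(key)
            self.live[key] = value

        def read(self, key):
            return self.live.get(key)

        def capture_checkpoint(self):
            # Snapshot = preserved stable copy where one exists, live value otherwise.
            snapshot = {k: self.stable.get(k, v) for k, v in self.live.items()}
            self.checkpointing = False
            return snapshot

    s = Store()
    s.write("x", 1)
    s.begin_checkpoint()
    s.write("x", 2)                  # the checkpoint still sees x = 1
    print(s.capture_checkpoint())    # {'x': 1}
    print(s.read("x"))               # 2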