Biblio
Distributed consensus is a prototypical distributed optimization and decision-making problem in social, economic, and engineering networked systems. In collaborative applications, investigating the effects of adversaries is a critical problem. In this paper we investigate distributed consensus in the presence of adversaries, combining key ideas from distributed consensus in computer science on the one hand and in control systems on the other. The main idea is to detect Byzantine adversaries in a network of collaborating agents whose goal is to reach consensus, and to exclude them from the consensus process and dynamics. We describe a novel trust-aware consensus algorithm that integrates a trust evaluation mechanism into the distributed consensus algorithm, and we propose various local decision rules based on local evidence. To further enhance the robustness of the trust evaluation itself, we also introduce a trust propagation scheme that takes into account evidence from other nodes in the network. The resulting algorithm is flexible and extensible and can incorporate more complex designs of decision rules and trust models. To demonstrate the power of our trust-aware algorithm, we provide new theoretical security performance results in terms of miss detection and false alarm rates for regular and general trust graphs. We demonstrate through simulations that the new trust-aware consensus algorithm can effectively detect Byzantine adversaries and exclude them from consensus iterations, even in sparse networks with connectivity less than 2f+1, where f is the number of adversaries.
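The following is a minimal sketch, not the paper's exact algorithm, of how a trust-aware consensus loop can look: each honest node scores its neighbours by how far their reported values deviate from its own estimate and excludes neighbours whose trust drops below a threshold. The deviation-based trust rule, the decay factor, and the threshold are assumptions chosen purely for illustration.

```python
# Illustrative trust-weighted average consensus with a deviation-based trust
# rule (assumed here, not taken from the paper). Byzantine nodes broadcast noise.
import numpy as np

def trust_aware_consensus(x0, adjacency, byzantine, rounds=50,
                          trust_threshold=0.2, decay=0.8):
    n = len(x0)
    x = np.array(x0, dtype=float)
    trust = adjacency.astype(float)          # initial trust = 1 on every edge
    for _ in range(rounds):
        x_new = x.copy()
        for i in range(n):
            if byzantine[i]:
                x_new[i] = np.random.uniform(-10, 10)   # adversary reports noise
                continue
            nbrs = np.where(adjacency[i] > 0)[0]
            dev = np.abs(x[nbrs] - x[i])                # evidence: deviation from my estimate
            trust[i, nbrs] = decay * trust[i, nbrs] + (1 - decay) * np.exp(-dev)
            trusted = nbrs[trust[i, nbrs] >= trust_threshold]
            group = np.append(trusted, i)
            x_new[i] = x[group].mean()                  # average only over trusted neighbours
        x = x_new
    return x, trust

# toy example: 6 nodes on a ring, node 5 is Byzantine
A = np.zeros((6, 6), dtype=int)
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1
values, _ = trust_aware_consensus(np.arange(6.0), A, byzantine=[False] * 5 + [True])
print(values[:5])   # honest nodes end up close to a common value despite the adversary
```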
Proper evaluation of classifier predictive models requires selecting appropriate metrics to gauge a model's effectiveness. The Area Under the Receiver Operating Characteristic Curve (AUC) has become the de facto standard metric for evaluating classifier performance. However, recent studies have suggested that AUC is not necessarily the best metric for all types of datasets, especially those with a high or severe level of class imbalance. There is a need to assess which specific metrics are most useful for evaluating models trained on highly imbalanced big data. In this work, we evaluate the performance of eight machine learning techniques on a severely imbalanced big dataset from the cyber security domain. We analyze the behavior of six different metrics to determine which provides the best representation of a model's predictive performance, and we evaluate the impact that adjusting the classification threshold has on those metrics. Our results show that the C4.5N decision tree is the optimal learner when evaluated across all presented metrics for severely imbalanced Slow HTTP DoS attack data. Based on our results, we propose that using AUC alone as the primary metric for evaluating highly imbalanced big data may be ineffective, and that metrics such as the F-measure and the Geometric mean can offer substantial additional insight into the true performance of a given model.
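As a small illustration of the point about threshold-dependent metrics, the sketch below computes AUC alongside F-measure and Geometric mean at several classification thresholds on a synthetic, severely imbalanced dataset. The synthetic data, the logistic-regression learner, and the 1% positive rate are assumptions for illustration only; the paper itself evaluates eight learners (including C4.5N) on Slow HTTP DoS attack data.

```python
# AUC is threshold-independent; F-measure and G-mean change with the threshold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score, confusion_matrix
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=50_000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

print(f"AUC = {roc_auc_score(y_te, scores):.3f}")
for thr in (0.5, 0.1, 0.01):                       # effect of moving the threshold
    y_hat = (scores >= thr).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    gmean = np.sqrt(recall * specificity)          # Geometric mean
    print(f"thr={thr:<5} F-measure={f1_score(y_te, y_hat):.3f}  G-mean={gmean:.3f}")
```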
Atomic multicast is a communication primitive that delivers messages to multiple groups of processes according to some total order, with each group receiving the projection of the total order onto messages addressed to it. To be scalable, atomic multicast needs to be genuine, meaning that only the destination processes of a message should participate in ordering it. In this paper we propose a novel genuine atomic multicast protocol that, in the absence of failures, takes as few as 3 message delays to deliver a message when no other messages are multicast concurrently to its destination groups, and 5 message delays in the presence of concurrency. This improves on the latencies of both the fault-tolerant version of Skeen's classical multicast protocol (6 or 12 message delays, depending on concurrency) and its recent improvement by Coelho et al. (4 or 8 message delays). To achieve such low latencies, we depart from the typical way of guaranteeing fault tolerance by replicating each group with Paxos. Instead, we weave Paxos and Skeen's protocol together into a single coherent protocol, exploiting opportunities for white-box optimisations. We experimentally demonstrate that the superior theoretical characteristics of our protocol translate into practical performance pay-offs.
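For readers unfamiliar with the baseline, the sketch below is a batch illustration of the ordering rule underlying Skeen's multicast: each destination group proposes a timestamp from its logical clock, the final timestamp of a message is the maximum proposal, and every group delivers the messages addressed to it in final-timestamp order. Fault tolerance, Paxos replication, and the deliverability condition of the real protocols are deliberately omitted; the group names and message ids are illustrative.

```python
# Offline illustration of Skeen's max-timestamp ordering rule (no failures,
# no concurrency handling) -- each group's delivery sequence is a projection
# of one consistent total order.
from collections import defaultdict

def skeen_order(multicasts):
    """multicasts: list of (msg_id, set_of_destination_groups)."""
    clocks = defaultdict(int)              # one logical clock per group
    final_ts = {}
    for msg_id, groups in multicasts:      # phase 1: each group proposes a timestamp
        proposals = []
        for g in groups:
            clocks[g] += 1
            proposals.append(clocks[g])
        final_ts[msg_id] = max(proposals)  # phase 2: final timestamp = max proposal
        for g in groups:                   # groups advance their clocks to the final value
            clocks[g] = max(clocks[g], final_ts[msg_id])
    deliveries = defaultdict(list)
    for msg_id, groups in multicasts:
        for g in groups:
            deliveries[g].append(msg_id)
    for g in deliveries:                   # deliver in (final timestamp, id) order
        deliveries[g].sort(key=lambda m: (final_ts[m], m))
    return dict(deliveries)

print(skeen_order([("m1", {"A", "B"}), ("m2", {"B", "C"}), ("m3", {"A", "C"})]))
```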
To address the composite uncertainty (both ambiguity and randomness) and the high-dimensional data-stream characteristics of the evaluation indexes, this paper proposes an emergency severity assessment method for cluster supply chains based on a cloud fuzzy clustering algorithm. A summary cloud model generation algorithm is constructed, and a multi-data fusion method is applied to the cloud-model processing of the evaluation indexes for high-dimensional data streams exhibiting both ambiguity and randomness; from these, synopsis data for the emergency severity assessment indexes are extracted. Based on a time attenuation model and a sliding window model, a data stream fuzzy clustering algorithm for emergency severity assessment is established. The evaluation results are optimized according to the generalized Euclidean distances of the cluster centers and the micro-cluster weights, and the severity grade of a cluster supply chain emergency is evaluated dynamically. The experimental results show that the proposed algorithm improves clustering accuracy, reduces running time, and can provide more accurate theoretical support for early-warning decisions in cluster supply chain emergencies.
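The sketch below shows one common way to compute a cloud-model synopsis, the backward cloud generator, which turns samples of an evaluation index into (Ex, En, He): expectation, entropy, and hyper-entropy. Its use on a sliding window here is an assumption for illustration; the paper's time-attenuation model, multi-data fusion, and stream fuzzy clustering steps are not reproduced.

```python
# Backward cloud generator: estimate (Ex, En, He) from samples of one index.
import numpy as np

def backward_cloud(samples):
    x = np.asarray(samples, dtype=float)
    ex = x.mean()                                    # expectation Ex
    en = np.sqrt(np.pi / 2) * np.abs(x - ex).mean()  # entropy En
    he = np.sqrt(max(x.var(ddof=1) - en**2, 0.0))    # hyper-entropy He
    return ex, en, he

# synopsis of the latest window of a (simulated) index stream
stream = np.random.normal(loc=3.0, scale=0.5, size=10_000)
window = stream[-500:]                               # sliding window of 500 readings
print(backward_cloud(window))
```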
From signal processing to emerging deep neural networks, a range of applications exhibit intrinsic error resilience. For such applications, approximate computing opens up new possibilities for energy-efficient computing by producing slightly inaccurate results using greatly simplified hardware. Adopting this approach, a variety of basic arithmetic units, such as adders and multipliers, have been effectively redesigned to produce approximate results for many error-resilient applications. In this work, we propose SECO, an approximate exponential function unit (EFU). Exponentiation is a key operation in many signal processing applications and, more importantly, in spiking neuron models, but its energy-efficient implementation has been inadequately explored. We also introduce a cross-layer design method for SECO to optimize the energy-accuracy trade-off. At the algorithm level, SECO offers runtime scaling between energy efficiency and accuracy based on an approximate Taylor expansion, where the error is minimized by optimizing parameters with discrete gradient descent at design time. At the circuit level, our error analysis method efficiently explores the design space to select the energy-accuracy-optimal approximate multiplier at design time. In tandem, the cross-layer design and runtime optimization method generate energy-efficient and accurate approximate EFU designs that are up to 99.7% accurate at an energy cost of 3.73 pJ per exponential operation. SECO is also evaluated on the adaptive exponential integrate-and-fire neuron model, yielding only 0.002% timing error and 0.067% value error compared to the precise neuron model.
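The sketch below illustrates the runtime-scalable approximation idea in software: evaluating exp(x) with a truncated Taylor expansion whose number of terms trades accuracy for work. The fixed-point arithmetic, the approximate multiplier, and the discrete gradient-descent parameter tuning described for SECO are not modelled here.

```python
# Truncated Taylor approximation of exp(x): more terms -> more work, less error.
import math

def approx_exp(x, terms):
    """exp(x) ~= sum_{k=0}^{terms-1} x^k / k!"""
    result, term = 0.0, 1.0
    for k in range(terms):
        result += term
        term *= x / (k + 1)    # next Taylor term without recomputing factorials
    return result

x = 0.8
for terms in (2, 4, 6, 8):
    approx = approx_exp(x, terms)
    rel_err = abs(approx - math.exp(x)) / math.exp(x)
    print(f"{terms} terms: exp({x}) ~= {approx:.6f}  (relative error {rel_err:.2e})")
```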
This article presents a consensus-based distributed energy management optimization algorithm for an islanded microgrid. With the rapid development of renewable energy and distributed generation (DG), energy management is becoming increasingly distributed. To address this, a multi-agent, fully distributed solution is designed in this work that uses the lambda-iteration method to solve the underlying optimization problem. Transmission losses are also considered in the modeling process, which enhances the practicality of the proposed approach. Simulations are performed for different cases on an 8-bus microgrid to show the effectiveness of the algorithm, and a scalability test demonstrates that the algorithm extends to larger networks.
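As background, the sketch below shows the classical lambda-iteration step used in economic dispatch: for quadratic generation costs C_i(P) = a_i P^2 + b_i P + c_i, each unit's optimal output at price lambda is P_i = (lambda - b_i) / (2 a_i), clipped to its limits, and lambda is adjusted (here by bisection) until total generation meets demand. This centralised loop stands in for the quantity the agents would agree on via consensus; the cost coefficients are illustrative, and transmission losses and the multi-agent message exchange are omitted.

```python
# Lambda-iteration economic dispatch for three generating units (lossless).
import numpy as np

a = np.array([0.004, 0.006, 0.009])       # illustrative quadratic cost coefficients
b = np.array([5.3, 5.5, 5.8])
p_min = np.array([50.0, 40.0, 30.0])      # unit output limits (MW)
p_max = np.array([300.0, 250.0, 200.0])
demand = 450.0                            # total load to be served (MW)

def dispatch(lam):
    """Optimal unit outputs at marginal price lam, respecting limits."""
    return np.clip((lam - b) / (2 * a), p_min, p_max)

lo, hi = 0.0, 20.0                        # bracketing interval for lambda
for _ in range(60):                       # bisection on the power mismatch
    lam = 0.5 * (lo + hi)
    if dispatch(lam).sum() > demand:
        hi = lam
    else:
        lo = lam

print("lambda =", round(lam, 4), "MW per unit:", np.round(dispatch(lam), 2))
```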