Bibliography
Hash message authentication is a fundamental building block of many networking security protocols such as SSL, TLS, FTP, and even HTTPS. The sponge-based SHA-3 hashing algorithm is the most recently standardized hash function, the result of a NIST competition for a new hashing standard after practical collision attacks on SHA-1 raised concerns about the structurally similar SHA-2. We used Xilinx High-Level Synthesis to develop an optimized and pipelined version of the post-quantum-secure SHA-3 hash message authentication code (HMAC), which is capable of computing an HMAC every 280 clock cycles with an overall throughput of 604 Mbps. We cover the general security of sponge functions from both classical and quantum computing standpoints for hash functions, and offer a general architecture for HMAC computation when sponge functions are used.
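As a reference point for the hardware design, the generic HMAC construction over SHA-3 can be written in a few lines of software. The sketch below is a minimal software illustration, not the paper's pipelined FPGA architecture; it assumes Python's standard hashlib/hmac modules and uses SHA3-256, whose sponge rate (136 bytes) serves as the HMAC block size.

```python
import hashlib
import hmac

def hmac_sha3_256(key: bytes, message: bytes) -> bytes:
    """Generic HMAC construction instantiated with SHA3-256.

    block_size is the sponge rate in bytes (136 for SHA3-256), the unit
    over which the HMAC key padding is applied.
    """
    block_size = hashlib.sha3_256().block_size  # 136 bytes
    if len(key) > block_size:
        key = hashlib.sha3_256(key).digest()    # long keys are hashed first
    key = key.ljust(block_size, b"\x00")        # then zero-padded to one block
    ipad = bytes(b ^ 0x36 for b in key)
    opad = bytes(b ^ 0x5C for b in key)
    inner = hashlib.sha3_256(ipad + message).digest()
    return hashlib.sha3_256(opad + inner).digest()

key, msg = b"secret key", b"message to authenticate"
# Cross-check the manual construction against the standard library.
assert hmac_sha3_256(key, msg) == hmac.new(key, msg, hashlib.sha3_256).digest()
```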
The industrial cluster is an important organizational form and carrier of development for small and medium-sized enterprises, and the information service platform is an important facility of the industrial cluster. Improving the credibility of the network platform helps eliminate the adverse effects of distrust and information asymmetry on industrial clusters. The decentralization, transparency, openness, and immutability of blockchain technology make it a natural choice for the trustworthiness optimization of industrial cluster network platforms. This paper first studies the trustworthiness criteria of the industrial cluster network platform and constructs a new trusted framework for it; it then focuses on the trustworthiness optimization of the platform's data layer and application layer. The purpose of this paper is to build an industrial cluster network platform with trusted data access, trustworthy information, available functions, high speed, and low consumption, and to promote the sustainable and efficient development of industrial clusters.
To date, numerous ways have been created to learn a fusion solution from data. However, a gap exists in terms of understanding the quality of what was learned and how trustworthy the fusion is for future (i.e., new) data. In part, the current paper is driven by the demand for so-called explainable AI (XAI). Herein, we discuss methods for XAI of the Choquet integral (ChI), a parametric nonlinear aggregation function. Specifically, we review existing indices, and we introduce new data-centric XAI tools. These various XAI-ChI methods are explored in the context of fusing a set of heterogeneous deep convolutional neural networks for remote sensing.
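To make the fused quantity concrete, the discrete Choquet integral is simple to state: sort the inputs, then weight successive increments by the fuzzy measure of the set of sources still active at that level. The sketch below is a minimal illustration with a small, hypothetical fuzzy measure given explicitly as a table; it is not one of the paper's XAI indices.

```python
def choquet_integral(values, g):
    """Discrete Choquet integral of `values` (dict: source -> value) with respect
    to the fuzzy measure `g` (dict: frozenset of sources -> measure in [0, 1],
    with g[empty set] == 0 and g[all sources] == 1)."""
    order = sorted(values, key=values.get)          # sources sorted by value, ascending
    total, prev = 0.0, 0.0
    for i, src in enumerate(order):
        a_i = frozenset(order[i:])                  # sources with the i-th smallest value and above
        total += (values[src] - prev) * g[a_i]      # increment weighted by the measure of A_i
        prev = values[src]
    return total

# Toy example: three classifiers a, b, c with a hypothetical (non-additive) fuzzy measure.
g = {
    frozenset(): 0.0,
    frozenset({"a"}): 0.4, frozenset({"b"}): 0.3, frozenset({"c"}): 0.2,
    frozenset({"a", "b"}): 0.8, frozenset({"a", "c"}): 0.6, frozenset({"b", "c"}): 0.5,
    frozenset({"a", "b", "c"}): 1.0,
}
print(choquet_integral({"a": 0.9, "b": 0.6, "c": 0.7}, g))  # 0.74
```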
Matrix factorization (MF) has proved to be an effective approach to building a successful recommender system. However, most current MF-based recommenders cannot achieve high prediction accuracy due to the sparseness of the user-item matrix, and they suffer from scalability issues when applied to large-scale real-world tasks. To tackle these issues, this paper proposes a social regularization method called TrustRSNMF that incorporates users' social trust information into a nonnegative matrix factorization framework. The proposed method integrates trust statements, along with user-item ratings, as an additional information source in the recommendation model to deal with the data sparsity and cold-start issues. In order to evaluate the effectiveness of the proposed method, a number of experiments are performed on two real-world datasets. The obtained results demonstrate significant improvements of the proposed method over state-of-the-art recommendation methods.
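A minimal sketch of the general idea, trust-regularized nonnegative matrix factorization, is given below. It is not the paper's exact TrustRSNMF update rules; it assumes a squared loss on observed ratings plus a social term that pulls each user's latent factors toward the trust-weighted average of the users they trust, optimized with projected gradient steps.

```python
import numpy as np

def trust_regularized_nmf(R, T, k=10, lam=0.1, beta=0.05, lr=0.01, epochs=200, seed=0):
    """Sketch of trust-regularized nonnegative MF (not the exact TrustRSNMF updates).

    R: (m, n) rating matrix with 0 for missing entries.
    T: (m, m) row-normalized trust matrix (T[u, v] > 0 if user u trusts user v).
    Objective: squared error on observed ratings + L2 regularization (lam)
    + a social term (beta) tying each user's factors to those of trusted users.
    """
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = rng.random((m, k)) * 0.1
    V = rng.random((n, k)) * 0.1
    mask = (R > 0).astype(float)
    for _ in range(epochs):
        E = mask * (R - U @ V.T)                  # error on observed entries only
        social = U - T @ U                        # deviation from trusted users' factors
        grad_U = -E @ V + lam * U + beta * social
        grad_V = -E.T @ U + lam * V
        U = np.maximum(U - lr * grad_U, 0.0)      # projected step keeps factors nonnegative
        V = np.maximum(V - lr * grad_V, 0.0)
    return U, V
```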
In this paper, we propose to impose a multiscale contextual loss for image style transfer based on Convolutional Neural Networks (CNNs). In the traditional optimization framework, a new stylized image is synthesized by constraining its high-level CNN features to be similar to a content image and its lower-level CNN features to be similar to a style image; this, however, tends to lose many details of the content image, producing unpleasant and inconsistent distortions or artifacts. The proposed multiscale contextual loss, named the Haar loss, preserves these lost details by matching, via the wavelet transform, the features derived from the content image and the synthesized image. It enables the synthesized image to better retain the semantic information of the content image: the unpleasant distortions are effectively alleviated while the style is well preserved. In the experiments, we show that visually more consistent and simultaneously well-stylized images are generated by incorporating the multiscale contextual loss.
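The following is a minimal sketch of a single-level 2D Haar decomposition and a multiscale subband-matching loss. It operates on raw grayscale arrays for brevity, whereas the paper's Haar loss is applied to CNN-derived features; the normalization and the number of scales here are illustrative assumptions.

```python
import numpy as np

def haar_decompose(img):
    """Single-level 2D Haar transform of a grayscale array (H and W must be even).
    Returns the approximation (LL) and detail (LH, HL, HH) subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def haar_loss(content, synthesized, levels=3):
    """Sum of squared detail-subband differences across `levels` dyadic scales.
    Assumes both images have dimensions divisible by 2**levels."""
    loss = 0.0
    x, y = content.astype(float), synthesized.astype(float)
    for _ in range(levels):
        (xll, *xd), (yll, *yd) = haar_decompose(x), haar_decompose(y)
        loss += sum(np.mean((xs - ys) ** 2) for xs, ys in zip(xd, yd))
        x, y = xll, yll        # recurse on the low-frequency band for the next scale
    return loss
```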
The massive integration of Renewable Energy Sources (RES) into power systems is a major challenge, but it also provides new opportunities for network operation. For example, with a large amount of RES available at the HV subtransmission level, it is possible to exploit them as controlling resources under islanding conditions. Thus, a procedure for the off-line evaluation of islanded operation feasibility in the presence of RES is proposed. The method finds which generators and loads remain connected after islanding so as to balance the island's real power, maximizing the amount of supplied load while assuring the network's long-term security. For each possible islanding event, the set of optimal control actions (load/generation shedding) to apply in case of actual islanding is found. The procedure is formulated as a Mixed Integer Non-Linear Problem (MINLP) and is solved using Genetic Algorithms (GAs). Results, including dynamic simulations, are shown for a representative HV subtransmission grid.
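A toy version of the shedding search is sketched below: a plain genetic algorithm over a bit string deciding which generators and loads stay connected, with a fitness that rewards supplied load and penalizes real-power imbalance. The power values and the penalty weight are hypothetical, and the full MINLP constraints (e.g., long-term security and dynamic behavior) are omitted.

```python
import random

# Hypothetical island data: sheddable RES generation and individual load demands (MW).
GENS  = [12.0, 8.0, 5.0]
LOADS = [4.0, 6.0, 3.5, 7.0, 2.5]

def fitness(chrom):
    """Supplied load, penalized by the island's real-power imbalance.
    chrom is a bit string: kept generators first, then kept loads."""
    g = sum(p for p, bit in zip(GENS, chrom[:len(GENS)]) if bit)
    l = sum(p for p, bit in zip(LOADS, chrom[len(GENS):]) if bit)
    return l - 10.0 * abs(g - l)          # heavy penalty for generation/load mismatch

def ga(pop_size=40, generations=100, pmut=0.05):
    n = len(GENS) + len(LOADS)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                     # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)                   # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([1 - bit if random.random() < pmut else bit for bit in child])
        pop = parents + children
    return max(pop, key=fitness)

print(ga())   # bit string of the best shedding plan found
```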
The recently developed deep belief network (DBN) has been shown to be an effective methodology for solving time series forecasting problems. However, the performance of a DBN depends heavily on a reasonable setting of its hyperparameters. At present, random search, grid search, and Bayesian optimization are the most common hyperparameter optimization methods. As an alternative, a state-of-the-art derivative-free optimizer, negative correlation search (NCS), is adopted in this paper to decide the layer sizes of the DBN and the learning rates used during training. A comparative analysis is performed between the proposed method and other popular techniques in a time series forecasting experiment based on two types of time series datasets. The experimental results statistically affirm the efficiency of the proposed model in obtaining better predictions than conventional neural network models.
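As an illustration of derivative-free hyperparameter tuning, the sketch below uses a simple (1+1) evolution strategy over a DBN's hidden-layer size and learning rate. It is a generic stand-in, not NCS, which additionally maintains a population of search processes and explicitly rewards negatively correlated (diverse) search behavior; validation_error is a placeholder for actually training the DBN and scoring it on held-out data.

```python
import random

def validation_error(hidden_size, lr):
    """Placeholder objective: in practice, train the DBN with these
    hyperparameters and return its validation error on the time series."""
    return (hidden_size - 64) ** 2 / 1e4 + (lr - 0.01) ** 2 * 1e3

def one_plus_one_es(iters=200, seed=1):
    """Derivative-free (1+1) evolution strategy over (hidden size, learning rate)."""
    random.seed(seed)
    h, lr = 32, 0.1                                        # initial guesses
    best = validation_error(h, lr)
    for _ in range(iters):
        h2 = max(1, h + random.randint(-8, 8))             # integer mutation of layer size
        lr2 = max(1e-4, lr * 2 ** random.uniform(-1, 1))   # log-scale mutation of learning rate
        err = validation_error(h2, lr2)
        if err < best:                                     # keep the child only if it improves
            h, lr, best = h2, lr2, err
    return h, lr, best

print(one_plus_one_es())
```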
Deep learning is a highly effective machine learning technique for large-scale problems. The optimization of nonconvex functions in the deep learning literature is typically restricted to the class of first-order algorithms. These methods rely on gradient information because of the computational complexity of inverting the Hessian matrix of second derivatives and the memory storage required in large-scale data problems. The reward for using second-derivative information is improved convergence behavior at features typical of non-convex problems, such as saddle points and local minima. In this paper we introduce TRMinATR, an algorithm based on the limited-memory BFGS quasi-Newton method with a trust region, as an alternative to gradient descent methods. TRMinATR bridges the disparity between first-order and second-order methods by continuing to use gradient information to calculate Hessian approximations. We provide empirical results on the MNIST classification task and show robust convergence with preferred generalization characteristics.
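The gradient-only Hessian approximation at the heart of limited-memory BFGS is the two-loop recursion, sketched below. It produces the quasi-Newton direction from the m most recent parameter and gradient differences; the trust-region acceptance logic of TRMinATR itself is omitted.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """L-BFGS two-loop recursion: approximates -H^{-1} grad using only curvature pairs.
    s_list[i] = x_{i+1} - x_i and y_list[i] = grad_{i+1} - grad_i, oldest first."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):   # newest pair first
        alpha = rho * (s @ q)
        q -= alpha * y
        alphas.append(alpha)
    if s_list:                                    # initial Hessian scaling gamma * I
        gamma = (s_list[-1] @ y_list[-1]) / (y_list[-1] @ y_list[-1])
    else:
        gamma = 1.0
    r = gamma * q
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * (y @ r)
        r += (alpha - beta) * s
    return -r                                     # quasi-Newton descent direction
```

In a trust-region variant, this direction (or the underlying Hessian approximation) is used inside a subproblem that bounds the step length, and the radius is grown or shrunk according to how well the model predicts the actual decrease in the loss.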
Mimicking the collaborative behavior of biological swarms, such as bird flocks and ant colonies, Swarm Intelligence algorithms provide efficient solutions for various optimization problems. Meanwhile, a computational model of the human brain, the spiking neural network, has shown great promise in recognition, inference, and learning, owing to the recent emergence of neuromorphic hardware for high-efficiency, low-power computing. Bridging these two distinct research fields, we propose a novel computing paradigm that implements swarm intelligence with a population of coupled spiking neural oscillators based on the basic leaky integrate-and-fire (LIF) model. Our model behaves as a meta-heuristic search conducted by multiple collaborative agents. In this design, the oscillating neurons serve as agents in the swarm, search for solutions via frequency coding, and communicate with each other through spikes. The firing rate of each agent adapts toward agents with better solutions, and the optimal solution is rendered once swarm synchronization is reached. We apply the proposed method to parameter optimization on several test objective functions and demonstrate its effectiveness and efficiency. This new computing paradigm expands the computational power of coupled spiking neurons to the field of optimization and creates opportunities to connect individual intelligence and swarm intelligence.
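The building block of the proposed oscillator swarm is the LIF neuron, whose frequency coding the sketch below illustrates: a stronger input current yields a higher firing rate. The coupling between oscillators and the synchronization-based stopping rule are not reproduced here; the membrane parameters are generic textbook values.

```python
import numpy as np

def simulate_lif(I, dt=1e-3, tau=0.02, v_rest=0.0, v_th=1.0, v_reset=0.0, R=1.0):
    """Euler simulation of a single leaky integrate-and-fire neuron.
    I is the input current at each time step; returns the spike times (s)."""
    v, spikes = v_rest, []
    for step, i_t in enumerate(I):
        v += dt * (-(v - v_rest) + R * i_t) / tau      # leaky integration of the input
        if v >= v_th:                                   # threshold crossing produces a spike
            spikes.append(step * dt)
            v = v_reset                                 # membrane potential is reset
    return spikes

# Frequency coding: a stronger constant current drives a higher firing rate.
print(len(simulate_lif(np.full(1000, 1.5))), len(simulate_lif(np.full(1000, 3.0))))
```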
Integration of information technologies with the current power infrastructure promises something beyond a smart grid: the implementation of smart cities. Power-efficient cities will be a significant step toward greener cities and a cleaner environment. However, the extensive use of information technologies in smart cities comes at the cost of reduced privacy. In particular, consumers' power profiles become accessible to third parties seeking information about consumers' personal habits. In this paper, a methodology for enhancing the privacy of electricity consumption patterns is proposed and tested. The proposed method exploits the digital connectivity and predictive tools offered by smart grids to morph consumption patterns by grouping consumers via an optimization scheme. To that end, load anticipation, correlation, and Theil coefficients are utilized synergistically with genetic algorithms to find an optimal assembly of consumers whose aggregated pattern hides individual consumption features. Results highlight the efficiency of the proposed method in enhancing privacy in the environment of smart cities.
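Two of the named ingredients are easy to state concretely. The sketch below computes the Theil index of a consumption profile and a hypothetical grouping score that favors low correlation between a member's profile and the group aggregate; the paper's actual GA fitness, which combines load anticipation, correlation, and Theil coefficients, is not reproduced.

```python
import numpy as np

def theil_index(x):
    """Theil index of a nonnegative load profile x (assumed not all zero):
    0 means a perfectly flat profile, larger values mean a more concentrated,
    and hence more revealing, consumption pattern."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    ratios = np.where(x > 0, x / mu, 1.0)      # zero readings contribute nothing
    return float(np.mean(ratios * np.log(ratios)))

def privacy_score(member, group):
    """Hypothetical grouping score: a low correlation between a member's profile
    and the group aggregate means less individual information leaks from the
    aggregated pattern."""
    aggregate = np.sum(group, axis=0)
    corr = np.corrcoef(member, aggregate)[0, 1]
    return 1.0 - abs(corr)
```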
While significant progress has been made separately on analytics systems for scalable stochastic gradient descent (SGD) and on private SGD, none of the major scalable analytics frameworks have incorporated differentially private SGD. There are two inter-related issues behind this disconnect between research and practice: (1) low model accuracy due to the noise added to guarantee privacy, and (2) high development and runtime overhead of the private algorithms. This paper takes a first step to remedy this disconnect and proposes a private SGD algorithm that addresses both issues in an integrated manner. In contrast to the white-box approach adopted by previous work, we revisit and use the classical technique of output perturbation to devise a novel “bolt-on” approach to private SGD. While our approach trivially addresses (2), it makes (1) even more challenging. We address this challenge by providing a novel analysis of the L2-sensitivity of SGD, which allows, under the same privacy guarantees, better convergence of SGD when only a constant number of passes can be made over the data. We integrate our algorithm, as well as other state-of-the-art differentially private SGD algorithms, into Bismarck, a popular scalable SGD-based analytics system on top of an RDBMS. Extensive experiments show that our algorithm can be easily integrated, incurs virtually no overhead, scales well, and, most importantly, yields substantially better (up to 4X) test accuracy than the state-of-the-art algorithms on many real datasets.
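A minimal sketch of the output perturbation idea follows: run ordinary, non-private SGD to completion, then add noise once, to the final model, calibrated to an L2-sensitivity bound. The Gaussian-mechanism noise scale used here is the textbook formula; the paper's contribution, a tighter sensitivity analysis of SGD under a constant number of passes, is assumed to be supplied through the l2_sensitivity argument rather than reproduced.

```python
import numpy as np

def sgd(grad_fn, data, w0, lr=0.1, epochs=5, seed=0):
    """Plain (non-private) SGD: grad_fn(w, example) returns the per-example gradient."""
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(data)):
            w -= lr * grad_fn(w, data[i])
    return w

def output_perturbation(w, l2_sensitivity, eps, delta, rng=None):
    """Bolt-on privacy via the Gaussian mechanism: noise is added once, to the final
    model, with scale set by an assumed L2-sensitivity bound for the SGD output."""
    rng = rng or np.random.default_rng()
    sigma = l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w + rng.normal(0.0, sigma, size=w.shape)
```

Because privacy is enforced only on the released model, the training loop itself is untouched, which is why the approach integrates into an existing SGD-based analytics system with essentially no runtime overhead.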
Cloud computing is an extension of parallel and distributed computing. Cloud computing technology is becoming more and more widely used, and one of the fundamental issues in the cloud environment is task scheduling. However, scheduling in cloud environments is a difficult problem, since it is essentially NP-complete. Thus, many variants based on approximation techniques, especially those inspired by Swarm Intelligence (SI), have been proposed. This paper proposes a machine learning algorithm that guides the cloud in choosing a scheduling technique, using multi-criteria decision making to optimize performance. The main contribution of our work is to minimize the makespan of a given task set. The new strategy is simulated using the CloudSim toolkit, where the impact of the algorithm is checked with different numbers of VMs varying from 2 to 50 and different task sizes between 30 bytes and 2700 bytes. Experimental results show that the proposed algorithm reduces the execution time and the makespan by between 7% and 75%, and improves the performance of load-balancing scheduling.
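For reference, the makespan objective and a simple greedy baseline can be sketched as follows; this is not the paper's ML-guided multi-criteria selector, and the task sizes and VM speeds are hypothetical.

```python
def greedy_schedule(task_sizes, vm_speeds):
    """Greedy heuristic: assign each task to the VM that would finish it earliest.
    Returns the per-task VM assignment and the resulting makespan, i.e. the
    completion time of the busiest VM, which is the quantity being minimized."""
    finish = [0.0] * len(vm_speeds)                    # current finish time of each VM
    assignment = []
    for size in sorted(task_sizes, reverse=True):      # place the largest tasks first
        runtimes = [finish[v] + size / vm_speeds[v] for v in range(len(vm_speeds))]
        best = min(range(len(vm_speeds)), key=runtimes.__getitem__)
        finish[best] = runtimes[best]
        assignment.append(best)
    return assignment, max(finish)

# Five tasks (sizes in bytes) on two VMs, the second twice as fast as the first.
print(greedy_schedule([300, 2700, 900, 1500, 60], [1.0, 2.0]))
```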