Biblio

Found 303 results

Filters: Keyword is Optimization
2019-02-21
Gao, Y..  2018.  An Improved Hybrid Group Intelligent Algorithm Based on Artificial Bee Colony and Particle Swarm Optimization. 2018 International Conference on Virtual Reality and Intelligent Systems (ICVRIS). :160–163.
To address the poor convergence of the particle swarm optimization and artificial bee colony algorithms on complex optimization problems, an improved hybrid algorithm is proposed. The hybrid algorithm was tested on four standard benchmark functions and compared with the standard particle swarm optimization (PSO) algorithm and the Artificial Bee Colony (ABC) algorithm; both its convergence rate and its convergence precision are superior to those of the standard PSO and ABC algorithms, giving better overall optimization performance.
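
As a rough illustration of the basic PSO kernel that such hybrids build on, a minimal Python sketch on the sphere test function follows; the authors' ABC/PSO hybrid and its parameter settings are not reproduced here.

    # Minimal particle swarm optimization (PSO) kernel on the sphere test function.
    import numpy as np

    def sphere(x):
        return np.sum(x ** 2, axis=-1)

    def pso(obj, dim=4, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        rng = np.random.default_rng(seed)
        pos = rng.uniform(-5, 5, (n_particles, dim))
        vel = np.zeros_like(pos)
        pbest = pos.copy()                      # personal best positions
        pbest_val = obj(pos)
        gbest = pbest[np.argmin(pbest_val)].copy()   # global best position
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = pos + vel
            val = obj(pos)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest, pbest_val.min()

    best_x, best_f = pso(sphere)
    print(best_f)   # should approach 0 for the sphere function
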
2020-06-12
Latif, M. Kamran, Jacinto, H. S., Daoud, Luka, Rafla, Nader.  2018.  Optimization of a Quantum-Secure Sponge-Based Hash Message Authentication Protocol. 2018 IEEE 61st International Midwest Symposium on Circuits and Systems (MWSCAS). :984–987.

Hash message authentication is a fundamental building block of many networking security protocols such as SSL, TLS, FTP, and even HTTPS. The sponge-based SHA-3 hashing algorithm is the most recently developed hashing function, the result of a NIST competition to find a new hashing standard after SHA-1 and SHA-2 were found to have collisions and were thus considered broken. We used Xilinx High-Level Synthesis to develop an optimized and pipelined version of the post-quantum-secure SHA-3 hash message authentication code (HMAC), which is capable of computing an HMAC every 280 clock cycles with an overall throughput of 604 Mbps. We cover the general security of sponge functions from both classical and quantum computing standpoints for hash functions, and offer a general architecture for HMAC computation when sponge functions are used.
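
As a software-level illustration of keying an HMAC over the sponge-based SHA-3 hash, the following Python sketch uses only the standard library; the paper's pipelined FPGA design and its cycle/throughput figures are not reproduced, and the key and message are placeholder values.

    # HMAC over SHA3-256 using Python's standard library.
    import hmac
    import hashlib

    key = b"shared-secret-key"            # placeholder key (assumption, not from the paper)
    message = b"message to authenticate"

    tag = hmac.new(key, message, hashlib.sha3_256).hexdigest()
    print(tag)

    # The receiving side recomputes the tag and compares in constant time.
    assert hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha3_256).hexdigest())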

2019-03-11
Li, Z., Xie, X., Ma, X., Guan, Z..  2018.  Trustworthiness Optimization of Industrial Cluster Network Platform Based on Blockchain. 2018 8th International Conference on Logistics, Informatics and Service Sciences (LISS). :1–6.

Industrial clusters are an important organizational form and carrier of development for small and medium-sized enterprises, and the information service platform is an important facility of an industrial cluster. Improving the credibility of the network platform helps eliminate the adverse effects of distrust and information asymmetry on industrial clusters. The decentralization, transparency, openness, and immutability of blockchain technology make it an inevitable choice for trustworthiness optimization of the industrial cluster network platform. This paper first studies trusted standards for the industrial cluster network platform and constructs a new trusted framework for it, and then focuses on trustworthiness optimization of the platform's data layer and application layer. The purpose of this paper is to build an industrial cluster network platform with data accessibility, information trustworthiness, function availability, high speed and low consumption, and to promote the sustainable and efficient development of industrial clusters.

2020-11-02
Shen, Hanji, Long, Chun, Li, Jun, Wan, Wei, Song, Xiaofan.  2018.  A Method for Performance Optimization of Virtual Network I/O Based on DPDK-SRIOV. 2018 IEEE International Conference on Information and Automation (ICIA). :1550–1554.
Network security testing devices play important roles in cyber security. Most current network security testing devices are based on proprietary hardware; a virtualized network security tester, however, needs high network I/O throughput. This paper therefore explains a solution that provides high-performance network I/O in a virtualized setting. The proposed method for optimizing virtualized network I/O performance on a general hardware platform is able to achieve the I/O throughput of proprietary hardware. With Single Root I/O Virtualization (SR-IOV), the physical network card is divided into multiple virtual functions (VFs), which can be assigned to different virtual machines (VMs). Extensive experiments illustrate that hardware-based virtualization and sharing of the physical network card are realized and can be used through the Data Plane Development Kit (DPDK) and SR-IOV technology. Consequently, the test-instrument applications in virtual machines achieve a rate of 10 Gbps and meet the I/O requirement.
2019-11-25
Guo, Tao, Yeung, Raymond W..  2018.  The Explicit Coding Rate Region of Symmetric Multilevel Diversity Coding. 2018 Information Theory and Applications Workshop (ITA). :1–9.
It is well known that superposition coding, namely separately encoding the independent sources, is optimal for symmetric multilevel diversity coding (SMDC) (Yeung-Zhang 1999). However, the characterization of the coding rate region therein involves uncountably many linear inequalities and the constant term (i.e., the lower bound) in each inequality is given in terms of the solution of a linear optimization problem. Thus this implicit characterization of the coding rate region does not enable the determination of the achievability of a given rate tuple. In this paper, we first obtain closed-form expressions of these uncountably many inequalities. Then we identify a finite subset of inequalities that is sufficient for characterizing the coding rate region. This gives an explicit characterization of the coding rate region. We further show by the symmetry of the problem that only a much smaller subset of this finite set of inequalities needs to be verified in determining the achievability of a given rate tuple. Yet, the cardinality of this smaller set grows at least exponentially fast with L.
2019-12-30
Shirasaki, Yusuke, Takyu, Osamu, Fujii, Takeo, Ohtsuki, Tomoaki, Sasamori, Fumihito, Handa, Shiro.  2018.  Consideration of security for PLNC with untrusted relay in game theoretic perspective. 2018 IEEE Radio and Wireless Symposium (RWS). :109–112.
Physical-layer network coding (PLNC) is a highly efficient scheme for exchanging information between two nodes. Since the relay receives the superposition of the two signals sent by the two nodes, it can hardly decode any information from the received signal. Therefore, a wireless communication link that is secure against the untrusted relay is constructed. The two nodes optimize their transmit power control to maximize the secrecy capacity, but this depends on the channel state information (CSI) reported by the relay station. The untrusted relay may therefore falsify the reported CSI to extract information from the two nodes. This paper constructs a game between the two optimizations of the legitimate nodes and the untrusted relay in order to clarify the security of PLNC with an untrusted relay.
2018-12-10
Murray, B., Islam, M. A., Pinar, A. J., Havens, T. C., Anderson, D. T., Scott, G..  2018.  Explainable AI for Understanding Decisions and Data-Driven Optimization of the Choquet Integral. 2018 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). :1–8.

To date, numerous ways have been created to learn a fusion solution from data. However, a gap exists in terms of understanding the quality of what was learned and how trustworthy the fusion is for future (i.e., new) data. In part, the current paper is driven by the demand for so-called explainable AI (XAI). Herein, we discuss methods for XAI of the Choquet integral (ChI), a parametric nonlinear aggregation function. Specifically, we review existing indices, and we introduce new data-centric XAI tools. These various XAI-ChI methods are explored in the context of fusing a set of heterogeneous deep convolutional neural networks for remote sensing.
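
For readers unfamiliar with the aggregation operator being explained, the following sketch computes a discrete Choquet integral; the fuzzy-measure values are illustrative placeholders, not measures learned from data as in the XAI-ChI work.

    # Discrete Choquet integral of an input vector x with respect to a fuzzy measure g.
    import numpy as np

    def choquet_integral(x, g):
        """x: 1-D array of inputs; g: dict mapping frozenset of indices -> measure,
        with g[frozenset()] = 0 and g over all indices = 1 (monotone)."""
        x = np.asarray(x, dtype=float)
        order = np.argsort(x)              # ascending order of the inputs
        total, prev = 0.0, 0.0
        for k, idx in enumerate(order):
            subset = frozenset(int(i) for i in order[k:])   # indices with the largest values
            total += (x[idx] - prev) * g[subset]
            prev = x[idx]
        return total

    # Illustrative monotone fuzzy measure over three sources.
    g = {
        frozenset(): 0.0,
        frozenset({0}): 0.4, frozenset({1}): 0.3, frozenset({2}): 0.2,
        frozenset({0, 1}): 0.8, frozenset({0, 2}): 0.6, frozenset({1, 2}): 0.5,
        frozenset({0, 1, 2}): 1.0,
    }
    print(choquet_integral([0.7, 0.2, 0.9], g))   # approximately 0.54 with this measure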

2020-10-05
Parvin, Hashem, Moradi, Parham, Esmaeili, Shahrokh, Jalili, Mahdi.  2018.  An Efficient Recommender System by Integrating Non-Negative Matrix Factorization With Trust and Distrust Relationships. 2018 IEEE Data Science Workshop (DSW). :135–139.

Matrix factorization (MF) has proved to be an effective approach to building a successful recommender system. However, most current MF-based recommenders cannot obtain high prediction accuracy due to the sparseness of the user-item matrix. Moreover, these methods suffer from scalability issues when applied to large-scale real-world tasks. To tackle these issues, this paper proposes a social regularization method called TrustRSNMF that incorporates users' social trust information into a non-negative matrix factorization framework. The proposed method integrates trust statements along with user-item ratings as an additional information source in the recommendation model to deal with the data sparsity and cold-start issues. In order to evaluate the effectiveness of the proposed method, a number of experiments are performed on two real-world datasets. The obtained results demonstrate significant improvements of the proposed method over state-of-the-art recommendation methods.
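
As background, a plain non-negative matrix factorization of a toy rating matrix can be sketched with scikit-learn as below; the trust/distrust regularization terms that TrustRSNMF adds are not included, and treating unobserved entries as zeros is a simplification.

    # Plain NMF on a toy user-item rating matrix (no trust regularization).
    import numpy as np
    from sklearn.decomposition import NMF

    R = np.array([[5, 3, 0, 1],
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [0, 1, 5, 4]], dtype=float)   # 0 = unobserved (treated as zero here)

    model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(R)      # user latent factors
    H = model.components_           # item latent factors
    print(np.round(W @ H, 2))       # reconstructed ratings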

2018-11-19
Li, P., Zhao, L., Xu, D., Lu, D..  2018.  Incorporating Multiscale Contextual Loss for Image Style Transfer. 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC). :241–245.

In this paper, we propose to impose a multiscale contextual loss for image style transfer based on Convolutional Neural Networks (CNN). In the traditional optimization framework, a new stylized image is synthesized by constraining its high-level CNN features to be similar to a content image and its lower-level CNN features to be similar to a style image, which, however, tends to lose many details of the content image, producing unpleasing and inconsistent distortions or artifacts. The proposed multiscale contextual loss, named the Haar loss, preserves these details by matching features derived from the content image and the synthesized image via the wavelet transform. It enables the synthesized image to better retain the semantic information of the content image. More specifically, the unpleasant distortions can be effectively alleviated while the style is well preserved. In the experiments, we show that incorporating the multiscale contextual loss yields visually more consistent and simultaneously well-stylized images.
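
A single level of a 2-D Haar wavelet decomposition, the kind of multiscale representation on which such a wavelet-domain matching loss can be defined, can be sketched as follows; the paper's full Haar loss and CNN optimization loop are not reproduced.

    # One level of a 2-D Haar-style decomposition with NumPy (even-sized input).
    import numpy as np

    def haar2d(img):
        """Returns approximation (LL) and detail (LH, HL, HH) sub-bands."""
        a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
        d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical difference
        ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
        lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
        hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
        hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
        return ll, lh, hl, hh

    img = np.random.rand(8, 8)
    ll, lh, hl, hh = haar2d(img)
    print(ll.shape)   # (4, 4): half resolution in each dimension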

2020-07-16
Bovo, Cristian, Ilea, Valentin, Rolandi, Claudio.  2018.  A Security-Constrained Islanding Feasibility Optimization Model in the Presence of Renewable Energy Sources. 2018 IEEE International Conference on Environment and Electrical Engineering and 2018 IEEE Industrial and Commercial Power Systems Europe (EEEIC / I CPS Europe). :1–6.

The massive integration of Renewable Energy Sources (RES) into power systems is a major challenge, but it also provides new opportunities for network operation. For example, with a large amount of RES available at the HV subtransmission level, it is possible to exploit them as controlling resources in islanding conditions. Thus, a procedure for off-line evaluation of islanded operation feasibility in the presence of RES is proposed. The method finds which generators and loads remain connected after islanding to balance the island's real power, maximizing the amount of supplied load while assuring the network's long-term security. For each possible islanding event, the set of optimal control actions (load/generation shedding) to apply in case of actual islanding is found. The procedure is formulated as a Mixed Integer Non-Linear Problem (MINLP) and is solved using Genetic Algorithms (GAs). Results, including dynamic simulations, are shown for a representative HV subtransmission grid.

2019-02-21
Xie, S., Wang, G..  2018.  Optimization of parallel turnings using particle swarm intelligence. 2018 Tenth International Conference on Advanced Computational Intelligence (ICACI). :230–234.
Optimization of machining process parameters is an important concern in machining, given the cost of the machining process. In order to solve the optimization problem of machining process parameters in parallel turning operations, with the aim of reducing the machining cost, two PSO-based optimization approaches are proposed in this paper. Following the divide-and-conquer idea, the problem is divided into similar sub-problems, and a particle swarm optimizer is then derived to solve each sub-problem and find the optimal results. Simulations show that, compared with previously proposed optimization approaches, the two PSO-based approaches obtain optimal machining parameters that reduce both the machining cost (UC) and the computation time.
2019-12-16
Lin, Jerry Chun-Wei, Zhang, Yuyu, Chen, Chun-Hao, Wu, Jimmy Ming-Tai, Chen, Chien-Ming, Hong, Tzung-Pei.  2018.  A Multiple Objective PSO-Based Approach for Data Sanitization. 2018 Conference on Technologies and Applications of Artificial Intelligence (TAAI). :148–151.
In this paper, a multi-objective particle swarm optimization (MOPSO)-based framework is presented to find multiple solutions rather than a single one. The presented grid-based algorithm is used to assign the selection probability of each non-dominated solution for the next iteration. With the designed algorithm, it is unnecessary to pre-define the weights of the side effects for evaluation; instead, the non-dominated solutions can be discovered as alternative ways to sanitize the data. Extensive experiments are carried out on two datasets to show that the designed grid-based algorithm achieves better performance than traditional single-objective evolutionary algorithms.
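
The non-dominated (Pareto) filtering step that a multi-objective PSO uses to maintain its archive can be sketched as below; the paper's grid-based probability assignment for leader selection is not reproduced, and the objective values are illustrative.

    # Pareto (non-dominated) filtering of candidate solutions, all objectives minimized.
    import numpy as np

    def non_dominated(objectives):
        """objectives: (n_solutions, n_objectives) array. Returns indices of
        solutions not dominated by any other solution."""
        obj = np.asarray(objectives, dtype=float)
        keep = []
        for i, row in enumerate(obj):
            dominated = np.any(np.all(obj <= row, axis=1) & np.any(obj < row, axis=1))
            if not dominated:
                keep.append(i)
        return keep

    # Two side-effect objectives per candidate sanitization (illustrative values).
    scores = [[0.2, 0.9], [0.5, 0.5], [0.9, 0.1], [0.6, 0.6]]
    print(non_dominated(scores))   # [0, 1, 2]: the fourth point is dominated by the second
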
2019-03-25
Ali-Tolppa, J., Kocsis, S., Schultz, B., Bodrog, L., Kajo, M..  2018.  Self-Healing and Resilience in Future 5G Cognitive Autonomous Networks. 2018 ITU Kaleidoscope: Machine Learning for a 5G Future (ITU K). :1–8.
In the Self-Organizing Networks (SON) concept, self-healing functions are used to detect, diagnose and correct degraded states in the managed network functions or other resources. Such methods are increasingly important in future network deployments, since ultra-high reliability is one of the key requirements for the future 5G mobile networks, e.g. in critical machine-type communication. In this paper, we discuss the considerations for improving the resiliency of future cognitive autonomous mobile networks. In particular, we present an automated anomaly detection and diagnosis function for SON self-healing based on multi-dimensional statistical methods, case-based reasoning and active learning techniques. Insights from both the human expert and sophisticated machine learning methods are combined in an iterative way. Additionally, we present how a more holistic view on mobile network self-healing can improve its performance.
2019-03-06
Lin, Y., Liu, H., Xie, G., Zhang, Y..  2018.  Time Series Forecasting by Evolving Deep Belief Network with Negative Correlation Search. 2018 Chinese Automation Congress (CAC). :3839–3843.

The recently developed deep belief network (DBN) has been shown to be an effective methodology for solving time series forecasting problems. However, the performance of a DBN depends heavily on a reasonable setting of its hyperparameters. At present, random search, grid search and Bayesian optimization are the most common methods of hyperparameter optimization. As an alternative, a state-of-the-art derivative-free optimizer, negative correlation search (NCS), is adopted in this paper to decide the sizes of the DBN and the learning rates used during training. A comparative analysis is performed between the proposed method and other popular techniques in a time series forecasting experiment based on two types of time series datasets. Experimental results statistically affirm the efficiency of the proposed model, which obtains better prediction results than conventional neural network models.
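
As a hedged sketch of the hyperparameter-search framing only, the snippet below runs the plain random-search baseline over assumed DBN sizes and learning rates with a placeholder validation-error function; the NCS optimizer itself is not implemented here.

    # Random-search baseline over assumed DBN hyperparameters.
    import random

    def validation_error(hidden1, hidden2, lr):
        # Placeholder: in practice, train a DBN with these settings and return
        # its forecasting error on a validation series.
        return abs(hidden1 - 64) / 64 + abs(hidden2 - 32) / 32 + abs(lr - 0.01)

    best = None
    random.seed(0)
    for _ in range(50):                       # plain random search
        cand = (random.choice([16, 32, 64, 128, 256]),
                random.choice([8, 16, 32, 64]),
                10 ** random.uniform(-4, -1))
        err = validation_error(*cand)
        if best is None or err < best[1]:
            best = (cand, err)
    print(best)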

2019-01-21
Feng, S., Xiong, Z., Niyato, D., Wang, P., Leshem, A..  2018.  Evolving Risk Management Against Advanced Persistent Threats in Fog Computing. 2018 IEEE 7th International Conference on Cloud Networking (CloudNet). :1–6.
With its capability of supporting mobile computing demands with small delay, fog computing has gained tremendous popularity. Nevertheless, its highly virtualized environment is vulnerable to cyber attacks such as the emerging Advanced Persistent Threat attacks. In this paper, we propose a novel approach to cyber risk management for the fog computing platform. In particular, we adopt cyber-insurance as a tool for neutralizing cyber risks to the fog computing platform. We consider a fog computing platform containing a group of fog nodes. The platform is composed of three main entities, i.e., the fog computing provider, the attacker, and the cyber-insurer. The fog computing provider dynamically optimizes the allocation of its defense computing resources to improve the security of the fog computing platform. Meanwhile, the attacker dynamically adjusts the allocation of its attack resources to improve the probability of a successful attack. Additionally, to cover the potential loss due to attacks, the provider also makes a dynamic decision on the ratio of cyber-insurance purchased from the cyber-insurer for each fog node. Thereafter, the cyber-insurer accordingly determines the premium of the cyber-insurance for each fog node. In our formulated dynamic Stackelberg game, the attacker and provider act as the followers, and the cyber-insurer acts as the leader. In the lower level, we formulate an evolutionary subgame to analyze the provider's defense and cyber-insurance subscription strategies as well as the attacker's attack strategy. In the upper level, the cyber-insurer optimizes its premium determination strategy, taking into account the evolutionary equilibrium of the lower-level evolutionary subgame. We analytically prove that the evolutionary equilibrium is unique and stable. Moreover, we provide a series of insightful analytical and numerical results on the equilibrium of the dynamic Stackelberg game.
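
The lower-level evolutionary subgame is typically analyzed with replicator dynamics; a generic sketch follows, with a made-up two-strategy payoff matrix rather than the fog-computing payoffs derived in the paper.

    # Replicator dynamics over two strategies with an illustrative payoff matrix.
    import numpy as np

    def replicator_step(x, A, dt=0.01):
        """x: population shares over strategies; A: payoff matrix (row vs column)."""
        fitness = A @ x
        avg = x @ fitness
        return x + dt * x * (fitness - avg)

    A = np.array([[0.0, 1.0],      # hypothetical payoffs: strategy 0 vs (0, 1)
                  [0.5, 0.2]])     #                       strategy 1 vs (0, 1)
    x = np.array([0.5, 0.5])
    for _ in range(2000):
        x = replicator_step(x, A)
    print(x)   # shares settle at a mixed evolutionary equilibrium for this matrix
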
2020-09-28
Zhang, Xueru, Khalili, Mohammad Mahdi, Liu, Mingyan.  2018.  Recycled ADMM: Improve Privacy and Accuracy with Less Computation in Distributed Algorithms. 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). :959–965.
The alternating direction method of multipliers (ADMM) is a powerful method for solving decentralized convex optimization problems. In distributed settings, each node performs computation with its local data, and the local results are exchanged among neighboring nodes in an iterative fashion. During this iterative process the leakage of data privacy arises and can accumulate significantly over many iterations, making it difficult to balance the privacy-utility tradeoff. In this study we propose Recycled ADMM (R-ADMM), where a linear approximation is applied in every even iteration, whose solution is calculated directly using only results from the previous, odd iteration. It turns out that under such a scheme, half of the updates incur no privacy loss and require much less computation compared to the conventional ADMM. We obtain a sufficient condition for the convergence of R-ADMM and provide a privacy analysis based on objective perturbation.
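
For reference, conventional (single-machine) ADMM for the lasso problem can be sketched as below; R-ADMM's linear approximation on even iterations and its privacy analysis are not shown.

    # Conventional ADMM for lasso: minimize 0.5*||Ax - b||^2 + lam*||z||_1 with x = z.
    import numpy as np

    def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
        n = A.shape[1]
        x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
        Atb = A.T @ b
        inv = np.linalg.inv(A.T @ A + rho * np.eye(n))
        for _ in range(iters):
            x = inv @ (Atb + rho * (z - u))                                   # x-minimization
            z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)   # soft threshold
            u = u + x - z                                                     # dual update
        return z

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 10))
    true_x = np.zeros(10); true_x[:3] = [2.0, -1.5, 1.0]
    b = A @ true_x + 0.01 * rng.standard_normal(50)
    print(np.round(admm_lasso(A, b), 2))   # sparse estimate close to true_x
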
2020-07-20
Guelton, Serge, Guinet, Adrien, Brunet, Pierrick, Martinez, Juan Manuel, Dagnat, Fabien, Szlifierski, Nicolas.  2018.  [Research Paper] Combining Obfuscation and Optimizations in the Real World. 2018 IEEE 18th International Working Conference on Source Code Analysis and Manipulation (SCAM). :24–33.
Code obfuscation is the de facto standard to protect intellectual property when delivering code in an unmanaged environment. It relies on additive layers of code tangling techniques, white-box encryption calls and platform-specific or tool-specific countermeasures to make it harder for a reverse engineer to access critical pieces of data or to understand core algorithms. The literature provides plenty of different obfuscation techniques that can be used at compile time to transform data or control flow in order to provide some kind of protection against different reverse engineering scenarios. Scheduling code transformations to optimize a given metric is known as the pass scheduling problem, a problem known to be NP-hard, but solved in a practical way using hard-coded sequences that are generally satisfactory. Adding code obfuscation to the problem introduces two new dimensions. First, as a code obfuscator needs to find a balance between obfuscation and performance, pass scheduling becomes a multi-criteria optimization problem. Second, obfuscation passes transform their inputs in unconventional ways, which means some pass combinations may not be desirable or even valid. This paper highlights several issues met when blindly chaining different kinds of obfuscation and optimization passes, emphasizing the need for a formal model to combine them. It proposes a non-intrusive formalism to leverage sequential pass management techniques. The model is validated on real-world scenarios gathered during the development of an industrial-strength obfuscator on top of the LLVM compiler infrastructure.
2020-10-05
Rafati, Jacob, DeGuchy, Omar, Marcia, Roummel F..  2018.  Trust-Region Minimization Algorithm for Training Responses (TRMinATR): The Rise of Machine Learning Techniques. 2018 26th European Signal Processing Conference (EUSIPCO). :2015–2019.

Deep learning is a highly effective machine learning technique for large-scale problems. The optimization of nonconvex functions in the deep learning literature is typically restricted to the class of first-order algorithms. These methods rely on gradient information because of the computational complexity of inverting the second-derivative Hessian matrix and the memory storage required in large-scale data problems. The reward for using second-derivative information is that the methods can achieve improved convergence properties for problems typically found in a non-convex setting, such as saddle points and local minima. In this paper we introduce TRMinATR, an algorithm based on the limited-memory BFGS quasi-Newton method using a trust region, as an alternative to gradient descent methods. TRMinATR bridges the disparity between first-order and second-order methods by continuing to use gradient information to calculate Hessian approximations. We provide empirical results on the classification task of the MNIST dataset and show robust convergence with preferred generalization characteristics.
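
The limited-memory BFGS quasi-Newton baseline that TRMinATR builds on is available off the shelf; a minimal SciPy sketch on the Rosenbrock test function follows, with the trust-region variant and the MNIST training task not reproduced.

    # Limited-memory BFGS (L-BFGS-B) on the Rosenbrock test function with SciPy.
    import numpy as np
    from scipy.optimize import minimize, rosen, rosen_der

    x0 = np.array([-1.2, 1.0, 0.8, 1.1])
    res = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B",
                   options={"maxcor": 10})   # maxcor = number of stored curvature pairs
    print(res.x, res.fun)                    # converges to the minimizer at all ones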

2018-09-05
Wang, J., Shi, D., Li, Y., Chen, J., Duan, X..  2017.  Realistic measurement protection schemes against false data injection attacks on state estimators. 2017 IEEE Power Energy Society General Meeting. :1–5.
False data injection attacks (FDIA) on state estimators are a kind of imminent cyber-physical security issue. Fortunately, it has been proved that if a set of measurements is strategically selected and protected, no FDIA will remain undetectable. In this paper, the metric Return on Investment (ROI) is introduced to evaluate the overall returns of the alternative measurement protection schemes (MPS). By setting maximum total ROI as the optimization objective, the previously ignored cost-benefit issue is taken into account to derive a realistic MPS for power utilities. The optimization problem is transformed into the Steiner tree problem in graph theory, where a tree pruning based algorithm is used to reduce the computational complexity and find a quasi-optimal solution with acceptable approximations. The correctness and efficiency of the algorithm are verified by case studies.
2017-12-20
Lu, W., Jiang, Y., Yin, C., Tao, X., Lai, P..  2017.  Security beamforming algorithms in multibeam satellite systems. 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC). :1272–1277.
This paper investigates physical layer security in a multibeam satellite communication system, where each legitimate user is surrounded by one eavesdropper. First, an optimization problem is formulated to maximize the sum of the achievable secrecy rates while satisfying the on-board satellite transmit power constraint. Then, two transmit beamforming (BF) schemes, namely the zero-forcing (ZF) and the signal-to-leakage-and-noise ratio (SLNR) BF algorithms, are proposed to obtain the BF weight vectors as well as the power allocation coefficients. Finally, simulation results are provided to verify the validity of the two proposed methods and demonstrate that the SLNR BF algorithm outperforms the ZF BF algorithm.
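
A bare-bones zero-forcing beamforming sketch follows: the beam weights are taken from the pseudo-inverse of the stacked user channel matrix so that inter-user leakage is nulled. The channel values are random placeholders, and the SLNR scheme and secrecy-rate power allocation from the paper are not reproduced.

    # Zero-forcing (ZF) transmit beamforming via the channel pseudo-inverse.
    import numpy as np

    rng = np.random.default_rng(1)
    K, N = 3, 4                                   # 3 users, 4 transmit antennas
    H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

    W = np.linalg.pinv(H)                                   # N x K: column k serves user k
    W = W / np.linalg.norm(W, axis=0, keepdims=True)        # unit-norm beams

    print(np.round(np.abs(H @ W), 3))             # approximately diagonal: leakage nulled
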
2018-11-19
Yang, M., Wang, A., Sun, G., Liang, S., Zhang, J., Wang, F..  2017.  Signal Distribution Optimization for Cabin Visible Light Communications by Using Weighted Search Bat Algorithm. 2017 3rd IEEE International Conference on Computer and Communications (ICCC). :1025–1030.
With the increasing demand for travel, high-quality network service is important to people in vehicle cabins. A visible light communication (VLC) system is more appropriate than a wireless local area network considering the security, communication speed, and narrow shape of the cabin. However, VLC exhibits technical limitations, such as uneven distribution of optical signals. In this regard, we propose a novel weighted search bat algorithm (WSBA) to calculate a set of optimal power adjustment factors that reduce fluctuation in the signal distribution. Simulation results show that the fairness of the signal distribution in the cabin optimized by WSBA is better than that of the non-optimized signal distribution. Moreover, the coverage rate of WSBA is higher than those of the genetic algorithm and particle swarm optimization.
2017-12-20
Fang, Y., Dickerson, S. J..  2017.  Achieving Swarm Intelligence with Spiking Neural Oscillators. 2017 IEEE International Conference on Rebooting Computing (ICRC). :1–4.

Mimicking the collaborative behavior of biological swarms, such as bird flocks and ant colonies, Swarm Intelligence algorithms provide efficient solutions for various optimization problems. On the other hand, a computational model of the human brain, spiking neural networks, has been showing great promise in recognition, inference, and learning, owing to the recent emergence of neuromorphic hardware for highly efficient, low-power computing. Bridging these two distinct research fields, we propose a novel computing paradigm that implements swarm intelligence with a population of coupled spiking neural oscillators in the basic leaky integrate-and-fire (LIF) model. Our model behaves as a meta-heuristic search conducted by multiple collaborative agents. In this design, the oscillating neurons serve as agents in the swarm, search for solutions in frequency coding and communicate with each other through spikes. The firing rate of each agent adapts to other agents with better solutions, and the optimal solution is rendered as swarm synchronization is reached. We apply the proposed method to parameter optimization for several test objective functions and demonstrate its effectiveness and efficiency. Our new computing paradigm expands the computational power of coupled spiking neurons to the field of solving optimization problems and creates opportunities for connecting individual intelligence and swarm intelligence.
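
A minimal leaky integrate-and-fire (LIF) neuron simulation, the oscillator model from which each swarm agent is built, can be sketched as below; the coupling between oscillators and the frequency-coded search are not reproduced.

    # Single leaky integrate-and-fire (LIF) neuron driven by an input current.
    import numpy as np

    def lif(input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        """Simulate one LIF neuron; returns the membrane trace and spike times."""
        v, trace, spikes = v_rest, [], []
        for step, i_in in enumerate(input_current):
            v += dt / tau * (-(v - v_rest) + i_in)   # leaky integration
            if v >= v_thresh:                        # threshold crossing -> spike
                spikes.append(step * dt)
                v = v_reset
            trace.append(v)
        return np.array(trace), spikes

    current = np.full(1000, 1.5)                     # constant drive for 1 s
    _, spike_times = lif(current)
    print(len(spike_times))                          # firing rate grows with the drive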

2018-05-30
Alamaniotis, M., Tsoukalas, L. H., Bourbakis, N..  2017.  Anticipatory Driven Nodal Electricity Load Morphing in Smart Cities Enhancing Consumption Privacy. 2017 IEEE Manchester PowerTech. :1–6.

Integration of information technologies with the current power infrastructure promises something beyond a smart grid: the implementation of smart cities. Power-efficient cities will be a significant step toward greener cities and a cleaner environment. However, the extensive use of information technologies in smart cities comes at the cost of reduced privacy. In particular, consumers' power profiles will be accessible to third parties seeking information about consumers' personal habits. In this paper, a methodology for enhancing the privacy of electricity consumption patterns is proposed and tested. The proposed method exploits the digital connectivity and predictive tools offered by smart grids to morph consumption patterns by grouping consumers via an optimization scheme. To that end, load anticipation, correlation and Theil coefficients are utilized synergistically with genetic algorithms to find an optimal assembly of consumers whose aggregated pattern hides individual consumption features. Results highlight the efficiency of the proposed method in enhancing privacy in the environment of smart cities.

2018-06-07
Wu, Xi, Li, Fengan, Kumar, Arun, Chaudhuri, Kamalika, Jha, Somesh, Naughton, Jeffrey.  2017.  Bolt-on Differential Privacy for Scalable Stochastic Gradient Descent-based Analytics. Proceedings of the 2017 ACM International Conference on Management of Data. :1307–1322.

While significant progress has been made separately on analytics systems for scalable stochastic gradient descent (SGD) and private SGD, none of the major scalable analytics frameworks have incorporated differentially private SGD. There are two inter-related issues for this disconnect between research and practice: (1) low model accuracy due to added noise to guarantee privacy, and (2) high development and runtime overhead of the private algorithms. This paper takes a first step to remedy this disconnect and proposes a private SGD algorithm to address both issues in an integrated manner. In contrast to the white-box approach adopted by previous work, we revisit and use the classical technique of output perturbation to devise a novel “bolt-on” approach to private SGD. While our approach trivially addresses (2), it makes (1) even more challenging. We address this challenge by providing a novel analysis of the L2-sensitivity of SGD, which allows, under the same privacy guarantees, better convergence of SGD when only a constant number of passes can be made over the data. We integrate our algorithm, as well as other state-of-the-art differentially private SGD, into Bismarck, a popular scalable SGD-based analytics system on top of an RDBMS. Extensive experiments show that our algorithm can be easily integrated, incurs virtually no overhead, scales well, and most importantly, yields substantially better (up to 4X) test accuracy than the state-of-the-art algorithms on many real datasets.
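
As a hedged sketch of the output-perturbation idea only: run ordinary (non-private) SGD and add noise to the released model. The noise scale below is an assumed placeholder standing in for the paper's L2-sensitivity calibration.

    # Output perturbation sketch: plain SGD for logistic regression, then noisy release.
    import numpy as np

    def sgd_logreg(X, y, lr=0.1, epochs=5, seed=0):
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            for i in rng.permutation(len(y)):        # one pass over the data per epoch
                p = 1.0 / (1.0 + np.exp(-X[i] @ w))
                w -= lr * (p - y[i]) * X[i]
        return w

    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

    w = sgd_logreg(X, y)
    assumed_l2_sensitivity = 0.05                    # assumption for illustration only
    sigma = assumed_l2_sensitivity * 2.0             # placeholder noise multiplier
    w_private = w + rng.normal(0.0, sigma, size=w.shape)
    print(np.round(w, 2), np.round(w_private, 2))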

2018-05-02
Rjoub, G., Bentahar, J..  2017.  Cloud Task Scheduling Based on Swarm Intelligence and Machine Learning. 2017 IEEE 5th International Conference on Future Internet of Things and Cloud (FiCloud). :272–279.

Cloud computing is an extension of parallel and distributed computing. Cloud computing technology is becoming more and more widely used, and one of the fundamental issues in this environment is task scheduling. However, scheduling in cloud environments is a difficult issue since it is basically NP-complete. Thus, many variants based on approximation techniques, especially those inspired by Swarm Intelligence (SI), have been proposed. This paper proposes a machine learning algorithm that guides the cloud in choosing the scheduling technique, using multi-criteria decision making to optimize performance. The main contribution of our work is to minimize the makespan of a given task set. The new strategy is simulated using the CloudSim toolkit package, where the impact of the algorithm is checked with different numbers of VMs varying from 2 to 50 and different task sizes between 30 bytes and 2700 bytes. Experimental results show that the proposed algorithm reduces the execution time and the makespan by between 7% and 75%, and improves the performance of load-balancing scheduling.
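
As a simple non-SI baseline for the makespan objective, a longest-processing-time (LPT) greedy assignment of tasks to VMs can be sketched as below; the paper's machine-learning selector and swarm-based schedulers are not reproduced, and the task sizes are illustrative.

    # Longest-processing-time (LPT) greedy scheduling: minimize makespan across VMs.
    import heapq

    def lpt_schedule(task_sizes, n_vms):
        """Assign each task (in decreasing size) to the currently least-loaded VM."""
        loads = [(0.0, vm) for vm in range(n_vms)]   # (current load, vm id)
        heapq.heapify(loads)
        assignment = {}
        for task, size in sorted(enumerate(task_sizes), key=lambda t: -t[1]):
            load, vm = heapq.heappop(loads)
            assignment[task] = vm
            heapq.heappush(loads, (load + size, vm))
        makespan = max(load for load, _ in loads)
        return assignment, makespan

    tasks = [270, 120, 90, 300, 60, 150, 210, 30]    # illustrative task sizes
    assignment, makespan = lpt_schedule(tasks, n_vms=3)
    print(assignment, makespan)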