Biblio

Found 289 results

Filters: Keyword is Optimization
2020-12-14
Xu, S., Ouyang, Z., Feng, J..  2020.  An Improved Multi-objective Particle Swarm Optimization. 2020 5th International Conference on Computational Intelligence and Applications (ICCIA). :19–23.
For solving multi-objective optimization problems, this paper first combines the multi-objective evolutionary algorithm based on decomposition (MOEA/D), which has good convergence, with the non-dominated sorting genetic algorithm II (NSGA-II), which has good distribution, to construct a hybrid multi-objective optimization algorithm. Then, considering that population diversity needs to be improved when applying multi-objective particle swarm optimization (MOPSO) to multi-objective optimization problems, an improved MOPSO algorithm is proposed. We define a distance function between an individual and the population, and the individual with the largest distance is selected as the global optimal individual to maintain population diversity. Finally, simulation experiments are performed on the ZDT/DTLZ test functions and on track planning problems. The results indicate the better performance of the improved algorithms.
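The diversity-preserving selection described above, picking as global best the archive member farthest from the swarm, can be sketched as follows. This is a minimal illustration in which the distance of an individual to the population is assumed to be its mean Euclidean distance to all swarm members; the paper's exact distance function may differ.

```python
import numpy as np

def select_global_best(archive, population):
    """Pick the archive member farthest from the swarm as the global best.

    Sketch of the diversity-preserving selection: the distance of an
    individual to the population is assumed here to be its mean Euclidean
    distance to all population members (the paper's definition may differ).
    """
    distances = [np.mean(np.linalg.norm(population - a, axis=1)) for a in archive]
    return archive[int(np.argmax(distances))]

# Usage: archive holds non-dominated solutions, population the current swarm.
archive = np.random.rand(20, 2)
population = np.random.rand(50, 2)
gbest = select_global_best(archive, population)
```
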
Cai, L., Hou, Y., Zhao, Y., Wang, J..  2020.  Application research and improvement of particle swarm optimization algorithm. 2020 IEEE International Conference on Power, Intelligent Computing and Systems (ICPICS). :238–241.
Particle swarm optimization (PSO), as a kind of swarm intelligence algorithm, has the advantages of a simple algorithm principle, few parameters to tune, and easy implementation. Many scholars have applied PSO to various fields and successfully solved linear problems, nonlinear problems, multi-objective optimization problems, and others. However, the algorithm also has obvious shortcomings, such as slow convergence, premature maturity, and falling into local optima early, so that the accuracy of the optimal value found is not high and the optimization effect is not ideal. Therefore, many scholars have improved the particle swarm optimization algorithm. Taking into account the improvement ideas proposed by scholars in earlier work and the shortcomings that remain in those improvements, this paper puts forward directions for improving the particle swarm optimization algorithm in the future.
Gu, Y., Liu, N..  2020.  An Adaptive Grey Wolf Algorithm Based on Population System and Bacterial Foraging Algorithm. 2020 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA). :744–748.
In this paper, a modified grey wolf optimization algorithm in the family of swarm intelligence optimization algorithms is proposed, called the adaptive grey wolf algorithm (AdGWO), based on a population system and the bacterial foraging optimization algorithm (BFO). In view of the disadvantages of premature convergence and getting trapped in local optima when solving complex optimization problems, the AdGWO algorithm uses a three-stage nonlinear function to model the decrease of the convergence factor, and at the same time integrates the half-elimination mechanism of BFO. These improvements are more in line with the actual behavior of natural wolf packs. The algorithm is evaluated on 23 well-known test functions and compared with GWO. Experimental results demonstrate that this algorithm is able to avoid sinking into local optima, has good accuracy and stability, and is a more competitive algorithm.
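The abstract specifies a three-stage nonlinear schedule for the GWO convergence factor (which decreases from 2 to 0, linearly in standard GWO) but not its exact form; the sketch below shows one plausible piecewise schedule purely for illustration, with assumed stage boundaries and curves.

```python
import numpy as np

def convergence_factor(t, t_max):
    """Three-stage nonlinear decay of the GWO convergence factor a (2 -> 0).

    Illustrative only: the abstract states a three-stage nonlinear schedule
    but not its exact form, so the stage boundaries and curve shapes below
    are assumptions. Standard GWO uses the linear schedule a = 2*(1 - t/t_max).
    """
    r = t / t_max
    if r < 1 / 3:        # early stage: slow decay favours exploration
        return 2.0 - (3 * r) ** 2
    elif r < 2 / 3:      # middle stage: roughly linear transition
        return 1.0 - 0.5 * (3 * r - 1)
    else:                # late stage: fast decay favours exploitation
        return 0.5 * (3 - 3 * r) ** 2

# Example: inspect the schedule over 300 iterations.
schedule = [convergence_factor(t, 300) for t in range(300)]
```
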
Tousi, S. Mohamad Ali, Mostafanasab, A., Teshnehlab, M..  2020.  Design of Self Tuning PID Controller Based on Competitional PSO. 2020 4th Conference on Swarm Intelligence and Evolutionary Computation (CSIEC). :022–026.
In this work, a new particle swarm optimization (PSO)-based optimization algorithm using the idea of a running match is introduced and employed in the design of a PID controller for a non-linear system. This algorithm modifies the velocity-update formula of the general PSO method to increase the diversity of the search process. In the process of designing an optimal PID controller for a non-linear system, the three gains of the PID controller form a particle, a parameter vector that is updated iteratively; many such particles then form a population. To reach the optimum PID gains, the positions of all particles of the population are moved in the optimization direction using the modified velocity-update and position-update formulas. Meanwhile, an objective function is minimized as the performance of the controller improves. To corroborate the functioning of this method, a non-linear system, the inverted pendulum, is controlled by the designed PID controller. The results confirm that the new method shows excellent performance in the non-linear PID controller design task.
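A minimal sketch of the PSO-for-PID idea described above: each particle is the gain vector [Kp, Ki, Kd] and is scored by simulating a closed loop. The first-order plant, the integral-absolute-error cost, and the plain global-best update rule used here are assumptions for illustration; the paper uses an inverted pendulum and its own modified velocity-update formula.

```python
import numpy as np

def pid_fitness(gains, dt=0.01, steps=1000):
    """Score a particle [Kp, Ki, Kd] by simulating a simple closed loop.

    Sketch only: a first-order plant (y' = -y + u) and an integral-absolute-
    error cost are assumed; the paper controls an inverted pendulum.
    """
    kp, ki, kd = gains
    y, integral, prev_err, cost = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        err = 1.0 - y                        # unit step reference
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        y += dt * (-y + u)
        prev_err = err
        cost += abs(err) * dt
        if not np.isfinite(y) or abs(y) > 1e6:
            return 1e6                       # penalise diverging controllers
    return cost

# Plain global-best PSO over the three gains (simplified update rule).
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(30, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([pid_fitness(p) for p in pos])
gbest = pbest[pbest_cost.argmin()]
for _ in range(50):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 10.0)
    cost = np.array([pid_fitness(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()]
print("best gains [Kp, Ki, Kd]:", gbest, "cost:", pbest_cost.min())
```
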
Deng, M., Wu, X., Feng, P., Zeng, W..  2020.  Sparse Support Vector Machine for Network Behavior Anomaly Detection. 2020 IEEE 8th International Conference on Information, Communication and Networks (ICICN). :199–204.
Network behavior anomaly detection (NBAD) requires fast mechanisms for learning from large-scale data. However, the training speed of general machine learning approaches is largely limited by having to train weights for all features in NBAD. In this paper, we observe that the weight vector over NBAD features is sparse, so it is not necessary to retain all weights. Hence, we consider an efficient support vector machine (SVM) approach for NBAD by imposing an ℓ1-norm penalty. Essentially, we propose to use a sparse SVM (S-SVM), where sparsity in the model, i.e., in the weights, is used to perform feature selection, so that feature selection and classification are achieved efficiently.
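The ℓ1-penalized SVM idea can be illustrated with scikit-learn's LinearSVC, used here as a stand-in for the paper's S-SVM formulation; the synthetic data and hyperparameters are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Synthetic stand-in for NBAD feature vectors: only a few features matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))
y = (X[:, 0] + 0.5 * X[:, 3] - X[:, 7] > 0).astype(int)

# The l1 penalty drives most weights to exactly zero, giving implicit
# feature selection alongside classification (cf. the S-SVM idea).
clf = LinearSVC(penalty="l1", loss="squared_hinge", dual=False, C=0.1, max_iter=5000)
clf.fit(X, y)

selected = np.flatnonzero(np.abs(clf.coef_[0]) > 1e-6)
print("non-zero weights:", selected, "train accuracy:", clf.score(X, y))
```
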
2020-12-07
Jeong, T., Mandal, A..  2018.  Flexible Selecting of Style to Content Ratio in Neural Style Transfer. 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA). :264–269.

Humans have created pioneering works of art since the beginning of time, yet there are few notable achievements by artificial intelligence in creating something visually captivating in the field of art. However, some breakthroughs were made in the past few years by learning the differences between the content and style of an image using convolutional neural networks and texture synthesis. Most of these approaches, though, are limited in processing time, in the choice of style image, or in the ability to alter the weight ratio of the style image. We therefore address these restrictions and provide a system that allows any style image to be selected with a user-defined style weight ratio in the minimum time possible.
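In Gatys-style transfer, a user-defined style-to-content ratio amounts to weighting the content and style losses with tunable coefficients. A hedged PyTorch-flavoured sketch follows; the dummy tensors stand in for CNN (e.g. VGG) activations, and this is not the authors' system.

```python
import torch

def total_loss(content_feat, style_gram, gen_content_feat, gen_gram,
               alpha=1.0, beta=1e3):
    """Weighted combination of content and style terms (Gatys-style).

    alpha/beta is the user-chosen content-to-style ratio the paper exposes;
    the feature/Gram tensors are assumed to come from a pretrained CNN.
    """
    content_loss = torch.nn.functional.mse_loss(gen_content_feat, content_feat)
    style_loss = torch.nn.functional.mse_loss(gen_gram, style_gram)
    return alpha * content_loss + beta * style_loss

# Example with dummy tensors standing in for CNN activations.
gram = lambda f: torch.einsum("bchw,bdhw->bcd", f, f) / f[0, 0].numel()
c = torch.randn(1, 512, 32, 32)
g = torch.randn(1, 512, 32, 32, requires_grad=True)
loss = total_loss(c, gram(torch.randn(1, 512, 32, 32)), g, gram(g))
loss.backward()   # gradients w.r.t. the generated feature tensor g
```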

2020-12-02
Abeysekara, P., Dong, H., Qin, A. K..  2019.  Machine Learning-Driven Trust Prediction for MEC-Based IoT Services. 2019 IEEE International Conference on Web Services (ICWS). :188—192.

We propose a distributed machine-learning architecture to predict the trustworthiness of sensor services in Mobile Edge Computing (MEC)-based Internet of Things (IoT) services, which aligns well with the goals of MEC and the requirements of modern IoT systems. The proposed machine-learning architecture models the training of a distributed trust prediction model over a topology of MEC environments as a Network Lasso problem, which allows simultaneous clustering and optimization on large-scale networked graphs. We then attempt to solve it using the Alternating Direction Method of Multipliers (ADMM) in a way that makes it suitable for MEC-based IoT systems. We present analytical and simulation results to show the validity and efficiency of the proposed solution.
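For reference, the Network Lasso problem that the distributed training is mapped onto has the standard form below (generic notation, not necessarily the authors' symbols):

```latex
\min_{\{x_i\}} \;\; \sum_{i \in \mathcal{V}} f_i(x_i)
  \;+\; \lambda \sum_{(j,k) \in \mathcal{E}} w_{jk}\, \lVert x_j - x_k \rVert_2
```

Here $f_i$ is the local training loss at MEC environment $i$ and the weighted edge penalty simultaneously clusters and regularizes the per-node models; ADMM decomposes the problem into per-node and per-edge updates, which is what makes it amenable to the distributed MEC setting described in the abstract.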

2020-12-01
Wang, S., Mei, Y., Park, J., Zhang, M..  2019.  A Two-Stage Genetic Programming Hyper-Heuristic for Uncertain Capacitated Arc Routing Problem. 2019 IEEE Symposium Series on Computational Intelligence (SSCI). :1606—1613.

Genetic Programming Hyper-heuristic (GPHH) has been successfully applied to automatically evolve effective routing policies to solve the complex Uncertain Capacitated Arc Routing Problem (UCARP). However, GPHH typically ignores the interpretability of the evolved routing policies. As a result, GP-evolved routing policies are often very complex and hard for human users to understand and trust. In this paper, we aim to improve the interpretability of the GP-evolved routing policies. To this end, we propose a new Multi-Objective GP (MOGP) to optimise performance and size simultaneously. A major issue here is that size is much easier to optimise than performance, so the search tends to be biased towards small but poor routing policies. To address this issue, we propose a simple yet effective Two-Stage GPHH (TS-GPHH). In the first stage, only performance is optimised. Then, in the second stage, both objectives are considered (using our new MOGP). The experimental results showed that TS-GPHH could obtain much smaller and more interpretable routing policies than the state-of-the-art single-objective GPHH, without deteriorating performance. Compared with traditional MOGP, TS-GPHH can obtain a much better and more widespread Pareto front.

2020-11-30
Ray, K., Banerjee, A., Mohalik, S. K..  2019.  Web Service Selection with Correlations: A Feature-Based Abstraction Refinement Approach. 2019 IEEE 12th Conference on Service-Oriented Computing and Applications (SOCA). :33–40.
In this paper, we address the web service selection problem for linear workflows. Given a linear workflow specifying a set of ordered tasks and a set of candidate services providing different features for each task, the selection problem deals with the objective of selecting the most eligible service for each task, given the ordering specified. A number of approaches to solving the selection problem have been proposed in literature. With web services growing at an incredible pace, service selection at the Internet scale has resurfaced as a problem of recent research interest. In this work, we present our approach to the selection problem using an abstraction refinement technique to address the scalability limitations of contemporary approaches. Experiments on web service benchmarks show that our approach can add substantial performance benefits in terms of space when compared to an approach without our optimization.
Cheng, D., Zhou, X., Ding, Z., Wang, Y., Ji, M..  2019.  Heterogeneity Aware Workload Management in Distributed Sustainable Datacenters. IEEE Transactions on Parallel and Distributed Systems. 30:375–387.
The tremendous growth of cloud computing and large-scale data analytics highlight the importance of reducing datacenter power consumption and environmental impact of brown energy. While many Internet service operators have at least partially powered their datacenters by green energy, it is challenging to effectively utilize green energy due to the intermittency of renewable sources, such as solar or wind. We find that the geographical diversity of internet-scale services can be carefully scheduled to improve the efficiency of applying green energy in datacenters. In this paper, we propose a holistic heterogeneity-aware cloud workload management approach, sCloud, that aims to maximize the system goodput in distributed self-sustainable datacenters. sCloud adaptively places the transactional workload to distributed datacenters, allocates the available resource to heterogeneous workloads in each datacenter, and migrates batch jobs across datacenters, while taking into account the green power availability and QoS requirements. We formulate the transactional workload placement as a constrained optimization problem that can be solved by nonlinear programming. Then, we propose a batch job migration algorithm to further improve the system goodput when the green power supply varies widely at different locations. Finally, we extend sCloud by integrating a flexible batch job manager to dynamically control the job execution progress without violating the deadlines. We have implemented sCloud in a university cloud testbed with real-world weather conditions and workload traces. Experimental results demonstrate sCloud can achieve near-to-optimal system performance while being resilient to dynamic power availability. sCloud with the flexible batch job management approach outperforms a heterogeneity-oblivious approach by 37 percent in improving system goodput and 33 percent in reducing QoS violations.
2020-11-20
Sun, Y., Wang, J., Lu, Z..  2019.  Asynchronous Parallel Surrogate Optimization Algorithm Based on Ensemble Surrogating Model and Stochastic Response Surface Method. :74—84.
Surrogate model-based optimization algorithms remain an important solution to expensive black-box function optimization. The introduction of an ensemble model enables the algorithm to automatically choose a proper model-integration mode and adapt to various parameter spaces when dealing with different problems. However, this also significantly increases the computational burden of the algorithm. On the other hand, utilizing parallel computing resources and improving the efficiency of black-box function optimization also require combination with a surrogate optimization algorithm in order to design and realize an efficient parallel parameter-space sampling mechanism. This paper makes use of parallel computing technology to speed up the weight-updating computation for the ensemble model based on Dempster-Shafer theory, and combines it with the stochastic response surface method to develop a novel parallel sampling mechanism for asynchronous parameter optimization. Furthermore, it designs and implements a corresponding parallel computing framework and applies the developed algorithm to quantitative trading strategy tuning in financial markets. It is verified that the algorithm is both feasible and effective in actual application. The experiments demonstrate that, while guaranteeing optimization performance, the parallel optimization algorithm achieves an excellent accelerating effect.
2020-11-17
Agadakos, I., Ciocarlie, G. F., Copos, B., George, J., Leslie, N., Michaelis, J..  2019.  Security for Resilient IoBT Systems: Emerging Research Directions. IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). :1—6.

Continued advances in IoT technology have prompted new investigation into its usage for military operations, both to augment and complement existing military sensing assets and support next-generation artificial intelligence and machine learning systems. Under the emerging Internet of Battlefield Things (IoBT) paradigm, a multitude of operational conditions (e.g., diverse asset ownership, degraded networking infrastructure, adversary activities) necessitate the development of novel security techniques, centered on establishment of trust for individual assets and supporting resilience of broader systems. To advance current IoBT efforts, a set of research directions are proposed that aim to fundamentally address the issues of trust and trustworthiness in contested battlefield environments, building on prior research in the cybersecurity domain. These research directions focus on two themes: (1) Supporting trust assessment for known/unknown IoT assets; (2) Ensuring continued trust of known IoBT assets and systems.

2020-11-04
Khalid, F., Hanif, M. A., Rehman, S., Ahmed, R., Shafique, M..  2019.  TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks. 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design (IOLTS). :188—193.

Most data manipulation attacks on deep neural networks (DNNs) during the training stage introduce a perceptible noise that can be countered by preprocessing during inference, or can be identified during the validation phase. Therefore, data poisoning attacks during inference (e.g., adversarial attacks) are becoming more popular. However, many of them do not consider the imperceptibility factor in their optimization algorithms, and can be detected by correlation and structural similarity analysis, or are noticeable (e.g., by humans) in multi-level security systems. Moreover, the majority of inference attacks rely on some knowledge about the training dataset. In this paper, we propose a novel methodology which automatically generates imperceptible attack images by using the back-propagation algorithm on pre-trained DNNs, without requiring any information about the training dataset (i.e., completely training data-unaware). We present a case study on traffic sign detection using VGGNet trained on the German Traffic Sign Recognition Benchmark dataset in an autonomous driving use case. Our results demonstrate that the generated attack images successfully perform misclassification while remaining imperceptible in both “subjective” and “objective” quality tests.
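The core of crafting attack images by back-propagating through a pre-trained DNN can be illustrated with a single signed-gradient step (FGSM-style). This is a simplified stand-in: TrISec additionally optimizes the perturbation under imperceptibility (correlation/structural-similarity) constraints, which the sketch omits.

```python
import torch

def gradient_attack(model, image, label, eps=2 / 255):
    """One back-propagation step toward misclassification (FGSM-style).

    Simplified stand-in for the paper's approach: the actual TrISec method
    enforces imperceptibility constraints, whereas this sketch only takes a
    single signed-gradient step on a pre-trained model.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    adv = image + eps * image.grad.sign()   # move along the loss gradient
    return adv.clamp(0, 1).detach()

# Usage with a pretrained classifier (e.g. a torchvision VGG) and a batch of
# normalised images `x` with ground-truth labels `y`:
#   adv_x = gradient_attack(model, x, y)
```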

2020-11-02
Shen, Hanji, Long, Chun, Li, Jun, Wan, Wei, Song, Xiaofan.  2018.  A Method for Performance Optimization of Virtual Network I/O Based on DPDK-SRIOV. 2018 IEEE International Conference on Information and Automation (ICIA). :1550—1554.
Network security testing devices play important roles in cyber security. Most current network security testing devices are based on proprietary hardware; a virtualized network security tester, however, needs high network I/O throughput. This paper therefore explains a solution that provides high-performance network I/O in virtualized scenarios. The method we propose for virtualized network I/O performance optimization on a general hardware platform is able to achieve the I/O throughput of proprietary hardware. With Single Root I/O Virtualization (SR-IOV), the physical network card is divided into a plurality of virtual functions (VFs), which can be assigned to different VMs. Extensive experiments illustrate that hardware-based virtualization and physical network card sharing are realized and can be used through the Data Plane Development Kit (DPDK) and SR-IOV technology. Consequently, the test instrument applications in virtual machines achieve a rate of 10 Gbps and meet the I/O requirement.
2020-10-26
Uyan, O. Gokhan, Gungor, V. Cagri.  2019.  Lifetime Analysis of Underwater Wireless Networks Concerning Privacy with Energy Harvesting and Compressive Sensing. 2019 27th Signal Processing and Communications Applications Conference (SIU). :1–4.
Underwater sensor networks (UWSN) are a division of classical wireless sensor networks (WSN), designed to accomplish both military and civil operations, such as intrusion detection and underwater life monitoring. Underwater sensor nodes operate using the energy provided by integrated, limited batteries, and it is a serious challenge to replace a battery under the water, especially in harsh conditions and with a high number of sensor nodes. Here, energy efficiency emerges as a very important issue. Besides energy efficiency, data privacy is another essential topic, since UWSN typically generate delicate sensing data. UWSN can be vulnerable to silent positioning and listening, in which similar adversary nodes are injected at locations close to the network to sniff transmitted data. In this paper, we discuss the usage of compressive sensing (CS) and energy harvesting (EH) to improve the lifetime of the network, and we suggest a novel encryption decision method to maintain the privacy of UWSN. We also deploy a Mixed Integer Programming (MIP) model to optimize the encryption decision cases, which leads to an improved network lifetime.
2020-10-05
Rafati, Jacob, DeGuchy, Omar, Marcia, Roummel F..  2018.  Trust-Region Minimization Algorithm for Training Responses (TRMinATR): The Rise of Machine Learning Techniques. 2018 26th European Signal Processing Conference (EUSIPCO). :2015—2019.

Deep learning is a highly effective machine learning technique for large-scale problems. The optimization of nonconvex functions in deep learning literature is typically restricted to the class of first-order algorithms. These methods rely on gradient information because of the computational complexity associated with the second derivative Hessian matrix inversion and the memory storage required in large scale data problems. The reward for using second derivative information is that the methods can result in improved convergence properties for problems typically found in a non-convex setting such as saddle points and local minima. In this paper we introduce TRMinATR - an algorithm based on the limited memory BFGS quasi-Newton method using trust region - as an alternative to gradient descent methods. TRMinATR bridges the disparity between first order methods and second order methods by continuing to use gradient information to calculate Hessian approximations. We provide empirical results on the classification task of the MNIST dataset and show robust convergence with preferred generalization characteristics.

Parvin, Hashem, Moradi, Parham, Esmaeili, Shahrokh, Jalili, Mahdi.  2018.  An Efficient Recommender System by Integrating Non-Negative Matrix Factorization With Trust and Distrust Relationships. 2018 IEEE Data Science Workshop (DSW). :135—139.

Matrix factorization (MF) has proved to be an effective approach to building a successful recommender system. However, most current MF-based recommenders cannot obtain high prediction accuracy due to the sparseness of the user-item matrix. Moreover, these methods suffer from scalability issues when applied to large-scale real-world tasks. To tackle these issues, in this paper a social regularization method called TrustRSNMF is proposed that incorporates the social trust information of users into a nonnegative matrix factorization framework. The proposed method integrates trust statements along with user-item ratings as an additional information source into the recommendation model to deal with the data sparsity and cold-start issues. In order to evaluate the effectiveness of the proposed method, a number of experiments are performed on two real-world datasets. The obtained results demonstrate significant improvements of the proposed method compared to state-of-the-art recommendation methods.
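A generic trust-regularized nonnegative matrix factorization objective of the kind the abstract describes is shown below; this is the common social-regularization form, not necessarily the exact TrustRSNMF formulation.

```latex
\min_{U \ge 0,\, V \ge 0}\;
  \sum_{(u,i) \in \Omega} \bigl( r_{ui} - U_u V_i^{\top} \bigr)^2
  \;+\; \lambda \bigl( \lVert U \rVert_F^2 + \lVert V \rVert_F^2 \bigr)
  \;+\; \beta \sum_{u} \sum_{v \in T(u)} t_{uv}\, \lVert U_u - U_v \rVert_2^2
```

Here $\Omega$ is the set of observed ratings, $T(u)$ the trusted neighbours of user $u$, and $t_{uv}$ the trust weight; the last term pulls the latent factors of trusting users together, which is how trust statements mitigate data sparsity and cold-start.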

2020-09-28
Park, Seok-Hwan, Simeone, Osvaldo, Shamai Shitz, Shlomo.  2018.  Optimizing Spectrum Pooling for Multi-Tenant C-RAN Under Privacy Constraints. 2018 IEEE 19th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC). :1–5.
This work studies the optimization of spectrum pooling for the downlink of a multi-tenant Cloud Radio Access Network (C-RAN) system in the presence of inter-tenant privacy constraints. The spectrum available for downlink transmission is partitioned into private and shared subbands, and the participating operators cooperate to serve the user equipments (UEs) on the shared subband. The network of each operator consists of a cloud processor (CP) that is connected to proprietary radio units (RUs) by means of finite-capacity fronthaul links. In order to enable inter-operator cooperation, the CPs of the participating operators are also connected by finite-capacity backhaul links. Inter-operator cooperation may hence result in loss of privacy. The problem of optimizing the bandwidth allocation, precoding, and fronthaul/backhaul compression strategies is tackled under constraints on backhaul and fronthaul capacity, as well as on per-RU transmit power and inter-operator privacy.
Zhang, Xueru, Khalili, Mohammad Mahdi, Liu, Mingyan.  2018.  Recycled ADMM: Improve Privacy and Accuracy with Less Computation in Distributed Algorithms. 2018 56th Annual Allerton Conference on Communication, Control, and Computing (Allerton). :959–965.
Alternating direction method of multipliers (ADMM) is a powerful method to solve decentralized convex optimization problems. In distributed settings, each node performs computation with its local data, and the local results are exchanged among neighboring nodes in an iterative fashion. During this iterative process, leakage of data privacy arises and can accumulate significantly over many iterations, making it difficult to balance the privacy-utility tradeoff. In this study we propose Recycled ADMM (R-ADMM), where a linear approximation is applied to every even iteration, with its solution directly calculated using only results from the previous, odd iteration. It turns out that under such a scheme, half of the updates incur no privacy loss and require much less computation compared to the conventional ADMM. We obtain a sufficient condition for the convergence of R-ADMM and provide a privacy analysis based on objective perturbation.
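For context, the conventional (scaled-form) ADMM iteration that R-ADMM modifies, for a problem $\min_{x,z} f(x)+g(z)$ subject to $Ax+Bz=c$, is:

```latex
\begin{aligned}
x^{k+1} &= \arg\min_{x}\; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k} \rVert_2^2, \\
z^{k+1} &= \arg\min_{z}\; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k} \rVert_2^2, \\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c .
\end{aligned}
```

As described in the abstract, R-ADMM replaces the exact minimization in every even iteration with a linear approximation whose solution is computed directly from the preceding odd iterate, which is why half of the updates incur no additional privacy loss and require far less computation.
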
2020-09-21
Zhang, Bing, Zhao, Yongli, Yan, Boyuan, Yan, Longchuan, WANG, YING, Zhang, Jie.  2019.  Failure Disposal by Interaction of the Cross-Layer Artificial Intelligence on ONOS-Based SDON Platform. 2019 Optical Fiber Communications Conference and Exhibition (OFC). :1–3.
We propose a new architecture introducing AI to span the control layer and the data layer in SDON. This demonstration shows the cooperation of the AI engines in two layers in dealing with failure disposal.
2020-09-08
Ma, Zhaohui, Yang, Yan.  2019.  Optimization Strategy of Flow Table Storage Based on “Betweenness Centrality”. 2019 IEEE International Conference on Power Data Science (ICPDS). :76–79.
With the gradual progress of cloud computing, big data, network virtualization and other network technologies, the traditional network architecture can no longer support this huge volume of business. To address this, the Clean Slate team defined a new network architecture, SDN (Software Defined Network), which has brought about tremendous changes in the development of today's networks. The controller sends the flow table down to the switch, and data flows are forwarded by matching flow table entries. However, the flow table resources of current SDN switches are very limited. Therefore, this paper studies the latest domestic and international work on SDN flow table optimization, proposes an efficient flow table entry optimization scheme based on betweenness centrality together with a main-path selection algorithm, and realizes related applications by setting up an experimental topology. Experiments show that this scheme can greatly reduce the number of flow table entries in switches, and the more hosts there are in the topology, the more obvious the effect is. The experiments also show that the optimization success rate is over 80%.
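A minimal sketch of the betweenness-centrality idea using networkx: switches lying on many shortest paths are ranked first when deciding where flow table entries are most worth installing or retaining. The random topology and the cut-off of five switches are illustrative assumptions, not taken from the paper.

```python
import networkx as nx

# Sketch: rank switches by betweenness centrality so that flow entries for
# the "main roads" (high-betweenness nodes) are placed or kept first.
G = nx.erdos_renyi_graph(n=20, p=0.2, seed=1)
bc = nx.betweenness_centrality(G)

main_switches = [n for n, c in sorted(bc.items(), key=lambda kv: -kv[1])[:5]]
print("high-betweenness switches for flow-table placement:", main_switches)
```
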
2020-09-04
Zhao, Pu, Liu, Sijia, Chen, Pin-Yu, Hoang, Nghia, Xu, Kaidi, Kailkhura, Bhavya, Lin, Xue.  2019.  On the Design of Black-Box Adversarial Examples by Leveraging Gradient-Free Optimization and Operator Splitting Method. 2019 IEEE/CVF International Conference on Computer Vision (ICCV). :121—130.
Robust machine learning is currently one of the most prominent topics which could potentially help shape a future of advanced AI platforms that not only perform well in average cases but also in worst cases or adverse situations. Despite this long-term vision, however, existing studies on black-box adversarial attacks are still restricted to very specific threat-model settings (e.g., a single distortion metric and restrictive assumptions on the target model's feedback to queries) and/or suffer from prohibitively high query complexity. To push for further advances in this field, we introduce a general framework based on an operator splitting method, the alternating direction method of multipliers (ADMM), to devise efficient, robust black-box attacks that work with various distortion metrics and feedback settings without incurring high query complexity. Due to the black-box nature of the threat model, the proposed ADMM solution framework is integrated with zeroth-order (ZO) optimization and Bayesian optimization (BO), and thus is applicable to the gradient-free regime. This results in two new black-box adversarial attack generation methods, ZO-ADMM and BO-ADMM. Our empirical evaluations on image classification datasets show that our proposed approaches have much lower function query complexities compared to state-of-the-art attack methods, but achieve very competitive attack success rates.
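In the gradient-free regime, attacks of this kind typically rely on a finite-difference (zeroth-order) gradient estimate built purely from function queries. A standard two-point random estimator is sketched below; the exact estimator used inside ZO-ADMM may differ.

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, q=20, rng=None):
    """Two-point random gradient estimate using only function evaluations.

    Implements grad f(x) ~ (d / (mu * q)) * sum_i [f(x + mu*u_i) - f(x)] * u_i
    with u_i random unit directions -- a standard zeroth-order estimator;
    the variant used in the paper may differ.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    d = x.size
    fx = f(x)
    grad = np.zeros(d)
    for _ in range(q):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)
        grad += (f(x + mu * u) - fx) * u
    return d * grad / (mu * q)

# Quick check on f(v) = ||v||^2, whose true gradient is 2*v.
x = np.array([1.0, -2.0, 0.5])
print(zo_gradient(lambda v: float(v @ v), x, q=200))
```
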
Osia, Seyed Ali, Rassouli, Borzoo, Haddadi, Hamed, Rabiee, Hamid R., Gündüz, Deniz.  2019.  Privacy Against Brute-Force Inference Attacks. 2019 IEEE International Symposium on Information Theory (ISIT). :637—641.
Privacy-preserving data release is about disclosing information about useful data while retaining the privacy of sensitive data. Assuming that the sensitive data is threatened by a brute-force adversary, we define Guessing Leakage as a measure of privacy, based on the concept of guessing. After investigating the properties of this measure, we derive the optimal utility-privacy trade-off via a linear program with any f-information adopted as the utility measure, and show that the optimal utility is a concave and piece-wise linear function of the privacy-leakage budget.
Zhang, Xiao, Wang, Yanqiu, Wang, Qing, Zhao, Xiaonan.  2019.  A New Approach to Double I/O Performance for Ceph Distributed File System in Cloud Computing. 2019 2nd International Conference on Data Intelligence and Security (ICDIS). :68—75.
Block storage resources are essential in an Infrastructure-as-a-Service (IaaS) cloud computing system. They are used for storing virtual machines' images and offer persistent storage service even when the virtual machine is off. Distributed storage systems such as Amazon EBS, Cinder, Ceph, and Sheepdog are used to provide block storage services in IaaS. Ceph is widely used as the backend block storage service of the OpenStack platform. It converts block devices into objects of the same size and saves them on the local file system. The performance of block devices provided by Ceph is only 30% of that of hard disks in many cases. One of the key issues that affects the performance of Ceph is the use of three replicas for fault tolerance. However, our research finds that replicas are not the real reason for the slowdown. In this paper, we present a new approach to accelerate I/O operations. The experimental results show that, by using our storage engine, Ceph can offer faster I/O performance than the hard disk in most cases. Our new storage engine provides more than three times the performance of the original one.
2020-08-28
Eom, Taehoon, Hong, Jin Bum, An, SeongMo, Park, Jong Sou, Kim, Dong Seong.  2019.  Security and Performance Modeling and Optimization for Software Defined Networking. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :610—617.

Software Defined Networking (SDN) provides new functionalities to efficiently manage network traffic, which can be used to enhance networking capabilities to support today's growing communication demands. At the same time, however, it introduces new attack vectors that can be exploited by attackers. Hence, evaluating and selecting countermeasures to optimize the security of the SDN is of paramount importance. However, one should also take into account the trade-off between security and performance of the SDN. In this paper, we present a security optimization approach for the SDN that takes into account the trade-off between security and performance. We evaluate the security of the SDN using graphical security models and metrics, and use queuing models to measure the performance of the SDN. Further, we use a Genetic Algorithm, namely NSGA-II, to optimally select countermeasures under performance and security constraints. Our experimental analysis results show that the proposed approach can efficiently compute the countermeasures that will optimize the security of the SDN while satisfying the performance constraints.
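NSGA-II's countermeasure selection rests on non-dominated sorting of objective vectors, here for example a security-risk metric and a performance penalty, both to be minimized. Below is a minimal Pareto-front extraction sketch with made-up candidate values; it is a building block of NSGA-II-style selection, not the paper's implementation.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated points (all objectives minimised).

    A minimal building block of NSGA-II-style selection; the paper applies
    the full NSGA-II under security and performance constraints.
    """
    pts = np.asarray(objectives, dtype=float)
    front = []
    for i, p in enumerate(pts):
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            front.append(i)
    return front

# Example: (security risk, expected performance penalty) per countermeasure set.
candidates = [(0.9, 5.0), (0.4, 7.5), (0.6, 6.0), (0.4, 9.0), (0.2, 12.0)]
print("Pareto-optimal candidates:", pareto_front(candidates))
```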