Biblio

Found 133 results

Filters: Keyword is Algorithm design and analysis
2014
Lei Xu, Chunxiao Jiang, Jian Wang, Jian Yuan, Yong Ren.  2014.  Information Security in Big Data: Privacy and Data Mining. Access, IEEE. 2:1149-1176.

The growing popularity and development of data mining technologies bring serious threats to the security of individuals' sensitive information. An emerging research topic in data mining, known as privacy-preserving data mining (PPDM), has been extensively studied in recent years. The basic idea of PPDM is to modify the data in such a way that data mining algorithms can be performed effectively without compromising the security of the sensitive information contained in the data. Current studies of PPDM mainly focus on how to reduce the privacy risk brought by data mining operations, while in fact, unwanted disclosure of sensitive information may also happen during data collection, data publishing, and information (i.e., the data mining results) delivery. In this paper, we view the privacy issues related to data mining from a wider perspective and investigate various approaches that can help to protect sensitive information. In particular, we identify four different types of users involved in data mining applications, namely, data provider, data collector, data miner, and decision maker. For each type of user, we discuss the privacy concerns and the methods that can be adopted to protect sensitive information. We briefly introduce the basics of related research topics, review state-of-the-art approaches, and present some preliminary thoughts on future research directions. Besides exploring the privacy-preserving approaches for each type of user, we also review the game-theoretical approaches proposed for analyzing the interactions among different users in a data mining scenario, each of whom has their own valuation of the sensitive information. By differentiating the responsibilities of different users with respect to the security of sensitive information, we would like to provide some useful insights into the study of PPDM.

Stanisavljevic, Z., Stanisavljevic, J., Vuletic, P., Jovanovic, Z..  2014.  COALA - System for Visual Representation of Cryptography Algorithms. Learning Technologies, IEEE Transactions on. 7:178-190.

Educational software systems have an increasingly significant presence in engineering sciences. They aim to improve students' attitudes and knowledge acquisition, typically through visual representation and simulation of complex algorithms and mechanisms or hardware systems that are often not available to educational institutions. This paper presents a novel software system for CryptOgraphic ALgorithm visuAl representation (COALA), which was developed to support a Data Security course at the School of Electrical Engineering, University of Belgrade. The system allows users to follow the execution of several complex algorithms (DES, AES, RSA, and Diffie-Hellman) on real-world examples in a detailed step-by-step view with the possibility of forward and backward navigation. Benefits of the COALA system for students are observed through the increase in the percentage of students who passed the exam and in the average exam grade during one school year.
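
For context, the following toy RSA walk-through with small primes illustrates the kind of step-by-step execution that COALA visualizes for RSA. It is purely illustrative and unrelated to the tool's actual implementation; real deployments use much larger primes.

```python
# A toy RSA walk-through with small numbers, in the spirit of the step-by-step
# views COALA provides for RSA (illustrative only; real keys use large primes).
p, q = 61, 53
n = p * q                     # 3233, the public modulus
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime with phi
d = pow(e, -1, phi)           # 2753, the private exponent (modular inverse)

m = 65                        # plaintext encoded as an integer < n
c = pow(m, e, n)              # encryption: c = m^e mod n
assert pow(c, d, n) == m      # decryption recovers the plaintext
```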
 

Wenbing Zhao.  2014.  Application-Aware Byzantine Fault Tolerance. Dependable, Autonomic and Secure Computing (DASC), 2014 IEEE 12th International Conference on. :45-50.

Byzantine fault tolerance has been intensively studied over the past decade as a way to enhance the intrusion resilience of computer systems. However, state-machine-based Byzantine fault tolerance algorithms require deterministic application processing and sequential execution of totally ordered requests. One way of increasing the practicality of Byzantine fault tolerance is to exploit the application semantics, which we refer to as application-aware Byzantine fault tolerance. Application-aware Byzantine fault tolerance makes it possible to facilitate concurrent processing of requests, to minimize the use of Byzantine agreement, and to identify and control replica nondeterminism. In this paper, we provide an overview of recent works on application-aware Byzantine fault tolerance techniques. We elaborate the need for exploiting application semantics for Byzantine fault tolerance and the benefits of doing so, provide a classification of various approaches to application-aware Byzantine fault tolerance, and outline the mechanisms used in achieving application-aware Byzantine fault tolerance according to our classification.

Chia-Feng Juang, Chi-Wei Hung, Chia-Hung Hsu.  2014.  Rule-Based Cooperative Continuous Ant Colony Optimization to Improve the Accuracy of Fuzzy System Design. Fuzzy Systems, IEEE Transactions on. 22:723-735.

This paper proposes a cooperative continuous ant colony optimization (CCACO) algorithm and applies it to accuracy-oriented fuzzy system (FS) design problems. All of the free parameters in a zero- or first-order Takagi-Sugeno-Kang (TSK) FS are optimized through CCACO. The CCACO algorithm performs optimization through multiple ant colonies, where each ant colony is only responsible for optimizing the free parameters in a single fuzzy rule. The ant colonies cooperate to design a complete FS, with a complete parameter solution vector (encoding a complete FS) that is formed by selecting a subsolution component (encoding a single fuzzy rule) from each colony. Subsolutions in each ant colony are evolved independently using a new continuous ant colony optimization algorithm. In the CCACO, solutions are updated via the techniques of pheromone-based tournament ant path selection, ant wandering operation, and best-ant-attraction refinement. The performance of the CCACO is verified through applications to fuzzy controller and predictor design problems. Comparisons with other population-based optimization algorithms verify the superiority of the CCACO.

Kun Wen, Jiahai Yang, Fengjuan Cheng, Chenxi Li, Ziyu Wang, Hui Yin.  2014.  Two-stage detection algorithm for RoQ attack based on localized periodicity analysis of traffic anomaly. Computer Communication and Networks (ICCCN), 2014 23rd International Conference on. :1-6.

The Reduction of Quality (RoQ) attack is a stealthy denial-of-service attack. It can decrease or inhibit normal TCP flows in the network. Victims can hardly perceive it, since the network throughput merely decreases during the attack rather than being cut off. The attack is therefore strongly hidden and difficult for existing detection systems to detect. Based on the principle of time-frequency analysis, we propose a two-stage detection algorithm which combines anomaly detection with misuse detection. In the first stage, we try to detect the potential anomaly by analyzing network traffic through the wavelet multiresolution analysis method; according to different time-domain characteristics, we locate the abrupt change points. In the second stage, we further analyze the local traffic around each abrupt change point and extract the potential attack characteristics by autocorrelation analysis. Through this two-stage detection, we can ultimately confirm whether the network is affected by the attack. Results of simulations and real network experiments demonstrate that our algorithm can detect RoQ attacks with high accuracy and high efficiency.
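
As a rough illustration of the second-stage idea (not the authors' exact procedure), the following sketch estimates the dominant period of a local traffic window around an abrupt change point by locating the autocorrelation peak at a nonzero lag:

```python
import numpy as np

def dominant_period(traffic, min_lag=1):
    """Estimate the dominant period of a traffic window from its autocorrelation.
    Illustrative of autocorrelation-based periodicity analysis in general, not
    the paper's exact second-stage procedure."""
    x = np.asarray(traffic, dtype=float)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]   # lags 0 .. len(x)-1
    acf /= acf[0]                                        # normalize to lag 0
    lag = min_lag + int(np.argmax(acf[min_lag:]))        # strongest nonzero-lag peak
    return lag, acf[lag]

# Example: a pulsing (RoQ-like) traffic trace with period 20 samples
if __name__ == "__main__":
    t = np.arange(400)
    trace = 1.0 + 0.8 * (t % 20 < 3) + 0.05 * np.random.randn(400)
    print(dominant_period(trace, min_lag=5))
```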

Shaohua Tang, Lingling Xu, Niu Liu, Xinyi Huang, Jintai Ding, Zhiming Yang.  2014.  Provably Secure Group Key Management Approach Based upon Hyper-Sphere. Parallel and Distributed Systems, IEEE Transactions on. 25:3253-3263.

Secure group communication systems have become increasingly important for many emerging network applications. An efficient and robust group key management approach is indispensable to a secure group communication system. Motivated by the theory of hyper-sphere, this paper presents a new group key management approach with a group controller (GC). In our new design, a hyper-sphere is constructed for a group and each member in the group corresponds to a point on the hyper-sphere, which is called the member's private point. The GC computes the central point of the hyper-sphere, intuitively, whose “distance” from each member's private point is identical. The central point is published such that each member can compute a common group key, using a function by taking each member's private point and the central point of the hyper-sphere as the input. This approach is provably secure under the pseudo-random function (PRF) assumption. Compared with other similar schemes, by both theoretical analysis and experiments, our scheme (1) has significantly reduced memory and computation load for each group member; (2) can efficiently deal with massive membership change with only two re-keying messages, i.e., the central point of the hyper-sphere and a random number; and (3) is efficient and very scalable for large-size groups.

Xiong Xu, Yanfei Zhong, Liangpei Zhang.  2014.  Adaptive Subpixel Mapping Based on a Multiagent System for Remote-Sensing Imagery. Geoscience and Remote Sensing, IEEE Transactions on. 52:787-804.

The existence of mixed pixels is a major problem in remote-sensing image classification. Although soft classification and spectral unmixing techniques can obtain the abundances of the different classes in a pixel to address the mixed-pixel problem, the subpixel spatial attribution of the pixel remains unknown. The subpixel mapping technique can effectively solve this problem by providing a fine-resolution map of class labels from coarser spectrally unmixed fraction images. However, most traditional subpixel mapping algorithms treat all mixed pixels as an identical type, either boundary-mixed pixel or linear subpixel, leading to incomplete and inaccurate results. To improve the subpixel mapping accuracy, this paper proposes an adaptive subpixel mapping framework based on a multiagent system for remote-sensing imagery. In the proposed multiagent subpixel mapping framework, three kinds of agents, namely, feature detection agents, subpixel mapping agents, and decision agents, are designed to solve the subpixel mapping problem. Experiments with artificial images and synthetic remote-sensing images were performed to evaluate the performance of the proposed subpixel mapping algorithm in comparison with the hard classification method and other subpixel mapping algorithms: subpixel mapping based on a back-propagation neural network and the spatial attraction model. The experimental results indicate that the proposed algorithm outperforms the other two subpixel mapping algorithms in reconstructing the different structures in mixed pixels.
 

Vollala, S., Varadhan, V.V., Geetha, K., Ramasubramanian, N..  2014.  Efficient modular multiplication algorithms for public key cryptography. Advance Computing Conference (IACC), 2014 IEEE International. :74-78.

The modular exponentiation is an important operation for cryptographic transformations in public key cryptosystems like the Rivest, Shamir and Adleman, the Diffie and Hellman, and the ElGamal schemes. Computing a^x mod n and a^x b^y mod n for very large x, y, and n is fundamental to the efficiency of almost all public key cryptosystems and digital signature schemes. To achieve a high level of security, the word length in the modular exponentiations should be significantly large. The performance of public key cryptography is primarily determined by the implementation efficiency of the modular multiplication and exponentiation. As the words are usually large, and in order to optimize the time taken by these operations, it is essential to minimize the number of modular multiplications. In this paper we present efficient algorithms for computing a^x mod n and a^x b^y mod n. In this work we propose four algorithms to evaluate modular exponentiation: Bit Forwarding (BFW) algorithms to compute a^x mod n, and two algorithms, namely Substitute and Reward (SRW) and Store and Forward (SFW), to compute a^x b^y mod n. All the proposed algorithms are efficient in terms of time and at the same time demand only minimal additional space to store the pre-computed values. These algorithms are suitable for devices with low computational power and limited storage.
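
For context, a minimal Python sketch of the classical left-to-right square-and-multiply method is given below. This is the baseline that exponent-handling schemes such as the proposed BFW, SRW, and SFW algorithms aim to improve upon; it is not the paper's own algorithms.

```python
# Classical left-to-right square-and-multiply: one squaring per exponent bit
# plus one multiplication per set bit, the cost that exponent-recoding schemes
# try to reduce further.
def modexp(a: int, x: int, n: int) -> int:
    result = 1
    a %= n
    for bit in bin(x)[2:]:               # scan exponent bits, most significant first
        result = (result * result) % n   # always square
        if bit == '1':
            result = (result * a) % n    # multiply only when the bit is set
    return result

# Naive double exponentiation a^x * b^y mod n by combining two runs; interleaved
# (Shamir's trick style) variants save multiplications, which is the direction
# algorithms such as SRW and SFW pursue.
def double_modexp(a: int, x: int, b: int, y: int, n: int) -> int:
    return (modexp(a, x, n) * modexp(b, y, n)) % n

if __name__ == "__main__":
    assert modexp(7, 560, 561) == pow(7, 560, 561)
    assert double_modexp(3, 19, 5, 23, 1009) == (pow(3, 19, 1009) * pow(5, 23, 1009)) % 1009
```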
 

Kishore, N., Kapoor, B..  2014.  An efficient parallel algorithm for hash computation in security and forensics applications. Advance Computing Conference (IACC), 2014 IEEE International. :873-877.


Hashing algorithms are used extensively in information security and digital forensics applications. This paper presents an efficient parallel algorithm for hash computation. It is a modification of the SHA-1 algorithm for faster parallel implementation in applications such as digital signatures and data preservation in digital forensics. The algorithm implements a recursive hash to break the chain dependencies of the standard hash function. We discuss the theoretical foundation for the work, including the collision probability and the performance implications. The algorithm is implemented using the OpenMP API and experiments are performed using machines with multicore processors. The results show a performance gain of more than a factor of 3 when running on the 8-core configuration of the machine.
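
A minimal sketch of one way to break a hash function's chain dependency is shown below: hash fixed-size chunks independently in parallel, then hash the concatenation of the chunk digests. This mirrors the general idea of a recursive hash but is not the paper's exact construction, and the resulting digest differs from plain SHA-1 over the same input.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 1 << 20  # 1 MiB chunks; an arbitrary choice for this sketch

def _hash_chunk(chunk: bytes) -> bytes:
    return hashlib.sha1(chunk).digest()

def parallel_recursive_sha1(data: bytes, workers: int = 8) -> str:
    """Hash chunks independently, then hash the concatenated digests.
    Note: this yields a different value from plain SHA-1 over `data`; it only
    illustrates removing the sequential chain dependency so chunks can be
    hashed concurrently."""
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)] or [b""]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        digests = list(pool.map(_hash_chunk, chunks))
    return hashlib.sha1(b"".join(digests)).hexdigest()
```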
 

Bo Chai, Zaiyue Yang, Jiming Chen.  2014.  Impacts of unreliable communication and regret matching based anti-jamming approach in smart grid. Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES. :1-5.

Demand response management (DRM) is one of the main features of the smart grid, and it is realized via communications between power providers and consumers. Due to the vulnerabilities of communication channels, communication is not perfect in practice and is threatened by jamming attacks. In this paper, we consider jamming attacks on the wireless communications of the smart grid. First, the DRM performance degradation introduced by unreliable communication is fully studied. Second, a regret matching based anti-jamming algorithm is proposed to enhance the performance of communication and DRM. Finally, numerical results are presented to illustrate the impacts of unreliable communication on DRM and the performance of the proposed anti-jamming algorithm.
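
For readers unfamiliar with regret matching, the following generic sketch (in the style of Hart and Mas-Colell, not the paper's specific anti-jamming algorithm) shows how an agent could pick, say, a transmission channel with probability proportional to its positive regret:

```python
import random

def regret_matching_step(regrets, utility_fn, actions):
    """One generic regret-matching update. `utility_fn(a)` returns the payoff
    the agent would have obtained by playing action `a` this round (e.g.,
    throughput on a channel under jamming). Illustrative sketch only."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total > 0:
        probs = [p / total for p in positive]        # play proportionally to positive regret
    else:
        probs = [1.0 / len(actions)] * len(actions)  # no regret yet: play uniformly
    choice = random.choices(range(len(actions)), weights=probs, k=1)[0]
    payoffs = [utility_fn(a) for a in actions]
    realized = payoffs[choice]
    # accumulate regret for not having played each alternative action
    new_regrets = [r + (p - realized) for r, p in zip(regrets, payoffs)]
    return choice, new_regrets
```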

Patil, M., Sahu, V., Jain, A..  2014.  SMS text Compression and Encryption on Android O.S. Computer Communication and Informatics (ICCCI), 2014 International Conference on. :1-6.

Today, in the world of globalization, mobile communication is one of the fastest growing media through which a sender can interact with others in a short time. During the transmission of data from sender to receiver, the size of the data is important, since more data takes more time. One of the limitations of sending data through mobile devices, however, is the limited bandwidth and number of packets transmitted. The security of these data is also important. Hence, various protocols have been implemented which not only provide security to the data but also make efficient use of bandwidth. Here we propose an efficient technique for sending SMS text using a combination of compression and encryption. The data to be sent is first encrypted using an elliptic curve cryptographic technique, but encryption increases the size of the text data, so compression is applied to this encrypted data so that it can be sent in a short time. The compression technique implemented here is an efficient one, since it includes an algorithm which compresses the text by 99.9%, saving a great amount of bandwidth. The hybrid compression-encryption technique for SMS text messages is implemented for the Android operating system.

Zonouz, S., Davis, C.M., Davis, K.R., Berthier, R., Bobba, R.B., Sanders, W.H..  2014.  SOCCA: A Security-Oriented Cyber-Physical Contingency Analysis in Power Infrastructures. Smart Grid, IEEE Transactions on. 5:3-13.

Contingency analysis is a critical activity in the context of the power infrastructure because it provides a guide for resiliency and enables the grid to continue operating even in the case of failure. In this paper, we augment this concept by introducing SOCCA, a cyber-physical security evaluation technique to plan not only for accidental contingencies but also for malicious compromises. SOCCA presents a new unified formalism to model the cyber-physical system including interconnections among cyber and physical components. The cyber-physical contingency ranking technique employed by SOCCA assesses the potential impacts of events. Contingencies are ranked according to their impact as well as attack complexity. The results are valuable in both cyber and physical domains. From a physical perspective, SOCCA scores power system contingencies based on cyber network configuration, whereas from a cyber perspective, control network vulnerabilities are ranked according to the underlying power system topology.
 

Bhotto, M.Z.A., Antoniou, A..  2014.  Affine-Projection-Like Adaptive-Filtering Algorithms Using Gradient-Based Step Size. Circuits and Systems I: Regular Papers, IEEE Transactions on. 61:2048-2056.

A new class of affine-projection-like (APL) adaptive-filtering algorithms is proposed. The new algorithms are obtained by eliminating the constraint of forcing the a posteriori error vector to zero in the affine-projection algorithm proposed by Ozeki and Umeda. In this way, direct or indirect inversion of the input signal matrix is not required and, consequently, the amount of computation required per iteration can be reduced. In addition, as demonstrated by extensive simulation results, the proposed algorithms offer reduced steady-state misalignment in system-identification, channel-equalization, and acoustic-echo-cancelation applications. A mean-square-error analysis of the proposed APL algorithms is also carried out and its accuracy is verified by using simulation results in a system-identification application.
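
For reference, one iteration of the classical affine-projection algorithm of Ozeki and Umeda, whose matrix inversion the proposed APL algorithms avoid, can be sketched as follows (an illustrative baseline, not the proposed method):

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-6):
    """One iteration of the classical affine-projection algorithm.
    w: (M,) filter weights; X: (M, P) matrix whose columns are the P most
    recent input vectors; d: (P,) desired outputs; mu: step size; delta:
    regularization. The (X^T X)^-1 solve is the cost APL algorithms remove."""
    e = d - X.T @ w                                   # a priori error vector
    w = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(X.shape[1]), e)
    return w, e
```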

Peng Yi, Yiguang Hong.  2014.  Distributed continuous-time gradient-based algorithm for constrained optimization. Control Conference (CCC), 2014 33rd Chinese. :1563-1567.

In this paper, we consider a distributed algorithm based on a continuous-time multi-agent system to solve a constrained optimization problem. The global optimization objective function is taken as the sum of the agents' individual objective functions under a group of convex inequality constraints. Because the local objective functions cannot be explicitly known by all the agents, the problem has to be solved in a distributed manner through cooperation between agents. Here we propose continuous-time distributed gradient dynamics based on the KKT conditions and Lagrange multiplier methods to solve the optimization problem. We show that all the agents asymptotically converge to the same optimal solution with the help of a constructed Lyapunov function and a LaSalle invariance principle for hybrid systems.
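
A heavily simplified, discretized sketch of consensus-plus-gradient dynamics is shown below. It omits the KKT and Lagrange-multiplier machinery the paper uses to handle the convex inequality constraints and is only meant to convey the cooperative-gradient idea:

```python
import numpy as np

def distributed_gradient(grads, A, x0, alpha=0.05, steps=2000):
    """Discretized consensus + gradient dynamics: each agent i updates
    x_i <- x_i - alpha * (sum_j A[i,j] * (x_i - x_j) + grad_i(x_i)).
    `grads` is a list of local gradient functions and `A` a symmetric
    adjacency matrix. The constraint/multiplier terms of the paper are omitted."""
    x = np.array(x0, dtype=float)
    n = len(grads)
    for _ in range(steps):
        x_new = x.copy()
        for i in range(n):
            consensus = sum(A[i, j] * (x[i] - x[j]) for j in range(n))
            x_new[i] = x[i] - alpha * (consensus + grads[i](x[i]))
        x = x_new
    return x

# Example: four agents on a ring minimize sum_i (x - c_i)^2; the states settle
# around the centralized minimizer (the mean of c_i). Exact consensus would
# require the multiplier terms omitted in this sketch.
if __name__ == "__main__":
    c = [1.0, 2.0, 3.0, 4.0]
    grads = [lambda x, ci=ci: 2 * (x - ci) for ci in c]
    A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
    print(distributed_gradient(grads, A, x0=[0, 0, 0, 0]))
```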

Bayat-sarmadi, S., Mozaffari-Kermani, M., Reyhani-Masoleh, A..  2014.  Efficient and Concurrent Reliable Realization of the Secure Cryptographic SHA-3 Algorithm. Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on. 33:1105-1109.

The secure hash algorithm (SHA)-3 has been selected in 2012 and will be used to provide security to any application which requires hashing, pseudo-random number generation, and integrity checking. This algorithm has been selected based on various benchmarks such as security, performance, and complexity. In this paper, in order to provide reliable architectures for this algorithm, an efficient concurrent error detection scheme for the selected SHA-3 algorithm, i.e., Keccak, is proposed. To the best of our knowledge, effective countermeasures for potential reliability issues in the hardware implementations of this algorithm have not been presented to date. In proposing the error detection approach, our aim is to have acceptable complexity and performance overheads while maintaining high error coverage. In this regard, we present a low-complexity recomputing with rotated operands-based scheme which is a step-forward toward reducing the hardware overhead of the proposed error detection approach. Moreover, we perform injection-based fault simulations and show that the error coverage of close to 100% is derived. Furthermore, we have designed the proposed scheme and through ASIC analysis, it is shown that acceptable complexity and performance overheads are reached. By utilizing the proposed high-performance concurrent error detection scheme, more reliable and robust hardware implementations for the newly-standardized SHA-3 are realized.
 

Yunfeng Zhu, Lee, P.P.C., Yinlong Xu, Yuchong Hu, Liping Xiang.  2014.  On the Speedup of Recovery in Large-Scale Erasure-Coded Storage Systems. Parallel and Distributed Systems, IEEE Transactions on. 25:1830-1840.

Modern storage systems stripe redundant data across multiple nodes to provide availability guarantees against node failures. One form of data redundancy is based on XOR-based erasure codes, which use only XOR operations for encoding and decoding. In addition to tolerating failures, a storage system must also provide fast failure recovery to reduce the window of vulnerability. This work addresses the problem of speeding up the recovery of a single-node failure for general XOR-based erasure codes. We propose a replace recovery algorithm, which uses a hill-climbing technique to search for a fast recovery solution, such that the solution search can be completed within a short time period. We further extend the algorithm to adapt to the scenario where nodes have heterogeneous capabilities (e.g., processing power and transmission bandwidth). We implement our replace recovery algorithm atop a parallelized architecture to demonstrate its feasibility. We conduct experiments on a networked storage system testbed, and show that our replace recovery algorithm uses less recovery time than the conventional recovery approach.
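
The core XOR-recovery idea can be illustrated in a much simpler setting than the general XOR-based erasure codes and hill-climbing search treated in the paper, namely single-node recovery under a single parity block:

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length byte blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def recover_lost_node(surviving_data, parity):
    """Single-node recovery under a single XOR parity (RAID-5-like layout):
    the lost block equals the XOR of the parity with all surviving data blocks.
    Far simpler than the general codes in the paper; it only shows the core idea."""
    return xor_blocks(surviving_data + [parity])

if __name__ == "__main__":
    d = [b"\x01\x02", b"\x0f\x10", b"\xaa\x55"]
    p = xor_blocks(d)                                # parity over all data blocks
    assert recover_lost_node([d[0], d[2]], p) == d[1]
```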
 

Ghosh, S..  2014.  On the implementation of mceliece with CCA2 indeterminacy by SHA-3. Circuits and Systems (ISCAS), 2014 IEEE International Symposium on. :2804-2807.

This paper deals with the design and implementation of the post-quantum public-key algorithm McEliece. Seamless incorporation of a new error generator and a new SHA-3 module provides higher indeterminacy and more randomization of the original McEliece algorithm and achieves the CCA2 security standard. Due to the lightweight and high-speed implementation of the SHA-3 module, the proposed 128-bit secure McEliece architecture provides 6% higher performance in only 0.78 times the area of the best known existing design.
 

Zhen Jiang, Shihong Miao, Pei Liu.  2014.  A Modified Empirical Mode Decomposition Filtering-Based Adaptive Phasor Estimation Algorithm for Removal of Exponentially Decaying DC Offset. Power Delivery, IEEE Transactions on. 29:1326-1334.

This paper proposes a modified empirical-mode decomposition (EMD) filtering-based adaptive dynamic phasor estimation algorithm for the removal of exponentially decaying dc offset. The discrete Fourier transform cannot attain an accurate phasor of the fundamental frequency component in digital protective relays under dynamic system fault conditions because the characteristic of the exponentially decaying dc offset is not consistent. EMD is a fully data-driven, not model-based, adaptive filtering procedure for extracting signal components. However, the original EMD technique has high computational complexity and requires a large data series. In this paper, a short-data-series-based EMD filtering procedure is proposed and an optimum Hermite polynomial fitting (OHPF) method is used in this modified procedure. The proposed filtering technique has high accuracy and convergence speed, and is highly appropriate for relay applications. This paper illustrates the characteristics of the proposed technique and evaluates its performance with computer-simulated signals, PSCAD/EMTDC-generated signals, and real power system fault signals.

Yu Li, Rui Dai, Junjie Zhang.  2014.  Morphing communications of Cyber-Physical Systems towards moving-target defense. Communications (ICC), 2014 IEEE International Conference on. :592-598.

Since the massive deployment of Cyber-Physical Systems (CPSs) calls for long-range and reliable communication services with manageable cost, it has been believed to be an inevitable trend to relay a significant portion of CPS traffic through existing networking infrastructures such as the Internet. Adversaries who have access to networking infrastructures can therefore eavesdrop on network traffic and then perform traffic analysis attacks in order to identify CPS sessions and subsequently launch various attacks. As we can hardly prevent all adversaries from accessing network infrastructures, thwarting traffic analysis attacks becomes indispensable. Traffic morphing serves as an effective means towards this direction. In this paper, a novel traffic morphing algorithm, CPSMorph, is proposed to protect CPS sessions. CPSMorph maintains a number of network sessions whose distributions of inter-packet delays are statistically indistinguishable from those of typical network sessions. A CPS message will be sent through one of these sessions with assured satisfaction of its time constraint. CPSMorph strives to minimize the overhead by dynamically adjusting the morphing process. It is characterized by low complexity as well as high adaptivity to changing dynamics of CPS sessions. Experimental results have shown that CPSMorph can effectively perform traffic morphing for real-time CPS messages with moderate overhead.
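
As a toy illustration of the morphing idea (not the CPSMorph algorithm itself), the sketch below draws inter-packet delays from a target distribution of a typical session while keeping the cumulative schedule within a CPS message's real-time deadline; the exponential target with a 20 ms mean is a hypothetical choice:

```python
import random

def morphed_delays(target_sampler, deadline, n_packets):
    """Draw inter-packet delays from the target distribution of a typical
    session, clipping each delay so the cumulative schedule never violates
    the message's real-time deadline. Illustrative only, not CPSMorph."""
    delays, elapsed = [], 0.0
    for remaining in range(n_packets, 0, -1):
        budget = max(deadline - elapsed, 0.0) / remaining   # spread slack evenly
        d = min(target_sampler(), budget)
        delays.append(d)
        elapsed += d
    return delays

# Hypothetical target: exponential gaps with a 20 ms mean, 100 ms deadline.
if __name__ == "__main__":
    sampler = lambda: random.expovariate(1 / 0.020)
    print(morphed_delays(sampler, deadline=0.100, n_packets=5))
```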
 

Kia, S.S., Cortes, J., Martinez, S..  2014.  Periodic and event-triggered communication for distributed continuous-time convex optimization. American Control Conference (ACC), 2014. :5010-5015.

We propose a distributed continuous-time algorithm to solve a network optimization problem where the global cost function is a strictly convex function composed of the sum of the local cost functions of the agents. We establish that our algorithm, when implemented over strongly connected and weight-balanced directed graph topologies, converges exponentially fast when the local cost functions are strongly convex and their gradients are globally Lipschitz. We also characterize the privacy preservation properties of our algorithm and extend the convergence guarantees to the case of time-varying, strongly connected, weight-balanced digraphs. When the network topology is a connected undirected graph, we show that exponential convergence is still preserved if the gradients of the strongly convex local cost functions are locally Lipschitz, while it is asymptotic if the local cost functions are convex. We also study discrete-time communication implementations. Specifically, we provide an upper bound on the stepsize of a synchronous periodic communication scheme that guarantees convergence over connected undirected graph topologies and, building on this result, design a centralized event-triggered implementation that is free of Zeno behavior. Simulations illustrate our results.

Kampanakis, P., Perros, H., Beyene, T..  2014.  SDN-based solutions for Moving Target Defense network protection. A World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2014 IEEE 15th International Symposium on. :1-6.

Software-Defined Networking (SDN) allows network capabilities and services to be managed through a central control point. Moving Target Defense (MTD) on the other hand, introduces a constantly adapting environment in order to delay or prevent attacks on a system. MTD is a use case where SDN can be leveraged in order to provide attack surface obfuscation. In this paper, we investigate how SDN can be used in some network-based MTD techniques. We first describe the advantages and disadvantages of these techniques, the potential countermeasures attackers could take to circumvent them, and the overhead of implementing MTD using SDN. Subsequently, we study the performance of the SDN-based MTD methods using Cisco's One Platform Kit and we show that they significantly increase the attacker's overheads.
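
As a generic illustration of one network-based MTD technique of the kind studied in the paper (random host address mutation), the sketch below periodically assigns each real host a fresh virtual IP that an SDN controller would translate at the network edge. It is illustrative only and does not use the paper's onePK-based implementation or any real controller API:

```python
import random
import ipaddress

def shuffle_virtual_addresses(real_hosts, subnet="10.0.0.0/16", seed=None):
    """Assign each real host a fresh, randomly chosen virtual IP from `subnet`.
    In an SDN-based MTD deployment, the controller would install flow rules to
    translate between real and virtual addresses; this sketch only produces the
    mapping and is not tied to any controller API."""
    rng = random.Random(seed)
    pool = list(ipaddress.ip_network(subnet).hosts())
    virtual = rng.sample(pool, len(real_hosts))
    return dict(zip(real_hosts, (str(v) for v in virtual)))

if __name__ == "__main__":
    print(shuffle_virtual_addresses(["192.168.1.10", "192.168.1.11"], seed=1))
```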