Bibliography
A Mobile Ad hoc Network (MANET) is a collection of mobile devices that can change their locations and configure themselves without a centralized base station. Mobile Ad hoc Networks are vulnerable to attacks because of their dynamic infrastructure. Routing attacks are among the possible attacks that damage a MANET. This paper presents a new risk-aware response technique that combines Dijkstra's shortest path algorithm with the Destination-Sequenced Distance Vector (DSDV) algorithm, which can reduce black hole attacks. Dijkstra's algorithm finds the shortest path from a single source to the destination when the edges have non-negative weights. DSDV improves the conventional distance-vector technique by adding a sequence number and the next-hop address to each routing table entry.
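As a minimal sketch of the shortest-path component only (not the paper's combined risk-aware scheme), the fragment below implements Dijkstra's algorithm over a hypothetical topology with non-negative edge weights; the node labels and link costs are illustrative.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from `source`; `graph` maps node -> [(neighbor, weight)]."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical topology: node names and costs are for illustration only.
topology = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(topology, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```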
Smart meters migrate the conventional electricity grid into a digitally enabled Smart Grid (SG), which is more reliable and efficient. The fine-grained energy consumption data collected by smart meters helps utility providers accurately predict users' demands and significantly reduce power generation cost, but it imposes severe privacy risks on consumers and may discourage them from using these "espionage meters". To enjoy the benefits of smart-meter data without compromising users' privacy, in this paper we integrate distributed differential privacy (DDP) techniques into data-driven optimization and propose a novel scheme that not only minimizes the cost for utility providers but also preserves the DDP of users' energy profiles. Briefly, we add differentially private noise to the users' energy consumption data before the smart meters send it to the utility provider. Because of the uncertainty in the users' demand distribution, the utility provider aggregates a given set of historical differentially private user data, estimates the users' demands, and formulates the data-driven cost minimization on the collected noisy data. We also develop algorithms for feasible solutions and verify the effectiveness of the proposed scheme through simulations on energy consumption data generated from a utility company's real data analysis.
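A minimal sketch of the local perturbation step, assuming Laplace noise is used to achieve differential privacy for each reading; the sensitivity, epsilon, and reading values below are illustrative placeholders, not the paper's parameters.

```python
import numpy as np

def perturb_readings(readings, epsilon, sensitivity):
    """Add Laplace noise with scale sensitivity/epsilon to each meter reading."""
    scale = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=len(readings))
    return readings + noise

# Hypothetical half-hourly consumption readings in kWh.
readings = np.array([0.42, 0.38, 1.25, 0.97, 0.51])
noisy = perturb_readings(readings, epsilon=1.0, sensitivity=1.5)
print(noisy)  # what the smart meter would report to the utility provider
```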
With the exponential rise in cyber threats, organizations are striving for better data mining techniques to analyze the security logs received from their IT infrastructures and ensure effective, automated cyber threat detection. Machine Learning (ML) based analytics for security machine data is the next emerging trend in cyber security, aimed at mining security data to uncover advanced targeted threat actors and at minimizing the operational overhead of maintaining static correlation rules. However, the selection of an optimal machine learning algorithm for security log analytics remains an impediment to the success of data science in cyber security because of the risk of a large number of false-positive detections, especially in large-scale or global Security Operations Center (SOC) environments. This creates a pressing need for an efficient machine learning based cyber threat detection model capable of minimizing false detection rates. In this paper, we propose optimal machine learning algorithms and an implementation framework, based on analytical and empirical evaluation of the results gathered with various prediction, classification, and forecasting algorithms.
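As a hedged illustration of the classification setting only, the sketch below fits one candidate model on hypothetical, already-extracted log features; the feature names, labels, and choice of random forest are invented for the example and are not a claim about the paper's recommended algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score

# Hypothetical features per host-day: [failed_logins, bytes_out, distinct_ports, alert_count]
X = np.array([
    [1, 1.2e4, 3, 0],
    [35, 8.9e6, 60, 4],
    [0, 4.1e3, 2, 0],
    [22, 2.3e6, 45, 2],
    [2, 9.8e3, 4, 0],
    [41, 5.6e6, 80, 5],
])
y = np.array([0, 1, 0, 1, 0, 1])  # toy labels: 0 = benign, 1 = threat

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Precision is a proxy for the false-positive concern raised above.
print(precision_score(y_test, clf.predict(X_test), zero_division=0))
```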
Cloud Computing has many significant benefits like the provision of computing resources and virtual networks on demand. However, there is the problem to assure the security of these networks against Distributed Denial-of-Service (DDoS) attack. Over the past few decades, the development of protection method based on data mining has attracted many researchers because of its effectiveness and practical significance. Most commonly these detection methods use prelearned models or models based on rules. Because of this the proposed DDoS detection methods often failure in dynamically changing cloud virtual networks. In this paper, we purposed self-learning method allows to adapt a detection model to network changes. This is minimized the false detection and reduce the possibility to mark legitimate users as malicious and vice versa. The developed method consists of two steps: collecting data about the network traffic by Netflow protocol and relearning the detection model with the new data. During the data collection we separate the traffic on legitimate and malicious. The separated traffic is labeled and sent to the relearning pool. The detection model is relearned by a data from the pool of current traffic. The experiment results show that proposed method could increase efficiency of DDoS detection systems is using data mining.
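A rough sketch of the two-step loop, assuming NetFlow records have already been parsed into numeric feature vectors: newly observed flows are separated and labeled, pushed into a rolling relearning pool, and the detector is periodically refit on the pool. The field names, pool size, and decision-tree model are hypothetical choices, not the paper's implementation.

```python
from collections import deque
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Rolling pool of labeled flow feature vectors: [packets, bytes, duration_s] (hypothetical).
relearning_pool = deque(maxlen=10_000)

# Seed model trained on an initial labeled capture.
seed_X = np.array([[10, 1.2e3, 0.5], [9000, 7.1e6, 0.2], [15, 2.0e3, 1.1], [12000, 9.4e6, 0.3]])
seed_y = np.array([0, 1, 0, 1])  # 0 = legitimate, 1 = malicious
detector = DecisionTreeClassifier(max_depth=8).fit(seed_X, seed_y)

def collect_and_label(flows):
    """Separate newly observed flows into legitimate/malicious and add them to the pool."""
    labels = detector.predict(flows)
    relearning_pool.extend(zip(flows, labels))

def relearn():
    """Refit the detection model on the pool of current traffic."""
    X = np.array([f for f, _ in relearning_pool])
    y = np.array([l for _, l in relearning_pool])
    detector.fit(X, y)

collect_and_label(np.array([[11, 1.4e3, 0.6], [8500, 6.8e6, 0.25]]))
relearn()
```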
We propose to use a genetic algorithm to evolve novel reconfigurable hardware that implements elliptic curve cryptographic combinational logic circuits. Elliptic curve cryptography offers a high security level with a short key length, making it one of the most popular public-key cryptosystems. Furthermore, there are no known sub-exponential algorithms for solving the elliptic curve discrete logarithm problem. These advantages make elliptic curve cryptography attractive for incorporation into many future cryptographic applications and protocols. However, elliptic curve cryptography has proven to be vulnerable to non-invasive side-channel analysis attacks such as timing, power, visible light, electromagnetic, and acoustic analysis attacks. In this paper, we use a genetic algorithm to address this vulnerability by evolving combinational logic circuits that correctly implement elliptic curve cryptographic hardware and are also resistant to simple timing and power analysis attacks. Using a fitness function composed of multiple objectives (maximizing correctness, minimizing propagation delay, and minimizing circuit size), we can generate correct combinational logic circuits that resist non-invasive side-channel attacks. To the best of our knowledge, this is the first work to evolve a cryptographic circuit using a genetic algorithm. We implement the evolved circuits in hardware on a Xilinx Kintex-7 FPGA. The results reveal that the evolutionary algorithm can successfully generate correct, side-channel-resistant combinational circuits with negligible propagation delay.
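A minimal sketch of a multi-objective fitness evaluation of the kind described above; the gate-list circuit encoding, depth-based delay estimate, and weights are hypothetical placeholders rather than the paper's actual objective function.

```python
# A candidate circuit is a list of gates; each gate is (op, i, j), where i and j
# index either a primary input or the output of an earlier gate.
OPS = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b,
       "XOR": lambda a, b: a ^ b, "NAND": lambda a, b: 1 - (a & b)}

def simulate(circuit, inputs):
    signals = list(inputs)
    for op, i, j in circuit:
        signals.append(OPS[op](signals[i], signals[j]))
    return signals[-1]                         # the last gate drives the output

def depth(circuit, n_inputs):
    d = [0] * n_inputs
    for _, i, j in circuit:
        d.append(max(d[i], d[j]) + 1)
    return d[-1] if circuit else 0

def fitness(circuit, truth_table, w_correct=10.0, w_delay=1.0, w_size=0.5):
    """Weighted multi-objective fitness: maximize correctness, minimize delay and size."""
    correct = sum(simulate(circuit, ins) == out for ins, out in truth_table)
    n_inputs = len(truth_table[0][0])
    return w_correct * correct - w_delay * depth(circuit, n_inputs) - w_size * len(circuit)

# Toy target: a 2-input XOR built from NAND gates, standing in for an evolved candidate.
xor_table = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
candidate = [("NAND", 0, 1), ("NAND", 0, 2), ("NAND", 1, 2), ("NAND", 3, 4)]
print(fitness(candidate, xor_table))
```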
This paper establishes a bi-level programming model for reactive power optimization that accounts for the characteristics of grid voltage/reactive power control. The objectives of the upper and lower levels are the minimization of grid loss and of voltage deviation, respectively. Because the two levels differ in their variables and solution spaces, a primal-dual interior point algorithm is used at the upper level, which handles continuous variables such as active and reactive power sources; the upper-level model guarantees sufficient reactive power in the power system. At the lower level, discrete variables such as transformer taps are optimized by a random forests algorithm (RFA), which regulates the voltage within a feasible range. Finally, a case study illustrates the speed and robustness of this method.
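As a structural skeleton only: the loop below alternates between a continuous upper-level step and a discrete lower-level step. The surrogate objective functions stand in for real power-flow calculations, and a generic bounded solver and exhaustive tap search replace the paper's primal-dual interior point and random forests stages.

```python
import itertools
import numpy as np
from scipy.optimize import minimize

# Toy surrogates: in practice these would come from power-flow equations.
def grid_loss(q_injection, taps):
    return np.sum((q_injection - 0.3 * np.array(taps)) ** 2) + 0.1 * np.sum(q_injection ** 2)

def voltage_deviation(q_injection, taps):
    v = 1.0 + 0.02 * np.array(taps) + 0.05 * q_injection
    return np.sum((v - 1.0) ** 2)

taps = [0, 0]                              # discrete tap positions (lower-level variables)
tap_choices = [-2, -1, 0, 1, 2]

for _ in range(5):                         # alternate between the two levels
    # Upper level: continuous reactive power injections minimizing grid loss.
    res = minimize(lambda q: grid_loss(q, taps), x0=np.zeros(2),
                   bounds=[(-1.0, 1.0)] * 2)
    q_opt = res.x
    # Lower level: discrete taps minimizing voltage deviation (exhaustive search here).
    taps = min(itertools.product(tap_choices, repeat=2),
               key=lambda t: voltage_deviation(q_opt, t))

print(q_opt, taps)
```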
A technical method for improving the transmission capacity of an optical wireless orthogonal frequency division multiplexing (OFDM) link based on a visible light-emitting diode (LED) is proposed in this paper. The original OFDM signal, encoded with multilevel digital modulations such as quadrature phase shift keying (QPSK) and quadrature amplitude modulation (QAM), is converted into a sparse signal and then compressed using adaptive sampling with an inverse discrete cosine transform, while its error-free reconstruction is performed by L1-minimization based on Bayesian compressive sensing (CS). For QPSK symbols, the transmission capacity of the optical wireless OFDM link was increased from 31.12 Mb/s to 51.87 Mb/s at a compression ratio of 40%, while for 16-QAM symbols it was improved from 62.5 Mb/s to 78.13 Mb/s at a compression ratio of 20%, in both cases with error-free wireless transmission (forward error correction limit: bit error rate of 10^-3).
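The sketch below is a simplified stand-in for the compressive sensing stage: a DCT-sparse test signal is compressed by random projections and recovered with iterative soft thresholding (an L1-minimization surrogate). It does not reproduce the paper's adaptive sampling or Bayesian CS reconstruction, and the signal length, sparsity, and measurement count are arbitrary.

```python
import numpy as np
from scipy.fft import idct

rng = np.random.default_rng(0)
n, m, k = 256, 102, 8                     # signal length, measurements (~40%), sparsity

# Test signal that is k-sparse in the DCT domain (stand-in for the sparsified OFDM frame).
coeffs = np.zeros(n)
coeffs[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x = idct(coeffs, norm="ortho")

# Compression: m random projections of the time-domain signal.
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x

# ISTA: approximately solve min 0.5*||y - A*Psi*c||^2 + lam*||c||_1, Psi = inverse DCT.
Phi = A @ idct(np.eye(n), axis=0, norm="ortho")   # effective sensing matrix on DCT coefficients
lam, step = 1e-3, 1.0 / np.linalg.norm(Phi, 2) ** 2
c = np.zeros(n)
for _ in range(500):
    c = c - step * Phi.T @ (Phi @ c - y)          # gradient step on the data-fidelity term
    c = np.sign(c) * np.maximum(np.abs(c) - lam * step, 0.0)   # soft threshold (L1 prox)

x_hat = idct(c, norm="ortho")
print("relative reconstruction error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```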
Mutation analysis generates tests that distinguish variations, or mutants, of an artifact from the original. Mutation analysis is widely considered a powerful approach to testing, and hence it is often used to evaluate other test criteria in terms of mutation score, the fraction of mutants killed by a test set. But mutation analysis is also known to produce large numbers of redundant mutants, and these mutants can inflate the mutation score. While mutation approaches broadly characterized as reduced mutation try to eliminate redundant mutants, the literature lacks a theoretical result that articulates just how many mutants are needed in any given situation. Hence, there is at present no way to characterize the contribution of, for example, a particular reduced-mutation approach with respect to any theoretical minimal set of mutants. This paper's contribution is to provide such a theoretical foundation for mutant set minimization. The central theoretical result of the paper shows how to efficiently minimize mutant sets with respect to a set of test cases. We evaluate our method on a widely used benchmark.
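A rough sketch of one subsumption-based notion of minimization, assuming a precomputed kill matrix (which tests kill which mutant): a mutant is dropped when some killed mutant's kill set is a strict subset of its own, and duplicates are collapsed, so the kept mutants preserve detection power without redundancy. This illustrates the idea of a minimal mutant set, not the paper's specific algorithm.

```python
def minimize_mutants(kill_sets):
    """Return a minimal mutant set with respect to a test suite.

    `kill_sets` maps mutant id -> frozenset of tests that kill it. Mutant a
    subsumes mutant b when a is killed and kill(a) is a subset of kill(b);
    only non-subsumed mutants are kept, one per distinct kill set.
    """
    killed = {m: ks for m, ks in kill_sets.items() if ks}   # drop unkilled mutants
    minimal, seen = [], set()
    for m, ks in killed.items():
        if ks in seen:
            continue                                        # duplicate kill set
        if any(other < ks for other in killed.values()):
            continue                                        # strictly subsumed
        seen.add(ks)
        minimal.append(m)
    return minimal

# Hypothetical kill matrix over tests t1..t3.
kills = {
    "m1": frozenset({"t1"}),
    "m2": frozenset({"t1", "t2"}),        # subsumed by m1
    "m3": frozenset({"t3"}),
    "m4": frozenset(),                    # never killed (possibly equivalent)
}
print(minimize_mutants(kills))            # ['m1', 'm3']
```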
The main focus of this work is the estimation of a complex-valued signal assumed to have a sparse representation in an uncountable dictionary of signals. The dictionary elements are parameterized by a real-valued vector, and the available observations are corrupted by additive noise. By applying a linearization technique, the original model is recast as a constrained sparse perturbed model. The computation of the multiple parameters involved is then addressed from a nonconvex optimization viewpoint. A cost function is defined that includes an arbitrary Lipschitz-differentiable data fidelity term accounting for the noise statistics and an ℓ0-like penalty. A proximal algorithm is then employed to solve the resulting nonconvex and nonsmooth minimization problem. Experimental results illustrate the good practical performance of the proposed approach when applied to 2D spectrum analysis.
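As a simplified numerical sketch, the iteration below applies a proximal (forward-backward) step to a least-squares data-fidelity term with an ℓ0 penalty, whose proximity operator is hard thresholding; the real-valued parameter dictionary, the linearization, and the complex-valued setting of the paper are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k = 60, 120, 5

A = rng.standard_normal((n, p)) / np.sqrt(n)      # dictionary / design matrix
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(n)    # noisy observations

# Proximal gradient for 0.5*||y - A x||^2 + lam*||x||_0.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of the gradient
x = np.zeros(p)
for _ in range(300):
    x = x - step * A.T @ (A @ x - y)              # gradient step on the data-fidelity term
    x[x ** 2 < 2 * lam * step] = 0.0              # prox of lam*||.||_0: hard thresholding

print("estimated support:", np.flatnonzero(x))
print("true support:     ", np.flatnonzero(x_true))
```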
We consider the estimation of a scalar state based on m measurements that can potentially be manipulated by an adversary. The attacker is assumed to have full knowledge of the true value of the state to be estimated and of all the measurements. However, the attacker has limited resources and can manipulate at most l of the m measurements. The problem is formulated as a minimax optimization, in which one seeks an optimal estimator that minimizes the "worst-case" expected cost against all possible manipulations by the attacker. We show that if the attacker can manipulate at least half the measurements (l ≥ m/2), then the optimal worst-case estimator should ignore all measurements and rely solely on the a priori information. We provide the explicit form of the optimal estimator when the attacker can manipulate fewer than half the measurements (l < m/2), which is based on C(m, 2l) local estimators. We further prove that this estimator reduces to simpler forms in two special cases, i.e., when the estimator is symmetric and monotone or when m = 2l + 1. Finally, we apply the proposed methodology to the case of Gaussian measurements.
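Purely as a hypothetical illustration of resilient estimation in the m = 2l + 1 regime (and explicitly not the paper's derived optimal estimator), the snippet below uses the median, which an adversary controlling at most l of the m values cannot drag past the honest measurements, and falls back to the prior when l ≥ m/2.

```python
import numpy as np

def resilient_estimate(measurements, prior_mean, l):
    """Median-based fallback estimator (illustrative only, not the paper's optimum)."""
    m = len(measurements)
    if l >= m / 2:
        return prior_mean                  # measurements carry no usable information
    return float(np.median(measurements))

honest = [1.02, 0.97, 1.01]                # hypothetical readings of a state x = 1.0
attacked = honest[:2] + [50.0]             # l = 1 manipulated measurement, m = 3 = 2l + 1
print(resilient_estimate(attacked, prior_mean=0.0, l=1))   # stays near 1.0
```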