Biblio
Detecting software security vulnerabilities and distinguishing vulnerable from non-vulnerable code is anything but simple. Most of the time, vulnerabilities remain undisclosed until they are exposed, for instance, by an attack during the software operational phase. Software metrics are widely used indicators of software quality, but the question is whether they can be used to distinguish vulnerable software units from non-vulnerable ones during development. In this paper, we perform an exploratory study on software metrics, their interdependency, and their relation with security vulnerabilities. We aim to understand: i) the correlation between software architectural characteristics, represented in the form of software metrics, and the number of vulnerabilities; and ii) which metrics are the most informative and discriminative for identifying vulnerable units of code. To achieve these goals, we use, respectively, correlation coefficients and heuristic search techniques. Our analysis is carried out on a dataset that includes software metrics and reported security vulnerabilities, exposed by security attacks, for all functions, classes, and files of five widely used projects. Results show: i) a strong correlation between several project-level metrics and the number of vulnerabilities, and ii) the possibility of using a group of metrics, at both file and function levels, to distinguish vulnerable and non-vulnerable code with a high level of accuracy.
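As a rough illustration of the correlation step described in this abstract (not the authors' actual pipeline), the sketch below ranks a few hypothetical code metrics by their Spearman correlation with per-unit vulnerability counts; the metric names and the synthetic data are assumptions made purely for the example.

```python
# Illustrative sketch (not the cited study's pipeline): rank hypothetical
# software metrics by their Spearman correlation with vulnerability counts.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_units = 200  # hypothetical number of files/functions

# Hypothetical metric values and vulnerability counts per code unit.
metrics = {
    "loc": rng.integers(10, 2000, n_units),
    "cyclomatic_complexity": rng.integers(1, 60, n_units),
    "fan_out": rng.integers(0, 30, n_units),
}
vulns = rng.poisson(1.0, n_units)

# Rank metrics by the strength of their monotonic association with vulnerabilities.
for name, values in metrics.items():
    rho, p = spearmanr(values, vulns)
    print(f"{name:>24}: rho={rho:+.2f} (p={p:.3f})")
```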
Metaheuristic search is a more advanced approach than traditional heuristic search. Selecting one option among several alternatives is not difficult; what is hard is guaranteeing that the selected option is cost-effective. Metaheuristic search techniques address this problem with the help of a fitness function. The fitness function is a crucial metric, or measure, that helps decide which solution from the available set of test sets is optimal. This paper discusses hill climbing, simulated annealing, tabu search, genetic algorithms, and particle swarm optimization in detail, explaining each with the help of its algorithm. Combining metaheuristic search techniques with security testing methods would yield a search technique that is both more effective and more secure. This paper primarily focuses on metaheuristic search techniques.
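To make the role of the fitness function concrete, here is a minimal hill-climbing sketch in the spirit of the techniques this paper surveys; the objective, step size, and neighbourhood are illustrative placeholders, not anything taken from the paper.

```python
# Minimal hill-climbing sketch: repeatedly move to the fittest neighbour
# until no neighbour improves the fitness (a local optimum).
import random

def fitness(x):
    # Hypothetical objective: maximise -(x - 3)^2, optimum at x = 3.
    return -(x - 3.0) ** 2

def hill_climb(start, step=0.1, max_iters=1000):
    current = start
    for _ in range(max_iters):
        # Generate neighbouring candidates and keep the best one.
        neighbours = [current + step, current - step]
        best = max(neighbours, key=fitness)
        if fitness(best) <= fitness(current):
            break  # local optimum reached
        current = best
    return current

print(hill_climb(start=random.uniform(-10, 10)))
```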
Feature selection is an important step in data analysis to address the curse of dimensionality. Such dimensionality reduction techniques are particularly important when classification is required and the model scales in polynomial time with the number of features (applications include genomics, the life sciences, and cyber-security, among others). Feature selection is the process of finding the minimum subset of features that allows for the maximum predictive power. Many of the state-of-the-art information-theoretic feature selection approaches use a greedy forward search; however, there are concerns about the efficiency and optimality of this search. A unified framework was recently presented for information-theoretic feature selection that tied together many of the works of the past twenty years. That work showed that joint mutual information maximization (JMI) is generally the best option; however, the complexity of the greedy search for JMI scales quadratically, making it infeasible on high-dimensional datasets. In this contribution, we propose a fast approximation of JMI based on information theory. Our approach takes advantage of decomposing the calculations within JMI to speed up a typical greedy search. We benchmarked the proposed approach against JMI on several UCI datasets, and we demonstrate that it returns feature sets that are highly consistent with JMI while decreasing the run time required to perform feature selection.
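The following sketch shows what a plain greedy forward search under the JMI criterion looks like for discrete features (histogram-based mutual information); it illustrates the quadratic cost this abstract refers to and is not the paper's fast approximation. The synthetic data and the helper names are assumptions for the example.

```python
# Greedy forward selection under the JMI criterion for discrete features:
# each candidate feature is scored by the sum of I((X_f, X_s); Y) over the
# features already selected, which is why the search scales quadratically.
import numpy as np
from collections import Counter

def mutual_info(x, y):
    # I(X;Y) from empirical joint frequencies of two discrete sequences.
    n = len(x)
    pxy = Counter(zip(x, y))
    px, py = Counter(x), Counter(y)
    return sum((c / n) * np.log2((c * n) / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def jmi_forward_select(X, y, k):
    n_features = X.shape[1]
    # Start with the single most relevant feature.
    selected = [max(range(n_features), key=lambda f: mutual_info(X[:, f], y))]
    while len(selected) < k:
        remaining = [f for f in range(n_features) if f not in selected]
        def score(f):
            return sum(mutual_info(list(zip(X[:, f], X[:, s])), y) for s in selected)
        selected.append(max(remaining, key=score))
    return selected

rng = np.random.default_rng(1)
X = rng.integers(0, 3, size=(300, 8))   # hypothetical discrete features
y = (X[:, 0] + X[:, 3]) % 3             # label depends on features 0 and 3
print(jmi_forward_select(X, y, k=3))
```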
We present work undertaken at our institutional repository to enhance metadata and re-organize digital objects according to a new information architecture, in an effort to minimize administrative object management and processing and to improve object discovery and use. This work was partly motivated by the launch of a new discovery platform at our institution, which aggregates metadata and full text from our four open access repositories into a cohesive, consistent, and enhanced searching and browsing experience. The platform provides digital object identifier (DOI) assignment, metadata access in various formats, and an open metadata and full-text application programming interface (API) for researchers, among other features. The functionality of these platform features relies heavily on accurate object representation and metadata. This work facilitates and improves discovery of and engagement with the diverse digital objects available from our institution, so they can be used and analyzed in new, flexible, and innovative ways by a myriad of communities and disciplines.
Genes, proteins, and other metabolites present in the cellular environment form a virtual network that represents the regulatory relationships among its constituents. This network is called a Gene Regulatory Network (GRN). Computational reconstruction of a GRN reveals normal metabolic pathways as well as disease motifs. The availability of microarray gene expression data from normal and diseased tissues makes this job easier for computational biologists. Here, reconstruction of the GRN is based on neural modeling. We use discrete and continuous versions of the Firefly metaheuristic algorithm for structure and parameter learning of GRNs, respectively. The discrete version for this problem is proposed by us and is applied to explore the discrete search space of GRN structures. To evaluate the performance of the algorithm, we use a widely used synthetic GRN data set. The algorithm achieves an accuracy above 50% in recovering the GRN. The accuracy of the Firefly algorithm in structure and parameter optimization of GRNs is promising.
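For readers unfamiliar with the Firefly algorithm mentioned here, the sketch below shows a generic continuous Firefly loop on a toy objective (dimmer fireflies move toward brighter ones); it is not the paper's GRN-specific variant, and all parameter values are assumptions.

```python
# Generic continuous Firefly algorithm sketch: attractiveness decays with
# squared distance, and each firefly drifts toward brighter (lower-cost) ones.
import numpy as np

def objective(x):            # hypothetical fitness: sphere function
    return np.sum(x ** 2)

def firefly(n=20, dim=5, iters=100, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-5, 5, (n, dim))
    for _ in range(iters):
        cost = np.array([objective(p) for p in pos])
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:        # j is "brighter" (lower cost)
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * rng.normal(size=dim)
                    cost[i] = objective(pos[i])
        alpha *= 0.97                        # gradually reduce the random step
    best = pos[np.argmin([objective(p) for p in pos])]
    return best, objective(best)

print(firefly())
```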
Because of the poor performance of heuristic algorithms on the virtual machine placement problem in cloud environments, a multi-objective constrained optimization model of virtual machine placement is presented, taking energy consumption and resource wastage as the objectives. We solve the model with a proposed discrete firefly algorithm, which takes a firefly's location as the placement result and its brightness as the objective value. Its movement strategy moves darker fireflies toward brighter ones in the solution space. The continuous position after movement is discretized by the proposed discretization strategy. To speed up the search, a local search mechanism around the best solution is introduced. Experimental results on the OpenStack cloud platform show that the proposed algorithm yields lower energy consumption and resource wastage than other algorithms.
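To illustrate the two ingredients named in this abstract, the sketch below discretizes a continuous firefly position into a VM-to-host assignment and scores it on energy consumption and resource wastage; the capacity figures, linear power model, and wastage measure are simplified assumptions, not the paper's exact model.

```python
# Simplified VM placement evaluation: map a continuous position to host
# indices, then compute a linear power model and a CPU/memory imbalance term.
import numpy as np

HOST_CPU, HOST_MEM = 16.0, 64.0            # hypothetical host capacities
P_IDLE, P_MAX = 100.0, 250.0               # hypothetical host power (watts)

vm_cpu = np.array([2.0, 4.0, 1.0, 8.0, 2.0])
vm_mem = np.array([4.0, 16.0, 2.0, 32.0, 8.0])
n_hosts = 3

def discretise(position):
    # Map each continuous coordinate to a host index in [0, n_hosts).
    return np.clip(np.floor(position), 0, n_hosts - 1).astype(int)

def evaluate(placement):
    energy, wastage = 0.0, 0.0
    for h in range(n_hosts):
        mask = placement == h
        cpu_u = vm_cpu[mask].sum() / HOST_CPU
        mem_u = vm_mem[mask].sum() / HOST_MEM
        if mask.any():
            energy += P_IDLE + (P_MAX - P_IDLE) * cpu_u   # linear power model
            wastage += abs(cpu_u - mem_u)                  # imbalance as waste
    return energy, wastage

placement = discretise(np.random.default_rng(2).uniform(0, n_hosts, len(vm_cpu)))
print(placement, evaluate(placement))
```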
Nearest neighbor search is a fundamental problem in research fields such as machine learning, data mining, and pattern recognition. Recently, hashing-based approaches, for example locality sensitive hashing (LSH), have proved effective for scalable high-dimensional nearest neighbor search. Many hashing algorithms have their theoretical roots in random projection. Since these algorithms generate the hash tables (projections) randomly, a large number of hash tables (i.e., long codewords) is required to achieve both high precision and recall. To address this limitation, we propose a novel hashing algorithm called density sensitive hashing (DSH) in this paper. DSH can be regarded as an extension of LSH. By exploring the geometric structure of the data, DSH avoids purely random projection selection and uses those projective functions that best agree with the distribution of the data. Extensive experimental results on real-world data sets show that the proposed method achieves better performance than state-of-the-art hashing approaches.
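The sketch below shows the random-projection LSH baseline this abstract contrasts against: hash bits come from random hyperplanes, so many bits (or tables) are needed for good recall; DSH would instead choose projections that follow the data distribution. Dataset, bit count, and Hamming threshold are assumptions for the example.

```python
# Minimal random-projection LSH sketch: sign of a random hyperplane gives one
# hash bit; candidates are points within a small Hamming radius of the query.
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=(1000, 64))          # hypothetical dataset
query = rng.normal(size=64)

n_bits = 16
planes = rng.normal(size=(n_bits, 64))      # purely random projections

def hash_code(x):
    return (planes @ x.T > 0).astype(np.uint8).T  # one bit per hyperplane

codes = hash_code(data)
q_code = hash_code(query)

# Candidates are points whose code matches the query's within 2 bits.
hamming = np.count_nonzero(codes != q_code, axis=1)
candidates = np.where(hamming <= 2)[0]
if candidates.size == 0:                     # fall back to a full scan if empty
    candidates = np.arange(len(data))
best = candidates[np.argmin(np.linalg.norm(data[candidates] - query, axis=1))]
print(len(candidates), best)
```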
Quantum cryptography and quantum search algorithms are considered two important research topics in quantum information science. An asymmetric quantum encryption protocol based on the properties of quantum one-way functions and a quantum search algorithm is proposed. Owing to the no-cloning theorem and the trapdoor property of the one-way public-key functions, an eavesdropper cannot extract any private information from the public keys and the ciphertext. Randomized key generation is introduced to improve the security of the proposed protocol, i.e., one private key corresponds to an exponential number of public keys. Using unitary operations and single-photon measurements, secret messages can be sent directly from the sender to the receiver. The proposed protocol is proved to be information-theoretically secure. Furthermore, compared with symmetric quantum key distribution, the proposed protocol not only reduces additional communication but is also easier to carry out in practice, because no entangled photons or complex operations are required.
With the proliferation of WiFi-enabled devices, people expect to be able to use them everywhere, be it at work, while commuting, or when visiting friends. In the latter case, homeowners are confronted with the burden of controlling access to their WiFi router, and usually resort to simply sharing the password. Although convenient, this solution breaches basic security principles and puts the burden on the friends, who have to enter the password on each and every one of their devices. The use of social networks, specifying the trust relations between people and devices, provides for a more secure and more friendly authentication mechanism. In this paper, we advance the state of the art by abandoning the centralized solution to embedding social networks in WiFi authentication; we introduce EAP-SocTLS, a decentralized approach for authentication and authorization of WiFi access points and other devices, exploiting the embedded trust relations. In particular, we address the (quadratic) search complexity when indirect trust relations, like the smartphone of a friend's kid, are involved. We show that the simple heuristic of limiting the search to friends and devices in physical proximity makes for a scalable solution. Our prototype implementation, which is based on WebID and EAP-TLS, uses WiFi probe requests to determine the pool of neighboring devices and was shown to reduce the search time from 1 minute for the naive policy down to 11 seconds in the case of granting access over an indirect friend.
In Wireless Mesh Networks (WMNs), Network-Wide Broadcasts (NWBs) are a fundamental operation, required by routing and other mechanisms that distribute information to all nodes in the network. However, due to the characteristics of wireless communication, NWBs are generally problematic. Optimizing them thus is a prime target when improving the overall performance and dependability of WMNs. Most existing optimizations neglect the real nature of WMNs and are based on simple graph models, which provide optimistic assumptions of NWB dissemination. On the other hand, models that fully consider the complex propagation characteristics of NWBs quickly become unsolvable due to their complexity. In this paper, we present the Monte Carlo method Probabilistic Breadth-First Search (PBFS) to approximate the reachability of NWB protocols. PBFS simulates individual NWBs on graphs with probabilistic edge weights, which reflect link qualities of individual wireless links in the WMN, and estimates reachability over a configurable number of simulated runs. This approach is not only more efficient than existing ones, but further provides additional information, such as the distribution of path lengths. Furthermore, it is easily extensible to NWB schemes other than flooding. The applicability of PBFS is validated both theoretically and empirically, in the latter by comparing reachability as calculated by PBFS and measured in a real-world WMN. Validation shows that PBFS quickly converges to the theoretically correct value and approximates the behavior of real-life testbeds very well. The feasibility of PBFS to support research on NWB optimizations or higher level protocols that employ NWBs is demonstrated in two use cases.
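A compact way to see the PBFS idea described above is a breadth-first search in which each edge is crossed with its link-quality probability, averaged over many simulated runs; the topology and probabilities below are made up for illustration and are not from the paper's testbed.

```python
# Monte Carlo reachability estimate in the spirit of PBFS: run many BFS
# simulations where each transmission succeeds independently with the
# link's delivery probability, then average how often each node is reached.
import random
from collections import deque

# adjacency: node -> list of (neighbour, link delivery probability)
graph = {
    0: [(1, 0.9), (2, 0.6)],
    1: [(0, 0.9), (3, 0.8)],
    2: [(0, 0.6), (3, 0.5)],
    3: [(1, 0.8), (2, 0.5)],
}

def pbfs_reachability(graph, source, runs=10_000):
    reached_counts = {v: 0 for v in graph}
    for _ in range(runs):
        visited = {source}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v, p in graph[u]:
                # Each transmission succeeds independently with probability p.
                if v not in visited and random.random() < p:
                    visited.add(v)
                    queue.append(v)
        for v in visited:
            reached_counts[v] += 1
    return {v: c / runs for v, c in reached_counts.items()}

print(pbfs_reachability(graph, source=0))
```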
This paper develops an opposition-based learning harmony search algorithm with mutation (OLHS-M) for solving global continuous optimization problems. The proposed method differs from the original harmony search (HS) in three aspects. First, an opposition-based learning technique is incorporated into the improvisation process to enlarge the algorithm's search space. Second, a new mutation strategy replaces the original pitch adjustment operation of HS to further improve its search ability. Third, an effective self-adaptive strategy is presented to fine-tune the key control parameters (the harmony memory consideration rate, HMCR, and the pitch adjustment rate, PAR) to balance local and global search during the evolution of the search process. Numerical results demonstrate that the proposed algorithm performs much better than the improved HS variants reported in the recent literature in terms of solution quality and stability.
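The sketch below illustrates the opposition-based learning ingredient in a bare-bones harmony-search-style loop: for each improvised candidate x in [lo, hi], the opposite point lo + hi - x is also evaluated and the fitter of the two is kept. The objective, bounds, and simplified improvisation step are assumptions, not the OLHS-M algorithm itself.

```python
# Opposition-based improvisation sketch: evaluate each candidate and its
# opposite point, keep the better one, and replace the worst harmony if improved.
import numpy as np

rng = np.random.default_rng(4)
lo, hi, dim = -5.0, 5.0, 10

def f(x):                      # hypothetical objective to minimise
    return np.sum(x ** 2)

def improvise_with_opposition(memory):
    # Perturb a randomly chosen harmony (simplified improvisation), then
    # also consider its opposite point in the search box.
    base = memory[rng.integers(len(memory))]
    candidate = np.clip(base + rng.normal(scale=0.5, size=dim), lo, hi)
    opposite = lo + hi - candidate
    return candidate if f(candidate) < f(opposite) else opposite

memory = rng.uniform(lo, hi, (20, dim))          # harmony memory
for _ in range(2000):
    new = improvise_with_opposition(memory)
    worst = np.argmax([f(h) for h in memory])
    if f(new) < f(memory[worst]):
        memory[worst] = new                       # replace the worst harmony
print(min(f(h) for h in memory))
```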
Existing methods for multi-objective optimization usually provide only an approximation of a Pareto front, and there is little theoretical guarantee of finding the real Pareto front. This paper is concerned with the possibility of fully determining the true Pareto front for those continuous multi-objective optimization problems that have a finite number of local optima in terms of each single objective function and for which there is an effective method to find all such local optima. To this end, generalized theoretical conditions are first given to guarantee a complete cover of the actual Pareto front for both discrete and continuous problems. Then, based on these conditions, an effective search procedure inspired by the rising-sea-level phenomenon is proposed specifically for continuous problems of the concerned class. Even for general continuous problems for which not all local optima are available, the new method may still work well to approximate the true Pareto front. The practicability of the proposed method is especially underpinned by multi-optima evolutionary algorithms. The advantages of the proposed method in terms of both solution quality and computational efficiency are illustrated by the simulation results.
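For reference, the helper below spells out the Pareto-front notion this abstract relies on: filtering a finite set of objective vectors down to the non-dominated ones (assuming every objective is minimised). It is a generic definition, not the paper's search procedure.

```python
# Non-dominated filtering: keep only points not dominated by any other point.
def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0), (2.5, 2.5)]
print(pareto_front(candidates))   # (3.0, 4.0) is dropped: dominated by (2.0, 3.0)
```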
In this paper, we propose a decomposition-based multiobjective evolutionary algorithm that extracts information from an external archive to guide the evolutionary search for continuous optimization problems. The proposed algorithm uses a mechanism to identify promising regions (subproblems) by learning from the external archive, and this information guides the evolutionary search process. To demonstrate the performance of the algorithm, we conduct experiments comparing it with other decomposition-based approaches. The results confirm that the proposed algorithm is very competitive.
An improved harmony search algorithm for solving continuous optimization problems is presented in this paper. In the proposed algorithm, an elimination principle is developed for choosing from the harmony memory, so that harmonies with better fitness have more opportunities to be selected when generating new harmonies. Two key control parameters, the pitch adjustment rate (PAR) and the bandwidth distance (bw), are dynamically adjusted to favor exploration in the early stages and exploitation in the final stages of the search process, adapting to the different search spaces of the optimization problems. Numerical results on 12 benchmark problems show that the proposed algorithm finds better solutions more effectively than existing HS variants.
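One common way to schedule these two parameters, shown below, is a linear increase of PAR and an exponential decay of bw over the run; this is an assumed illustration of the exploration-to-exploitation shift, not necessarily this paper's exact formulas, and all constants are placeholders.

```python
# Illustrative PAR/bw schedules: PAR grows linearly (more exploitation later),
# bw decays exponentially (smaller adjustment steps later).
import math

PAR_MIN, PAR_MAX = 0.2, 0.9
BW_MIN, BW_MAX = 0.001, 1.0
N_ITERS = 5000

def par(t):
    return PAR_MIN + (PAR_MAX - PAR_MIN) * t / N_ITERS

def bw(t):
    return BW_MAX * math.exp(math.log(BW_MIN / BW_MAX) * t / N_ITERS)

for t in (0, N_ITERS // 2, N_ITERS):
    print(f"iter {t:>5}: PAR={par(t):.2f}  bw={bw(t):.4f}")
```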