Biblio

Filters: Keyword is search problems
2018-03-05
Medeiros, N., Ivaki, N., Costa, P., Vieira, M..  2017.  Software Metrics as Indicators of Security Vulnerabilities. 2017 IEEE 28th International Symposium on Software Reliability Engineering (ISSRE). :216–227.

Detecting software security vulnerabilities and distinguishing vulnerable from non-vulnerable code is anything but simple. Most of the time, vulnerabilities remain undisclosed until they are exposed, for instance, by an attack during the software operational phase. Software metrics are widely used indicators of software quality, but the question is whether they can be used to distinguish vulnerable software units from non-vulnerable ones during development. In this paper, we perform an exploratory study on software metrics, their interdependency, and their relation with security vulnerabilities. We aim at understanding: i) the correlation between software architectural characteristics, represented in the form of software metrics, and the number of vulnerabilities; and ii) which metrics are the most informative and discriminative for identifying vulnerable units of code. To achieve these goals, we use, respectively, correlation coefficients and heuristic search techniques. Our analysis is carried out on a dataset that includes software metrics and reported security vulnerabilities, exposed by security attacks, for all functions, classes, and files of five widely used projects. Results show: i) a strong correlation between several project-level metrics and the number of vulnerabilities, and ii) the possibility of using a group of metrics, at both file and function levels, to distinguish vulnerable and non-vulnerable code with a high level of accuracy.
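
As a loose illustration of the first step of such an analysis (rank-correlating metrics with vulnerability counts), the following sketch uses illustrative file and column names that are not from the paper:

```python
# Sketch: rank-correlating project-level software metrics with vulnerability
# counts, roughly in the spirit of the paper's first research question.
# The CSV layout and metric names are assumptions for illustration only.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("project_metrics.csv")  # hypothetical: one row per file/function
metrics = ["loc", "cyclomatic_complexity", "fan_in", "fan_out"]  # assumed columns

for m in metrics:
    rho, p = spearmanr(df[m], df["vulnerability_count"])
    print(f"{m}: rho={rho:.2f}, p={p:.3g}")
```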

2018-02-02
Khari, M., Vaishali, Kumar, M..  2016.  Analysis of software security testing using metaheuristic search technique. 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom). :2147–2152.

Metaheuristic search is a more advanced approach than traditional heuristic search. Selecting one option among different alternatives is not hard; what is hard is guaranteeing that the selection is cost effective. Metaheuristic search solves this hard problem with the help of a fitness function: a metric, or measure, that helps decide which solution is optimal to choose from the available set of test sets. This paper discusses hill climbing, simulated annealing, tabu search, genetic algorithms, and particle swarm optimization in detail, explaining each with the help of its algorithm. Combining metaheuristic search techniques with security testing methods would result in a searching technique that is both better and more secure. This paper primarily focuses on the metaheuristic search techniques.
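
As a rough illustration of how a fitness function guides one of the discussed techniques, the following hill-climbing sketch selects a subset of test cases; the coverage data and fitness weighting are illustrative assumptions, not taken from the paper:

```python
# Sketch: hill climbing over test-case subsets, guided by a fitness function.
# The coverage data and the fitness weighting are illustrative assumptions.
import random

coverage = {  # hypothetical: test case -> set of covered security checks
    "t1": {"auth", "input_validation"},
    "t2": {"input_validation", "crypto"},
    "t3": {"auth"},
    "t4": {"crypto", "logging"},
}

def fitness(subset):
    covered = set().union(*(coverage[t] for t in subset)) if subset else set()
    return len(covered) - 0.1 * len(subset)  # reward coverage, penalize cost

current = set(random.sample(list(coverage), 2))
improved = True
while improved:
    improved = False
    for t in coverage:
        neighbor = current ^ {t}  # flip one test case in or out
        if fitness(neighbor) > fitness(current):
            current, improved = neighbor, True
            break

print(sorted(current), fitness(current))
```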

2017-12-28
Liu, H., Ditzler, G..  2017.  A fast information-theoretic approximation of joint mutual information feature selection. 2017 International Joint Conference on Neural Networks (IJCNN). :4610–4617.

Feature selection is an important step in data analysis to address the curse of dimensionality. Such dimensionality reduction techniques are particularly important when a classification is required and the model scales polynomially with the number of features (applications include genomics, life sciences, cyber-security, etc.). Feature selection is the process of finding the minimum subset of features that allows for the maximum predictive power. Many of the state-of-the-art information-theoretic feature selection approaches use a greedy forward search; however, there are concerns about the efficiency and optimality of that search. A unified framework was recently presented for information-theoretic feature selection that tied together much of the work of the past twenty years. That work showed that joint mutual information maximization (JMI) is generally the best option; however, greedy search for JMI scales quadratically and is infeasible on high-dimensional datasets. In this contribution, we propose a fast approximation of JMI based on information theory. Our approach takes advantage of decomposing the calculations within JMI to speed up a typical greedy search. We benchmarked the proposed approach against JMI on several UCI datasets, and we demonstrate that the proposed approach returns feature sets that are highly consistent with JMI, while decreasing the run time required to perform feature selection.
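
A minimal sketch of the greedy forward search under the JMI criterion for discrete features is given below; estimating the joint term by pairing two feature columns into one discrete variable is an illustrative simplification, not the authors' approximation:

```python
# Sketch: greedy forward selection under the JMI criterion for discrete data.
# score(f) = sum over already-selected s of I(X_f, X_s ; Y), estimated here by
# pairing the two feature columns into one discrete variable (a simplification).
import numpy as np
from sklearn.metrics import mutual_info_score

def jmi_forward(X, y, k):
    n_features = X.shape[1]
    # start with the single feature most informative about y
    selected = [max(range(n_features), key=lambda f: mutual_info_score(X[:, f], y))]
    while len(selected) < k:
        best_f, best_score = None, -np.inf
        for f in range(n_features):
            if f in selected:
                continue
            score = sum(
                mutual_info_score([f"{a}_{b}" for a, b in zip(X[:, f], X[:, s])], y)
                for s in selected)
            if score > best_score:
                best_f, best_score = f, score
        selected.append(best_f)
    return selected

# usage (hypothetical discrete data): selected = jmi_forward(X_discrete, y, k=10)
```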

2017-12-20
Li, S., Wang, B..  2017.  A Method for Hybrid Bayesian Network Structure Learning from Massive Data Using MapReduce. 2017 IEEE 3rd International Conference on Big Data Security on Cloud (BigDataSecurity), IEEE International Conference on High Performance and Smart Computing (HPSC), and IEEE International Conference on Intelligent Data and Security (IDS). :272–276.

The Bayesian network is a popular and important data mining model for representing uncertain knowledge. For large-scale data it is often too costly to learn an accurate structure. To resolve this problem, much work has been done on migrating structure learning algorithms to the MapReduce framework. In this paper, we introduce a distributed hybrid structure learning algorithm that combines the advantages of constraint-based and score-and-search-based algorithms. By reusing intermediate MapReduce results, the algorithm greatly simplifies the computation and achieves good results in both efficiency and accuracy.
2017-12-12
Saundry, A..  2017.  Institutional Repository Digital Object Metadata Enhancement and Re-Architecting. 2017 ACM/IEEE Joint Conference on Digital Libraries (JCDL). :1–3.

We present work undertaken at our institutional repository to enhance metadata and re-organize digital objects according to new information architecture, in an effort to minimize administrative object management and processing, and improve object discovery and use. This work was partly motivated by the launch of a new discovery platform at our institution, which aggregates metadata and full text from our four open access repositories into a cohesive, consistent, and enhanced searching and browsing experience. The platform provides digital object identifier (DOI) assignment, metadata access via various formats, and an open metadata and full text application program interface (API) for researchers, amongst other features. Functionality of these platform features relies heavily on accurate object representation and metadata. This work facilitates and improves the discovery and engagement of the diverse digital objects available from our institution, so they can be used and analyzed in new, flexible, and innovative ways by a myriad of communities and disciplines.

2017-03-08
Santra, N., Biswas, S., Acharyya, S..  2015.  Neural modeling of Gene Regulatory Network using Firefly algorithm. 2015 IEEE UP Section Conference on Electrical Computer and Electronics (UPCON). :1–6.

Genes, proteins and other metabolites present in the cellular environment exhibit a virtual network that represents the regulatory relationships among its constituents. This network is called a Gene Regulatory Network (GRN). Computational reconstruction of a GRN reveals normal metabolic pathways as well as disease motifs. The availability of microarray gene expression data from normal and diseased tissues makes the job easier for computational biologists. Reconstruction of the GRN is based on neural modeling. Here we have used discrete and continuous versions of a meta-heuristic algorithm, the Firefly algorithm, for structure and parameter learning of GRNs, respectively. The discrete version for this problem is proposed by us and has been applied to explore the discrete search space of GRN structures. To evaluate the performance of the algorithm, we have used a widely used synthetic GRN data set. The algorithm shows an accuracy rate above 50% in finding the GRN. The accuracy of the Firefly algorithm in structure and parameter optimization of GRNs is promising.

Li, Xiao-Ke, Gu, Chun-Hua, Yang, Ze-Ping, Chang, Yao-Hui.  2015.  Virtual machine placement strategy based on discrete firefly algorithm in cloud environments. 2015 12th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP). :61–66.

Because of the poor performance of heuristic algorithms on the virtual machine placement problem in cloud environments, a multi-objective constraint optimization model of virtual machine placement is presented, taking energy consumption and resource wastage as the objectives. We solve the model with the proposed discrete firefly algorithm, which takes a firefly's location as the placement result and its brightness as the objective value. Its movement strategy makes darker fireflies move toward brighter fireflies in the solution space. The continuous position after movement is discretized by the proposed discrete strategy. In order to speed up the search for a solution, a local search mechanism for the optimal solution is introduced. Experimental results on the OpenStack cloud platform show that the proposed algorithm yields less energy consumption and resource wastage than other algorithms.
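
A toy sketch of the discrete firefly movement described above, with VM placement encoded as an assignment vector; the cost model, parameters, and move-toward-best simplification are illustrative assumptions:

```python
# Sketch: discrete firefly movement for VM placement. Each firefly is an
# assignment vm -> host; "brightness" is the negated cost; a dimmer firefly
# copies components from a brighter one with a distance-dependent probability.
# Cost model, parameters, and moving only toward the best firefly are
# illustrative simplifications.
import math
import random

N_VMS, N_HOSTS = 8, 3

def cost(assign):
    # hypothetical objective: balance host load, standing in for the
    # energy-consumption and resource-wastage objectives
    load = [assign.count(h) for h in range(N_HOSTS)]
    return max(load) - min(load)

def move(dim, bright, beta0=0.8, gamma=0.1):
    d = sum(a != b for a, b in zip(dim, bright))   # Hamming distance
    beta = beta0 * math.exp(-gamma * d * d)        # attractiveness
    # discretization: copy each component from the brighter firefly with prob. beta
    return [b if random.random() < beta else a for a, b in zip(dim, bright)]

swarm = [[random.randrange(N_HOSTS) for _ in range(N_VMS)] for _ in range(10)]
for _ in range(50):
    swarm.sort(key=cost)                           # brightest (lowest cost) first
    for i in range(1, len(swarm)):
        swarm[i] = move(swarm[i], swarm[0])
print(swarm[0], cost(swarm[0]))
```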

2015-05-06
Zhongming Jin, Cheng Li, Yue Lin, Deng Cai.  2014.  Density Sensitive Hashing. Cybernetics, IEEE Transactions on. 44:1362-1371.

Nearest neighbor search is a fundamental problem in various research fields like machine learning, data mining and pattern recognition. Recently, hashing-based approaches, for example locality sensitive hashing (LSH), have proved effective for scalable high-dimensional nearest neighbor search. Many hashing algorithms find their theoretical roots in random projection. Since these algorithms generate the hash tables (projections) randomly, a large number of hash tables (i.e., long codewords) are required in order to achieve both high precision and recall. To address this limitation, we propose a novel hashing algorithm called density sensitive hashing (DSH) in this paper. DSH can be regarded as an extension of LSH. By exploring the geometric structure of the data, DSH avoids purely random projection selection and uses those projection functions that best agree with the distribution of the data. Extensive experimental results on real-world data sets show that the proposed method achieves better performance than the state-of-the-art hashing approaches.
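
For context, a minimal random-projection LSH baseline of the kind DSH improves upon (the data and code length are arbitrary); DSH would replace the random projections with data-dependent ones:

```python
# Sketch: random-projection LSH baseline that DSH refines. Each bit of the
# binary code is the sign of a random projection; DSH instead chooses
# projections that agree with the data distribution. Parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))          # toy dataset: 1000 points, 64 dims
n_bits = 16
W = rng.normal(size=(64, n_bits))        # random projection directions

codes = (X @ W > 0).astype(np.uint8)     # binary hash codes

def hamming_neighbors(q_code, codes, k=5):
    dists = (codes != q_code).sum(axis=1)  # Hamming distance to every code
    return np.argsort(dists)[:k]

print(hamming_neighbors(codes[0], codes))
```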

2015-05-05
Luo Wenjun, Liu Guanli.  2014.  Asymmetrical quantum encryption protocol based on quantum search algorithm. Communications, China. 11:104-111.

Quantum cryptography and the quantum search algorithm are considered two important research topics in quantum information science. An asymmetrical quantum encryption protocol based on the properties of quantum one-way functions and the quantum search algorithm is proposed. Owing to the no-cloning theorem and the trapdoor one-way property of the public keys, an eavesdropper cannot extract any private information from the public keys and the ciphertext. A key-generation randomized logarithm is introduced to improve the security of the proposed protocol, i.e., one private key corresponds to an exponential number of public keys. Using unitary operations and single-photon measurement, secret messages can be sent directly from the sender to the receiver. The proposed protocol is proved to be information-theoretically secure. Furthermore, compared with symmetrical quantum key distribution, the proposed protocol not only reduces additional communication, but is also easier to carry out in practice, because no entangled photons or complex operations are required.

2015-05-04
Durmus, Y., Langendoen, K..  2014.  Wifi authentication through social networks – A decentralized and context-aware approach. Pervasive Computing and Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on. :532-538.

With the proliferation of WiFi-enabled devices, people expect to be able to use them everywhere, be it at work, while commuting, or when visiting friends. In the latter case, home owners are confronted with the burden of controlling the access to their WiFi router, and usually resort to simply sharing the password. Although convenient, this solution breaches basic security principles, and puts the burden on the friends who have to enter the password in each and every of their devices. The use of social networks, specifying the trust relations between people and devices, provides for a more secure and more friendly authentication mechanism. In this paper, we progress the state-of-the-art by abandoning the centralized solution to embed social networks in WiFi authentication; we introduce EAP-SocTLS, a decentralized approach for authentication and authorization of WiFi access points and other devices, exploiting the embedded trust relations. In particular, we address the (quadratic) search complexity when indirect trust relations, like the smartphone of a friend's kid, are involved. We show that the simple heuristic of limiting the search to friends and devices in physical proximity makes for a scalable solution. Our prototype implementation, which is based on WebID and EAP-TLS, uses WiFi probe requests to determine the pool of neighboring devices and was shown to reduce the search time from 1 minute for the naive policy down to 11 seconds in the case of granting access over an indirect friend.
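
A rough sketch of the proximity heuristic described above, restricting the search of the trust graph to direct friends and to devices seen in WiFi probe requests; the graph, names, and policy details are illustrative assumptions, not the EAP-SocTLS implementation:

```python
# Sketch: limiting the trust search to direct friends and physically nearby
# devices, as in the heuristic described above. The trust graph and the set of
# nearby devices (from probe requests) are illustrative assumptions.
trust = {   # owner -> directly trusted identities (people or their devices)
    "alice": {"bob", "bob_phone"},
    "bob": {"bob_phone", "bob_kid_phone"},
}
nearby = {"bob_phone", "bob_kid_phone"}   # devices seen in WiFi probe requests

def authorized(owner, device):
    if device in trust.get(owner, set()):          # direct trust relation
        return True
    # indirect trust: only search friends' lists, and only for nearby devices
    if device not in nearby:
        return False
    return any(device in trust.get(friend, set())
               for friend in trust.get(owner, set()))

print(authorized("alice", "bob_kid_phone"))   # True via bob, since it is nearby
```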

2015-05-01
Lichtblau, B., Dittrich, A..  2014.  Probabilistic Breadth-First Search - A Method for Evaluation of Network-Wide Broadcast Protocols. New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on. :1-6.

In Wireless Mesh Networks (WMNs), Network-Wide Broadcasts (NWBs) are a fundamental operation, required by routing and other mechanisms that distribute information to all nodes in the network. However, due to the characteristics of wireless communication, NWBs are generally problematic. Optimizing them thus is a prime target when improving the overall performance and dependability of WMNs. Most existing optimizations neglect the real nature of WMNs and are based on simple graph models, which provide optimistic assumptions of NWB dissemination. On the other hand, models that fully consider the complex propagation characteristics of NWBs quickly become unsolvable due to their complexity. In this paper, we present the Monte Carlo method Probabilistic Breadth-First Search (PBFS) to approximate the reachability of NWB protocols. PBFS simulates individual NWBs on graphs with probabilistic edge weights, which reflect link qualities of individual wireless links in the WMN, and estimates reachability over a configurable number of simulated runs. This approach is not only more efficient than existing ones, but further provides additional information, such as the distribution of path lengths. Furthermore, it is easily extensible to NWB schemes other than flooding. The applicability of PBFS is validated both theoretically and empirically, in the latter by comparing reachability as calculated by PBFS and measured in a real-world WMN. Validation shows that PBFS quickly converges to the theoretically correct value and approximates the behavior of real-life testbeds very well. The feasibility of PBFS to support research on NWB optimizations or higher level protocols that employ NWBs is demonstrated in two use cases.
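
A compact sketch of the PBFS idea as described: each run samples the probabilistic edges, performs a plain BFS from the source, and reachability is averaged over the runs; the graph and edge probabilities are illustrative:

```python
# Sketch: Probabilistic Breadth-First Search as described above. Each run
# samples the probabilistic edges (link qualities), does a plain BFS from the
# source, and a node's reachability is the fraction of runs that reach it.
# The example graph and edge probabilities are illustrative assumptions.
import random
from collections import deque

edges = {  # node -> [(neighbor, link delivery probability)]
    "s": [("a", 0.9), ("b", 0.6)],
    "a": [("c", 0.8)],
    "b": [("c", 0.5)],
    "c": [],
}

def pbfs(source, runs=10000):
    reached = {n: 0 for n in edges}
    for _ in range(runs):
        seen, queue = {source}, deque([source])
        while queue:
            u = queue.popleft()
            for v, p in edges[u]:
                if v not in seen and random.random() < p:
                    seen.add(v)
                    queue.append(v)
        for n in seen:
            reached[n] += 1
    return {n: count / runs for n, count in reached.items()}

print(pbfs("s"))
```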

2015-04-30
Hao Wang, Haibin Ouyang, Liqun Gao, Wei Qin.  2014.  Opposition-based learning harmony search algorithm with mutation for solving global optimization problems. Control and Decision Conference (2014 CCDC), The 26th Chinese. :1090-1094.

This paper develops an opposition-based learning harmony search algorithm with mutation (OLHS-M) for solving global continuous optimization problems. The proposed method differs from the original harmony search (HS) in three aspects. Firstly, an opposition-based learning technique is incorporated into the improvisation process to enlarge the algorithm's search space. Then, a new modified mutation strategy is used instead of the original pitch adjustment operation of HS to further improve the search ability of HS. Finally, an effective self-adaptive strategy is presented to fine-tune the key control parameters (e.g., harmony memory consideration rate HMCR and pitch adjustment rate PAR) to balance local and global search over the course of the search process. Numerical results demonstrate that the proposed algorithm performs much better than the improved HS variants reported in the recent literature, in terms of both solution quality and stability.
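
The opposition-based learning step mentioned above is simple to state: for a variable x in [a, b] the opposite point is a + b - x, and the better of the two candidates is kept. A small sketch with an illustrative objective and bounds:

```python
# Sketch: opposition-based learning on a candidate harmony. For each decision
# variable x_i in [lo_i, hi_i], the opposite point is lo_i + hi_i - x_i; the
# better of candidate and opposite is kept. Objective and bounds are illustrative.
import random

lo, hi = [-5.0] * 4, [5.0] * 4

def objective(x):            # toy sphere function to minimize
    return sum(v * v for v in x)

def opposition_step(x):
    opposite = [l + h - v for v, l, h in zip(x, lo, hi)]
    return x if objective(x) <= objective(opposite) else opposite

harmony = [random.uniform(l, h) for l, h in zip(lo, hi)]
harmony = opposition_step(harmony)
print(harmony, objective(harmony))
```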

Xiao-Bing Hu, Ming Wang, Leeson, M.S..  2014.  Calculating the complete pareto front for a special class of continuous multi-objective optimization problems. Evolutionary Computation (CEC), 2014 IEEE Congress on. :290-297.

Existing methods for multi-objective optimization usually provide only an approximation of a Pareto front, and there is little theoretical guarantee of finding the real Pareto front. This paper is concerned with the possibility of fully determining the true Pareto front for those continuous multi-objective optimization problems for which there are a finite number of local optima in terms of each single objective function and there is an effective method to find all such local optima. To this end, some generalized theoretical conditions are first given to guarantee a complete cover of the actual Pareto front for both discrete and continuous problems. Then, based on such conditions, an effective search procedure inspired by the rising-sea-level phenomenon is proposed particularly for continuous problems of the concerned class. Even for general continuous problems for which not all local optima are available, the new method may still work well to approximate the true Pareto front. The good practicability of the proposed method is especially underpinned by multi-optima evolutionary algorithms. The advantages of the proposed method in terms of both solution quality and computational efficiency are illustrated by the simulation results.
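
The basic building block for assembling a Pareto front from a finite set of candidate solutions (e.g., the local optima mentioned above) is a non-dominance filter; a minimal sketch with illustrative candidate points:

```python
# Sketch: extracting the non-dominated (Pareto) subset from a finite list of
# candidate objective vectors, assuming all objectives are minimized. The
# candidate points are illustrative.
def dominates(p, q):
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

candidates = [(1.0, 5.0), (2.0, 3.0), (3.0, 3.0), (4.0, 1.0)]
print(pareto_front(candidates))   # (3.0, 3.0) is dominated by (2.0, 3.0)
```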

Yexing Li, Xinye Cai, Zhun Fan, Qingfu Zhang.  2014.  An external archive guided multiobjective evolutionary approach based on decomposition for continuous optimization. Evolutionary Computation (CEC), 2014 IEEE Congress on. :1124-1130.

In this paper, we propose a decomposition-based multiobjective evolutionary algorithm that extracts information from an external archive to guide the evolutionary search for continuous optimization problems. The proposed algorithm uses a mechanism to identify promising regions (subproblems) by learning information from the external archive, which guides the evolutionary search process. To demonstrate the performance of the algorithm, we conduct experiments comparing it with other decomposition-based approaches. The results validate that our proposed algorithm is very competitive.

Biao Zhang, Huihui Yan, Junhua Duan, Liang, J.J., Hong-yan Sang, Quan-ke Pan.  2014.  An improved harmony search algorithm with dynamic control parameters for continuous optimization problems. Control and Decision Conference (2014 CCDC), The 26th Chinese. :966-971.

An improved harmony search algorithm is presented for solving continuous optimization problems in this paper. In the proposed algorithm, an elimination principle is developed for choosing from the harmony memory, so that harmonies with better fitness have more opportunities to be selected when generating new harmonies. Two key control parameters, pitch adjustment rate (PAR) and bandwidth distance (bw), are dynamically adjusted to favor exploration in the early stages and exploitation during the final stages of the search process, adapting to the different search spaces of the optimization problems. Numerical results on 12 benchmark problems show that the proposed algorithm is more effective than the existing HS variants at finding better solutions.
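
A small sketch of the kind of dynamic parameter schedule described above, with PAR increasing and bw decreasing over the iterations; the schedule forms and constants are illustrative assumptions, as the abstract does not give the exact update rules:

```python
# Sketch: dynamic control parameters for harmony search. PAR grows linearly and
# bw shrinks exponentially with the iteration count, favouring exploration early
# and exploitation late. The schedule forms and constants are illustrative.
import math

PAR_MIN, PAR_MAX = 0.2, 0.9
BW_MIN, BW_MAX = 0.001, 1.0
MAX_ITERS = 1000

def par(t):
    return PAR_MIN + (PAR_MAX - PAR_MIN) * t / MAX_ITERS

def bw(t):
    return BW_MAX * math.exp(math.log(BW_MIN / BW_MAX) * t / MAX_ITERS)

for t in (0, 500, 1000):
    print(t, round(par(t), 3), round(bw(t), 4))
```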