Biblio

Found 7504 results

Filters: Keyword is Metrics
2017-12-28
Noureddine, M. A., Marturano, A., Keefe, K., Bashir, M., Sanders, W. H..  2017.  Accounting for the Human User in Predictive Security Models. 2017 IEEE 22nd Pacific Rim International Symposium on Dependable Computing (PRDC). :329–338.

Given the growing sophistication of cyber attacks, designing a perfectly secure system is not generally possible. Quantitative security metrics are thus needed to measure and compare the relative security of proposed security designs and policies. Since the investigation of security breaches has shown a strong impact of human errors, ignoring the human user in computing these metrics can lead to misleading results. Despite this, and although security researchers have long observed the impact of human behavior on system security, few improvements have been made in designing systems that are resilient to the uncertainties in how humans interact with a cyber system. In this work, we develop an approach for including models of user behavior, emanating from the fields of social sciences and psychology, in the modeling of systems intended to be secure. We then illustrate how one of these models, namely general deterrence theory, can be used to study the effectiveness of the password security requirements policy and the frequency of security audits in a typical organization. Finally, we discuss the many challenges that arise when adopting such a modeling approach, and then present our recommendations for future work.

Stanić, B., Afzal, W..  2017.  Process Metrics Are Not Bad Predictors of Fault Proneness. 2017 IEEE International Conference on Software Quality, Reliability and Security Companion (QRS-C). :493–499.

The correct prediction of faulty modules or classes has a number of advantages, such as improving the quality of software and assigning capable development resources to fix such faults. Different kinds of fault/defect prediction models have been proposed in the literature, but a great majority of them make use of static code metrics as independent variables for making predictions. Recently, process metrics have gained considerable attention as alternative metrics to use for making trustworthy predictions. The objective of this paper is to investigate different combinations of static code and process metrics for evaluating fault prediction performance. We have used publicly available data sets, along with a frequently used classifier, Naive Bayes, to run our experiments. We have analyzed our experimental results both statistically and visually. The statistical analysis showed evidence against any significant difference in fault prediction performance across a variety of different combinations of metrics. This reinforces earlier research results that process metrics are as good predictors of fault proneness as static code metrics. Furthermore, visual inspection of box plots revealed that the best set of metrics for fault prediction is a mix of both static code and process metrics. We also presented evidence that some process metrics are more discriminating than others, making them good predictors to use.
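
A minimal sketch of the kind of comparison the paper describes: fit a Naive Bayes classifier on static code metrics, process metrics, and their mix, and compare cross-validated performance. The CSV layout and column names below are illustrative assumptions, not the paper's datasets.

```python
# Hypothetical feature-set comparison for fault prediction with Naive Bayes.
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

data = pd.read_csv("fault_data.csv")            # one row per module/class (assumed)
static_cols = ["loc", "cyclomatic", "cbo"]      # assumed static code metrics
process_cols = ["commits", "authors", "churn"]  # assumed process metrics
y = data["faulty"]                              # 1 = fault-prone, 0 = clean

feature_sets = {
    "static": static_cols,
    "process": process_cols,
    "mixed": static_cols + process_cols,
}
for name, cols in feature_sets.items():
    scores = cross_val_score(GaussianNB(), data[cols], y,
                             cv=10, scoring="roc_auc")
    print(f"{name:8s} mean AUC = {scores.mean():.3f}")
```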

Boucher, A., Badri, M..  2017.  Predicting Fault-Prone Classes in Object-Oriented Software: An Adaptation of an Unsupervised Hybrid SOM Algorithm. 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). :306–317.

Many fault-proneness prediction models have been proposed in the literature to identify fault-prone code in software systems. Most approaches use fault data history and supervised learning algorithms to build these models. However, since fault data history is not always available, some approaches suggest using semi-supervised or unsupervised fault-proneness prediction models. The HySOM model, proposed in the literature, uses function-level source code metrics to predict fault-prone functions in software systems without using any fault data. In this paper, we adapt the HySOM approach to object-oriented software systems to predict fault-prone code at class-level granularity using object-oriented source code metrics. This adaptation makes it easier to prioritize the efforts of the testing team, as unit tests in object-oriented software systems are typically written for classes rather than for methods. Our adaptation also generalizes one main element of the HySOM model: the calculation of the source code metric threshold values. We conducted an empirical study using 12 public datasets. Results show that the adaptation of the HySOM model for class-level fault-proneness prediction improves the consistency and performance of the model. We additionally compared the performance of the adapted model to supervised approaches based on the Naive Bayes Network, ANN and Random Forest algorithms.

Poon, W. N., Bennin, K. E., Huang, J., Phannachitta, P., Keung, J. W..  2017.  Cross-Project Defect Prediction Using a Credibility Theory Based Naive Bayes Classifier. 2017 IEEE International Conference on Software Quality, Reliability and Security (QRS). :434–441.

Several proposed defect prediction models are effective when historical datasets are available, but defect prediction becomes difficult when no historical data exist. Cross-project defect prediction (CPDP), proposed in recent studies, uses projects from other sources/companies to predict the defects in the target projects and has shown promising results. However, the performance of most CPDP approaches is still less than satisfactory, mainly due to distribution mismatch between the source and target projects. In this study, a credibility theory based Naïve Bayes (CNB) classifier is proposed to establish a novel reweighting mechanism between the source projects and target projects, so that the source data can simultaneously adapt to the target data distribution and retain its own pattern. Our experimental results show the feasibility of the novel algorithm design and demonstrate a significant improvement, in terms of the performance metrics considered, achieved by CNB over other CPDP approaches.
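
A hedged sketch of the general idea of reweighting source instances toward a target distribution before fitting Naive Bayes. The weighting rule below (inverse distance to the target feature statistics) is an illustrative stand-in, not the credibility-theory formula from the paper.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def fit_reweighted_nb(X_src, y_src, X_tgt):
    # Standardize using target statistics so distances are comparable.
    mu, sigma = X_tgt.mean(axis=0), X_tgt.std(axis=0) + 1e-9
    dist = np.linalg.norm((X_src - mu) / sigma, axis=1)
    weights = 1.0 / (1.0 + dist)     # source rows near the target weigh more
    model = GaussianNB()
    model.fit(X_src, y_src, sample_weight=weights)
    return model

# Usage: model = fit_reweighted_nb(X_source, y_source, X_target)
#        preds = model.predict(X_target)
```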

Sultana, K. Z., Williams, B. J..  2017.  Evaluating micro patterns and software metrics in vulnerability prediction. 2017 6th International Workshop on Software Mining (SoftwareMining). :40–47.

Software security is an important aspect of ensuring software quality. Early detection of vulnerable code during development helps developers make software testing cost- and time-effective. Traditional software metrics have been used for early detection of software vulnerabilities, but they are not directly related to code constructs and do not specify any particular granularity level. The goal of this study is to help developers evaluate software security using class-level traceable patterns, called micro patterns, to reduce security risks. The concept of micro patterns is similar to that of design patterns, but micro patterns can be automatically recognized and mined from source code. If micro patterns can better predict vulnerable classes than traditional software metrics, they can be used in developing a vulnerability prediction model. This study explores the performance of class-level patterns in vulnerability prediction and compares them with traditional class-level software metrics. We studied security vulnerabilities as reported for one major release each of Apache Tomcat, Apache Camel and three stand-alone Java web applications. We used machine learning techniques to predict vulnerabilities using micro patterns and class-level metrics as features. We found that micro patterns have higher recall in detecting vulnerable classes than software metrics.

Sultana, K. Z..  2017.  Towards a software vulnerability prediction model using traceable code patterns and software metrics. 2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE). :1022–1025.

Software security is an important aspect of ensuring software quality. The goal of this study is to help developers evaluate software security using traceable patterns and software metrics during development. The concept of traceable patterns is similar to that of design patterns, but traceable patterns can be automatically recognized and extracted from source code. If these patterns can better predict vulnerable code than traditional software metrics, they can be used in developing a vulnerability prediction model to classify code as vulnerable or not. By analyzing and comparing the performance of traceable patterns with metrics, we propose a vulnerability prediction model. This study explores the performance of some code patterns in vulnerability prediction and compares them with traditional software metrics. We use the findings to build an effective vulnerability prediction model. We evaluate security vulnerabilities reported for Apache Tomcat, Apache CXF and three stand-alone Java web applications. We use machine learning and statistical techniques to predict vulnerabilities using traceable patterns and metrics as features. We found that patterns have a lower false negative rate and higher recall in detecting vulnerable code than traditional software metrics.

Henretty, T., Baskaran, M., Ezick, J., Bruns-Smith, D., Simon, T. A..  2017.  A quantitative and qualitative analysis of tensor decompositions on spatiotemporal data. 2017 IEEE High Performance Extreme Computing Conference (HPEC). :1–7.

Summary form only given. Strong light-matter coupling has recently been successfully explored in the GHz and THz [1] range with on-chip platforms. New and intriguing quantum optical phenomena have been predicted in the ultrastrong coupling regime [2], when the coupling strength Ω becomes comparable to the unperturbed frequency of the system ω. We recently proposed a new experimental platform where we couple the inter-Landau-level transition of a high-mobility 2DEG to the highly subwavelength photonic mode of an LC meta-atom [3], showing a very large Ω/ωc = 0.87. Our system benefits from the collective enhancement of the light-matter coupling, which comes from the scaling of the coupling Ω ∝ √n, where n is the number of optically active electrons. In our previous experiments [3] and in the literature [4] this number varies from 10⁴ to 10³ electrons per meta-atom. We now engineer a new cavity, resonant at 290 GHz, with an extremely reduced effective mode surface S_eff = 4 × 10⁻¹⁴ m² (FE simulations, CST), yielding large field enhancements above 1500 and allowing entry into the few-electron (<100) regime. It consists of a complementary metasurface with two very sharp metallic tips separated by a 60 nm gap (Fig. 1(a, b)) on top of a single triangular quantum well. THz-TDS transmission experiments as a function of the applied magnetic field reveal strong anticrossing of the cavity mode with the linear cyclotron dispersion. Measurements for arrays of only 12 cavities are reported in Fig. 1(c). On the top horizontal axis we report the number of electrons occupying the topmost Landau level as a function of the magnetic field. At the anticrossing field of B = 0.73 T we measure approximately 60 electrons ultrastrongly coupled (Ω/ω…

Danesh, W., Rahman, M..  2017.  Linear regression based multi-state logic decomposition approach for efficient hardware implementation. 2017 IEEE/ACM International Symposium on Nanoscale Architectures (NANOARCH). :153–154.

Multi-state logic presents a promising avenue for more-than-Moore scaling, since efficient implementation of multi-valued logic (MVL) can significantly reduce switching and interconnection requirements, resulting in significant benefits compared to binary CMOS. So far, traditional approaches lag behind binary CMOS due to: (a) reliance on logic decomposition approaches [4][5][6] that result in many multi-valued minterms [4], complex polynomials [5], and decision diagrams [6], which are difficult to implement, and (b) emulation of multi-valued computation and communication through binary switches and media, which requires data conversion and large circuits. In this paper, we propose a fundamentally different approach for MVL decomposition, merging concepts from data science and nanoelectronics to tackle these problems. For problem (a), we first perform linear regression on all inputs and outputs of a multi-valued function and find an expression that fits most input and output combinations. For unmatched combinations, we perform successive regressions to find linear expressions. Next, using our novel visual pattern matching technique, we find conditions based on input and output values to select each expression. These expressions, along with the associated selection criteria, ensure that the correct output can be reached for all possible inputs of a specific function. Our selection of a regression model to find linear expressions, coefficients, and conditions allows efficient hardware implementation. We discuss an approach for solving problem (b) and show an example of a quaternary sum circuit. Our estimates show a 65.6% saving in switching components compared with a 4-bit CMOS adder.
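
An illustrative sketch of the successive-regression flow as we read it (our assumption of the procedure, not the authors' tool): fit a linear expression to a multi-valued truth table, peel off the rows it already matches, and regress again on the leftovers until all rows are covered.

```python
import numpy as np

def decompose_mvl(X, y, max_exprs=8):
    """X: input combinations (rows), y: multi-valued outputs."""
    exprs, remaining = [], np.arange(len(y))
    for _ in range(max_exprs):
        if remaining.size == 0:
            break
        A = np.column_stack([X[remaining], np.ones(remaining.size)])
        coef, *_ = np.linalg.lstsq(A, y[remaining], rcond=None)
        pred = np.rint(A @ coef)              # round to the nearest logic level
        matched = pred == y[remaining]
        exprs.append((coef, remaining[matched]))
        remaining = remaining[~matched]
    return exprs, remaining                   # rows not yet covered (ideally empty)

# Example: quaternary (0..3) two-input sum modulo 4
X = np.array([(a, b) for a in range(4) for b in range(4)])
y = (X[:, 0] + X[:, 1]) % 4
exprs, left = decompose_mvl(X, y)
print(len(exprs), "expressions, uncovered rows:", left)
```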

Rolinger, T. B., Simon, T. A., Krieger, C. D..  2017.  Performance challenges for heterogeneous distributed tensor decompositions. 2017 IEEE High Performance Extreme Computing Conference (HPEC). :1–7.

Tensor decompositions, which are factorizations of multi-dimensional arrays, are becoming increasingly important in large-scale data analytics. A popular tensor decomposition algorithm is Canonical Decomposition/Parallel Factorization using alternating least squares fitting (CP-ALS). Tensors that model real-world applications are often very large and sparse, driving the need for high performance implementations of decomposition algorithms, such as CP-ALS, that can take advantage of many types of compute resources. In this work we present ReFacTo, a heterogeneous distributed tensor decomposition implementation based on DFacTo, an existing distributed-memory approach to CP-ALS. DFacTo reduces the critical routine of CP-ALS to a series of sparse matrix-vector multiplications (SpMVs). ReFacTo leverages GPUs within a cluster via MPI to perform these SpMVs and uses OpenMP threads to parallelize other routines. We evaluate the performance of ReFacTo when using NVIDIA's GPU-based cuSPARSE library and compare it to an alternative implementation that uses Intel's CPU-based Math Kernel Library (MKL) for the SpMV. Furthermore, we provide a discussion of the performance challenges of heterogeneous distributed tensor decompositions based on the results we observed. We find that on up to 32 nodes, the SpMV of ReFacTo when using MKL is up to 6.8× faster than ReFacTo when using cuSPARSE.
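
For context, a minimal dense CP-ALS sketch for a 3-way tensor, illustrating the algorithm that DFacTo/ReFacTo accelerate; real implementations work on large sparse tensors and distribute the matricized products (the SpMVs) across nodes and GPUs.

```python
import numpy as np

def cp_als(T, rank, iters=50):
    I, J, K = T.shape
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((n, rank)) for n in (I, J, K))
    unfold = lambda X, mode: np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
    kr = lambda U, V: (U[:, None, :] * V[None, :, :]).reshape(-1, U.shape[1])
    for _ in range(iters):  # alternating least-squares updates of each factor
        A = unfold(T, 0) @ kr(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ kr(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ kr(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Usage: recover a rank-3 synthetic tensor
T = np.einsum('ir,jr,kr->ijk', *[np.random.rand(n, 3) for n in (5, 6, 7)])
A, B, C = cp_als(T, rank=3)
```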

Maslovskiy, A., Kolchigin, N., Legenkiy, M., Antyufeyeva, M..  2017.  Decomposition method for complex target RCS measuring. 2017 IEEE First Ukraine Conference on Electrical and Computer Engineering (UKRCON). :156–159.

In this paper, a method for monostatic RCS measurement of complex-shaped objects in real conditions is proposed. The basic idea of the method is to measure different parts of the object (fragments) separately in the near-field zone. This technique is titled the "decomposition method". After such measurements, all RCS data are summed, and one obtains the average RCS of the investigated object. Such a method is much more accessible than natural measurements in the far-field zone. In this paper, the decomposition method is tested numerically: a model of a complex-shaped object (the T-90 tank) is divided into fragments for a given direction of view. It is shown that the sum of the RCS of the fragments is close to the RCS of the full object for the corresponding direction.

El-Khamy, S. E., Korany, N. O., El-Sherif, M. H..  2017.  Correlation based highly secure image hiding in audio signals using wavelet decomposition and chaotic maps hopping for 5G multimedia communications. 2017 XXXIInd General Assembly and Scientific Symposium of the International Union of Radio Science (URSI GASS). :1–3.

Audio steganography is the technique of hiding secret information behind a cover audio file without impairing its quality. Data hiding in audio signals has various applications, such as secret communications and concealing data that may influence the security and safety of governments and personnel, and has potentially important applications in 5G communication systems. This paper proposes an efficient secure steganography scheme based on the high correlation between successive audio samples. This is similar to the differential pulse code modulation (DPCM) technique, where encoding uses the redundancy in sample values to encode the signal at a lower bit rate. The Discrete Wavelet Transform (DWT) of the audio samples is used to store hidden data in the least important coefficients of the Haar transform. We exploit the small differences between successive samples of the encoded cover audio wavelet coefficients to hide image data without making a remarkable change in the cover audio signal; because the actual audio samples are not changed directly, the method does not perceptually degrade the audio signal and provides higher hiding capacity with lower distortion. To further increase the security of the image hiding process, the image to be hidden is divided into blocks, and the bits of each block are XORed with a different random sequence generated by logistic maps using a hopping technique. The performance of the proposed algorithm has been evaluated extensively against attacks, and experimental results show that the proposed method achieves good robustness and imperceptibility.
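
A hedged sketch of the general embedding idea: XOR the secret bits with a logistic-map keystream, then hide them in the least significant part of Haar DWT detail coefficients of the cover audio. The decomposition level and quantization step are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np
import pywt

def logistic_bits(x0, n, r=3.99):
    x, bits = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        bits[i] = x > 0.5
    return bits

def embed(cover, secret_bits, x0=0.3141, step=1e-3):
    cA, cD = pywt.dwt(cover, "haar")                 # 1-level Haar DWT
    enc = secret_bits ^ logistic_bits(x0, len(secret_bits))  # chaotic XOR
    q = np.round(cD[:len(enc)] / step).astype(np.int64)
    q = (q & ~1) | enc                               # force LSB of quantized coeff
    cD[:len(enc)] = q * step
    return pywt.idwt(cA, cD, "haar")                 # reconstruct stego audio

cover = np.sin(2 * np.pi * 440 * np.arange(8000) / 8000)
bits = np.random.randint(0, 2, 500, dtype=np.uint8)
stego = embed(cover, bits)
```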

Vu, Q. H., Ruta, D., Cen, L..  2017.  An ensemble model with hierarchical decomposition and aggregation for highly scalable and robust classification. 2017 Federated Conference on Computer Science and Information Systems (FedCSIS). :149–152.

This paper introduces an ensemble model that solves the binary classification problem by incorporating basic logistic regression with two recent advanced paradigms: extreme gradient boosted decision trees (XGBoost) and deep learning. To obtain the best result when integrating the sub-models, we introduce a solution for splitting and selecting sets of features for sub-model training. In addition to the ensemble model, we propose a flexible, robust and highly scalable new scheme for building a composite classifier that tries to simultaneously implement multiple layers of model decomposition and output aggregation to maximally reduce both the bias and variance (spread) components of classification errors. We demonstrate the power of our ensemble model on the problem of predicting the outcome of Hearthstone, a turn-based computer game, based on game state information. The excellent predictive performance of our model was acknowledged by the second place scored in the final ranking among 188 competing teams.
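
An illustrative soft-voting ensemble over different feature subsets; the base learners below (sklearn's logistic regression, gradient boosting, and a small MLP) stand in for the paper's logistic regression / XGBoost / deep learning trio, and the overlapping feature split is an assumption for demonstration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
splits = [slice(0, 10), slice(5, 15), slice(0, 20)]   # overlapping feature sets
models = [LogisticRegression(max_iter=1000),
          GradientBoostingClassifier(),
          MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)]

probas = []
for model, cols in zip(models, splits):               # each sub-model sees its own features
    model.fit(X[:1500, cols], y[:1500])
    probas.append(model.predict_proba(X[1500:, cols])[:, 1])

pred = (np.mean(probas, axis=0) > 0.5).astype(int)    # aggregate by averaging
print("holdout accuracy:", (pred == y[1500:]).mean())
```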

Nair, A. S., Ranganathan, P., Kaabouch, N..  2017.  A constrained topological decomposition method for the next-generation smart grid. 2017 Second International Conference on Electrical, Computer and Communication Technologies (ICECCT). :1–6.

The inherent heterogeneity in the uncertainty of variable generation (e.g., wind, solar, tidal and wave power) in the electric grid, coupled with the dynamic nature of the distributed architecture of sub-systems and the need for information synchronization, has made resource allocation and monitoring a tremendous challenge for the next-generation smart grid. Unfortunately, the deployment of distributed algorithms across micro grids has been overlooked in the electric grid sector. In particular, centralized methods for managing resources and data may not be sufficient to monitor a complex electric grid. This paper discusses a decentralized constrained decomposition using linear programming (LP) that optimizes the inter-area transfer across micro grids and reduces the total generation cost for the grid. A test grid of the IEEE 14-bus system is sectioned into two and three areas, and the effect on inter-area transfer is analyzed.
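
A toy two-area dispatch LP, assumed purely for illustration: minimize generation cost subject to meeting each area's demand, with a bounded inter-area transfer t (positive = area 1 exports to area 2). The costs, demands, and limits are made up, not taken from the IEEE 14-bus case.

```python
from scipy.optimize import linprog

c = [20.0, 28.0, 0.0]          # $/MWh for g1, g2; the transfer itself costs nothing
# Equality constraints: g1 - t = d1, g2 + t = d2
A_eq = [[1, 0, -1],
        [0, 1,  1]]
b_eq = [120.0, 90.0]           # area demands in MW
bounds = [(0, 200), (0, 150), (-50, 50)]   # generator and tie-line limits

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
g1, g2, t = res.x              # cheap area 1 exports up to the tie-line limit
print(f"g1={g1:.0f} MW, g2={g2:.0f} MW, transfer={t:.0f} MW, cost=${res.fun:.0f}")
```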

Godfrey, L. B., Gashler, M. S..  2017.  Neural decomposition of time-series data. 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC). :2796–2801.

We present a neural network technique for the analysis and extrapolation of time-series data called Neural Decomposition (ND). Units with a sinusoidal activation function are used to perform a Fourier-like decomposition of training samples into a sum of sinusoids, augmented by units with nonperiodic activation functions to capture linear trends and other nonperiodic components. We show how careful weight initialization can be combined with regularization to form a simple model that generalizes well. Our method generalizes effectively on the Mackey-Glass series, a dataset of unemployment rates as reported by the U.S. Department of Labor Statistics, a time-series of monthly international airline passengers, and an unevenly sampled time-series of oxygen isotope measurements from a cave in north India. We find that ND outperforms popular time-series forecasting techniques including LSTM, echo state networks, (S)ARIMA, and SVR with a radial basis function.
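
A least-squares caricature of Neural Decomposition's starting point, assumed for illustration: fit a sum of sinusoids plus a linear trend to training samples, then extrapolate. ND itself trains these units (frequencies, phases, trend weights) jointly by gradient descent with careful initialization and regularization.

```python
import numpy as np

def fit_sinusoids_plus_trend(y, n_freqs=10):
    N = len(y)
    t = np.arange(N) / N
    cols = [np.ones(N), t]                      # bias + linear (nonperiodic) trend
    for k in range(1, n_freqs + 1):             # Fourier-like sinusoid units
        cols += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
    A = np.column_stack(cols)
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    def predict(t_new):                         # t_new on the same 0..1 scale
        feats = [np.ones_like(t_new), t_new]
        for k in range(1, n_freqs + 1):
            feats += [np.sin(2 * np.pi * k * t_new), np.cos(2 * np.pi * k * t_new)]
        return np.column_stack(feats) @ w
    return predict

y = np.sin(2 * np.pi * 3 * np.arange(200) / 200) + 0.5 * np.arange(200) / 200
predict = fit_sinusoids_plus_trend(y)
future = predict(np.arange(200, 240) / 200.0)   # extrapolate 40 steps ahead
```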

Kabi, B., Sahadevan, A. S., Pradhan, T..  2017.  An overflow free fixed-point eigenvalue decomposition algorithm: Case study of dimensionality reduction in hyperspectral images. 2017 Conference on Design and Architectures for Signal and Image Processing (DASIP). :1–9.

We consider the problem of enabling robust range estimation for the eigenvalue decomposition (EVD) algorithm to support a reliable fixed-point design. The simplicity of fixed-point circuitry has always made it tempting to implement EVD algorithms in fixed-point arithmetic. In working toward an effective fixed-point design, integer bit-width allocation is a significant step that has a crucial impact on accuracy and hardware efficiency. This paper investigates the shortcomings of existing range estimation methods in deriving bounds for the variables of the EVD algorithm. In light of these circumstances, we introduce a range estimation approach based on vector and matrix norm properties, together with a scaling procedure, that maintains all the assets of an analytical method. The method derives robust and tight bounds for the variables of the EVD algorithm. The bounds derived using the proposed approach remain the same for any input matrix and are also independent of the number of iterations or the size of the problem. Some benchmark hyperspectral data sets have been used to evaluate the efficiency of the proposed technique. We found that with the proposed range estimation approach, all the variables generated during the computation of the Jacobi EVD are bounded within ±1.
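
A sketch of the norm-scaling idea (our illustration, not the paper's exact bound derivation): pre-scale a symmetric matrix by its Frobenius norm, which Jacobi rotations preserve, so every matrix entry appearing during the sweeps stays within [-1, 1].

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
A = (M + M.T) / 2
A = A / np.linalg.norm(A)           # Frobenius-norm scaling => ||A||_F = 1

max_abs = np.abs(A).max()
for _ in range(20):                 # cyclic Jacobi sweeps
    for p in range(5):
        for q in range(p + 1, 6):
            if abs(A[p, q]) < 1e-12:
                continue
            theta = 0.5 * np.arctan2(2 * A[p, q], A[p, p] - A[q, q])
            c, s = np.cos(theta), np.sin(theta)
            J = np.eye(6)
            J[p, p] = J[q, q] = c
            J[p, q], J[q, p] = -s, s
            A = J.T @ A @ J         # orthogonal similarity preserves ||A||_F
            max_abs = max(max_abs, np.abs(A).max())
print("largest |entry| seen:", max_abs)   # stays <= 1 since |a_ij| <= ||A||_F
```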

Zamani, S., Nanjundaswamy, T., Rose, K..  2017.  Frequency domain singular value decomposition for efficient spatial audio coding. 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA). :126–130.

Advances in virtual reality have generated substantial interest in accurately reproducing and storing spatial audio in the higher order ambisonics (HOA) representation, given its rendering flexibility. Recent standardization for HOA compression adopted a framework wherein HOA data are decomposed into principal components that are then encoded by standard audio coding, i.e., frequency domain quantization and entropy coding to exploit psychoacoustic redundancy. A noted shortcoming of this approach is the occasional mismatch in principal components across blocks, and the resulting suboptimal transitions in the data fed to the audio coder. Instead, we propose a framework where singular value decomposition (SVD) is performed after transformation to the frequency domain via the modified discrete cosine transform (MDCT). This framework not only ensures smooth transition across blocks, but also enables frequency dependent SVD for better energy compaction. Moreover, we introduce a novel noise substitution technique to compensate for suppressed ambient energy in discarded higher order ambisonics channels, which significantly enhances the perceptual quality of the reconstructed HOA signal. Objective and subjective evaluation results provide evidence for the effectiveness of the proposed framework in terms of both higher compression gains and better perceptual quality, compared to existing methods.
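
A toy illustration of the proposed ordering, under our assumptions: transform each HOA channel to the frequency domain first (the MDCT approximated here by a per-block type-IV DCT) and only then apply an SVD across channels, so the principal components are computed on frequency-domain blocks.

```python
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
n_ch, block = 9, 256                     # e.g., 2nd-order HOA: (2+1)^2 = 9 channels
audio = rng.standard_normal((n_ch, block * 8))

blocks = audio.reshape(n_ch, -1, block)
spec = dct(blocks, type=4, axis=-1, norm="ortho")   # stand-in for the MDCT

for b in range(spec.shape[1]):
    U, s, Vt = np.linalg.svd(spec[:, b, :], full_matrices=False)
    k = 4                                # keep k dominant spatial components
    compact = (U[:, :k] * s[:k]) @ Vt[:k, :]        # energy-compacted block
    # ...quantize/entropy-code `compact`; discarded channels would receive
    # noise substitution to restore ambient energy (per the paper's idea).
```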

Guo, J., Li, Z..  2017.  A Mean-Covariance Decomposition Modeling Method for Battery Capacity Prognostics. 2017 International Conference on Sensing, Diagnostics, Prognostics, and Control (SDPC). :549–556.

Lithium-ion batteries usually degrade to an unacceptable capacity level after hundreds or even thousands of cycles. The continuously observed capacity fade data over time, and their internal structure, can be informative for constructing capacity fade models. This paper applies a mean-covariance decomposition modeling method to analyze capacity fade data. The proposed approach directly examines the variances and correlations in the data of interest and expresses the correlation matrix in hyper-spherical coordinates using angles and trigonometric functions. The proposed method is applied to model and predict key battery performance metrics using test data collected under various testing conditions.
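
A sketch of the hyper-spherical parameterization of a correlation matrix, a standard construction assumed here for illustration: angles define the rows of a lower-triangular factor L with unit-norm rows, so R = L Lᵀ is always a valid correlation matrix.

```python
import numpy as np

def angles_to_correlation(theta):
    """theta: (d, d) array of angles; only the strictly lower triangle is used."""
    d = theta.shape[0]
    L = np.zeros((d, d))
    for i in range(d):
        prod = 1.0
        for j in range(i):
            L[i, j] = np.cos(theta[i, j]) * prod
            prod *= np.sin(theta[i, j])
        L[i, i] = prod                    # product of remaining sines; row has unit norm
    return L @ L.T

theta = np.full((3, 3), np.pi / 3)
R = angles_to_correlation(theta)
print(np.diag(R))                         # all ones => valid correlation matrix
```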

Shahbazi, M., Lee, J., Caldwell, D., Tsagarakis, N..  2017.  Inverse dynamics control of bimanual object manipulation using orthogonal decomposition: An analytic approach. 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). :4791–4796.

In this paper, the well-known problem of codependence between the inverse dynamics torque and the contact force in bimanual object manipulation is addressed. The common contact constraint, namely rigid grasping, is exploited to decompose the set of dynamics equations into two orthogonally decoupled sets. Subsequently, the inverse dynamics control is formulated in a sub-manifold that is independent of the contact force, leading to analytically correct solutions that do not need to resort to common approximations for the aforementioned codependence problem. The contact force is also computed analytically and can therefore be optimally distributed using the torque redundancy. Relying on this prediction is most significant in situations where a force sensor at the end-effector is not present or is faulty. Even when sensory data are available, the predicted force may be used to correct measurements that are typically noisy, or delayed when filtered, resulting in improved robustness. Simulation experiments on a planar bimanual manipulation model are presented.
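
A conceptual numpy sketch of the orthogonal-decomposition idea, under our assumptions (not the paper's derivation): with dynamics M q̈ + h = τ + Jᵀf and a rigid-grasp constraint Jacobian J, projecting onto a basis N of null(J) eliminates the contact force f, and f is then recovered from the remaining, J-row-space equations.

```python
import numpy as np

def decouple(M, h, J, qdd):
    # N spans null(J): orthonormal columns with J @ N = 0
    U, s, Vt = np.linalg.svd(J)
    N = Vt[np.count_nonzero(s > 1e-10):].T
    # Contact-force-free subsystem: N.T (M qdd + h) = N.T tau
    tau = N @ (N.T @ (M @ qdd + h))        # one valid (minimum-norm) torque
    # Contact force recovered analytically from the full equations:
    # J.T f = M qdd + h - tau lies in range(J.T), so this solve is exact
    f, *_ = np.linalg.lstsq(J.T, M @ qdd + h - tau, rcond=None)
    return tau, f
```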

2017-12-27
Slimane, N. B., Bouallegue, K., Machhout, M..  2017.  A novel image encryption scheme using chaos, hyper-chaos systems and the secure Hash algorithm SHA-1. 2017 International Conference on Control, Automation and Diagnosis (ICCAD). :141–145.

In this paper, we introduce a fast, secure and robust scheme for digital image encryption using the Lorenz chaotic system, a 4D hyper-chaotic system and the Secure Hash Algorithm SHA-1. The encryption process consists of three layers: sub-vector confusion and a two-stage diffusion process. In the first layer, we divide the plain image into sub-vectors; the position of each one is then changed using a chaotic index sequence generated with the Lorenz chaotic attractor, while the diffusion layers use the hyper-chaotic system to modify the values of pixels using an XOR operation. The results of security analyses such as statistical tests, differential attacks, key space, key sensitivity, information entropy and running time are illustrated and compared to recent encryption schemes, showing improvements in both security level and speed.
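
A minimal confusion/diffusion sketch in the spirit of such schemes; this is our illustration, not the paper's algorithm (the paper uses Lorenz and a 4D hyper-chaotic system, while here a single logistic map drives both the chaotic index sequence and the XOR keystream).

```python
import numpy as np

def logistic_stream(x0, n, r=3.99):
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

def encrypt(img, key=0.654321):
    flat = img.flatten()
    chaos = logistic_stream(key, flat.size)
    perm = np.argsort(chaos)                   # confusion: chaotic index sequence
    shuffled = flat[perm]
    keystream = (logistic_stream(key / 2, flat.size) * 256).astype(np.uint8)
    cipher = shuffled ^ keystream              # diffusion layer: chaotic XOR
    return cipher.reshape(img.shape), perm

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cipher, perm = encrypt(img)
```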

Radhika, K. R., Nalini, M. K..  2017.  Biometric Image Encryption Using DNA Sequences and Chaotic Systems. 2017 International Conference on Recent Advances in Electronics and Communication Technology (ICRAECT). :164–168.

Emerging communication technologies in distributed network systems require the transfer of biometric digital images with high security. Network security is characterized by changes in system behavior, which is either dynamic or deterministic. Performance computation is complex in dynamic systems, where cryptographic techniques are not highly suitable. Chaos theory solves complex problems of nonlinear deterministic systems. Several chaotic methods are combined to obtain a hyper-chaotic system for greater security. Chaos theory along with DNA sequences enhances the security of biometric image encryption. Implementation proves that the encrypted image is highly chaotic and resistant to various attacks.

Arivazhagan, S., Jebarani, W. S. L., Kalyani, S. V., Abinaya, A. Deiva.  2017.  Mixed chaotic maps based encryption for high crypto secrecy. 2017 Fourth International Conference on Signal Processing, Communication and Networking (ICSCN). :1–6.

In recent years, chaos-based cryptographic algorithms have enabled new and efficient ways to develop secure image encryption techniques. In this paper, we propose a new approach for image encryption based on chaotic maps that meets the requirements of secure image encryption. The chaos-based image encryption technique uses simple chaotic maps that are very sensitive to initial conditions. Using mixed chaotic maps, which work on the basis of simple substitution and transposition techniques, to encrypt the original image yields better performance with less computational complexity, which in turn gives high crypto-secrecy. The initial conditions for the chaotic maps are assigned, and only a receiver with that seed can decrypt the message. The results of the experiments, statistical analysis and key sensitivity tests show that the proposed image encryption scheme provides an efficient and secure way to encrypt images.

Shyamala, N., Anusudha, K..  2017.  Reversible Chaotic Encryption Techniques For Images. 2017 Fourth International Conference on Signal Processing, Communication and Networking (ICSCN). :1–5.

Image encryption has been used by armies and governments to facilitate top-secret communication. Nowadays, it is frequently used for protecting information in various civilian systems. In such systems, secure image encryption is performed by means of various chaotic maps, and a legitimate party can decrypt the image with the support of the encryption key. This reversible chaotic encryption technique makes use of Arnold's cat map, in which pixel shuffling confuses the image pixels based on a number of iterations decided by the authorized image owner. This is followed by other chaotic encryption techniques, namely the logistic map and the tent map, which ensure secure image encryption. Simulation results show that the proposed system achieves good NPCR, UACI, MSE and PSNR values, respectively.
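
Arnold's cat map pixel shuffle is a standard construction; a minimal sketch for an N×N image is below. The iteration count acts as part of the key, and because the map is a bijection on the pixel grid, applying the inverse map the same number of times restores the image.

```python
import numpy as np

def arnold_cat(img, iterations):
    n = img.shape[0]                      # requires a square image
    out = img.copy()
    xs, ys = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    for _ in range(iterations):
        # cat map (x, y) -> (x + y, x + 2y) mod n, applied as a gather
        out = out[(xs + ys) % n, (xs + 2 * ys) % n]
    return out

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
scrambled = arnold_cat(img, iterations=12)
```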

Li, L., Abd-El-Atty, B., El-Latif, A. A. A., Ghoneim, A..  2017.  Quantum color image encryption based on multiple discrete chaotic systems. 2017 Federated Conference on Computer Science and Information Systems (FedCSIS). :555–559.

In this paper, a novel quantum encryption algorithm for color images is proposed based on multiple discrete chaotic systems. The proposed quantum image encryption algorithm utilizes quantum controlled-NOT images generated by the chaotic logistic map, an asymmetric tent map and a logistic Chebyshev map to control the XOR operation in the encryption process. Experimental results and analysis show that the proposed algorithm has high efficiency and security against differential and statistical attacks.

Hamad, N., Rahman, M., Islam, S..  2017.  Novel remote authentication protocol using heart-signals with chaos cryptography. 2017 International Conference on Informatics, Health Technology (ICIHT). :1–7.

Entity authentication is one of the fundamental information security properties for secure transactions and communications. The combination of biometrics with cryptography is an emerging topic in authentication protocol design. Among existing biometrics (e.g., fingerprint, face, iris, voice, heart), the heart-signal contains the liveness property of biometric samples. In this paper, a remote entity authentication protocol is proposed based on the randomness of heart biometrics combined with chaos cryptography. To this end, initial keys are generated for chaotic logistic maps based on the heart-signal. The authentication parameters generated from the initial keys can be used by claimants and verifiers to authenticate and verify each other, respectively. In the proposed technique, each session of communication differs from the others, so many session-oriented attacks are prevented. Experiments have been conducted on sample heart-signals for remote authentication. The results show that the randomness property of the heart-signal can help to implement one of the well-known secure encryption schemes, namely one-time pad encryption.

Boyacı, O., Tantuğ, A. C..  2017.  A random number generation method based on discrete time chaotic maps. 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS). :1212–1215.

In this paper, a random number generation method based on piecewise linear one-dimensional (PL1D) discrete time chaotic maps is proposed for applications in cryptography and steganography. Appropriate parameters are determined by examining the distribution of the underlying chaotic signal, and the random number generator (RNG) is numerically verified by the four fundamental statistical tests of FIPS 140-2. The proposed design is practically realized on field programmable analog and digital arrays (FPAA-FPGA). Finally, it is experimentally verified that the presented RNG fulfills the NIST 800-22 randomness tests without post-processing.
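
A sketch of bit generation from a PL1D chaotic map, using a skew tent map as one common piecewise linear choice; the parameter and threshold below are illustrative assumptions, not the paper's design. Each iterate is compared against a threshold to emit one bit.

```python
import numpy as np

def skew_tent_bits(x0, n_bits, p=0.4999):
    x, bits = x0, np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        x = x / p if x < p else (1 - x) / (1 - p)   # PL1D skew tent map on [0, 1]
        bits[i] = x > 0.5                            # threshold comparator
    return bits

bits = skew_tent_bits(0.1234567, 20000)
# Quick sanity check in the spirit of the FIPS 140-2 monobit test:
print("ones fraction:", bits.mean())                 # should be close to 0.5
```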