Biblio
Exclusive-or (XOR) operations are common in cryptographic protocols, in particular in RFID protocols and electronic payment protocols. Although there are numerous applications, due to the inherent complexity of faithful models of XOR, there is only limited tool support for the verification of cryptographic protocols using XOR. The Tamarin prover is a state-of-the-art verification tool for cryptographic protocols in the symbolic model. In this paper, we improve the underlying theory and the tool to deal with an equational theory modeling XOR operations. The XOR theory can be freely combined with all equational theories previously supported, including user-defined equational theories. This makes Tamarin the first tool to simultaneously support this large set of equational theories, protocols with global mutable state, an unbounded number of sessions, and complex security properties including observational equivalence. We demonstrate the effectiveness of our approach by analyzing several protocols that rely on XOR, in particular multiple RFID protocols, where we can identify attacks as well as provide proofs.
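For orientation, the sketch below (plain Python, not Tamarin syntax) checks the algebraic equations that a faithful symbolic model of XOR has to capture, and which make the equational theory hard for automated reasoning: commutativity, associativity, a neutral element, and self-inverse.

```python
# Minimal sketch of the XOR equations a faithful equational theory must model,
# checked on random byte strings; this is an illustration, not the paper's theory.
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

zero = bytes(16)
a, b, c = (secrets.token_bytes(16) for _ in range(3))

assert xor(a, b) == xor(b, a)                   # commutativity
assert xor(xor(a, b), c) == xor(a, xor(b, c))   # associativity
assert xor(a, zero) == a                        # neutral element
assert xor(a, a) == zero                        # self-inverse (nilpotence)
```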
Location determination in indoor areas as well as in open areas is important for many applications, but determining location indoors is far more difficult than in open areas. The Global Positioning System (GPS) signals used for position detection are not effective in indoor areas. Wi-Fi signals are a widely used alternative for indoor localization. Indoors, localization can serve many different purposes, such as intelligent home systems, locating people, and locating products in a depot. In this study, we attempted to determine the location among 4 different areas with a classification method, using Wi-Fi signal values obtained from different routers. Linear discriminant analysis (LDA) was used for classification. In a test using 10-fold cross-validation, an accuracy of 97.2% was obtained.
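A minimal sketch of the evaluation pipeline described above: LDA with 10-fold cross-validation over router signal-strength features. The synthetic RSSI matrix (and the choice of 7 routers) is only a stand-in for the real measurements.

```python
# LDA + 10-fold cross-validation over Wi-Fi RSSI features; synthetic data
# stands in for the measurements collected from the routers.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_routers, n_samples_per_area = 7, 100
areas = np.repeat(np.arange(4), n_samples_per_area)      # 4 indoor areas
# Each area has a characteristic mean RSSI (dBm) per router, plus noise.
means = rng.uniform(-80, -40, size=(4, n_routers))
rssi = means[areas] + rng.normal(0, 3, size=(len(areas), n_routers))

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, rssi, areas, cv=10, scoring="accuracy")
print(f"10-fold mean accuracy: {scores.mean():.3f}")
```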
Trust relationships have shown great potential to improve recommendation quality, especially for cold-start and sparse users. Since each user trusts their friends to different degrees, a number of works have been proposed to take trust strength into account in recommender systems. However, these methods ignore the information of trust directions between users. In this paper, we propose a novel method to adaptively learn directive trust strength to improve trust-aware recommender systems. Advancing previous works, we propose to establish the direction of trust strength by modeling the implicit relationships between users in their roles as trusters and trustees. In particular, under this new notion of trust strength with directions, how to compute the directive trust strength becomes a new challenge. Therefore, we present a novel method to adaptively learn directive trust strengths in a unified framework by constraining the trust strength to the range [0, 1] through a mapping function. Our experiments on the Epinions and Ciao datasets demonstrate that the proposed algorithm effectively outperforms several state-of-the-art algorithms on both MAE and RMSE metrics.
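A toy sketch of the core idea of a directed trust strength mapped into [0, 1]: the strength from truster u to trustee v comes from a raw score pushed through a sigmoid, and is generally not equal to the strength from v to u. The latent role-vector form of the raw score is an illustrative assumption, not the paper's exact model.

```python
# Directed trust strength in (0, 1) via a sigmoid mapping of a raw score.
import numpy as np

def directed_trust(truster_vec: np.ndarray, trustee_vec: np.ndarray) -> float:
    raw = float(truster_vec @ trustee_vec)   # raw, unbounded trust score
    return 1.0 / (1.0 + np.exp(-raw))        # mapped into (0, 1)

rng = np.random.default_rng(0)
p_u = rng.normal(size=8)     # user u in the truster role
q_v = rng.normal(size=8)     # user v in the trustee role
# Swapping roles gives a different value: trust is directed.
print(directed_trust(p_u, q_v), directed_trust(q_v, p_u))
```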
Cloud storage brokerage systems abstract cloud storage complexities by mediating technical and business relationships between cloud stakeholders, while providing value-added services. This, however, raises security challenges pertaining to the integration of disparate components with sometimes conflicting security policies and architectural complexities. Assessing the security risks of these challenges is therefore important for Cloud Storage Brokers (CSBs). In this paper, we present a threat modeling schema to analyze and identify threats and risks in cloud brokerage systems. Our threat modeling schema works by generating attack trees, attack graphs, and data flow diagrams that represent the interconnections between identified security risks. Our proof-of-concept implementation employs the Common Configuration Scoring System (CCSS) to support the threat modeling schema, since current schemes lack sufficient security metrics, which are imperative for comprehensive risk assessments. We demonstrate the efficiency of our proposal by devising CCSS base scores for two attacks commonly launched against cloud storage systems: the Cloud Storage Enumeration Attack and the Cloud Storage Exploitation Attack. These metrics are then combined with CVSS-based metrics to assign probabilities in an attack tree. Thus, we show the possibility of combining CVSS and CCSS for comprehensive threat modeling, and also show that our schema can be used to improve cloud security.
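A hedged sketch of how per-leaf likelihoods derived from CVSS/CCSS-style base scores can be propagated up an attack tree. The combination rules shown (AND nodes multiply child probabilities, OR nodes use the complement rule) are a common convention rather than the schema's exact method, and the example scores are hypothetical.

```python
# Attack-tree probability propagation from normalised base scores.
def and_node(probs):
    p = 1.0
    for x in probs:
        p *= x                      # all children must succeed
    return p

def or_node(probs):
    q = 1.0
    for x in probs:
        q *= (1.0 - x)              # fails only if every child fails
    return 1.0 - q

leaf = lambda base_score: base_score / 10.0   # normalise a 0-10 base score

# e.g. root = OR(enumeration attack, AND(misconfiguration, exploitation))
root = or_node([leaf(6.8), and_node([leaf(5.0), leaf(7.5)])])
print(f"estimated attack likelihood at the root: {root:.3f}")
```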
In recent years, security in wireless sensor networks (WSNs) has become essential because WSN applications involve sharing important information between nodes. A large number of security issues arise due to the open deployment of the network. Attackers disturb the security system by attacking the different protocol layers of the WSN. The standard AODV routing protocol faces security issues when the route discovery process takes place, and data should be transmitted to the destination along a secure path. To support this, we have proposed a trust-based intrusion detection system (NL-IDS) for the network layer in WSNs to detect black hole attackers in the network. The sensor node trust is calculated according to the deviation of key factors at the network layer caused by the black hole attack. We use the watchdog technique, where a sensor node continuously monitors its neighbor node by calculating a periodic trust value. Finally, the overall trust value of the sensor node is evaluated from the gathered network-layer trust metrics (past and present trust values). This NL-IDS scheme efficiently identifies malicious nodes with respect to black hole attacks at the network layer. To analyze the performance of NL-IDS, we have simulated the model in MATLAB R2015a, and the results show that NL-IDS is better than Wang et al. [11] in terms of detection accuracy and false alarm rate.
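An illustrative sketch (not the paper's exact formula) of the watchdog idea: each period, the observed forwarding ratio of a neighbor is blended with its past trust, and a node whose trust drops below a threshold is flagged as a suspected black hole.

```python
# Periodic watchdog trust update combining past trust with the current observation.
def update_trust(past_trust: float, forwarded: int, received: int,
                 alpha: float = 0.6) -> float:
    current = forwarded / received if received else 0.0   # this period's observation
    return alpha * past_trust + (1.0 - alpha) * current   # weighted with history

trust = 1.0
# A black hole node starts dropping packets after the second period.
for fwd, rcv in [(20, 20), (18, 20), (2, 20), (0, 20)]:
    trust = update_trust(trust, fwd, rcv)
    print(f"trust = {trust:.3f} -> {'malicious' if trust < 0.5 else 'normal'}")
```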
Machine Type Communication Devices (MTCDs) are usually based on the Internet Protocol (IP), which allows billions of connected objects to become part of the Internet. The enormous amount of data coming from these devices is quite heterogeneous in nature, which can lead to security issues such as injection attacks, ballot stuffing, and bad mouthing. Consequently, this work considers machine learning trust evaluation as an effective and accurate option for addressing these security threats. In this paper, a comparative analysis is carried out with five different machine learning approaches: Naive Bayes (NB), Decision Tree (DT), Linear and Radial Support Vector Machine (SVM), K-Nearest Neighbor (KNN), and Random Forest (RF). As a critical element of the research, the recommendations consider different Machine-to-Machine (M2M) communication nodes with regard to their ability to identify malicious and honest information. To validate the performance of these models, trust computation measures were used: Receiver Operating Characteristic (ROC) curves, precision, and recall. The malicious data was formulated in MATLAB. A scenario was created where 50% of the information was modified to be malicious. The proportion of malicious nodes was varied over 10%, 20%, 30%, and 40%, and the results were carefully analyzed.
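A sketch of the comparative evaluation: the five classifier families compared above, scored with ROC AUC, precision, and recall. The synthetic dataset is only a stand-in for the MATLAB-generated trust data.

```python
# Compare NB, DT, linear/radial SVM, KNN and RF on a 50% malicious dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, precision_score, recall_score

X, y = make_classification(n_samples=1000, n_features=10, weights=[0.5, 0.5],
                           random_state=0)   # label 1 = malicious, 50/50 split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

models = {
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM (linear)": SVC(kernel="linear", probability=True),
    "SVM (radial)": SVC(kernel="rbf", probability=True),
    "KNN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC={auc:.3f} precision={precision_score(y_te, pred):.3f} "
          f"recall={recall_score(y_te, pred):.3f}")
```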
Physical Unclonable Functions (PUFs) have been designed for many security applications such as identification, authentication of devices, and key generation, especially for lightweight electronics. Traditional approaches to enhancing security, such as hash functions, may be expensive and resource dependent. However, modelling attacks using machine learning (ML) show the vulnerability of most PUFs. In this paper, a combination of a 32-bit current mirror PUF and a 16-bit arbiter PUF in 65nm CMOS technology is proposed to improve resilience against modelling attacks. Both PUFs are individually vulnerable to machine learning attacks, with output prediction rates of 99.2% and 98.8% respectively; the combined design reduces the prediction rate to 60%.
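A sketch of the kind of modelling attack the combined design is meant to resist: a logistic-regression model learns a simulated 16-bit arbiter PUF from challenge-response pairs. The standard additive-delay simulation model used here is an assumption for illustration; it is not the paper's 65nm circuit.

```python
# Machine-learning modelling attack on a simulated arbiter PUF.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_bits, n_crps = 16, 5000
w = rng.normal(size=n_bits + 1)                  # hidden stage-delay weights

def features(challenges: np.ndarray) -> np.ndarray:
    # Parity transform: phi_i = prod_{j>=i} (1 - 2*c_j), plus a bias term.
    signs = 1 - 2 * challenges
    phi = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([phi, np.ones((len(challenges), 1))])

C = rng.integers(0, 2, size=(n_crps, n_bits))
r = (features(C) @ w > 0).astype(int)            # simulated PUF responses

clf = LogisticRegression(max_iter=1000).fit(features(C[:4000]), r[:4000])
print("prediction accuracy on unseen challenges:",
      clf.score(features(C[4000:]), r[4000:]))
```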
Hardware information flow analysis detects security vulnerabilities resulting from unintended design flaws, timing channels, and hardware Trojans. These information flow models are typically generated in a general way, which includes a significant amount of redundancy that is irrelevant to the specified security properties. In this work, we propose a property specific approach for information flow security. We create information flow models tailored to the properties to be verified by performing a property specific search to identify security critical paths. This helps find suspicious signals that require closer inspection and quickly eliminates portions of the design that are free of security violations. Our property specific trimming technique reduces the complexity of the security model; this accelerates security verification and restricts potential security violations to a smaller region which helps quickly pinpoint hardware security vulnerabilities.
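A toy sketch of the property-specific trimming idea: given a signal-level flow graph, only signals that lie on some path from the property's source to its sink are relevant; everything else can be pruned before verification. Graph extraction from the hardware design is assumed to be done elsewhere, and the signal names are illustrative.

```python
# Keep only signals on a source-to-sink path for the property of interest.
import networkx as nx

flow = nx.DiGraph([("key", "alu"), ("alu", "bus"), ("bus", "uart_tx"),
                   ("timer", "irq"), ("irq", "cpu")])   # illustrative signals

def property_specific_slice(g: nx.DiGraph, source: str, sink: str) -> set:
    reach_fwd = nx.descendants(g, source) | {source}    # reachable from the source
    reach_bwd = nx.ancestors(g, sink) | {sink}          # can reach the sink
    return reach_fwd & reach_bwd                        # on some source->sink path

# The timer/irq/cpu portion is pruned for a "key must not reach uart_tx" property.
print(property_specific_slice(flow, "key", "uart_tx"))
```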
The purpose of this research is to propose a new mathematical model designed to evaluate the security of cryptosystems. The model combines ideas from two basic mathematical theories: information theory and game theory. The role of information theory is to supply the model with the security criteria of the cryptosystems. The role of game theory is to produce the value of the game, which represents the outcome of these criteria and ultimately reflects the cryptosystem's security. The proposed model supports an accurate and mathematical way to evaluate the security of cryptosystems by unifying the criteria derived from information theory and producing a single reasonable value.
We present an IT solution for remote modeling of cryptographic protocols and other cryptographic primitives, together with a number of education-oriented capabilities based on them. These capabilities are provided at the Department of Mathematical Modeling using the MPEI algebraic processor, and allow remote participants to create automata models of cryptographic protocols and to use and manage them in the modeling process. Particular attention is paid to the IT solution for modeling private communication and key distribution using the processor combined with the Kerberos protocol. This allows simulation and study of the functionality of key distribution protocols on remote computers via the Internet. The importance of studying cryptographic primitives for future IT specialists is emphasized.
Blum-Blum-Shub (BBS) is a relatively simple pseudorandom number generator (PRNG), but it requires a very large modulus and a squaring operation for the generation of each bit, which makes it computationally heavy and slow. On the other hand, the concept of elliptic curve (EC) point operations has been extended to PRNGs that prove to have good randomness properties and reduced latency, but exhibit dependence on the secrecy of the point P. Given these pros and cons, this paper proposes a new BBS-ECPRNG approach in which the modulus is the product of two elliptic curve points, both primes of a given length, and the number of bits extracted per iteration is determined by binary fraction. We evaluate the algorithm performance by generating 1000 distinct sequences of 10^6 bits each. The results were analyzed based on the overall performance of the sequences using the NIST standard statistical test suite. The average performance of the sequences was observed to be above the minimum confidence level of 99.7 percent, and all the statistical randomness tests were passed successfully.
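For reference, a sketch of the classic Blum-Blum-Shub construction that the proposed BBS-ECPRNG modifies: iterate x_{i+1} = x_i^2 mod M with M = p*q (p and q primes congruent to 3 mod 4) and extract the least significant bit each round. The EC-derived modulus and multi-bit extraction of the proposed scheme are not reproduced here, and the parameters below are toy values.

```python
# Classic BBS: one squaring per output bit, LSB extraction.
def bbs_bits(seed: int, p: int, q: int, n_bits: int):
    M = p * q
    x = seed % M
    out = []
    for _ in range(n_bits):
        x = (x * x) % M      # the squaring operation per output bit
        out.append(x & 1)    # extract the least significant bit
    return out

# Tiny demonstration parameters only; real deployments need a very large M.
print(bbs_bits(seed=2018, p=499, q=547, n_bits=16))
```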
To ensure reliable and predictable service in the electrical grid it is important to gauge the level of trust present within critical components and substations. Although trust throughout a smart grid is temporal and dynamically varies according to measured states, it is possible to accurately formulate communications and service level strategies based on such trust measurements. Utilizing an effective set of machine learning and statistical methods, it is shown that establishment of trust levels between substations using behavioral pattern analysis is possible. It is also shown that the establishment of such trust can facilitate simple secure communications routing between substations.
Moving Target Defence (MTD) is a recently proposed and emerging proactive approach that provides asynchronous defensive strategies. Unlike traditional security solutions that focus on removing vulnerabilities, MTD makes a system dynamic and unpredictable by continuously changing the attack surface to confuse attackers. MTD can be utilized in cloud computing to address the cloud's security-related problems. There is a large body of literature proposing MTD methods in various contexts, but approaches to evaluate the effectiveness of the proposed MTD methods are still lacking. In this paper, we propose a combination of Shuffle and Diversity MTD techniques and investigate the effects of deploying these techniques from two perspectives, captured by two groups of security metrics: (i) system risk, which reflects the cloud provider's perspective, and (ii) attack cost and return on attack, which reflect the attacker's point of view. Moreover, we utilize a scalable Graphical Security Model (GSM) to deal with the complexity of the security analysis. Finally, we show that combining MTD techniques can improve both of the aforementioned groups of security metrics, while the individual techniques cannot.
In spite of the numerous advantages of biometrics-based personal authentication systems over traditional security systems based on tokens or knowledge, they are vulnerable to attacks that can considerably decrease their security. In this paper, we propose a new hardware solution to protect biometric templates such as fingerprints. The proposed scheme is based on a chaotic N × N grid multi-scroll system and is implemented on a Xilinx FPGA. The hardware implementation is achieved by applying a numerical solution method; in our study, we use the Euler method (EM). Simulation and experimental results show that the proposed scheme allows low-cost image encryption for embedded systems while still providing a good trade-off between performance and hardware resources. Indeed, the security analysis performed on our scheme shows that it is strong against different known attacks, such as brute-force, statistical, differential, and entropy attacks. Therefore, the proposed chaos-based multi-scroll encryption algorithm is suitable for use in securing embedded biometric systems.
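A hedged sketch of the overall encryption idea: integrate a chaotic system with the Euler method and XOR the quantised trajectory with the image bytes. A Lorenz system stands in for the paper's N × N grid multi-scroll system, whose equations are not reproduced here, and the keystream quantisation is an illustrative choice.

```python
# Chaotic keystream via Euler integration, XORed with the image.
import numpy as np

def chaotic_keystream(n_bytes: int, x0=(0.1, 0.0, 0.0), dt=0.005):
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0     # Lorenz stand-in system
    x, y, z = x0
    ks = np.empty(n_bytes, dtype=np.uint8)
    for i in range(n_bytes):
        dx, dy, dz = sigma * (y - x), x * (rho - z) - y, x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz   # Euler step
        ks[i] = int(abs(x) * 1e6) % 256                   # quantise to a byte
    return ks

image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
cipher = image ^ chaotic_keystream(image.size).reshape(image.shape)
plain = cipher ^ chaotic_keystream(image.size).reshape(image.shape)
assert np.array_equal(plain, image)   # same key (initial condition) decrypts
```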
Multipath propagation of radio waves negatively affects the performance of telecommunications and radio navigation systems. When performing time and frequency synchronization of spatially separated standards, multipath signal propagation reduces the probability of correct synchronization and introduces an error. The presence of a multipath signal reduces the signal-to-noise ratio of the received signal, which in turn increases the synchronization error. If the time delay of the additional beam(s) is less than the useful signal duration, reception of the useful signal is further complicated by the presence of partially correlated interference, whose level and degree of correlation increase as the time delay of the interfering signals decreases. The article considers a method of multipath interference compensation in a multi-position system (a telecommunication or radio navigation system, or a time and frequency synchronization system) for the case in which at least one of the receiving positions has no interference signal, or the interference does not exceed the permissible level. The essence of the method is that the interference-free useful signal is transmitted to the other points in order to remove the interference component from the signal/interference mixture. As a result, an interference-free signal is used for further processing. Mathematical models of multipath interference suppressors in the time domain and in the frequency domain are presented in the article. Compared to time-domain processing, processing in the frequency domain reduces computational costs. The operation of the suppressor in the time domain has been verified experimentally.
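An illustrative sketch of the frequency-domain suppression idea under a simplified signal model (one attenuated, delayed echo, which is an assumption): with an interference-free copy of the useful signal available from another receiving position, the channel seen by the corrupted position is estimated by spectral division, the echo taps are isolated, and their contribution is subtracted.

```python
# Frequency-domain echo estimation and subtraction using a clean reference.
import numpy as np

rng = np.random.default_rng(0)
N = 1024
s = rng.standard_normal(N)                      # clean useful signal (reference)
echo = 0.5 * np.roll(s, 37)                     # multipath component (delay 37)
r = s + echo + 0.01 * rng.standard_normal(N)    # corrupted received mixture

S, R = np.fft.fft(s), np.fft.fft(r)
h = np.fft.ifft(R / S).real                     # estimated channel impulse response
h[0] = 0.0                                      # keep only echo taps (drop direct path)
echo_est = np.fft.ifft(np.fft.fft(h) * S).real  # reconstruct the interference
cleaned = r - echo_est                          # interference-free signal

print("residual power after suppression:", np.mean((cleaned - s) ** 2))
```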
The recently developed deep belief network (DBN) has been shown to be an effective methodology for solving time series forecasting problems. However, the performance of a DBN depends heavily on a reasonable setting of its hyperparameters. At present, random search, grid search, and Bayesian optimization are the most common methods of hyperparameter optimization. As an alternative, a state-of-the-art derivative-free optimizer, negative correlation search (NCS), is adopted in this paper to decide the sizes of the DBN and the learning rates during the training process. A comparative analysis is performed between the proposed method and other popular techniques in a time series forecasting experiment based on two types of time series datasets. The experimental results statistically affirm the efficiency of the proposed model in obtaining better prediction results than conventional neural network models.
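A schematic of how a derivative-free optimizer plugs into DBN configuration: candidate settings (layer sizes, learning rate) are perturbed and kept only if the validation error improves. This plain hill-climbing loop is a simplified stand-in, not NCS itself (whose negative-correlation mechanism between parallel searches is omitted), and validation_error() is a hypothetical placeholder for actually training the DBN.

```python
# Derivative-free hyperparameter search over (hidden sizes, learning rate).
import random

def validation_error(hidden1, hidden2, lr):
    # Placeholder objective: in practice, train the DBN with these settings
    # and return the forecasting error on a held-out split of the time series.
    return (hidden1 - 64) ** 2 + (hidden2 - 32) ** 2 + 1000 * (lr - 0.01) ** 2

best = {"hidden1": 16, "hidden2": 16, "lr": 0.1}
best_err = validation_error(**best)
for _ in range(200):
    cand = {
        "hidden1": max(4, best["hidden1"] + random.randint(-8, 8)),
        "hidden2": max(4, best["hidden2"] + random.randint(-8, 8)),
        "lr": max(1e-4, best["lr"] * random.uniform(0.5, 2.0)),
    }
    err = validation_error(**cand)
    if err < best_err:          # keep the candidate only if it improves
        best, best_err = cand, err
print(best, best_err)
```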
An accurate model is very important for the control of a nonlinear system. The traditional identification method based on a shallow BP network easily falls into a local optimum. In this paper, a modeling method for nonlinear systems based on an improved Deep Belief Network (DBN) is proposed. A Continuous Restricted Boltzmann Machine (CRBM) is used as the first layer of the DBN, so that the network can more effectively deal with the actual data collected from real systems. Then, unsupervised training and supervised tuning are combined to improve the accuracy of identification. The simulation results show that the proposed method has a higher identification accuracy. Finally, the improved algorithm is applied to the identification of a diameter model of a silicon single crystal, and the simulation results prove its excellent parameter identification ability.
Role-Based Access Control (RBAC) is often used in web applications to restrict operations and protect security-sensitive information and resources. Web applications regularly undergo maintenance and evolution, and their security may be affected by source code changes between releases. To prevent security regression and vulnerabilities, developers have to take re-validation actions before deploying new releases. This may become a significant undertaking, especially when quick and repeated releases are sought. We define protection-impacting changes as those changed statements during evolution that alter the privilege protection of some code. We propose an automated method that identifies protection-impacting changes within all changed statements between two versions. The proposed approach compares statically computed security protection models and repository information corresponding to different releases of a system to identify protection-impacting changes. Experimental results present the occurrence of protection-impacting changes over 210 release pairs of WordPress, a PHP content management web application. First, we show that only 41% of the release pairs present protection-impacting changes. Second, for these affected release pairs, protection-impacting changes can be identified and represent a median of 47.00 lines of code, that is, 27.41% of the total changed lines of code. Over all investigated releases in WordPress, protection-impacting changes amounted to 10.89% of the changed lines of code. Conversely, an average of about 89% of changed source code has no impact on RBAC security and thus needs neither re-validation nor investigation. The proposed method reduces the amount of candidate causes of protection changes that developers need to investigate. This information could help developers re-validate application security, identify causes of negative security changes, and perform repairs in a more effective way.
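A toy sketch of the core comparison: if a security protection model maps each statement to the set of privileges guarding it, then a changed statement whose protection differs between two releases is protection-impacting. The dict-of-sets representation and the WordPress-style statement names are illustrative assumptions, not the paper's exact data structures.

```python
# Identify protection-impacting changes among the changed statements.
def protection_impacting(model_v1: dict, model_v2: dict, changed: set) -> set:
    impacting = set()
    for stmt in changed:
        before = model_v1.get(stmt, frozenset())
        after = model_v2.get(stmt, frozenset())
        if before != after:          # privilege protection was altered
            impacting.add(stmt)
    return impacting

v1 = {"post.php:42": {"edit_posts"}, "user.php:10": {"manage_users"}}
v2 = {"post.php:42": set(),          "user.php:10": {"manage_users"}}
print(protection_impacting(v1, v2, changed={"post.php:42", "user.php:10"}))
```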
Resilient control systems should efficiently restore control of physical systems not only after sabotage of the control systems themselves, but also after damage to the physical systems. To enhance the resilience of control systems, given an originally minimal-input controlled linear time-invariant (LTI) physical system, we address the problem of efficiently recovering control after the removal of a known system vertex by finding the minimum number of inputs. According to the minimum input theorem, given a digraph embedding of the LTI model with a precomputed maximum matching, this problem is modeled as recovering the controllability of the network after removing a known vertex. We then recover the controllability of the residual network by efficiently finding a maximum matching rather than recomputing one from scratch. As a result, apart from the precomputed maximum matching and the known removed vertex, the worst-case execution time of control recovery for the residual LTI physical system is linear.
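For background, a sketch of the baseline relation this work builds on: by the minimum input theorem, the number of inputs (driver nodes) of an LTI network equals max(N - |maximum matching|, 1), where the matching is taken on the bipartite representation of the network's digraph. The sketch simply recomputes the matching after removing a vertex; the paper's contribution, updating the matching without full recomputation, is not reproduced here, and the example graph is arbitrary.

```python
# Minimum number of inputs from a maximum matching on the bipartite representation.
import networkx as nx

def min_inputs(digraph: nx.DiGraph) -> int:
    B = nx.Graph()
    outs = [("out", u) for u in digraph.nodes]
    B.add_nodes_from(outs, bipartite=0)
    B.add_nodes_from([("in", v) for v in digraph.nodes], bipartite=1)
    B.add_edges_from((("out", u), ("in", v)) for u, v in digraph.edges)
    matching = nx.bipartite.maximum_matching(B, top_nodes=outs)
    matched = sum(1 for k in matching if k[0] == "out")   # matched edges
    return max(digraph.number_of_nodes() - matched, 1)

G = nx.DiGraph([(1, 2), (2, 3), (3, 4), (2, 5)])
print("inputs before removal:", min_inputs(G))
G.remove_node(3)                                           # the removed system vertex
print("inputs after removing vertex 3:", min_inputs(G))
```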