Bibliography
Along with the rapid development of hardware security techniques, the rapid growth of countermeasures and attack methods developed by intelligent and adaptive adversaries has significantly complicated the task of creating secure hardware systems. There is therefore a critical need to (re)evaluate existing and new hardware security techniques against these state-of-the-art attack methods. With this in mind, this paper presents a novel framework for incorporating active learning techniques into the hardware security field. We demonstrate that active learning can significantly improve the learning efficiency of physical unclonable function (PUF) modeling attacks by sampling, in each iteration, the least confident and most informative challenge-response pair (CRP) for training. For example, our experimental results show that to obtain a prediction error below 4%, 2790 CRPs are required in passive learning, while only 811 CRPs are required in active learning. The sampling strategies and detailed applications of the PUF modeling attack under various environmental conditions are also discussed. When the environment is very noisy, active learning may sample a large number of mislabeled CRPs and hence yield a high prediction error. We present two methods to mitigate the tension between sampling informative CRPs and sampling noisy ones.
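As an illustration of the least-confidence sampling loop described above, the following is a minimal sketch assuming a logistic-regression attack model on a simulated 64-stage arbiter-style PUF; the parity feature transform, pool size, and query budget are illustrative choices, not the paper's exact setup.

```python
# Minimal sketch of least-confidence (uncertainty) sampling for a PUF
# modeling attack. Assumptions: an arbiter-PUF-style linear model attacked
# with logistic regression; synthetic, noise-free CRPs stand in for
# measured ones. All names and parameters are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_stages, n_pool = 64, 5000

def parity_features(challenges):
    # Standard arbiter-PUF parity transform: phi_i = prod_{j>=i} (1 - 2*c_j)
    return np.cumprod(1 - 2 * challenges[:, ::-1], axis=1)[:, ::-1]

weights = rng.normal(size=n_stages)                # "ground-truth" PUF model
pool_c = rng.integers(0, 2, size=(n_pool, n_stages))
pool_x = parity_features(pool_c)
pool_y = (pool_x @ weights > 0).astype(int)        # noise-free responses

# Start from a small random seed set, then repeatedly query the CRP the
# current model is least confident about (closest to the decision boundary).
labeled = list(rng.choice(n_pool, size=20, replace=False))
model = LogisticRegression(max_iter=1000)
for _ in range(200):
    model.fit(pool_x[labeled], pool_y[labeled])
    proba = model.predict_proba(pool_x)[:, 1]
    confidence = np.abs(proba - 0.5)               # distance from 0.5
    confidence[labeled] = np.inf                   # skip already-labeled CRPs
    labeled.append(int(np.argmin(confidence)))     # query most informative CRP

print("prediction error:", 1 - model.score(pool_x, pool_y))
```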
Nowadays, trust and reputation models are used to build a wide range of trust-based security mechanisms and trust-based service management applications on the Internet of Things (IoT). Treating trust as a single unit can result in missing important and significant factors. We therefore split trust into its building blocks, and then sort and weight these building blocks (trust metrics) according to their priorities for the transaction context of a particular goal. To perform these processes, we treat trust as a multi-criteria decision-making problem, where a set of trustworthiness metrics represents the decision criteria. We introduce an entropy-based fuzzy analytic hierarchy process (EFAHP) as a trust model for selecting a trustworthy service provider, since decision making over multi-metric trust is inherently structural. EFAHP combines 1) fuzziness, which fits the vagueness, uncertainty, and subjectivity of trust attributes; 2) the AHP, which is a systematic way of making decisions in complex multi-criteria settings; and 3) the entropy concept, which is used to calculate the aggregate weights for each service provider. We present a numerical illustration in a trust-based service-oriented architecture for the IoT (SOA-IoT) to demonstrate service provider selection using the EFAHP model to assess and aggregate the trust scores.
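One common way to realize the entropy step is the classical entropy weight method sketched below; the trust metrics and scores are hypothetical, and the fuzzy-AHP ranking that would normally produce them is omitted.

```python
# Sketch of entropy-based weighting used to aggregate per-metric trust
# scores across candidate service providers. Metric values are
# hypothetical placeholders, not the paper's data.
import numpy as np

# rows: service providers, columns: trust metrics (e.g. honesty, QoS, ...)
scores = np.array([[0.8, 0.6, 0.9],
                   [0.5, 0.7, 0.4],
                   [0.9, 0.5, 0.7]])

p = scores / scores.sum(axis=0)                    # normalize each metric column
k = 1.0 / np.log(scores.shape[0])
entropy = -k * (p * np.log(p)).sum(axis=0)         # Shannon entropy per metric
diversity = 1.0 - entropy                          # more diverse metric -> more weight
weights = diversity / diversity.sum()

aggregate = scores @ weights                       # weighted trust per provider
best = int(np.argmax(aggregate))
print("entropy weights:", weights, "-> select provider", best)
```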
With wide applications such as surveillance and imaging, securing underwater acoustic Mobile Ad-hoc NETworks (MANETs) becomes a double-edged sword for oceanographic operations. Underwater acoustic MANETs inherit the vulnerabilities of 802.11-based MANETs, which render traditional cryptographic approaches defenseless. A Trust Management Framework (TMF), which maintains confidence among participating nodes using metrics built from their communication activities, promises secure, efficient and reliable access in terrestrial MANETs. A TMF cannot be applied directly to the underwater environment, however, because marine characteristics make it difficult to differentiate natural turbulence from intentional misbehavior. This work proposes a trust model to defend underwater acoustic MANETs against attacks using a machine learning method with carefully chosen communication metrics, and a cloud model to address the uncertainty of trust in harsh underwater environments. By integrating the communication trust framework with the cloud model to combat two kinds of uncertainty, fuzziness and randomness, trust management is greatly improved for underwater acoustic MANETs.
The paper outlines the concept of the digital economy, defines the role and types of intellectual resources in the context of the digitalization of the economy, reviews existing approaches and methods for intellectual property valuation, and analyzes the drawbacks of quantitative evaluation of intellectual resources (based on intellectual property valuation) related to uncertainty, noisy data, heterogeneity of resources, non-formalizability, the lack of reliable tools for measuring the parameters of intellectual resources, and the non-stationary development of intellectual resources. The results of the study suggest ways of further developing methods for the quantitative evaluation of intellectual resources (inter alia aimed at their capitalization).
A covert communication system under block fading channels is considered, where users experience uncertainty about their channel knowledge. The transmitter seeks to hide the covert communication intended for a private user behind a legitimate public communication link, while the warden tries to detect this covert communication using a radiometer. We derive the exact expression for the radiometer's optimal threshold, which determines the performance limit of the warden's detector. Furthermore, for given transmission outage constraints, the achievable rates for the legitimate and covert users are analyzed while maintaining a specific level of covertness. Our numerical results illustrate how the achievable performance is affected by the channel uncertainty and the required level of covertness.
Deregulated electricity markets rely on a two-settlement system consisting of day-ahead and real-time markets, across which electricity price is volatile. In such markets, locational marginal pricing is widely adopted to set electricity prices and manage transmission congestion. Locational marginal prices are vulnerable to measurement errors. Existing studies show that if the adversaries are omniscient, they can design profitable attack strategies without being detected by the residue-based bad data detectors. This paper focuses on a more realistic setting, in which the attackers have only partial and imperfect information due to their limited resources and restricted physical access to the grid. Specifically, the attackers are assumed to have uncertainties about the state of the grid, and the uncertainties are modeled stochastically. Based on this model, this paper offers a framework for characterizing the optimal stochastic guarantees for the effectiveness of the attacks and the associated pricing impacts.
This study applies an efficient formulation to solve the stochastic security-constrained generation capacity expansion planning (GCEP) problem, using an improved method to directly compute the generalized generation distribution factors (GGDF) and the line outage distribution factors (LODF) in order to model the pre- and post-contingency constraints from the partial transmission distribution factors (PTDF) alone. The classical DC-based formulation is reformulated to include the security criteria, solving both pre- and post-contingency constraints simultaneously. The methodology also accounts for load uncertainty in the optimization problem through a two-stage multi-period model, and a clustering technique is used to reduce the number of load scenarios (stochastic problem). The main advantage of this methodology is the ability to quickly compute the LODF, especially for multiple-line outages (N-m). This could speed up contingency analyses and significantly improve security-constrained analyses applied to GCEP problems. It is worth mentioning that this approach is carried out without sacrificing optimality.
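For reference, the standard single-line-outage relation that lets post-contingency flows be written in terms of PTDFs alone is sketched below; the paper's improved direct computation for multiple (N-m) outages is not reproduced here.

```latex
% Standard single-line-outage relations (a sketch; the paper's improved
% direct computation for multiple outages is not reproduced here).
\[
  \mathrm{LODF}_{\ell,k} = \frac{\mathrm{PTDF}_{\ell,k}}{1 - \mathrm{PTDF}_{k,k}},
  \qquad
  f_{\ell}^{\mathrm{post}} = f_{\ell} + \mathrm{LODF}_{\ell,k}\, f_{k},
\]
where $\mathrm{PTDF}_{\ell,k}$ is the sensitivity of the flow on line $\ell$
to a power transfer between the terminal buses of line $k$, and
$f_{\ell}$, $f_{k}$ are pre-contingency flows.
```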
Ransomware techniques have evolved over time, with the most resilient attacks making data recovery practically impossible. This has driven countermeasures to shift from prevention towards recovery. In this paper, we model ransomware attacks from an infection vector point of view. We follow the basic infection chain of crypto ransomware and use Bayesian network statistics to infer some of the most common ransomware infection vectors. We also employ attack and sensor nodes to capture uncertainty in the Bayesian network.
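As a toy illustration of inferring an infection vector with a Bayesian network, the sketch below enumerates a two-parent network by hand; the structure and probabilities are hypothetical placeholders rather than values from the paper.

```python
# Minimal sketch of Bayesian inference over ransomware infection vectors.
# The structure (Phishing / Drive-by -> Infection) and all probabilities
# are hypothetical placeholders, not figures from the paper.
from itertools import product

p_phish = 0.3                  # prior: infection attempt via phishing e-mail
p_driveby = 0.2                # prior: infection attempt via drive-by download

def p_infected(phish, driveby):
    # noisy-OR style combination of the two infection vectors
    p = 1.0
    if phish:
        p *= (1 - 0.6)
    if driveby:
        p *= (1 - 0.4)
    return 1 - p

# P(phishing | infection observed) by full enumeration over the parents
num = den = 0.0
for phish, driveby in product([True, False], repeat=2):
    prior = (p_phish if phish else 1 - p_phish) * \
            (p_driveby if driveby else 1 - p_driveby)
    joint = prior * p_infected(phish, driveby)
    den += joint
    if phish:
        num += joint
print("P(phishing | infected) =", num / den)
```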
Exploratory evaluation is an effective way to analyze and improve the security of an information system. In view of the exploratory evaluation requirements for security protection capability, an information system structure model for security protection capability is established, and the requirements of agility, traceability and interpretability for exploratory evaluation are obtained by analyzing the relationships among the information system, protective equipment and protection policy. To address the problem of describing the exploratory evaluation of security protection capability, the exploratory evaluation problem and process are described based on granular computing theory, and a general mathematical description is established. Analysis shows that the standardized description meets the exploratory evaluation requirements and can provide an analysis basis and description specification for the exploratory evaluation of information system security protection capability.
This study proposes an efficient formulation for solving the stochastic security-constrained generation capacity expansion planning (SC-GCEP) problem. The main idea is to directly compute the line outage distribution factors (LODF), which can be applied to model the N-m post-contingency analysis. In addition, the post-contingency power flows are modeled from the LODF and the partial transmission distribution factors (PTDF). The post-contingency constraints are reformulated using these linear distribution factors (PTDF and LODF) so that both the pre- and post-contingency constraints are modeled simultaneously in the SC-GCEP problem. In the stochastic formulation, load uncertainty is incorporated through a two-stage multi-period framework, and a K-means clustering technique is used to decrease the number of load scenarios. The main advantage of this methodology is the ability to quickly compute the post-contingency factors, especially for multiple-line outages (N-m). This improves the security-constrained analysis by quickly modeling the outage of m transmission lines in the stochastic SC-GCEP problem. Several experiments are carried out on two electrical power systems to validate the performance of the proposed formulation.
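A minimal sketch of the scenario-reduction step follows, assuming synthetic hourly load profiles and scikit-learn's KMeans; the dimensions and sampling distribution are illustrative.

```python
# Sketch of load-scenario reduction: cluster many sampled load profiles
# into a few representative scenarios whose probabilities equal the
# relative cluster sizes. Dimensions and the lognormal sampling are
# illustrative assumptions, not the paper's data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_scenarios, n_periods, k = 1000, 24, 10

load_scenarios = rng.lognormal(mean=4.0, sigma=0.2, size=(n_scenarios, n_periods))

km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(load_scenarios)
representatives = km.cluster_centers_                        # reduced scenario set
probabilities = np.bincount(km.labels_, minlength=k) / n_scenarios

for s, (rep, prob) in enumerate(zip(representatives, probabilities)):
    print(f"scenario {s}: prob={prob:.3f}, peak load={rep.max():.1f}")
```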
We propose an approach to enforce security in disruption- and delay-tolerant networks (DTNs), where long delays, high packet drop rates, and the unavailability of a central trusted entity make traditional approaches unfeasible. We use a trust model based on subjective logic to continuously evaluate the trustworthiness of security credentials issued in a distributed manner by network participants, thereby coping with the absence of centralised trusted authorities.
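A minimal sketch of the subjective-logic bookkeeping such a scheme relies on, using the standard evidence-to-opinion mapping and expected-trust formula; the DTN-specific credential exchange is not modeled.

```python
# Sketch of subjective-logic trust bookkeeping: an opinion (belief,
# disbelief, uncertainty, base rate) is built from counts of positive and
# negative evidence about a credential, then mapped to an expected trust
# value. The counts and prior weight are illustrative.
from dataclasses import dataclass

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    @classmethod
    def from_evidence(cls, positive: int, negative: int, prior_weight: float = 2.0):
        total = positive + negative + prior_weight
        return cls(positive / total, negative / total, prior_weight / total)

    def expected_trust(self) -> float:
        # E = b + a * u : uncertainty is split according to the base rate
        return self.belief + self.base_rate * self.uncertainty

# e.g. 8 successful and 1 failed validations of a peer-issued credential
op = Opinion.from_evidence(positive=8, negative=1)
print(op, "expected trust:", round(op.expected_trust(), 3))
```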
Cloud computing has become a part of people's lives. However, there are many unresolved problems with the security of this technology. According to international security experts, there is a risk of cloud collusion under uncertain conditions. To mitigate this type of uncertainty, and to minimize the data redundancy of encryption together with the harm caused by cloud collusion, modified threshold Asmuth-Bloom and weighted Mignotte secret sharing schemes are used. We show that if the adversaries know the secret shares but do not know the secret key, they cannot recover the secret. If the attackers do not know the required number of secret shares but do know the secret key, the probability that they obtain the secret depends on the machine word size l in bits and is less than 1/2^(l-1). We demonstrate that the proposed scheme ensures security under several types of attacks. We propose four approaches to selecting weights for the secret sharing schemes to optimize system behavior: pessimistic, balanced, and optimistic approaches based on data access speed, and a fourth based on the speed-to-price ratio. We use an approximate method to improve the accuracy of detection, localization and error correction under uncertainty in the cloud parameters.
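For orientation, a toy sketch of plain (t, n) Asmuth-Bloom sharing with small hand-picked moduli follows; the paper's modified and weighted variants add structure that is not shown here.

```python
# Toy sketch of (t, n) Asmuth-Bloom threshold secret sharing with small,
# hand-picked pairwise-coprime moduli. All parameters are illustrative.
from math import prod
import secrets

def crt(moduli, residues):
    # Chinese remainder theorem for pairwise coprime moduli
    M = prod(moduli)
    return sum(r * (M // m) * pow(M // m, -1, m)
               for m, r in zip(moduli, residues)) % M

t, n = 3, 5
m0 = 53                                        # secret modulus (secret < m0)
m = [101, 103, 107, 109, 113]                  # pairwise coprime, increasing
assert prod(m[:t]) > m0 * prod(m[-(t - 1):])   # Asmuth-Bloom condition

secret = 42
A = secrets.randbelow(prod(m[:t]) // m0 - 1)   # random blinding factor
y = secret + A * m0
shares = [(mi, y % mi) for mi in m]            # one share per cloud

# Any t shares reconstruct y via the CRT, and the secret as y mod m0;
# fewer than t shares do not determine the secret.
subset = shares[1:1 + t]
y_rec = crt([mi for mi, _ in subset], [si for _, si in subset])
print("recovered secret:", y_rec % m0)
```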
Interval uncertainty can cause uncontrollable variations in the objective and constraint values, which can seriously deteriorate performance or even change the feasibility of the optimal solutions. Robust optimization seeks solutions that are optimal and minimally sensitive to uncertainty. In this paper, a sequential multi-objective robust optimization (MORO) approach based on support vector machines (SVM) is proposed. First, a sequential optimization structure is adopted to ease the computational burden. Second, an SVM is used to construct a classification model that classifies design alternatives as feasible or infeasible. The proposed approach is tested on a numerical example and an engineering case. The results show that the proposed approach reasonably approximates the solutions obtained by the existing sequential MORO approach (SMORO), while the computational cost is significantly reduced compared with that of SMORO.
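A minimal sketch of the SVM feasibility classifier follows, assuming a toy constraint under box interval uncertainty and scikit-learn's SVC; the constraint and sampling scheme are stand-ins for the paper's engineering problems.

```python
# Sketch of the SVM step: train a classifier that labels candidate designs
# as robustly feasible or infeasible, so the optimizer can screen
# candidates cheaply. The toy constraint and uncertainty box are
# illustrative assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def worst_case_constraint(x, delta=0.1):
    # g(x) <= 0 must hold for all perturbations in the box [-delta, delta]^2
    corners = np.array([[dx, dy] for dx in (-delta, delta) for dy in (-delta, delta)])
    g = lambda z: z[0] ** 2 + z[1] ** 2 - 1.0          # toy constraint
    return max(g(x + c) for c in corners)

designs = rng.uniform(-1.5, 1.5, size=(400, 2))        # sampled design alternatives
labels = np.array([worst_case_constraint(x) <= 0 for x in designs])

clf = SVC(kernel="rbf", C=10.0).fit(designs, labels)
candidate = np.array([[0.4, 0.3]])
print("robustly feasible?", bool(clf.predict(candidate)[0]))
```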
The primary user emulation (PUE) attack is one of the main threats affecting cognitive radio (CR) networks. A PUE attacker can forge the same signal as the real primary user (PU) in order to use the licensed channel and cause denial of service (DoS). It is therefore important to locate the position of the PUE in order to stop and avoid further attacks. Several localization techniques have been proposed, including received signal strength indication (RSSI), triangulation, and physical network layer coding. However, the area surrounding the real PU is always affected by uncertainty. This uncertainty can be described through a loss (cost) function and a conditional probability, which should be taken into consideration when deciding whether a PU/PUE is the real PU. In this paper, we propose a combination of a Bayesian model and a trilateration technique. In the first part, a trilateration technique is used to obtain a good approximation of the PUE position, making use of the RSSI between the anchor nodes and the PU/PUE. In the second part, Bayesian decision theory is used to assess the legitimacy of the PU based on the loss function and the conditional probability, helping to determine the presence of a PUE attacker in the uncertainty area.
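A minimal sketch of the trilateration step, assuming the RSSI readings have already been converted to distances via a path-loss model; anchor positions and distances are hypothetical.

```python
# Sketch of trilateration: estimate the PU/PUE position from RSSI-derived
# distances to three anchor nodes by linearizing the circle equations.
# Anchor coordinates and distances are hypothetical (a node near (60, 40));
# the RSSI-to-distance conversion is assumed to have been done already.
import numpy as np

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
dists = np.array([72.1, 56.6, 84.9])       # estimated distances from RSSI

# Subtract the first circle equation from the others to get a linear system.
A = 2 * (anchors[1:] - anchors[0])
b = (dists[0] ** 2 - dists[1:] ** 2
     + (anchors[1:] ** 2).sum(axis=1) - (anchors[0] ** 2).sum())
pos, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated PU/PUE position:", pos)
```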
There has been a great deal of work on learning new robot skills, but very little consideration of how these newly acquired skills can be integrated into an overall intelligent system. A key aspect of such a system is compositionality: newly learned abilities have to be characterized in a form that will allow them to be flexibly combined with existing abilities, affording a (good!) combinatorial explosion in the robot's abilities. In this paper, we focus on learning models of the preconditions and effects of new parameterized skills, in a form that allows those actions to be combined with existing abilities by a generative planning and execution system.
Incentive-driven advanced attacks have become a major concern in cyber-security. Traditional defense techniques that adopt a passive and static approach by assuming a fixed attack type are insufficient in the face of highly adaptive and stealthy attacks. In particular, a passive defense approach often creates information asymmetry in which the attacker knows more about the defender. To this end, moving target defense (MTD) has emerged as a promising way to reverse this information asymmetry. The main idea of MTD is to (continuously) change certain aspects of the system under control to increase the attacker's uncertainty, which in turn increases attack cost and complexity and reduces the chance of a successful exploit in a given amount of time. In this paper, we go one step further and show that MTD can be improved when combined with information disclosure. In particular, we consider a defender that adopts an MTD strategy to protect a critical resource across a network of nodes, and propose a Bayesian Stackelberg game model with the defender as the leader and the attacker as the follower. After fully characterizing the defender's optimal migration strategies, we show that the defender can design a signaling scheme to exploit the uncertainty created by MTD and further influence the attacker's behavior to its own advantage. We obtain conditions under which signaling is useful, and show that strategic information disclosure can be a promising way to further reverse the information asymmetry and achieve more efficient active defense.
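To make the leader-follower logic concrete, the toy sketch below computes the defender's optimal commitment against a best-responding attacker for a two-node migration game; it omits the Bayesian attacker types and the signaling scheme that the paper analyzes, and the payoffs are made up.

```python
# Toy sketch of the Stackelberg (leader-follower) logic behind MTD: the
# defender commits to a migration probability p for a resource over two
# nodes, the attacker observes p and best-responds by attacking one node,
# and the defender picks the p maximizing its expected payoff. Payoffs
# and the two-node setting are hypothetical simplifications.
import numpy as np

# defender_u[d][a]: defender payoff when the resource is on node d and the
# attacker hits node a (attack succeeds only if a == d)
defender_u = np.array([[-1.0, 1.0], [1.0, -1.0]])
attacker_u = -defender_u

best_p, best_val = None, -np.inf
for p in np.linspace(0, 1, 101):           # P(resource on node 0)
    dist = np.array([p, 1 - p])
    attacker_payoffs = dist @ attacker_u   # expected payoff per attacker action
    a = int(np.argmax(attacker_payoffs))   # attacker best response
    val = dist @ defender_u[:, a]          # defender's expected payoff
    if val > best_val:
        best_p, best_val = p, val
print(f"defender commits to p={best_p:.2f}, expected payoff {best_val:.2f}")
```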
A collaborative recommendation mechanism helps a subject in an open network efficiently find enough referrers that have directly interacted with an object and obtain their trust data. Uncertainty analysis of the collected trust data selects the reliable trust data of trustworthy referrers and then calculates a statistical trust value with a given reliability for any object. The subject can then judge the object's trustworthiness and make an interaction decision based on a given threshold. The feasibility of this method is verified by three experiments designed to validate the model's ability to resist malicious services and exaggeration and slander attacks. The interaction success rate is significantly improved with the new model, and malicious entities are distinguished more effectively than with the comparison model.
Organizations rely on security experts to improve the security of their systems. These professionals use background knowledge and experience to align known threats and vulnerabilities before selecting mitigation options. The substantial depth of expertise in any one area (e.g., databases, networks, operating systems) precludes the possibility that an expert would have complete knowledge about all threats and vulnerabilities. To begin addressing this problem of distributed knowledge, we investigate the challenge of developing a security requirements rule base that mimics human expert reasoning to enable new decision-support systems. In this paper, we show how to collect relevant information from cyber security experts to enable the generation of: (1) interval type-2 fuzzy sets that capture intra- and inter-expert uncertainty around vulnerability levels; and (2) fuzzy logic rules underpinning the decision-making process within the requirements analysis. The proposed method relies on comparative ratings of security requirements in the context of concrete vignettes, providing a novel, interdisciplinary approach to knowledge generation for fuzzy logic systems. The proposed approach is tested by evaluating 52 scenarios with 13 experts to compare their assessments to those of the fuzzy logic decision support system. The initial results show that the system provides reliable assessments to the security analysts, in particular, generating more conservative assessments in 19% of the test scenarios compared to the experts’ ratings.
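One simple way to picture the interval type-2 construction is to bound the footprint of uncertainty by the pointwise envelope of per-expert type-1 sets, as sketched below; the expert membership functions are hypothetical and this is not the paper's elicitation procedure.

```python
# Sketch of an interval type-2 fuzzy membership built from expert input:
# each expert contributes a triangular type-1 set for "high vulnerability",
# and the footprint of uncertainty is bounded by the pointwise min/max of
# those sets. Expert parameters are hypothetical.
import numpy as np

def triangular(x, a, b, c):
    # standard triangular membership function with peak at b
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

x = np.linspace(0, 10, 101)                    # vulnerability rating scale
experts = [(5, 8, 10), (6, 9, 10), (4, 7, 10)] # one triangle per expert

memberships = np.array([triangular(x, *e) for e in experts])
lower_mf = memberships.min(axis=0)             # lower membership function
upper_mf = memberships.max(axis=0)             # upper membership function

level = 7.5
i = int(np.argmin(np.abs(x - level)))
print(f"'high vulnerability' at {level}: [{lower_mf[i]:.2f}, {upper_mf[i]:.2f}]")
```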
Human-machine trust is a critical mitigating factor in many HCI settings. Lack of trust in a system can lead to system disuse, whilst over-trust can lead to inappropriate use. Whilst human-machine trust has been examined extensively within a technico-social framework, few efforts have been made to link the dynamics of trust within a steady-state operator-machine environment to the existing literature on the psychology of learning. We set out to recreate a commonly reported learning phenomenon within a trust acquisition environment: users learning which algorithms can and cannot be trusted to reduce traffic in a city. We failed to replicate (after repeated efforts) the learning phenomenon of "blocking", and instead found that people consistently make a very specific error when assigning trust to cues under conditions of uncertainty. This error can be seen as a cognitive bias and has important implications for HCI.
Quality of service (QoS) is a significant criterion for choosing among functionally similar web services. Most research focuses on QoS queries over certain (deterministic) data, which may not cover some practical scenarios. Recent approaches for uncertain QoS of web services deal with discrete data domains. In this paper, we address QoS queries under continuous probability distributions. We define two kinds of queries under uncertain QoS and derive optimization approaches for specific distributions; from these, the search is extended to general cases. Experiments show the feasibility of the proposed methods.
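As a small illustration of a probabilistic QoS query under a continuous model, the sketch below assumes normally distributed response times and a probabilistic latency threshold; the services and parameters are invented.

```python
# Sketch of a probabilistic-threshold QoS query under a continuous model:
# each service's response time is assumed normally distributed, and the
# query keeps services whose probability of staying below a latency bound
# exceeds a confidence level. All values are made-up placeholders.
from scipy.stats import norm

services = {                   # name: (mean latency ms, std dev ms)
    "svc_a": (120.0, 15.0),
    "svc_b": (100.0, 40.0),
    "svc_c": (140.0, 5.0),
}
bound_ms, confidence = 150.0, 0.95

for name, (mu, sigma) in services.items():
    p = norm.cdf(bound_ms, loc=mu, scale=sigma)   # P(latency <= bound)
    verdict = "accept" if p >= confidence else "reject"
    print(f"{name}: P(latency <= {bound_ms}) = {p:.3f} -> {verdict}")
```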