Biblio
Inferring unknown opinions from uncertain, adversarial (e.g., incorrect or conflicting) evidence in large datasets is not a trivial task; without proper handling, such evidence can easily mislead decision making in data mining tasks. In this work, we propose a highly scalable probabilistic opinion-inference model, Adversarial Collective Opinion Inference (Adv-COI), which infers unknown opinions scalably and robustly in the presence of uncertain, adversarial evidence by enhancing Collective Subjective Logic (CSL), itself developed by combining Subjective Logic (SL) and Probabilistic Soft Logic (PSL). The key idea behind Adv-COI is to learn a model that remains robust against uncertain, adversarial evidence, which is formulated as a min-max problem. We validate that Adv-COI outperforms baseline models and competitive counterparts under possible adversarial attacks on logic-rule-based structured data, as well as under white-box and black-box adversarial attacks, on both clean and perturbed semi-synthetic and real-world datasets across three real-world applications. The results show that Adv-COI yields the lowest mean absolute error in the expected truth probability while achieving the lowest running time among all compared methods.
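As a rough illustration of the min-max formulation mentioned above (the notation here is ours, not the paper's), let theta denote the parameters of the opinion-inference model, E the observed evidence, and Delta an assumed set of admissible evidence perturbations:

    \min_{\theta} \; \max_{\delta \in \Delta} \; \mathcal{L}(\theta;\, E + \delta),
    \qquad \Delta = \{ \delta : \|\delta\|_{\infty} \le \epsilon \}

where \mathcal{L} is the collective opinion-inference loss and \epsilon bounds the assumed strength of the adversarial perturbation of the evidence.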
Trust is becoming increasingly important in the software domain, but because it is a complex, composite concept, it poses great challenges, especially given today's dynamic and constantly changing Internet technology. Measuring software trustworthiness correctly and effectively also plays a significant role in gaining users' trust when choosing among different software products. In the security context, trust has previously been measured from vulnerability occurrence times, to predict the total number of vulnerabilities or the times of their future occurrence. In this study, we propose a new unified index called the "loss speed index" that integrates the most important software security variables, such as vulnerability occurrence time, number, and severity loss, to evaluate overall software trustworthiness. Based on this definition, a new model called the software trustworthy security growth model (STSGM) is proposed. The paper also fills a gap by addressing vulnerability severity: it proposes a vulnerability severity prediction model whose results are further evaluated by the STSGM to estimate the future loss speed index. Our work has several features: (1) it predicts the future severity/type of vulnerabilities; (2) unlike traditional evaluation methods such as expert scoring, our model uses historical data to predict the future loss speed of software; (3) the loss metric value is used to evaluate the risk associated with different software, which has a direct impact on software trustworthiness. Experiments were performed on real software vulnerability datasets, and their results are analyzed to check the correctness and effectiveness of the proposed model.
Efficiently searchable and easily deployable encryption schemes enable an untrusted, legacy service such as a relational database engine to perform searches over encrypted data. The ease with which such schemes can be deployed on top of existing services makes them especially appealing in operational environments where encryption is needed but it is not feasible to replace large infrastructure components such as databases or document management systems. Unfortunately, all previously known approaches to efficiently searchable and easily deployable encryption are vulnerable to inference attacks, in which an adversary can use knowledge of the distribution of the data to recover the plaintext with high probability. We present a new efficiently searchable, easily deployable database encryption scheme that is provably secure against inference attacks even when used with real, low-entropy data. We implemented our constructions in Haskell and tested them on databases of up to 10 million records, showing that our construction properly balances security, deployability, and performance.
In cloud computing environments, static security measures alone do not mitigate attacks considerably: attackers deploy sophisticated methods to learn the topology of complex networks, which makes their task easier. For this reason, a dynamic security measure such as virtual machine (VM) migration increases the attacker's uncertainty about a VM's location within a dynamic attack surface. However, not every VM migration enhances security; the destination server that will host the VM must be selected carefully so as to avoid negative externalities and attacks at the same time. In this paper, we model migration in a cloud environment using a continuous-time Markov chain. We then analyze the probability that a VM is compromised based on the destination server's parameters. Finally, we provide numerical results showing the effectiveness of our approach in terms of avoiding intrusion.
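A minimal sketch of the kind of continuous-time Markov chain computation described above, with an invented three-state model (safe, exposed, compromised) and illustrative transition rates that are not taken from the paper:

    # Continuous-time Markov chain sketch: probability that a migrated VM
    # ends up compromised, for illustrative (made-up) transition rates.
    import numpy as np
    from scipy.linalg import expm

    # States: 0 = safe, 1 = exposed, 2 = compromised (absorbing).
    # Q[i, j] is the transition rate from state i to state j; each row sums to 0.
    Q = np.array([
        [-0.10,  0.10,  0.00],   # safe -> exposed
        [ 0.05, -0.25,  0.20],   # exposed -> back to safe, or compromised
        [ 0.00,  0.00,  0.00],   # compromised is absorbing
    ])

    p0 = np.array([1.0, 0.0, 0.0])        # VM starts in the safe state
    for t in (1.0, 5.0, 10.0):            # time horizons (arbitrary units)
        pt = p0 @ expm(Q * t)             # state distribution at time t
        print(f"t={t:4.1f}  P(compromised) = {pt[2]:.3f}")

In a destination-selection setting, the rates in Q would be derived from the candidate server's parameters, and the server minimizing P(compromised) over the planning horizon would be preferred.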
An analysis of the applied tasks and methods of entropy-based signal processing is carried out in this article. Theoretical comments are given on specific schemes of special processors for determining probability and correlation activity. The prospects of applying Shannon's probabilistic entropy in cipher-signal receivers are reviewed. Examples of entropy-manipulated signals and the system characteristics of the proposed special processors are given.
Blockchain networks that employ Proof-of-Work in their consensus mechanism may face inconsistencies in the form of forks. These forks are usually resolved through the application of block selection rules (such as the Nakamoto consensus). In this paper, we investigate the cause and length of forks in the Bitcoin network. We develop theoretical formulas that model the Bitcoin consensus and network protocols, based on an Erdős–Rényi random graph construction of the overlay network of peers. Our theoretical model captures the effect of key parameters on the fork occurrence probability, such as block propagation delay, network bandwidth, and block size. We also leverage this model to estimate the weight of fork branches. The model is implemented in the network simulator OMNeT++ and validated against historical Bitcoin data. We show that, under current conditions, Bitcoin will not benefit from increasing the number of connections per node.
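A rough sketch of one way to connect an Erdős–Rényi overlay to fork probability, using an assumed per-hop delay and the standard approximation that a fork occurs when a second block is found before the first has finished propagating (all parameters below are illustrative assumptions, not the paper's values):

    # Rough fork-probability estimate on an Erdős–Rényi overlay of peers.
    import networkx as nx
    import numpy as np

    N, DEGREE = 1000, 8                    # peers and average connections per node
    G = nx.erdos_renyi_graph(N, DEGREE / (N - 1), seed=1)

    PER_HOP_DELAY = 0.5                    # seconds per hop (assumed transfer + verification)
    BLOCK_INTERVAL = 600.0                 # mean time between blocks (Bitcoin target)

    # Propagation time: average hop distance from a random miner, times the per-hop delay.
    hops = nx.single_source_shortest_path_length(G, source=0)
    tau = PER_HOP_DELAY * np.mean(list(hops.values()))

    # With Poisson block arrivals, a fork occurs if another block is found within tau.
    p_fork = 1.0 - np.exp(-tau / BLOCK_INTERVAL)
    print(f"avg propagation time ~ {tau:.1f}s, fork probability ~ {p_fork:.4f}")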
Securing multi-robot teams against malicious activity is crucial as these systems accelerate towards widespread societal integration. This emerging class of "physical networks" requires research into new security methods that exploit their physical nature. This paper derives a theoretical framework for securing multi-agent consensus against the Sybil attack by using the physical properties of wireless transmissions. Our framework uses information extracted from the wireless channels to design a switching signal that stochastically excludes potentially untrustworthy transmissions from the consensus. Intuitively, this amounts to selectively ignoring incoming communications from untrustworthy agents, allowing consensus to the true average to be recovered with high probability if it is initiated after a certain observation time T0 that we derive. This work differs from previous work in that it allows arbitrary malicious node values and is insensitive to the initial topology of the network, so long as a connected topology over the legitimate nodes is feasible. We show that our algorithm recovers consensus and the true graph over the system of legitimate agents with an error rate that vanishes exponentially with time.
The paper describes a modification of the ATA (Attack Tree Analysis) technique, called AvTA (Availability Tree Analysis), for assessing the dependability (reliability, availability, and cyber security) of instrumentation and control systems (ICS). The FMEA, FMECA, and IMECA techniques, applied to carry out preliminary semi-formal and criticality-oriented analysis before the AvTA-based assessment, are described. AvTA models combine reliability and cyber security subtrees, considering the probabilities of ICS recovery in case of hardware (physical) and software (design) failures and of attacks on components causing failures. Successful recovery events (SREs) avert the corresponding failures in the tree through OR gates if the SRE probabilities for the assumed time exceed the required values. A case of AvTA-based dependability assessment (model, availability function, and decision-making technology for choosing component and system parameters) for a smart building ICS (Building Automation System, BAS) is discussed.
In this paper, the security and stability situation of a smart grid is predicted using the Markov chain analysis method. First, the component state transition probability matrix and component state prediction are defined. A fast method for deriving the Markov state transition probability matrix used in system state prediction is then proposed. A Matlab program implementing this method was written to analyze and obtain the future state probability distribution of the grid system. For comparison, the system state distribution was also simulated with a sequential Monte Carlo method; the results agree well with those obtained from the state transition matrix, verifying the validity of the method. Furthermore, situation prediction for a six-node example is analyzed, providing an effective prediction and analysis tool for the security situation.
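A minimal sketch of state prediction by powering a transition probability matrix, using an invented two-state component model (up/down) rather than the paper's grid data, together with a small Monte Carlo cross-check in the spirit of the comparison above:

    # Predict a future component state distribution from a Markov transition matrix
    # and cross-check it with a simple Monte Carlo simulation (illustrative values).
    import numpy as np

    P = np.array([[0.95, 0.05],            # up   -> up / down
                  [0.40, 0.60]])           # down -> up / down
    p0 = np.array([1.0, 0.0])              # component starts in the "up" state
    steps = 24

    p_pred = p0 @ np.linalg.matrix_power(P, steps)

    rng = np.random.default_rng(0)
    trials, down = 20000, 0
    for _ in range(trials):
        s = 0
        for _ in range(steps):
            s = rng.choice(2, p=P[s])      # sample the next state
        down += (s == 1)

    print("matrix prediction  P(down) =", round(p_pred[1], 4))
    print("Monte Carlo check  P(down) =", round(down / trials, 4))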
This article considers the Solovay–Strassen probabilistic test for determining whether a number is prime, along with its possible modifications. The test makes it possible to determine in a very short time whether a number is prime or not. The C# programming language was used to implement the algorithm in practice.
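A minimal Python sketch of the Solovay–Strassen test discussed above (the article's own implementation is in C#; this is an independent illustration):

    # Solovay–Strassen probabilistic primality test.
    import random

    def jacobi(a, n):
        """Jacobi symbol (a/n) for odd n > 0."""
        a %= n
        result = 1
        while a != 0:
            while a % 2 == 0:
                a //= 2
                if n % 8 in (3, 5):
                    result = -result
            a, n = n, a
            if a % 4 == 3 and n % 4 == 3:
                result = -result
            a %= n
        return result if n == 1 else 0

    def solovay_strassen(n, rounds=20):
        """Return False if n is composite, True if n is probably prime."""
        if n < 2 or n % 2 == 0:
            return n == 2
        for _ in range(rounds):
            a = random.randrange(2, n)
            x = jacobi(a, n)
            if x == 0 or pow(a, (n - 1) // 2, n) != x % n:
                return False           # a is a witness of compositeness
        return True                    # error probability at most 2**(-rounds)

    print(solovay_strassen(561), solovay_strassen(2**61 - 1))  # Carmichael number, Mersenne prime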
The paper discusses the architectural, algorithmic, and computational aspects of creating and operating a class of expert systems for managing the technological safety of an enterprise under a large flow of diagnostic variables. The algorithm for finding a faulty technological chain uses expert information, formed as a set of evidence on the influence of diagnostic variables on the correctness of the technological process. Using the Dempster-Shafer belief function makes it possible to determine an overall probability measure on subsets of faulty technological chains. To combine different pieces of evidence, the orthogonal sums of the basic probabilities determined for each piece of evidence are calculated. The procedure described above is converted into production rules of the knowledge base. A description of the developed expert system prototype, including its architecture, algorithms, and software, is given. The functionality of the expert system and its configuration tools for a specific type of production are discussed.
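A small sketch of the orthogonal-sum (Dempster's rule) combination mentioned above, with made-up basic probability assignments over two hypothetical faulty chains c1 and c2:

    # Dempster's rule of combination (orthogonal sum) for two bodies of evidence.
    from itertools import product

    def combine(m1, m2):
        """Combine two mass functions given as {frozenset: mass} dictionaries."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass falling on the empty set
        k = 1.0 - conflict                   # normalisation factor (assumes k > 0)
        return {s: w / k for s, w in combined.items()}

    # Two pieces of evidence about which chain in {c1, c2} is faulty (illustrative masses).
    m1 = {frozenset({"c1"}): 0.6, frozenset({"c1", "c2"}): 0.4}
    m2 = {frozenset({"c2"}): 0.3, frozenset({"c1", "c2"}): 0.7}
    print(combine(m1, m2))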
We propose a crypto-aided Bayesian detection framework for detecting false data in short messages with low overhead. The proposed approach employs Bayesian detection at the physical layer in parallel with a lightweight cryptographic detection, and then combines the two detection outcomes. We develop a maximum a posteriori probability (MAP) rule for combining the cryptographic and Bayesian detection outcomes, which minimizes the average probability of detection error. We derive the probabilities of false alarm and missed detection and discuss the improvement in detection accuracy provided by the proposed method.
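In generic terms (the notation here is ours, not necessarily the paper's), a MAP combining rule of this kind selects the hypothesis that maximizes the posterior given both the physical-layer observation y and the cryptographic check outcome c, assuming the two are conditionally independent given the hypothesis:

    \hat{H} = \arg\max_{H \in \{H_0, H_1\}} P(H)\, p(y \mid H)\, P(c \mid H)

where H_0 and H_1 denote authentic and false data, respectively; choosing the posterior maximizer is what minimizes the average probability of detection error.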
In cloud computing application scenarios involving computationally weak clients, the natural need for applied cryptography solutions requires the delegation of the most expensive cryptographic algorithms to a computationally stronger cloud server. Group exponentiation is an important operation used in many public-key cryptosystems and, more generally, cryptographic protocols. The problem of delegating group exponentiation in the case of a single, possibly malicious, server has been open since early papers in the area. Only recently, we solved this problem for a large class of cyclic groups, including those commonly used in cryptosystems proved secure under the intractability of the discrete logarithm problem. In this paper, we solve this problem for an important class of non-cyclic groups, which includes RSA groups when the modulus is the product of two safe primes, a common setting in applications using RSA-based cryptosystems. We show a delegation protocol for fixed-exponent exponentiation in such groups, satisfying natural correctness, security, privacy, and efficiency requirements, where security holds except with exponentially small probability. In our protocol, with very limited offline computation and server computation, a client can delegate an exponentiation to an exponent of the same length as a group element by performing only two exponentiations to an exponent of much shorter length (i.e., the length of a statistical parameter). We obtain our protocol by a non-trivial adaptation to the RSA group of our previous protocol for cyclic groups.