Biblio
To allow fine-grained access control of sensitive data, researchers have proposed various types of functional encryption schemes, such as identity-based encryption, searchable encryption and attribute-based encryption. We observe that it is difficult to define some complex access policies in certain application scenarios by using these schemes individually. In this paper, we attempt to address this problem by proposing a functional encryption approach named Key-Policy Attribute-Based Encryption with Attribute Extension (KP-ABE-AE). In this approach, we utilize extended attributes to integrate various encryption schemes that support different access policies under a common top-level KP-ABE scheme, thus expanding the scope of access policies that can be defined. Theoretical analysis and experimental studies are conducted to demonstrate the applicability of the proposed KP-ABE-AE. We also present an optimization for a special application of KP-ABE-AE in which inner-product encryption (IPE) schemes are integrated with a KP-ABE scheme. The optimization yields an integrated scheme with better efficiency than existing encryption schemes that support the same scope of access policies.
We present a novel, use-case-agnostic method of identifying and circumventing private data exposure across distributed, high-dimensional data repositories. Examples of such repositories include medical research and treatment data, where records often carry more than 300 describing attributes. Providing strong guarantees of data anonymity in these repositories is therefore a hard constraint in adhering to privacy legislation. Yet, when applied to distributed high-dimensional data, existing anonymisation algorithms incur high levels of information loss and do not guarantee privacy, defeating the purpose of anonymisation. In this paper, we address this issue by using Bayesian networks to handle the data transformation for anonymisation. By evaluating every attribute combination to determine the privacy exposure risk, the conditional probability linking each attribute pair is computed. Pairs with a high conditional probability pose a de-anonymisation risk similar to quasi-identifiers and can be separated instead of deleted, as in previous algorithms. Attribute separation removes the risk of privacy exposure, and avoiding deletion results in a significant reduction in information loss. In other words, assimilating the conditional probability of outliers directly into the adjacency matrix in a greedy fashion is quick and thwarts de-anonymisation. Since identifying every privacy-violating attribute combination is a W[2]-complete problem, we optimise the procedure with a multigrid solver method, evaluating the conditional probabilities between attribute pairs and containing the state-space explosion of attribute pairs through manifold learning. Finally, incremental processing of new data is achieved through inexpensive, continuous (delta) learning.
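As a rough illustration of the pair-wise risk evaluation described above, the following minimal Python sketch estimates the conditional probability linking attribute pairs in a toy table and flags high-probability pairs for separation; the column names and the 0.9 threshold are illustrative assumptions, not values from the paper.

\begin{verbatim}
# Minimal sketch: estimate conditional probabilities between attribute pairs
# and flag high-risk pairs for separation rather than deletion.
# The records, column names and the 0.9 threshold are illustrative assumptions.
from collections import Counter
from itertools import combinations

records = [
    {"zip": "10115", "diagnosis": "flu",      "age_band": "30-39"},
    {"zip": "10115", "diagnosis": "flu",      "age_band": "30-39"},
    {"zip": "10117", "diagnosis": "diabetes", "age_band": "60-69"},
    {"zip": "10117", "diagnosis": "diabetes", "age_band": "60-69"},
]

def conditional_risk(records, attr_a, attr_b):
    """Return max over observed values a of P(attr_b = b | attr_a = a)."""
    joint = Counter((r[attr_a], r[attr_b]) for r in records)
    marginal = Counter(r[attr_a] for r in records)
    return max(joint[(a, b)] / marginal[a] for (a, b) in joint)

THRESHOLD = 0.9  # pairs above this behave like quasi-identifiers
attrs = records[0].keys()
to_separate = [(a, b) for a, b in combinations(attrs, 2)
               if conditional_risk(records, a, b) >= THRESHOLD]
print("attribute pairs to store separately:", to_separate)
\end{verbatim}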
Inference of unknown opinions from uncertain, adversarial (e.g., incorrect or conflicting) evidence in large datasets is not a trivial task. Without proper handling, it can easily mislead decision making in data mining tasks. In this work, we propose a highly scalable probabilistic opinion inference model, namely Adversarial Collective Opinion Inference (Adv-COI), which infers unknown opinions with high scalability and robustness in the presence of uncertain, adversarial evidence by enhancing Collective Subjective Logic (CSL), which combines Subjective Logic (SL) and Probabilistic Soft Logic (PSL). The key idea behind Adv-COI is to learn a model that is robust to uncertain, adversarial evidence, which is formulated as a min-max problem. We validate the out-performance of Adv-COI against baseline models and its competitive counterparts under possible adversarial attacks on the logic-rule-based structured data, as well as white-box and black-box adversarial attacks, on both clean and perturbed semi-synthetic and real-world datasets in three real-world applications. The results show that Adv-COI achieves the lowest mean absolute error in the expected truth probability while requiring the lowest running time among all methods.
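The abstract does not state the exact objective, but a generic robust min-max formulation of the kind referred to above can be written as
\[
\min_{\theta}\;\max_{\delta \in \Delta}\; \mathcal{L}\bigl(\theta;\, \mathcal{E} \oplus \delta\bigr),
\]
where \(\theta\) denotes the model parameters, \(\mathcal{E}\) the observed evidence, \(\Delta\) the set of admissible adversarial perturbations of that evidence, and \(\mathcal{L}\) the inference loss; these symbols are illustrative and are not taken from the paper itself.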
This article analyses applied tasks and methods of entropy-based signal processing. Theoretical comments are given on specific schemes of special processors for determining probabilistic and correlation activity. The prospects of applying C. Shannon's probabilistic entropy in cipher signal receivers are reviewed. Examples of entropy-manipulated signals and the system characteristics of the proposed special processors are given.
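The probabilistic entropy referred to above is the standard Shannon quantity
\[
H(X) = -\sum_{i=1}^{n} p_i \log_2 p_i ,
\]
where \(p_i\) is the probability of the \(i\)-th source symbol; how the proposed special processors estimate this quantity is specific to the paper and is not reproduced here.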
In recent times, enterprises have been expanding their business through web applications, which has paved the way for building good relationships with their customers. The major threat faced by these enterprises is their inability to provide a secure environment, as web applications are prone to severe vulnerabilities. As a result, many security standards and tools have evolved to handle these vulnerabilities. Though many vulnerability detection tools are available at present, they do not provide sufficient information on the attack. For the long-term functioning of an organization, data along with efficient analytics on the vulnerabilities is required to enhance its reliability. The proposed model thus aims to make use of machine learning together with analytics to solve the problem at hand. Hence, the sequence of the attack is detected from its pattern using PAA, and the detected vulnerabilities are further classified using a machine learning technique such as SVM. Probabilistic results are provided in order to obtain numerical datasets, which can be used for reporting on user and application behavior. Dynamic and reconfigurable PAA with an SVM classifier is a challenging approach to analyzing the vulnerabilities and their impact in a heterogeneous web environment. It enhances the former processing by analyzing the origin and the pattern of the attack in a more effective manner. Hence, the proposed system is designed to perform detection of attacks, and works on mitigation and prevention as part of attack prediction.
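A minimal sketch of the pattern-then-classify pipeline described above is given below, assuming PAA stands for Piecewise Aggregate Approximation (the abstract does not expand the acronym); the request-size traces and labels are synthetic placeholders, and scikit-learn's SVC stands in for the paper's SVM component.

\begin{verbatim}
# Minimal sketch: PAA feature extraction followed by SVM classification.
# Assumes PAA means Piecewise Aggregate Approximation; traces and labels
# are synthetic placeholders, not data from the paper.
import numpy as np
from sklearn.svm import SVC

def paa(series, n_segments):
    """Average the series over equal-width segments (Piecewise Aggregate Approx.)."""
    segments = np.array_split(np.asarray(series, dtype=float), n_segments)
    return np.array([seg.mean() for seg in segments])

rng = np.random.default_rng(0)
normal = [rng.normal(200, 20, size=64) for _ in range(50)]   # benign request sizes
attack = [np.abs(rng.normal(200, 20, size=64)) + 800 * (rng.random(64) > 0.7)
          for _ in range(50)]                                 # bursty probe-like traffic
X = np.array([paa(s, 8) for s in normal + attack])
y = np.array([0] * 50 + [1] * 50)                             # 0 = benign, 1 = attack

clf = SVC(kernel="rbf", probability=True).fit(X, y)
print("attack probability for one trace:", clf.predict_proba(X[-1:])[0, 1])
\end{verbatim}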
This article considers the Solovay-Strassen probabilistic test for determining whether a number is prime, and its possible modifications. The test allows one to determine in the shortest possible time whether a number is prime or not. The C\# programming language was used to implement the algorithm in practice.
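The paper's implementation is in C\#; the sketch below mirrors the same standard algorithm in Python (Jacobi symbol plus Euler's criterion), with the number of rounds chosen arbitrarily for illustration.

\begin{verbatim}
# Minimal sketch of the Solovay-Strassen probabilistic primality test.
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:           # pull out factors of two
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                 # quadratic reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, rounds=20):
    """Return False if n is composite, True if n is probably prime."""
    if n < 2 or n % 2 == 0:
        return n == 2
    for _ in range(rounds):
        a = random.randrange(2, n)
        x = jacobi(a, n)
        if x == 0 or pow(a, (n - 1) // 2, n) != x % n:
            return False            # a is an Euler witness: n is composite
    return True

print([p for p in range(2, 60) if solovay_strassen(p)])
\end{verbatim}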
This paper focuses on optimizing the sigmoid filter for detecting low-rate DoS attacks. Though the sigmoid filter can help detect the attacker, it can severely affect network efficiency. Unlike high-rate attacks, low-rate DoS attacks such as ``Shrew'' and ``New Shrew'' are hard to detect. Attackers choose a malicious low-rate bandwidth to exploit TCP's congestion control window algorithm and the retransmission timeout mechanism. We simulated the attacker traffic using NS-3. The sigmoid filter was used to create a threshold bandwidth filter at the router that allowed a specific bandwidth, so that traffic exceeding the threshold would be dropped or redirected to a honeypot server instead. We simulated the sigmoid filter in MATLAB, taking the attacker's and legitimate users' traffic generated by NS-3 as its input. We ran the experiment three times with different threshold values correlated to the TCP packet size. We found the probability of detecting the attacker traffic to be 25% in the first run, 50% in the second and 60% in the third. However, we also observed drops in legitimate user traffic with probabilities of 75%, 50%, and 85%, respectively.
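A minimal sketch of the sigmoid thresholding idea, not the paper's MATLAB filter, is shown below; the threshold and steepness values are assumptions made for illustration.

\begin{verbatim}
# Minimal sketch of a sigmoid-based bandwidth filter at the router.
# The threshold and steepness are illustrative assumptions, not the
# values used in the paper's experiments.
import math

def drop_probability(rate_bps, threshold_bps, steepness=1e-3):
    """Probability of dropping/redirecting a flow based on its rate."""
    return 1.0 / (1.0 + math.exp(-steepness * (rate_bps - threshold_bps)))

THRESHOLD = 500_000   # assumed threshold, e.g. a multiple of the TCP packet size
for rate in (100_000, 480_000, 520_000, 900_000):
    p = drop_probability(rate, THRESHOLD)
    action = "redirect to honeypot" if p > 0.5 else "forward"
    print(f"rate={rate:>7} bps  drop_prob={p:.2f}  -> {action}")
\end{verbatim}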
In this paper, we propose a CPA-secure encryption scheme with equality test. Unlike other public-key solutions, in our scheme only the data owner can encrypt a message to obtain a comparable ciphertext, and only a tester holding the token can perform the equality test. Our encryption scheme is based on the multiplicative homomorphism of ElGamal encryption and a non-interactive zero-knowledge proof of discrete logarithm. We prove that the proposed scheme is OW-CPA secure against an adversary who holds the equality test token, and IND-CPA secure against an adversary who cannot test equality. The proposed scheme is only intended to compare two ciphertexts encrypted by the same user; although this is less flexible, it is efficient and better suited to the data outsourcing scenario.
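The sketch below only shows the multiplicative homomorphism of ElGamal that the construction builds on, with toy parameters; the token generation, equality test and NIZK proof of the actual scheme are not reproduced.

\begin{verbatim}
# Minimal sketch of ElGamal's multiplicative homomorphism (toy parameters,
# demo only; the paper's full scheme and NIZK proof are not shown).
import random

p = 467                              # small prime, demo only
g = 2                                # generator of a subgroup of Z_p^*
x = random.randrange(2, p - 1)       # secret key
h = pow(g, x, p)                     # public key

def enc(m):
    r = random.randrange(2, p - 1)
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def dec(c):
    c1, c2 = c
    return (c2 * pow(c1, p - 1 - x, p)) % p      # c2 / c1^x

c_a, c_b = enc(5), enc(7)
c_prod = ((c_a[0] * c_b[0]) % p, (c_a[1] * c_b[1]) % p)  # component-wise product
assert dec(c_prod) == (5 * 7) % p                        # encrypts the product 35
print("homomorphic product decrypts to", dec(c_prod))
\end{verbatim}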
Quantitative risk assessment is a critical first step in risk management and assured design of networked computer systems. It is challenging to evaluate the marginal probabilities of target states/conditions when using a probabilistic attack graph to represent all possible attack paths and the probabilistic cause-consequence relations among nodes. The brute-force approach has exponential complexity, and the belief propagation method gives only an approximation when the corresponding factor graph has cycles. To improve the approximation accuracy, a region-based method is adopted, which clusters some highly dependent nodes into regions and passes messages among regions. Experiments are conducted to compare the performance of the different methods.
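The sketch below illustrates the brute-force computation of a marginal compromise probability on a tiny probabilistic attack graph; the graph, the AND-semantics over parents and all probabilities are assumptions made for illustration, not the paper's model.

\begin{verbatim}
# Minimal sketch of the brute-force (exponential) marginal computation on a
# tiny probabilistic attack graph. Graph, AND-semantics and probabilities are
# illustrative assumptions.
from itertools import product

# node: (parents, probability of compromise given all parents are compromised)
graph = {
    "internet_access": ((), 1.0),
    "phishing_foothold": (("internet_access",), 0.4),
    "vpn_exploit": (("internet_access",), 0.2),
    "lateral_movement": (("phishing_foothold",), 0.6),
    "db_breach": (("lateral_movement", "vpn_exploit"), 0.8),  # needs both parents
}
nodes = list(graph)

def joint(assign):
    p = 1.0
    for n in nodes:
        parents, q = graph[n]
        cond = q if all(assign[par] for par in parents) else 0.0
        p *= cond if assign[n] else 1.0 - cond
    return p

marginal = sum(joint(dict(zip(nodes, bits)))
               for bits in product([False, True], repeat=len(nodes))
               if dict(zip(nodes, bits))["db_breach"])
print(f"P(db_breach) = {marginal:.4f}")   # 2^n terms: motivates message passing
\end{verbatim}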
There are vast amounts of information in our world. Accessing the most accurate information in a speedy way is becoming more difficult and complicated. A lot of relevant information gets ignored, which leads to much duplication of work and effort. The focus therefore tends to be on providing rapid and intelligent retrieval systems. Information retrieval (IR) is the process of searching for information related to some topic of interest. Due to the massive number of search results, the user will normally have difficulty in identifying the relevant ones. To alleviate this problem, a recommendation system is used. A recommendation system is a kind of information filtering system, which predicts the relevance of retrieved information to the user's needs according to some criteria. Hence, it can provide the user with the results that best fit their needs. Services provided through the web normally return massive amounts of information about any requested item or service, and an efficient recommendation system is required to classify these results. A recommendation system can be further improved if augmented with trust information, that is, if recommendations are ranked according to their level of trust. In our research, we produced a recommendation system combined with an efficient level-of-trust system to guarantee that the posts, comments and feedback from users are trusted. We customized the concept of LoT (Level of Trust) [1], since it can cover medical, shopping and learning through social media. The proposed system TRS\_LoT provides trusted recommendations to the users with a high percentage of accuracy. A dataset of 300 posts with more than 5000 comments from ``Amazon'' was selected, and the experiment was conducted on this dataset based on ``post rating''.
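A minimal sketch of ranking retrieved posts by combining relevance with a level-of-trust score, in the spirit of the trust-aware ranking described above, is given below; the scores and the weighting are assumptions, not the TRS\_LoT algorithm itself.

\begin{verbatim}
# Minimal sketch: rank posts by a weighted mix of relevance and trust.
# Scores and the ALPHA weight are illustrative assumptions.
posts = [
    {"id": 1, "relevance": 0.9, "trust": 0.3},
    {"id": 2, "relevance": 0.7, "trust": 0.9},
    {"id": 3, "relevance": 0.8, "trust": 0.6},
]
ALPHA = 0.5   # assumed weight balancing relevance against trust
ranked = sorted(posts,
                key=lambda p: ALPHA * p["relevance"] + (1 - ALPHA) * p["trust"],
                reverse=True)
print([p["id"] for p in ranked])   # -> [2, 3, 1]
\end{verbatim}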
This paper offers a new approach to modelling the effect of cyber-attacks on reliability of software used in industrial control applications. The model is based on the view that successful cyber-attacks introduce failure regions, which are not present in non-compromised software. The model is then extended to cover a fault tolerant architecture, such as the 1-out-of-2 software, popular for building industrial protection systems. The model is used to study the effectiveness of software maintenance policies such as patching and "cleansing" ("proactive recovery") under different adversary models ranging from independent attacks to sophisticated synchronized attacks on the channels. We demonstrate that the effect of attacks on reliability of diverse software significantly depends on the adversary model. Under synchronized attacks system reliability may be more than an order of magnitude worse than under independent attacks on the channels. These findings, although not surprising, highlight the importance of using an adequate adversary model in the assessment of how effective various cyber-security controls are.
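As a simplified illustration of why the adversary model matters (this is not the paper's full model): if the two channels of a 1-out-of-2 system are each successfully compromised with probability \(q_1\) and \(q_2\) over some period, then independent attacks give a system failure probability of roughly
\[
P_{\mathrm{fail}}^{\mathrm{indep}} \approx q_1 q_2 ,
\qquad\text{whereas}\qquad
P_{\mathrm{fail}}^{\mathrm{sync}} \lesssim \min(q_1, q_2)
\]
for fully synchronized attacks that hit both channels together. With \(q_1 = q_2 = 0.01\) this is \(10^{-4}\) versus up to \(10^{-2}\), which is consistent with the qualitative finding above that synchronized attacks can be more than an order of magnitude worse.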
Distributed storage systems and caching systems are becoming widespread, and this motivates the increasing interest in assessing their achievable performance in terms of reliability for legitimate users and security against malicious users. While the assessment of reliability benefits from the availability of well-established metrics and tools, assessing security is more challenging. The classical cryptographic approach aims at estimating the computational effort for an attacker to break the system, and ensuring that it is far above any feasible amount. This has the limitation of depending on attack algorithms and advances in computing power. The information-theoretic approach instead exploits capacity measures to achieve unconditional security against attackers, but often does not provide practical recipes to reach such a condition. We propose a mixed cryptographic/information-theoretic approach with a twofold goal: estimating the levels of information-theoretic security and defining a practical scheme able to achieve them. In order to find optimal choices of the parameters of the proposed scheme, we exploit an effective probabilistic model checker, which allows us to overcome several limitations of more conventional methods.
One of the key objectives of a distributed denial of service (DDoS) attack on the smart grid advanced metering infrastructure (AMI) is to threaten the availability of end users' metering data. This would surely disrupt the smooth operation of the grid and of third-party operators who need this data for billing and other grid control purposes. In previous work, we proposed a cloud-based OpenFlow firewall for mitigating DDoS attacks in a smart grid AMI. In this paper, the PRISM model checker is used to perform a probabilistic best- and worst-case analysis of the firewall with regard to DDoS attack success under different firewall detection probabilities ranging from 0 to 1. The results of this quantitative analysis can be useful in determining the extent to which a DDoS attack can undermine the correctness and performance of the firewall. In addition, the study can be helpful in establishing how far the firewall can be improved by applying the knowledge derived from its worst-case performance.
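The sketch below sweeps the detection probability from 0 to 1 and reports an attack success probability under a simple Bernoulli model; this model is an illustrative stand-in for the paper's PRISM model, not a reproduction of it, and the number of flows is an assumption.

\begin{verbatim}
# Minimal sketch: sweep firewall detection probability and compute attack
# success under a simple Bernoulli model (illustrative stand-in for the
# paper's PRISM model, not a reproduction of it).
FLOWS_NEEDED = 5   # assumed number of malicious flows that must evade detection

for i in range(11):
    d = i / 10                                 # detection probability 0.0 .. 1.0
    p_success = (1.0 - d) ** FLOWS_NEEDED      # all required flows slip through
    print(f"detection={d:.1f}  P(attack success)={p_success:.4f}")
\end{verbatim}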
Complex systems are prevalent in many fields such as finance, security and industry. A fundamental problem in system management is to perform diagnosis in case of system failure such that the causal anomalies, i.e., root causes, can be identified for system debugging and repair. Recently, invariant network has proven a powerful tool in characterizing complex system behaviors. In an invariant network, a node represents a system component, and an edge indicates a stable interaction between two components. Recent approaches have shown that by modeling fault propagation in the invariant network, causal anomalies can be effectively discovered. Despite their success, the existing methods have a major limitation: they typically assume there is only a single and global fault propagation in the entire network. However, in real-world large-scale complex systems, it's more common for multiple fault propagations to grow simultaneously and locally within different node clusters and jointly define the system failure status. Inspired by this key observation, we propose a two-phase framework to identify and rank causal anomalies. In the first phase, a probabilistic clustering is performed to uncover impaired node clusters in the invariant network. Then, in the second phase, a low-rank network diffusion model is designed to backtrack causal anomalies in different impaired clusters. Extensive experimental results on real-life datasets demonstrate the effectiveness of our method.
In recent years, the usage of unmanned aircraft systems (UAS) for security-related purposes has increased, ranging from military applications to different areas of civil protection. The deployment of UAS can support security forces in achieving an enhanced situational awareness. However, in order to provide useful input to a situational picture, sensor data provided by UAS has to be integrated with information about the area and objects of interest from other sources. The aim of this study is to design a high-level data fusion component combining probabilistic information processing with logical and probabilistic reasoning, to support human operators' situational awareness and improve their capabilities for making efficient and effective decisions. To this end, a fusion component based on the ISR (Intelligence, Surveillance and Reconnaissance) Analytics Architecture (ISR-AA) [1] is presented, incorporating an object-oriented world model (OOWM) for information integration, an expressive knowledge model and a reasoning component for detection of critical events. Approaches for translating the information contained in the OOWM into either an ontology for logical reasoning or a Markov logic network for probabilistic reasoning are presented.
We are witnessing a huge growth of cyber-physical systems, which are autonomous, mobile, endowed with sensing, controlled by software, and often wirelessly connected and Internet-enabled. They include factory automation systems, robotic assistants, self-driving cars, and wearable and implantable devices. Since they are increasingly used in safety- or business-critical contexts, such as invasive treatment or biometric authentication, there is an urgent need for modelling and verification technologies to support the design process, and hence improve reliability and reduce production costs. This paper gives an overview of quantitative verification and synthesis techniques developed for cyber-physical systems, summarising recent achievements and future challenges in this important field.