Biblio

Filters: Keyword is Probabilistic logic
2020-09-11
Spradling, Matthew, Allison, Mark, Tsogbadrakh, Tsenguun, Strong, Jay.  2019.  Toward Limiting Social Botnet Effectiveness while Detection Is Performed: A Probabilistic Approach. 2019 International Conference on Computational Science and Computational Intelligence (CSCI). :1388–1391.
The prevalence of social botnets has increased public distrust of social media networks. Current methods exist for detecting bot activity on Twitter, Reddit, Facebook, and other social media platforms. Most of these detection methods rely upon observing user behavior for a period of time. Unfortunately, the behavior observation period allows time for a botnet to successfully propagate one or many posts before removal. In this paper, we model the post propagation patterns of normal users and social botnets. We prove that a botnet may exploit deterministic propagation actions to elevate a post even with a small botnet population. We propose a probabilistic model which can limit the impact of social media botnets until they can be detected and removed. While our approach maintains expected results for non-coordinated activity, coordinated botnets will be detected before propagation with high probability.
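
A minimal sketch of the probabilistic-propagation idea described above (not the authors' exact model): a post is promoted with a probability that grows with the fraction of viewers who endorsed it, so a small coordinated botnet cannot deterministically force elevation. Function and parameter names are hypothetical.

```python
import random

def maybe_elevate(upvotes, viewers, base_rate=0.05):
    """Promote a post probabilistically rather than deterministically.

    The promotion probability grows with the fraction of viewers who
    upvoted, so a small botnet voting in lockstep gains little leverage
    over any single promotion decision.
    """
    support = upvotes / max(viewers, 1)
    return random.random() < min(1.0, base_rate + support)

# A botnet of 20 accounts among 10,000 viewers rarely forces propagation.
print(sum(maybe_elevate(20, 10_000) for _ in range(1_000)))  # roughly 50-70 successes
```
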
2020-08-17
Hu, Jianxing, Huo, Dongdong, Wang, Meilin, Wang, Yazhe, Zhang, Yan, Li, Yu.  2019.  A Probability Prediction Based Mutable Control-Flow Attestation Scheme on Embedded Platforms. 2019 18th IEEE International Conference On Trust, Security And Privacy In Computing And Communications/13th IEEE International Conference On Big Data Science And Engineering (TrustCom/BigDataSE). :530–537.
Control-flow attacks pose serious threats to software integrity. Remote attestation of control flow is a crucial security service for ensuring software integrity on embedded platforms. Fine-grained remote control-flow attestation with an execution-profiling Control-Flow Graph (CFG) is applied to defend against control-flow attacks. It is a secure scheme, but it can hurt runtime efficiency. In fact, we find that only the vulnerable parts of a program need to be attested at the costly fine-grained level to ensure security, while the remaining normal parts only need a lightweight coarse-grained check to reduce overhead. We propose the Mutable Granularity Control-Flow Attestation (MGC-FA) scheme, which is based on a probabilistic model, to distinguish between the vulnerable and normal parts of a program and to combine fine-grained and coarse-grained control-flow attestation. MGC-FA employs the execution-profiling CFG to apply remote control-flow attestation on embedded devices. MGC-FA is implemented on a Raspberry Pi with ARM TrustZone, and the experimental results show its effectiveness in balancing runtime efficiency and control-flow security.
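
As a rough illustration of the mutable-granularity idea (not MGC-FA's actual probabilistic model), the sketch below splits a program's functions into a fine-grained attestation set and a coarse-grained one according to an assumed per-function vulnerability probability; the names and the threshold are hypothetical.

```python
def select_granularity(vulnerability_prob, threshold=0.7):
    """Partition functions by how likely they are to be attack targets.

    Functions above the threshold get costly fine-grained (per-CFG-edge)
    attestation; the rest get a lightweight coarse-grained check.
    """
    fine = [fn for fn, p in vulnerability_prob.items() if p >= threshold]
    coarse = [fn for fn, p in vulnerability_prob.items() if p < threshold]
    return fine, coarse

scores = {"parse_packet": 0.92, "render_ui": 0.10, "handle_auth": 0.75}
print(select_granularity(scores))   # (['parse_packet', 'handle_auth'], ['render_ui'])
```
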
2020-07-24
Huo, Weiqian, Pei, Jisheng, Zhang, Ke, Ye, Xiaojun.  2014.  KP-ABE with Attribute Extension: Towards Functional Encryption Schemes Integration. 2014 Sixth International Symposium on Parallel Architectures, Algorithms and Programming. :230–237.

To allow fine-grained access control of sensitive data, researchers have proposed various types of functional encryption schemes, such as identity-based encryption, searchable encryption and attribute-based encryption. We observe that it is difficult to define some complex access policies in certain application scenarios by using these schemes individually. In this paper, we attempt to address this problem by proposing a functional encryption approach named Key-Policy Attribute-Based Encryption with Attribute Extension (KP-ABE-AE). In this approach, we utilize extended attributes to integrate various encryption schemes that support different access policies under a common top-level KP-ABE scheme, thus expanding the scope of access policies that can be defined. Theoretical analysis and experimental studies are conducted to demonstrate the applicability of the proposed KP-ABE-AE. We also present an optimization for a special application of KP-ABE-AE where IPE schemes are integrated with a KP-ABE scheme. The optimization results in an integrated scheme with better efficiency when compared to the existing encryption schemes that support the same scope of access policies.

2020-07-20
Masood, Raziqa, Pandey, Nitin, Rana, Q. P..  2017.  An approach of dredging the interconnected nodes and repudiating attacks in cloud network. 2017 4th IEEE Uttar Pradesh Section International Conference on Electrical, Computer and Electronics (UPCON). :49–53.
In a cloud computing environment, malicious nodes create serious problems for data transfer. While many models exist to protect data on the network, here we try to secure the network by avoiding malicious nodes along the communication path. The probabilistic approach we use is a coherent tool for supervising the security challenges of the cloud environment. Security is a key aspect of the quality of service offered by cloud service providers, and cloud computing faces new challenges every day that are still under investigation. This research work sheds light on cloud data transmission and security by identifying the malicious nodes involved in the communication. A cloud computing network shares a common pool of resources such as hardware, frameworks, platforms and security mechanisms; because cloud computing caches information and must deliver secure data transactions, privacy and security have become the bone of contention that hampers safe execution. To ensure the security of data in the cloud environment, we propose a method that implements white-box cryptography on the RSA algorithm and then analyses the network to find the malicious nodes that hamper communication by attacking each other. Several existing security models have already been deployed against such attacks. A probabilistic authentication and authorization approach is introduced to overcome these attacks easily; it observes corrupted nodes before they strike, with maximum probability, and a command table is used to contain the malicious nodes. A comparative study shows that the probabilistic authentication and authorization protocol performs much better than earlier ones.
2020-07-13
Grüner, Andreas, Mühle, Alexander, Meinel, Christoph.  2019.  Using Probabilistic Attribute Aggregation for Increasing Trust in Attribute Assurance. 2019 IEEE Symposium Series on Computational Intelligence (SSCI). :633–640.
Identity management is an essential cornerstone of securing online services. Service provisioning relies on correct and valid attributes of a digital identity. Therefore, the identity provider is a trusted third party with a specific trust requirement towards a verified attribute supply. This trust demand implies a significant dependency on users and service providers. We propose a novel attribute aggregation method to reduce the reliance on a single identity provider. Trust in an attribute is modelled as the combined assurance of several identity providers, based on probability distributions. We formally describe the proposed aggregation model. The resulting trust model is implemented in a gateway that is used for authentication with self-sovereign identity solutions. Thereby, we devise a service-provider-specific web of trust that constitutes an intermediate approach, bridging a global hierarchical model and a locally decentralized peer-to-peer scheme.
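
The core intuition, combining assurance from several identity providers instead of trusting one, can be illustrated under a simple independence assumption (the paper's probability distributions are richer than this sketch):

```python
def aggregated_assurance(provider_confidences):
    """Probability that at least one of several independent identity
    providers has correctly verified the attribute."""
    p_all_wrong = 1.0
    for p in provider_confidences:
        p_all_wrong *= 1.0 - p
    return 1.0 - p_all_wrong

# Three mid-confidence providers together exceed any single one of them.
print(aggregated_assurance([0.90, 0.80, 0.70]))   # 0.994
```
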
2020-07-10
Podlesny, Nikolai J., Kayem, Anne V.D.M., Meinel, Christoph.  2019.  Identifying Data Exposure Across Distributed High-Dimensional Health Data Silos through Bayesian Networks Optimised by Multigrid and Manifold. 2019 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :556–563.

We present a novel, use-case-agnostic method for identifying and circumventing private data exposure across distributed, high-dimensional data repositories. Examples of such repositories include medical research and treatment data, where often more than 300 describing attributes appear. Providing strong guarantees of data anonymity in these repositories is therefore a hard constraint in adhering to privacy legislation. Yet, when applied to distributed high-dimensional data, existing anonymisation algorithms incur high levels of information loss and do not guarantee privacy, defeating the purpose of anonymisation. In this paper, we address this issue by using Bayesian networks to handle data transformation for anonymisation. By evaluating every attribute combination to determine the privacy exposure risk, the conditional probability linking attribute pairs is computed. Pairs with a high conditional probability expose a risk of de-anonymisation similar to quasi-identifiers and can be separated instead of deleted, as in previous algorithms. Attribute separation removes the risk of privacy exposure, and avoiding deletion results in a significant reduction in information loss. In other words, assimilating the conditional probability of outliers directly into the adjacency matrix in a greedy fashion is quick and thwarts de-anonymisation. Since identifying every privacy-violating attribute combination is a W[2]-complete problem, we optimise the procedure with a multigrid solver method by evaluating the conditional probabilities between attribute pairs, and contain the state-space explosion of attribute pairs through manifold learning. Finally, incremental processing of new data is achieved through inexpensive, continuous (delta) learning.
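
The following sketch shows the basic computation the abstract describes, estimating the conditional probability linking attribute pairs from records and flagging pairs that behave like quasi-identifiers; it is a simplification (no multigrid solver or manifold learning), and all names and thresholds are hypothetical.

```python
from collections import Counter
from itertools import combinations

def risky_attribute_pairs(records, threshold=0.9):
    """Flag attribute pairs (a, b) for which some observed value of a
    almost determines the value of b, i.e. max P(b-value | a-value)
    exceeds the threshold."""
    risky = []
    for a, b in combinations(records[0].keys(), 2):
        joint = Counter((r[a], r[b]) for r in records)
        marginal = Counter(r[a] for r in records)
        p_max = max(joint[(va, vb)] / marginal[va] for va, vb in joint)
        if p_max >= threshold:
            risky.append((a, b, p_max))
    return risky

records = [
    {"zip": "12345", "diagnosis": "flu", "age_group": "30-39"},
    {"zip": "12345", "diagnosis": "flu", "age_group": "40-49"},
    {"zip": "67890", "diagnosis": "asthma", "age_group": "30-39"},
]
print(risky_attribute_pairs(records))   # with only three records every pair looks risky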

2020-04-17
Alim, Adil, Zhao, Xujiang, Cho, Jin-Hee, Chen, Feng.  2019.  Uncertainty-Aware Opinion Inference Under Adversarial Attacks. 2019 IEEE International Conference on Big Data (Big Data). :6–15.

Inferring unknown opinions from uncertain, adversarial (e.g., incorrect or conflicting) evidence in large datasets is not a trivial task. Without proper handling, it can easily mislead decision making in data mining tasks. In this work, we propose a highly scalable probabilistic opinion inference model, Adversarial Collective Opinion Inference (Adv-COI), which infers unknown opinions with high scalability and robustness in the presence of uncertain, adversarial evidence by enhancing Collective Subjective Logic (CSL), itself developed by combining Subjective Logic (SL) and Probabilistic Soft Logic (PSL). The key idea behind Adv-COI is to learn a model that is robust against uncertain, adversarial evidence by formulating learning as a min-max problem. We validate the performance of Adv-COI against baseline models and competitive counterparts under possible adversarial attacks on logic-rule-based structured data, and under white-box and black-box adversarial attacks, on both clean and perturbed semi-synthetic and real-world datasets in three real-world applications. The results show that Adv-COI yields the lowest mean absolute error in the expected truth probability while requiring the lowest running time among all methods.

2020-03-09
Tun, Hein, Lupin, Sergey, Than, Ba Hla, Nay Zaw Linn, Kyaw, Khaing, Min Thu.  2019.  Estimation of Information System Security Using Hybrid Simulation in AnyLogic. 2019 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus). :1829–1834.
Nowadays, the role of information systems in our lives has greatly increased, and our dependence on them has become one of the biggest challenges for citizens, organizations and governments. Every single day we become more and more dependent on information and communication technology (ICT). A major goal of information security is to find the best ways to mitigate the risks. Context-role and perimeter-protection approaches can reduce and prevent unauthorized penetration into protected zones and the information systems inside those zones. The results of this work can be useful for organizations in analysing and optimizing their security systems.
Gregory, Jason M., Al-Hussaini, Sarah, Gupta, Satyandra K..  2019.  Heuristics-Based Multi-Agent Task Allocation for Resilient Operations. 2019 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR). :1–8.
Multi-Agent Task Allocation is a prerequisite for many autonomous, real-world systems because of the need for intelligent task assignment amongst a team for maximum efficiency. Similarly, agent failure, task failure, and a lack of state information are inherent challenges when operating in complex environments. Many existing solutions make simplifying assumptions when modeling these factors, e.g., Markovian state information. However, it is not clear that this is always the appropriate approach or that results from these approaches are necessarily representative of performance in the natural world. In this work, we demonstrate that there exists a class of problems for which non-Markovian state modeling is beneficial. Furthermore, we present and characterize a novel heuristic for task allocation that incorporates realistic state and uncertainty modeling in order to improve performance. Our quantitative analysis, tested in a simulated search and rescue (SAR) mission, shows a decrease in performance of more than 57% when a representative method with Markovian assumptions is tested in a non-Markovian setting. Our novel heuristic improves performance by 3-15% in the same non-Markovian setting by modeling probabilistic failure and making fewer assumptions.
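
A toy illustration of task allocation under probabilistic failure (not the paper's heuristic): each agent-task pair carries a success probability, and tasks are assigned greedily by expected reward. All names and numbers are made up.

```python
def allocate(task_rewards, agents, success_prob):
    """Greedily assign each task (highest reward first) to the free agent
    whose expected reward (success probability times task reward) is largest."""
    assignment, free_agents = {}, set(agents)
    for task, reward in sorted(task_rewards.items(), key=lambda kv: -kv[1]):
        if not free_agents:
            break
        best = max(free_agents, key=lambda a: success_prob[(a, task)] * reward)
        assignment[task] = best
        free_agents.remove(best)
    return assignment

task_rewards = {"search_building_A": 10.0, "search_building_B": 6.0}
agents = ["uav_1", "ugv_1"]
success_prob = {("uav_1", "search_building_A"): 0.9, ("ugv_1", "search_building_A"): 0.6,
                ("uav_1", "search_building_B"): 0.5, ("ugv_1", "search_building_B"): 0.8}
print(allocate(task_rewards, agents, success_prob))
# {'search_building_A': 'uav_1', 'search_building_B': 'ugv_1'}
```
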
2020-03-04
Voronych, Artur, Nyckolaychuk, Lyubov, Vozna, Nataliia, Pastukh, Taras.  2019.  Methods and Special Processors of Entropy Signal Processing. 2019 IEEE 15th International Conference on the Experience of Designing and Application of CAD Systems (CADSM). :1–4.

This article analyses applied tasks and methods of entropy signal processing. Theoretical comments are given on specific schemes of special processors for determining probabilistic and correlation activity. The prospects for applying C. Shannon's probabilistic entropy in cipher signal receivers are reviewed. Examples of entropy-manipulated signals and the system characteristics of the proposed special processors are given.
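
Since the article centers on C. Shannon's probabilistic entropy, a minimal computation of the empirical entropy of a discrete signal may help fix ideas (illustrative only, not the authors' processor design):

```python
import math
from collections import Counter

def shannon_entropy(samples):
    """Empirical Shannon entropy H = -sum(p_i * log2(p_i)) of a discrete signal."""
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in Counter(samples).values())

print(shannon_entropy([0, 1, 1, 0, 1, 0, 0, 1]))   # 1.0 bit for a balanced binary signal
print(shannon_entropy([0, 0, 0, 0, 0, 0, 0, 1]))   # about 0.544 bits
```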

2020-02-26
Padmanaban, R., Thirumaran, M., Sanjana, Victoria, Moshika, A..  2019.  Security Analytics For Heterogeneous Web. 2019 IEEE International Conference on System, Computation, Automation and Networking (ICSCAN). :1–6.

In recent years, enterprises have been expanding their business efficiently through web applications, which has paved the way for building good relationships with their customers. The major threat faced by these enterprises is their inability to provide secure environments, as web applications are prone to severe vulnerabilities. As a result, many security standards and tools have evolved to handle these vulnerabilities. Although many vulnerability detection tools are available today, they do not provide sufficient information about the attack. For the long-term functioning of an organization, data along with efficient analytics on the vulnerabilities is required to enhance its reliability. The proposed model thus aims to use machine learning together with analytics to solve this problem. The sequence of the attack is detected from its pattern using PAA, and the detected vulnerabilities are then classified using a machine learning technique such as SVM. Probabilistic results are provided in order to obtain numerical data sets that can be used to report on user and application behavior. Analysing the vulnerabilities, and their impact, in a heterogeneous web environment with a dynamic and reconfigurable PAA and an SVM classifier is a challenging task; it enhances the former processing by analysing the origin and pattern of the attack more effectively. Hence, the proposed system is designed to detect attacks and to address mitigation and prevention as part of attack prediction.
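
A minimal sketch of the classification step described above, using scikit-learn's SVM with Platt-scaled probability estimates; the feature vectors, labels and the upstream PAA pattern extraction are all assumed and hypothetical.

```python
from sklearn.svm import SVC

# Hypothetical feature vectors extracted from detected attack patterns.
X_train = [[0.10, 0.90, 3], [0.15, 0.85, 4], [0.20, 0.80, 2], [0.12, 0.88, 5], [0.18, 0.75, 3],
           [0.85, 0.10, 1], [0.90, 0.15, 0], [0.80, 0.20, 1], [0.88, 0.12, 2], [0.75, 0.25, 0]]
y_train = ["xss"] * 5 + ["sqli"] * 5

clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
print(clf.classes_)
print(clf.predict_proba([[0.16, 0.82, 3]]))   # probabilistic class membership
```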

2020-01-20
Sun, Xiaoyan, Dai, Jun, Liu, Peng, Singhal, Anoop, Yen, John.  2016.  Towards probabilistic identification of zero-day attack paths. 2016 IEEE Conference on Communications and Network Security (CNS). :64–72.
Zero-day attacks continue to challenge enterprise network security defenses. A zero-day attack path is formed when a multi-step attack contains one or more zero-day exploits. Detecting zero-day attack paths in time could enable early disclosure of zero-day threats. In this paper, we propose a probabilistic approach to identify zero-day attack paths and implement a prototype system named ZePro. An object instance graph is first built from system calls to capture the intrusion propagation. To further reveal the zero-day attack paths hiding in the instance graph, our system constructs an instance-graph-based Bayesian network. By leveraging intrusion evidence, the Bayesian network can quantitatively compute the probabilities of object instances being infected. The object instances with high infection probabilities reveal themselves and form the zero-day attack paths. The experimental results show that our system can effectively identify zero-day attack paths.
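
A much-simplified sketch of the quantitative step, propagating infection probability through an object instance graph; ZePro builds a full Bayesian network, whereas this toy uses a noisy-OR combination over parents listed in dependency order. The names and the transmission probability are assumptions.

```python
def infection_probabilities(nodes_in_order, parents, evidence, p_transmit=0.9):
    """Noisy-OR propagation: a node stays clean only if every infected parent
    fails to transmit. Nodes must be listed with parents before children."""
    prob = dict(evidence)                       # observed infections: probability 1.0
    for node in nodes_in_order:
        if node in prob:
            continue
        p_clean = 1.0
        for parent in parents.get(node, []):
            p_clean *= 1.0 - p_transmit * prob.get(parent, 0.0)
        prob[node] = 1.0 - p_clean
    return prob

parents = {"shell_process": ["dropped_file"], "passwd_file": ["shell_process"]}
print(infection_probabilities(
    ["dropped_file", "shell_process", "passwd_file"],
    parents, evidence={"dropped_file": 1.0}))
# {'dropped_file': 1.0, 'shell_process': 0.9, 'passwd_file': 0.81...}
```
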
Myzdrikov, Nikita Ye., Semeonov, Ivan Ye., Yukhnov, Vasiliy I., Safaryan, Olga A., Reshetnikova, Irina V., Lobodenko, Andrey G., Cherckesova, Larissa V., Porksheyan, Vitaliy M..  2019.  Modification and Optimization of Solovey-Strassen's Fast Exponentiation Probablistic Test Binary Algorithm. 2019 IEEE East-West Design Test Symposium (EWDTS). :1–3.

This article considers the Solovey-Strassen probabilistic test for determining whether a number is prime, together with its possible modifications. The test determines in the shortest possible time whether a number is prime or not. The C# programming language was used to implement the algorithm in practice.
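
For reference, a compact Python sketch of the (unmodified) Solovey-Strassen test, which checks whether a^((n-1)/2) ≡ (a/n) mod n for random bases a, where (a/n) is the Jacobi symbol; the paper's implementation is in C# and may differ in details.

```python
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n, rounds=20):
    """Return False if n is certainly composite, True if probably prime
    (error probability at most 2**-rounds)."""
    if n in (2, 3):
        return True
    if n < 2 or n % 2 == 0:
        return False
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = jacobi(a, n)
        if x == 0 or pow(a, (n - 1) // 2, n) != x % n:
            return False
    return True

print(solovay_strassen(561), solovay_strassen(104729))   # Carmichael number vs a prime
```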

2019-03-11
Raj, R. V., Balasubramanian, K., Nandhini, T..  2018.  Establishing Trust by Detecting Malicious Nodes in Delay Tolerant Network. 2018 2nd International Conference on Trends in Electronics and Informatics (ICOEI). :1385–1390.
A network consists of many nodes, among which misbehaving nodes may be present. A Delay Tolerant Network (DTN) is a network in which disconnections occur frequently; the store, carry and forward method is followed in a DTN. A serious threat to routing in a DTN is selfish behavior: the main intention of a selfish node is to save its own energy, and detecting selfish nodes in a DTN is very difficult. In this paper, a probabilistic misbehavior detection scheme called MAXTRUST is proposed. A Trusted Authority (TA) is introduced to detect the behavior of the nodes periodically based on task, forwarding-history and contact-history evidence. After collecting all the evidence from the nodes, the TA checks each inspected node's behavior, and actions such as punishment or compensation are applied to that node accordingly. The TA performs probabilistic checking in order to ensure security at a reduced cost. To further improve efficiency, dynamic probabilistic inspection is demonstrated using game-theoretic analysis. The simulation results show the effectiveness and efficiency of the MAXTRUST scheme.
2018-11-19
Rabie, R., Drissi, M..  2018.  Applying Sigmoid Filter for Detecting the Low-Rate Denial of Service Attacks. 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC). :450–456.

This paper focuses on optimizing a sigmoid filter for detecting low-rate DoS attacks. Although the sigmoid filter can help detect the attacker, it can severely affect network efficiency. Unlike high-rate attacks, low-rate DoS attacks such as "Shrew" and "New Shrew" are hard to detect. Attackers choose a malicious low-rate bandwidth to exploit TCP's congestion-control window algorithm and the retransmission timeout mechanism. We simulated the attacker traffic using NS-3. The sigmoid filter was used to create a threshold bandwidth filter at the router that allowed a specific bandwidth, so traffic that exceeded the threshold would be dropped or redirected to a honeypot server instead. We simulated the sigmoid filter in MATLAB, taking the attacker's and legitimate users' traffic generated by NS-3 as its input. We ran the experiment three times with different threshold values correlated with the TCP packet size. The probability of detecting the attacker traffic was 25%, 50% and 60% in the three runs, respectively. However, we also observed drops in legitimate user traffic with probabilities of 75%, 50% and 85%, respectively.
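
A minimal sketch of a sigmoid-shaped drop decision around a threshold bandwidth (parameter names and values are hypothetical; the paper's MATLAB filter and NS-3 setup are not reproduced here):

```python
import math

def drop_probability(rate_bps, threshold_bps, steepness=1e-3):
    """Probability of dropping (or redirecting to a honeypot) a flow,
    rising smoothly from ~0 to ~1 as its rate crosses the threshold."""
    return 1.0 / (1.0 + math.exp(-steepness * (rate_bps - threshold_bps)))

threshold = 12_000                                  # e.g. tied to the TCP packet size
print(drop_probability(2_000, threshold))           # legitimate low-rate flow: ~0.000045
print(drop_probability(20_000, threshold))          # burst above the threshold: ~0.9997
```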

2018-09-28
Jiang, H., Xu, Q., Liu, C., Liu, Z..  2017.  An Efficient CPA-Secure Encryption Scheme with Equality Test. 2017 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC). 2:38–45.

In this paper, we propose a CPA-secure encryption scheme with equality test. Unlike other public-key solutions, in our scheme only the data owner can encrypt the message and obtain the comparable ciphertext, and only a tester holding the token can perform the equality test. Our encryption scheme is based on the multiplicative homomorphism of ElGamal encryption and a non-interactive zero-knowledge proof of discrete log. We prove that the proposed scheme is OW-CPA secure against an adversary who holds the equality-test token, and IND-CPA secure against an adversary who cannot test equality. The proposed scheme is only meant to compare two ciphertexts encrypted by the same user; although this makes it less flexible, it is efficient and better suited to the data outsourcing scenario.
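
A toy demonstration of the multiplicative homomorphism of ElGamal that the scheme builds on (only the homomorphism, not the equality-test protocol or the zero-knowledge proof; the tiny parameters are for illustration only):

```python
import random

p = 467   # small prime for illustration; real deployments use large groups
g = 2

def keygen():
    x = random.randrange(2, p - 1)          # secret key
    return x, pow(g, x, p)                  # (sk, pk)

def encrypt(pk, m):
    r = random.randrange(2, p - 1)
    return pow(g, r, p), (m * pow(pk, r, p)) % p

def decrypt(sk, c):
    c1, c2 = c
    return (c2 * pow(c1, p - 1 - sk, p)) % p    # c2 * c1^(-sk) mod p

sk, pk = keygen()
ca, cb = encrypt(pk, 42), encrypt(pk, 5)
product = (ca[0] * cb[0] % p, ca[1] * cb[1] % p)  # componentwise ciphertext product
assert decrypt(sk, product) == (42 * 5) % p       # decrypts to the product of plaintexts
print("homomorphism holds")
```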

2018-09-05
Pasareanu, C..  2017.  Symbolic execution and probabilistic reasoning. 2017 32nd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS). :1–1.
Summary form only given. Symbolic execution is a systematic program analysis technique which explores multiple program behaviors all at once by collecting and solving symbolic path conditions over program paths. The technique has been recently extended with probabilistic reasoning. This approach computes the conditions to reach target program events of interest and uses model counting to quantify the fraction of the input domain satisfying these conditions thus computing the probability of event occurrence. This probabilistic information can be used for example to compute the reliability of an aircraft controller under different wind conditions (modeled probabilistically) or to quantify the leakage of sensitive data in a software system, using information theory metrics such as Shannon entropy. In this talk we review recent advances in symbolic execution and probabilistic reasoning and we discuss how they can be used to ensure the safety and security of software systems.
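
The quantification step can be pictured with a small sketch: the probability of reaching a target event is the measure of inputs satisfying its path condition. Below, a Monte Carlo estimate stands in for exact model counting; the "wind" condition is a made-up example echoing the aircraft-controller scenario.

```python
import random

def estimate_event_probability(path_condition, sample_input, trials=100_000):
    """Estimate P(event) as the fraction of sampled inputs that satisfy the
    path condition leading to the event (exact tools use model counting)."""
    hits = sum(path_condition(*sample_input()) for _ in range(trials))
    return hits / trials

# Hypothetical path condition for an unsafe branch of a controller.
sample_input = lambda: (random.uniform(0, 60), random.uniform(0, 10))  # (gust, heading error)
unsafe = lambda gust, err: gust > 40 and err > 5

print(estimate_event_probability(unsafe, sample_input))   # analytically (1/3)*(1/2) = 1/6
```
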
2018-04-02
Cheng, Q., Kwiat, K., Kamhoua, C. A., Njilla, L..  2017.  Attack Graph Based Network Risk Assessment: Exact Inference vs Region-Based Approximation. 2017 IEEE 18th International Symposium on High Assurance Systems Engineering (HASE). :84–87.

Quantitative risk assessment is a critical first step in risk management and the assured design of networked computer systems. It is challenging to evaluate the marginal probabilities of target states/conditions when using a probabilistic attack graph to represent all possible attack paths and the probabilistic cause-consequence relations among nodes. The brute-force approach has exponential complexity, and the belief propagation method gives only an approximation when the corresponding factor graph has cycles. To improve the approximation accuracy, a region-based method is adopted, which clusters highly dependent nodes into regions and passes messages among regions. Experiments are conducted to compare the performance of the different methods.
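
A tiny worked example of the exact (brute-force) inference being compared: enumerate every joint state of a two-node attack graph and sum the probability of the target condition. The numbers are arbitrary.

```python
from itertools import product

p_foothold = 0.3            # P(attacker gains a foothold)
p_esc_given_f = 0.8         # P(privilege escalation | foothold)
p_esc_given_not_f = 0.05    # P(privilege escalation | no foothold)

def marginal_escalation():
    """Exact marginal P(escalation) by enumerating all joint states."""
    total = 0.0
    for foothold, escalation in product([True, False], repeat=2):
        pf = p_foothold if foothold else 1 - p_foothold
        pe = p_esc_given_f if foothold else p_esc_given_not_f
        pe = pe if escalation else 1 - pe
        if escalation:
            total += pf * pe
    return total

print(marginal_escalation())   # 0.3*0.8 + 0.7*0.05 = 0.275
```

Enumerating joint states in this way is exponential in the number of nodes, which is exactly why the paper contrasts it with region-based approximation.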

2018-03-26
Abuein, Q., Shatnawi, A., Al-Sheyab, H..  2017.  Trusted Recomendation System Based on Level of Trust(TRS_LoT). 2017 International Conference on Engineering and Technology (ICET). :1–5.

There are vast amounts of information in our world, and accessing the most accurate information quickly is becoming more difficult and complicated. A lot of relevant information gets ignored, which leads to much duplication of work and effort. The focus therefore tends to be on providing rapid and intelligent retrieval systems. Information retrieval (IR) is the process of searching for information related to some topic of interest. Due to the massive number of search results, the user will normally have difficulty identifying the relevant ones. To alleviate this problem, a recommendation system is used. A recommendation system is a kind of information filtering system, which predicts the relevance of retrieved information to the user's needs according to some criteria; hence, it can provide the user with the results that best fit their needs. Services provided through the web normally return massive amounts of information about any requested item or service, and an efficient recommendation system is required to classify these results. A recommendation system can be further improved if augmented with level-of-trust information, that is, if recommendations are ranked according to their level of trust. In our research, we produced a recommendation system combined with an efficient level-of-trust system to guarantee that the posts, comments and feedback from users are trusted. We customized the concept of LoT (Level of Trust) [1], since it can cover medical, shopping and learning through social media. The proposed system, TRS_LoT, provides trusted recommendations to users with a high percentage of accuracy. A dataset of 300 posts with more than 5000 comments from "Amazon" was selected, and the experiment was conducted on this dataset based on post rating.

2018-03-19
Popov, P..  2017.  Models of Reliability of Fault-Tolerant Software Under Cyber-Attacks. 2017 IEEE 28th International Symposium on Software Reliability Engineering (ISSRE). :228–239.

This paper offers a new approach to modelling the effect of cyber-attacks on reliability of software used in industrial control applications. The model is based on the view that successful cyber-attacks introduce failure regions, which are not present in non-compromised software. The model is then extended to cover a fault tolerant architecture, such as the 1-out-of-2 software, popular for building industrial protection systems. The model is used to study the effectiveness of software maintenance policies such as patching and "cleansing" ("proactive recovery") under different adversary models ranging from independent attacks to sophisticated synchronized attacks on the channels. We demonstrate that the effect of attacks on reliability of diverse software significantly depends on the adversary model. Under synchronized attacks system reliability may be more than an order of magnitude worse than under independent attacks on the channels. These findings, although not surprising, highlight the importance of using an adequate adversary model in the assessment of how effective various cyber-security controls are.
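
A back-of-the-envelope illustration of why the adversary model matters for a 1-out-of-2 architecture (the system fails only when both channels fail); the probabilities are invented and the paper's model is considerably richer.

```python
p_a = 0.01   # assumed probability channel A is compromised in a given period
p_b = 0.01   # assumed probability channel B is compromised in the same period

p_fail_independent  = p_a * p_b       # attacks on the two channels are independent
p_fail_synchronized = min(p_a, p_b)   # one campaign hits both channels together (upper bound)

print(p_fail_independent)    # 1e-4
print(p_fail_synchronized)   # 1e-2: two orders of magnitude worse
```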

2018-03-05
Baldi, M., Chiaraluce, F., Senigagliesi, L., Spalazzi, L., Spegni, F..  2017.  Security in Heterogeneous Distributed Storage Systems: A Practically Achievable Information-Theoretic Approach. 2017 IEEE Symposium on Computers and Communications (ISCC). :1021–1028.

Distributed storage systems and caching systems are becoming widespread, and this motivates the increasing interest in assessing their achievable performance in terms of reliability for legitimate users and security against malicious users. While the assessment of reliability benefits from the availability of well-established metrics and tools, assessing security is more challenging. The classical cryptographic approach aims at estimating the computational effort for an attacker to break the system, and ensuring that it is far above any feasible amount. This has the limitation of depending on attack algorithms and advances in computing power. The information-theoretic approach instead exploits capacity measures to achieve unconditional security against attackers, but often does not provide practical recipes to reach such a condition. We propose a mixed cryptographic/information-theoretic approach with a twofold goal: estimating the levels of information-theoretic security and defining a practical scheme able to achieve them. In order to find optimal choices of the parameters of the proposed scheme, we exploit an effective probabilistic model checker, which allows us to overcome several limitations of more conventional methods.

2018-02-21
Diovu, R. C., Agee, J. T..  2017.  Quantitative analysis of firewall security under DDoS attacks in smart grid AMI networks. 2017 IEEE 3rd International Conference on Electro-Technology for National Development (NIGERCON). :696–701.

One of the key objectives of a distributed denial-of-service (DDoS) attack on the smart grid advanced metering infrastructure is to threaten the availability of end users' metering data. This will surely disrupt the smooth operation of the grid and of the third-party operators who need this data for billing and other grid control purposes. In previous work, we proposed a cloud-based OpenFlow firewall for mitigating DDoS attacks in a smart grid AMI. In this paper, the PRISM model checker is used to perform a probabilistic best- and worst-case analysis of the firewall with regard to DDoS attack success under different firewall detection probabilities ranging from 0 to 1. The results of this quantitative analysis can be useful in determining the extent to which a DDoS attack can undermine the correctness and performance of the firewall. In addition, the study can help establish how far the firewall can be improved by applying the knowledge derived from its worst-case performance.
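
The flavour of the quantitative sweep can be reproduced with a toy calculation (not the PRISM model itself): if the firewall detects each of n attack flows independently with probability p, the chance that at least one flow gets through is 1 - p^n.

```python
def attack_success_probability(p_detect, n_flows):
    """Probability that at least one of n independent attack flows evades detection."""
    return 1.0 - p_detect ** n_flows

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"p_detect={p:.2f} -> success={attack_success_probability(p, n_flows=10):.6f}")
```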

2018-02-15
Ni, J., Cheng, W., Zhang, K., Song, D., Yan, T., Chen, H., Zhang, X..  2017.  Ranking Causal Anomalies by Modeling Local Propagations on Networked Systems. 2017 IEEE International Conference on Data Mining (ICDM). :1003–1008.

Complex systems are prevalent in many fields such as finance, security and industry. A fundamental problem in system management is to perform diagnosis in case of system failure so that the causal anomalies, i.e., root causes, can be identified for system debugging and repair. Recently, the invariant network has proven to be a powerful tool for characterizing complex system behaviors. In an invariant network, a node represents a system component, and an edge indicates a stable interaction between two components. Recent approaches have shown that by modeling fault propagation in the invariant network, causal anomalies can be effectively discovered. Despite their success, the existing methods have a major limitation: they typically assume there is only a single, global fault propagation in the entire network. However, in real-world large-scale complex systems, it is more common for multiple fault propagations to grow simultaneously and locally within different node clusters and to jointly define the system failure status. Inspired by this key observation, we propose a two-phase framework to identify and rank causal anomalies. In the first phase, probabilistic clustering is performed to uncover impaired node clusters in the invariant network. Then, in the second phase, a low-rank network diffusion model is designed to backtrack causal anomalies in the different impaired clusters. Extensive experimental results on real-life datasets demonstrate the effectiveness of our method.

2018-02-06
Müller, W., Kuwertz, A., Mühlenberg, D., Sander, J..  2017.  Semantic Information Fusion to Enhance Situational Awareness in Surveillance Scenarios. 2017 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI). :397–402.

In recent years, the usage of unmanned aircraft systems (UAS) for security-related purposes has increased, ranging from military applications to different areas of civil protection. The deployment of UAS can support security forces in achieving an enhanced situational awareness. However, in order to provide useful input to a situational picture, sensor data provided by UAS has to be integrated with information about the area and objects of interest from other sources. The aim of this study is to design a high-level data fusion component combining probabilistic information processing with logical and probabilistic reasoning, to support human operators in their situational awareness and improve their ability to make efficient and effective decisions. To this end, a fusion component based on the ISR (Intelligence, Surveillance and Reconnaissance) Analytics Architecture (ISR-AA) [1] is presented, incorporating an object-oriented world model (OOWM) for information integration, an expressive knowledge model and a reasoning component for the detection of critical events. Approaches for translating the information contained in the OOWM into either an ontology for logical reasoning or a Markov logic network for probabilistic reasoning are presented.

2017-12-28
Kwiatkowska, M..  2016.  Advances and challenges of quantitative verification and synthesis for cyber-physical systems. 2016 Science of Security for Cyber-Physical Systems Workshop (SOSCYPS). :1–5.

We are witnessing a huge growth of cyber-physical systems, which are autonomous, mobile, endowed with sensing, controlled by software, and often wirelessly connected and Internet-enabled. They include factory automation systems, robotic assistants, self-driving cars, and wearable and implantable devices. Since they are increasingly often used in safety- or business-critical contexts, such as invasive treatment or biometric authentication, there is an urgent need for modelling and verification technologies to support the design process, and hence improve reliability and reduce production costs. This paper gives an overview of quantitative verification and synthesis techniques developed for cyber-physical systems, summarising recent achievements and future challenges in this important field.