Biblio

Filters: Keyword is Manifolds
2022-04-18
Bonatti, Piero A., Sauro, Luigi, Langens, Jonathan.  2021.  Representing Consent and Policies for Compliance. 2021 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW). :283–291.
Being compliant with the GDPR (and data protection regulations in general) is a difficult task that calls for manifold, computer-based automated support. In this context, several use cases related to the management and enforcement of privacy policies and consent call for a machine-understandable policy language, equipped with reliable algorithms for compliance checking and explanations. In this paper, we outline a set of requirements for such languages and algorithms, and address those requirements with a framework based on a profile of OWL2 and a set of policy serializations based on popular formats such as ODRL and JSON. This "external" policy syntax is translated into the "internal" OWL2 syntax, thereby enabling semantic compliance checking and explanations using specialized OWL2 reasoners. We provide a precise definition of both the OWL2 profile and the external policy language based on JSON.
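To make the compliance-checking idea concrete, here is a minimal Python sketch, not taken from the paper: a toy JSON-style "external" policy and consent record, with set and interval containment standing in for the OWL2 subsumption test that a specialized reasoner would perform. All field names (purpose, data_categories, retention_days) are hypothetical.

```python
# Toy "external" policy and consent; containment plays the role of
# OWL2 subsumption in this sketch. Field names are made up.
business_policy = {
    "purpose": {"marketing"},
    "data_categories": {"email", "name"},
    "retention_days": 365,
}
user_consent = {
    "purpose": {"marketing", "analytics"},
    "data_categories": {"email", "name", "address"},
    "retention_days": 730,
}

def complies(policy, consent):
    """The policy complies if every usage it permits is also permitted
    by the consent (set/interval containment)."""
    return (policy["purpose"] <= consent["purpose"]
            and policy["data_categories"] <= consent["data_categories"]
            and policy["retention_days"] <= consent["retention_days"])

def explain(policy, consent):
    """Collect the violating clauses, mimicking the explanation
    facility described in the abstract."""
    reasons = []
    if not policy["purpose"] <= consent["purpose"]:
        reasons.append(f"purpose not covered: {policy['purpose'] - consent['purpose']}")
    if not policy["data_categories"] <= consent["data_categories"]:
        reasons.append(f"categories not covered: {policy['data_categories'] - consent['data_categories']}")
    if policy["retention_days"] > consent["retention_days"]:
        reasons.append("retention exceeds consented period")
    return reasons

print(complies(business_policy, user_consent))   # True
bad = dict(business_policy, purpose={"profiling"})
print(complies(bad, user_consent), explain(bad, user_consent))
```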
2022-03-08
Li, Yangyang, Ji, Yipeng, Li, Shaoning, He, Shulong, Cao, Yinhao, Liu, Yifeng, Liu, Hong, Li, Xiong, Shi, Jun, Yang, Yangchao.  2021.  Relevance-Aware Anomalous Users Detection in Social Network via Graph Neural Network. 2021 International Joint Conference on Neural Networks (IJCNN). :1–8.
Detecting anomalous users in social networks is an imperative security task. Motivated by the great power of Graph Neural Networks (GNNs), much current research adopts GNN-based detectors to reveal anomalous users. However, the increasing scale of social activities, the explosive growth of users, and manifold technical disguises make user detection difficult. In this paper, we propose an innovative Relevance-aware Anomalous User Detection model (RAU-GNN) to obtain fine-grained detection results. RAU-GNN first extracts multiple relations among all types of users in the social network, both benign and anomalous, and accordingly constructs a multi-relation user graph. Secondly, we employ a relevance-aware GNN framework to learn the hidden features of users and discriminate the anomalous ones. Concretely, by integrating a Graph Convolutional Network (GCN) and a Graph Attention Network (GAT), we design a GCN-based relation fusion layer to aggregate initial information from the different relations, and a GAT-based embedding layer to obtain high-level embeddings. Lastly, we feed the learned representations to a following GNN layer to consolidate the node embeddings by aggregating the final user embeddings. We conduct extensive experiments on real-world datasets. The experimental results show that our approach achieves high accuracy for anomalous user detection.
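A minimal NumPy sketch of the two-stage pipeline described in the abstract, not the authors' code: a GCN-style layer aggregates each relation graph separately, the results are fused by averaging, and a simplified attention step reweights neighbours to produce high-level embeddings. The dimensions, the uniform fusion weights, and the untrained read-out are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 8                       # 6 users, 8-dim input features
X = rng.normal(size=(n, d))
relations = [rng.integers(0, 2, size=(n, n)) for _ in range(3)]  # 3 relation graphs

def gcn_layer(A, X, W):
    """Symmetrically normalised GCN propagation: D^-1/2 (A+I) D^-1/2 X W, then ReLU."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0)

def attention_layer(A, H):
    """Toy GAT-style step: softmax of dot-product scores over neighbours."""
    scores = H @ H.T
    scores = np.where(A + np.eye(len(A)) > 0, scores, -np.inf)
    alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ H

W = rng.normal(size=(d, d)) * 0.1
fused = np.mean([gcn_layer(A, X, W) for A in relations], axis=0)   # relation fusion
emb = attention_layer((sum(relations) > 0).astype(float), fused)   # high-level embeddings
anomaly_score = 1 / (1 + np.exp(-(emb @ rng.normal(size=d))))      # untrained read-out
print(anomaly_score.round(2))
```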
2021-09-09
Kolesnikov, A.A., Kuzmenko, A. A.  2020.  Use of ADAR Method and Theory of Optimal Control for Engineering Systems Optimal Control. 2020 International Conference on Industrial Engineering, Applications and Manufacturing (ICIEAM). :1–5.
This paper compares the known method of Analytical Design of Aggregated Regulators (ADAR) with the method of Analytical Design of Optimal Regulators (ADOR). Both the equivalence of these methods and the significant differences in their approaches to the analytical synthesis of control laws are shown. It is shown that the ADAR method has significant advantages: a simpler, fully analytical procedure for designing nonlinear optimal control laws; a clear physical interpretation of the weighting factors in the optimality criteria; valid and unambiguous selection of the regulator tuning parameters; and a simpler analysis of closed-loop asymptotic stability. These advantages are illustrated by synthesis examples.
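For concreteness, a minimal worked sketch of the ADAR procedure on a double integrator (x1' = x2, x2' = u), not an example from the paper: choose the aggregated macro-variable psi = x2 + beta*x1 and impose the evolution equation T*psi' + psi = 0, which yields the control law analytically; the state first reaches the manifold psi = 0 and then slides along it to the origin. The parameter values and initial state are illustrative.

```python
beta, T = 1.0, 0.5

def control(x1, x2):
    # From T*psi' + psi = 0 with psi = x2 + beta*x1 and psi' = u + beta*x2:
    # T*(u + beta*x2) + (x2 + beta*x1) = 0  =>
    return -beta * x2 - (x2 + beta * x1) / T

# Forward-Euler simulation: the state reaches psi = 0, then slides
# along the manifold toward the origin.
x1, x2, dt = 1.0, 0.0, 0.001
for step in range(8000):
    u = control(x1, x2)
    x1, x2 = x1 + dt * x2, x2 + dt * u
print(round(x1, 4), round(x2, 4))  # both approach 0
```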
2020-09-21
Chow, Ka-Ho, Wei, Wenqi, Wu, Yanzhao, Liu, Ling.  2019.  Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks. 2019 IEEE International Conference on Big Data (Big Data). :1282–1291.
Deep neural networks (DNNs) have demonstrated impressive performance on many challenging machine learning tasks. However, DNNs are vulnerable to adversarial inputs generated by adding maliciously crafted perturbations to benign inputs. As a growing number of attacks have been reported to generate adversarial inputs of varying sophistication, the defense-attack arms race has accelerated. In this paper, we present MODEF, a cross-layer model diversity ensemble framework. MODEF intelligently combines an unsupervised model denoising ensemble with a supervised model verification ensemble by quantifying model diversity, aiming to boost the robustness of the target model against adversarial examples. Evaluated using eleven representative attacks on popular benchmark datasets, MODEF achieves remarkable defense success rates compared with existing defense methods, and provides a superior capability of repairing adversarial inputs and making correct predictions with high accuracy in the presence of black-box attacks.
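A minimal Python sketch of the cross-layer ensemble idea, not the MODEF implementation: several denoisers repair a possibly adversarial input, a pool of diverse verification models votes on the repaired versions, and the prediction is accepted only when the vote reaches a quorum. The stub callables stand in for trained networks, and the quorum rule is an assumption.

```python
from collections import Counter

def ensemble_predict(x, denoisers, verifiers, target_model, quorum=0.6):
    votes = []
    for denoise in denoisers:                 # denoising ensemble layer
        x_clean = denoise(x)
        for model in verifiers:               # verification ensemble layer
            votes.append(model(x_clean))
    label, count = Counter(votes).most_common(1)[0]
    if count / len(votes) >= quorum:          # sufficient cross-layer agreement
        return label                          # repaired prediction
    return target_model(x)                    # fall back / flag as suspicious

# Toy usage with stubs standing in for trained networks:
denoisers = [lambda x: x, lambda x: round(x, 1)]
verifiers = [lambda x: int(x > 0.5), lambda x: int(x >= 0.5), lambda x: int(x > 0.4)]
print(ensemble_predict(0.47, denoisers, verifiers, target_model=lambda x: 0))
```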
2020-07-10
Podlesny, Nikolai J., Kayem, Anne V.D.M., Meinel, Christoph.  2019.  Identifying Data Exposure Across Distributed High-Dimensional Health Data Silos through Bayesian Networks Optimised by Multigrid and Manifold. 2019 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech). :556–563.
We present a novel, use-case-agnostic method for identifying and circumventing private data exposure across distributed, high-dimensional data repositories. Examples of such repositories include medical research and treatment data, where more than 300 describing attributes often appear. As such, providing strong guarantees of data anonymity in these repositories is a hard constraint in adhering to privacy legislation. Yet, when applied to distributed high-dimensional data, existing anonymisation algorithms incur high levels of information loss and do not guarantee privacy, defeating the purpose of anonymisation. In this paper, we address this issue by using Bayesian networks to handle data transformation for anonymisation. By evaluating every attribute combination to determine the privacy exposure risk, the conditional probability linking attribute pairs is computed. Pairs with a high conditional probability expose a risk of de-anonymisation similar to quasi-identifiers and can be separated instead of deleted, as in previous algorithms. Attribute separation removes the risk of privacy exposure, and avoiding deletion results in a significant reduction in information loss. In other words, assimilating the conditional probability of outliers directly into the adjacency matrix in a greedy fashion is quick and thwarts de-anonymisation. Since identifying every privacy-violating attribute combination is a W[2]-complete problem, we optimise the procedure with a multigrid solver method by evaluating the conditional probabilities between attribute pairs, and tame the state-space explosion of attribute pairs through manifold learning. Finally, incremental processing of new data is achieved through inexpensive, continuous (delta) learning.
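A minimal Python sketch of the pairwise screening step described in the abstract, not the authors' implementation: for each ordered attribute pair, estimate the maximum conditional probability P(B = b | A = a) by counting over a toy table; pairs above a threshold behave like quasi-identifiers and are marked for separation into different silos rather than deletion. The records and the threshold are made up.

```python
from collections import Counter
from itertools import permutations

records = [
    {"zip": "10115", "age": "34", "diag": "flu"},
    {"zip": "10115", "age": "34", "diag": "flu"},
    {"zip": "10117", "age": "51", "diag": "asthma"},
    {"zip": "10117", "age": "29", "diag": "flu"},
]

def max_conditional(records, a, b):
    """Max over values v of attribute A of the estimated P(B | A=v)."""
    joint, marginal = Counter(), Counter()
    for r in records:
        joint[(r[a], r[b])] += 1
        marginal[r[a]] += 1
    return max(c / marginal[av] for (av, _), c in joint.items())

# Ordered pairs whose conditional probability is high enough to act
# like a quasi-identifier; these are candidates for separation.
risky = [(a, b) for a, b in permutations(records[0], 2)
         if max_conditional(records, a, b) >= 0.99]
print(risky)   # e.g. ('age', 'zip') appears among the flagged pairs
```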