Biblio

Filters: Keyword is Generative Models
2022-08-12
Killedar, Vinayak, Pokala, Praveen Kumar, Seelamantula, Chandra Sekhar.  2021.  Sparsity Driven Latent Space Sampling for Generative Prior Based Compressive Sensing. ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). :2895–2899.
We address the problem of recovering signals from compressed measurements based on generative priors. Recently, generative-model-based compressive sensing (GMCS) methods have shown superior performance over traditional compressive sensing (CS) techniques in recovering signals from fewer measurements. However, it is possible to further improve the performance of GMCS by introducing controlled sparsity in the latent space. We propose a proximal meta-learning (PML) algorithm to enforce sparsity in the latent space while training the generator. Enforcing sparsity naturally leads to a union-of-submanifolds model in the solution space. The overall framework is named sparsity-driven latent-space sampling (SDLSS). In addition, we derive sample-complexity bounds for the proposed model. Furthermore, we demonstrate the efficacy of the proposed framework over state-of-the-art techniques with application to CS on standard datasets such as MNIST and CIFAR-10. In particular, we evaluate the performance of the proposed method as a function of the number of measurements and the sparsity factor in the latent space, using standard objective measures. Our findings show that the SDLSS approach improves accuracy and aids in faster recovery of the signal in GMCS.
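The proximal-step idea lends itself to a compact illustration. Below is a minimal, hypothetical sketch of sparsity-constrained recovery with a pretrained generative prior: gradient steps on the measurement loss alternate with a hard-thresholding proximal step that keeps only the k largest latent entries. The generator `G`, measurement matrix `A`, measurements `y`, and sparsity level `k` are placeholders; note that the paper's PML algorithm enforces sparsity while training the generator, whereas this sketch applies the proximal step only at recovery time.

```python
# Hypothetical sketch: CS recovery with a generative prior plus a
# hard-thresholding proximal step that keeps the latent code k-sparse.
import torch

def recover(G, A, y, latent_dim, k=20, steps=500, lr=1e-2):
    z = torch.randn(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.sum((A @ G(z) - y) ** 2)  # measurement-fidelity loss
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Proximal step for the k-sparse constraint: hard thresholding.
            kth = z.abs().topk(k).values[-1]   # k-th largest magnitude
            z[z.abs() < kth] = 0.0
    return G(z).detach()
```

Hard thresholding is exactly the proximal operator of the k-sparse constraint, so the loop above is a proximal-gradient scheme over the latent space.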
2021-03-15
Toma, A., Krayani, A., Marcenaro, L., Gao, Y., Regazzoni, C. S..  2020.  Deep Learning for Spectrum Anomaly Detection in Cognitive mmWave Radios. 2020 IEEE 31st Annual International Symposium on Personal, Indoor and Mobile Radio Communications. :1–7.
The millimeter-wave (mmWave) band can be a solution for serving the vast number of Internet of Things (IoT) and Vehicle-to-Everything (V2X) devices. In this context, Cognitive Radio (CR) is capable of managing mmWave spectrum sharing efficiently. However, cognitive mmWave radios are vulnerable to malicious users due to the complex, dynamic radio environment and the shared access medium. Building secure and efficient radios therefore requires techniques that can precisely detect any anomalous behaviour in the spectrum. In this work, we propose a framework for comparing deep generative models used to detect anomalies in the dynamic radio spectrum: the Conditional Generative Adversarial Network (C-GAN), the Auxiliary Classifier Generative Adversarial Network (AC-GAN), and the Variational Autoencoder (VAE). For the evaluation, a real mmWave dataset is used, and the results show that all of the models achieve a high probability of detecting spectrum anomalies. In particular, AC-GAN outperforms C-GAN and VAE in terms of accuracy and probability of detection.
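To make the generative-model detector idea concrete, here is a hedged, self-contained sketch of the VAE variant: train on normal spectrum frames only, then flag frames whose reconstruction error exceeds a threshold. The layer sizes, `n_bins`, and the MSE thresholding rule are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch: a VAE trained on normal spectrum frames; frames with
# high reconstruction error are flagged as anomalies.
import torch
import torch.nn as nn

class SpectrumVAE(nn.Module):
    def __init__(self, n_bins=256, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_bins, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_bins))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def is_anomalous(model, frame, threshold):
    recon, _, _ = model(frame)
    return torch.mean((recon - frame) ** 2).item() > threshold
```

In practice the threshold would be calibrated on held-out normal frames, e.g. the mean reconstruction error plus a few standard deviations.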
2021-02-01
Mangaokar, N., Pu, J., Bhattacharya, P., Reddy, C. K., Viswanath, B..  2020.  Jekyll: Attacking Medical Image Diagnostics using Deep Generative Models. 2020 IEEE European Symposium on Security and Privacy (EuroS&P). :139–157.
Advances in deep neural networks (DNNs) have shown tremendous promise in the medical domain. However, the deep learning tools that are helping the domain can also be used against it. Given the prevalence of fraud in the healthcare domain, it is important to consider the adversarial use of DNNs in manipulating sensitive data that is crucial to patient healthcare. In this work, we present the design and implementation of a DNN-based image translation attack on biomedical imagery. More specifically, we propose Jekyll, a neural style transfer framework that takes as input a biomedical image of a patient and translates it to a new image that indicates an attacker-chosen disease condition. The potential for fraudulent claims based on such generated 'fake' medical images is significant, and we demonstrate successful attacks on both X-ray and retinal fundus image modalities. We show that these attacks manage to mislead both medical professionals and algorithmic detection schemes. Lastly, we also investigate defensive measures based on machine learning to detect images generated by Jekyll.
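The defensive direction mentioned at the end reduces to a supervised real-versus-generated classifier. As a hypothetical sketch only (the paper does not specify this architecture), a small CNN backbone can be fine-tuned on labeled real and generated scans:

```python
# Hypothetical sketch: binary classifier separating real from generated scans.
import torch
import torch.nn as nn
from torchvision import models

detector = models.resnet18(weights=None)              # backbone choice is assumed
detector.fc = nn.Linear(detector.fc.in_features, 2)   # real (0) vs. generated (1)
opt = torch.optim.Adam(detector.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_batch(images, labels):
    opt.zero_grad()
    loss = loss_fn(detector(images), labels)
    loss.backward()
    opt.step()
    return loss.item()
```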
2020-12-02
Vaka, A., Manasa, G., Sameer, G., Das, B..  2019.  Generation And Analysis Of Trust Networks. 2019 1st International Conference on Advances in Information Technology (ICAIT). :443–448.
Trust is known to be a key component of human social relationships; to a large extent, it is trust that defines how humans behave with one another. Generative models have been used extensively in social-network studies to simulate different characteristics and phenomena of social graphs. In this work, an attempt is made to understand how trust in social graphs can be combined with generative modeling techniques to generate trust-based social graphs. These generated graphs are then compared with the original social graphs to evaluate how trust helps in generative modeling. Two well-known social-network data sets, the soc-Bitcoin and Wikipedia administrator networks, are used in this work. Social graphs are generated from these data sets and then compared with the original graphs, alongside other standard generative modeling techniques, to see how well trust serves as a modeling component. While other generative modeling techniques have been available for a while, this investigation with real social-graph data sets validates that trust can be an important factor in generative modeling.
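One simple way to fold trust into a generative graph model, offered here purely as an illustrative assumption (the paper does not prescribe this rule), is to weight preferential attachment by a per-node trust score:

```python
# Hypothetical sketch: preferential attachment weighted by node trust scores.
import random
import networkx as nx

def trust_preferential_graph(n, m, trust):
    """trust maps each node id in range(n) to a score in (0, 1]."""
    g = nx.complete_graph(m)  # small seed clique
    for new in range(m, n):
        # Attachment weight combines degree (classic PA) with trust.
        weights = [(g.degree(v) + 1) * trust[v] for v in g.nodes]
        targets = random.choices(list(g.nodes), weights=weights, k=m)
        g.add_edges_from((new, t) for t in set(targets))
    return g
```

Generated graphs could then be compared with soc-Bitcoin or the Wikipedia administrator network on statistics such as degree distribution and clustering.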
2019-06-17
Verma, Dinesh, Calo, Seraphin, Cirincione, Greg.  2018.  Distributed AI and Security Issues in Federated Environments. Proceedings of the Workshop Program of the 19th International Conference on Distributed Computing and Networking. :4:1–4:6.
Many real-world IoT solutions have to be implemented in federated environments, i.e., environments in which many different administrative organizations are involved in different parts of the solution. Smarter cities, federated governance, international trade, and military coalition operations are examples of federated environments. As end devices become more capable and intelligent, learning from their environment and adapting on their own, they expose new types of security vulnerabilities and present an increased attack surface. A distributed AI approach can help mitigate many of the security problems that one may encounter in such federated environments. In this paper, we outline some of the scenarios in which we need to rethink security issues as devices become more intelligent, and we discuss how distributed AI techniques can be used to reduce the security exposures in such environments.
2019-01-21
Kos, J., Fischer, I., Song, D..  2018.  Adversarial Examples for Generative Models. 2018 IEEE Security and Privacy Workshops (SPW). :36–42.
We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks. Deep generative models have recently become popular due to their ability to model input data distributions and generate realistic examples from those distributions. We present three classes of attacks on the VAE and VAE-GAN architectures and demonstrate them against networks trained on MNIST, SVHN and CelebA. Our first attack leverages classification-based adversaries by attaching a classifier to the trained encoder of the target generative model, which can then be used to indirectly manipulate the latent representation. Our second attack directly uses the VAE loss function to generate a target reconstruction image from the adversarial example. Our third attack moves beyond relying on classification or the standard loss for the gradient and directly optimizes against differences in source and target latent representations. We also motivate why an attacker might be interested in deploying such techniques against a target generative network.
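The third attack described above is the most self-contained: optimize a bounded perturbation so that the encoder maps the adversarial input close to a chosen target's latent representation. A minimal sketch follows, assuming an `encoder` that returns the latent mean; the bound `eps`, step count, and optimizer choice are illustrative:

```python
# Hypothetical sketch of a latent-space attack on a VAE encoder.
import torch

def latent_attack(encoder, x_src, x_tgt, eps=0.1, steps=200, lr=1e-2):
    z_tgt = encoder(x_tgt).detach()                 # target latent code
    delta = torch.zeros_like(x_src, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Pull the adversarial input's latent code toward the target's.
        loss = torch.sum((encoder(x_src + delta) - z_tgt) ** 2)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                 # keep perturbation bounded
    return (x_src + delta).detach()
```

Decoding the perturbed input then yields a reconstruction resembling the target rather than the source, which is what makes the attack work without any classifier in the loop.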
2018-05-09
Dering, M. L., Tucker, C. S..  2017.  Generative Adversarial Networks for Increasing the Veracity of Big Data. 2017 IEEE International Conference on Big Data (Big Data). :2595–2602.
This work describes how automated data generation integrates into a big data pipeline. A lack of veracity in big data can yield models that are inaccurate or biased by trends in the training data, which can lead to issues that are difficult to overcome as a pipeline matures. This work describes the use of a Generative Adversarial Network (GAN) to generate sketch data, such as might be used in a human verification task. The generated sketches were verified as recognizable using a crowd-sourcing methodology: they were correctly recognized 43.8% of the time, in contrast to human-drawn sketches, which were recognized 87.7% of the time. This method is scalable, can be used to generate realistic data in many domains, and can bootstrap a data set for training a model prior to deployment.
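The adversarial setup underlying this kind of data generation is compact enough to sketch. The following is a minimal, hypothetical GAN training step with toy MLP networks operating on flattened images; the paper's sketch-generation pipeline and architectures are not reproduced here:

```python
# Hypothetical sketch: one GAN training step with toy MLPs (flattened images).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                     # real: (batch, 784) in [-1, 1]
    b = real.size(0)
    fake = G(torch.randn(b, 64))
    # Discriminator: label real as 1, generated as 0.
    d_loss = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: fool D into labeling fakes as real.
    g_loss = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```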