Biblio

Filters: Keyword is Deep-learning
2021-12-20
Nasr, Milad, Song, Shuang, Thakurta, Abhradeep, Papernot, Nicolas, Carlini, Nicholas.  2021.  Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning. 2021 IEEE Symposium on Security and Privacy (SP). :866–882.
Differentially private (DP) machine learning allows us to train models on private data while limiting data leakage. DP formalizes this data leakage through a cryptographic game, where an adversary must predict if a model was trained on a dataset D, or a dataset D′ that differs in just one example. If observing the training algorithm does not meaningfully increase the adversary's odds of successfully guessing which dataset the model was trained on, then the algorithm is said to be differentially private. Hence, the purpose of privacy analysis is to upper bound the probability that any adversary could successfully guess which dataset the model was trained on. In our paper, we instantiate this hypothetical adversary in order to establish lower bounds on the probability that this distinguishing game can be won. We use this adversary to evaluate the importance of the adversary capabilities allowed in the privacy analysis of DP training algorithms. For DP-SGD, the most common method for training neural networks with differential privacy, our lower bounds are tight and match the theoretical upper bound. This implies that in order to prove better upper bounds, it will be necessary to make use of additional assumptions. Fortunately, we find that our attacks are significantly weaker when additional (realistic) restrictions are put in place on the adversary's capabilities. Thus, in the practical setting common to many real-world deployments, there is a gap between our lower bounds and the upper bounds provided by the analysis: differential privacy is conservative and adversaries may not be able to leak as much information as suggested by the theoretical bound.
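
As a concrete reference point for the algorithm analyzed above, the following is a minimal sketch of one DP-SGD step (per-example gradient clipping plus Gaussian noise on the summed gradient). The model and the clip_norm, noise_multiplier, and lr values are illustrative placeholders, not the authors' settings.

    import torch
    import torch.nn.functional as F

    def dp_sgd_step(model, batch_x, batch_y, clip_norm=1.0, noise_multiplier=1.1, lr=0.1):
        # one DP-SGD update: clip each per-example gradient, sum, add noise, average
        params = [p for p in model.parameters() if p.requires_grad]
        summed = [torch.zeros_like(p) for p in params]
        for x, y in zip(batch_x, batch_y):
            loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
            grads = torch.autograd.grad(loss, params)
            norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
            scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)  # clip to clip_norm
            for acc, g in zip(summed, grads):
                acc += g * scale
        with torch.no_grad():
            for p, acc in zip(params, summed):
                noise = torch.randn_like(acc) * noise_multiplier * clip_norm
                p -= lr * (acc + noise) / len(batch_x)  # noisy averaged gradient step
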
2021-03-04
Guo, H., Wang, Z., Wang, B., Li, X., Shila, D. M..  2020.  Fooling A Deep-Learning Based Gait Behavioral Biometric System. 2020 IEEE Security and Privacy Workshops (SPW). :221–227.

We leverage deep-learning algorithms on user behavioral information gathered from end-user devices to classify a subject of interest. Despite the ability of these techniques to counter spoofing threats, they are vulnerable to adversarial learning attacks, where an attacker adds adversarial noise to the input samples to fool the classifier into false acceptance. Recently, a handful of mature techniques such as the Fast Gradient Sign Method (FGSM) have been proposed to aid white-box attacks, where an attacker has complete knowledge of the machine learning model. In contrast, we mount a black-box attack against a behavioral biometric system based on gait patterns, using FGSM on a shadow model trained to mimic the target system. The attacker has limited knowledge of the target model and no knowledge of the real user being authenticated, yet induces a false acceptance in authentication. Our goal is to understand the feasibility of a black-box attack and to what extent FGSM on shadow models contributes to its success. Our results show that the performance of FGSM depends strongly on the quality of the shadow model, which is in turn affected by key factors including the number of queries the target system allows for training the shadow model. Our experiments reveal strong relationships between shadow-model quality and FGSM performance, as well as the effect of the number of FGSM iterations used to create an attack instance. These insights also shed light on the shareability of deep-learning models, which can be exploited to launch a successful attack.
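
A minimal sketch of the transfer step described above: targeted, iterative FGSM computed against a locally trained shadow model, with the resulting sample submitted to the black-box target. The shadow_model, eps, and n_iters here are illustrative assumptions, not the paper's configuration.

    import torch
    import torch.nn.functional as F

    def targeted_fgsm(shadow_model, x, target_class, eps=0.05, n_iters=10):
        # x: batch of gait feature inputs; target_class: class indices to impersonate
        x_adv = x.clone().detach()
        for _ in range(n_iters):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(shadow_model(x_adv), target_class)
            grad, = torch.autograd.grad(loss, x_adv)
            # step against the gradient so the shadow model's loss on the
            # impersonated class decreases; transferability to the black-box
            # target depends on shadow-model quality, as the paper reports
            x_adv = (x_adv - eps * grad.sign()).detach()
        return x_adv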

2021-01-11
Zhang, X., Chandramouli, K., Gabrijelcic, D., Zahariadis, T., Giunta, G..  2020.  Physical Security Detectors for Critical Infrastructures Against New-Age Threat of Drones and Human Intrusion. 2020 IEEE International Conference on Multimedia & Expo Workshops (ICMEW). :1–4.

Modern critical infrastructures are increasingly turning into distributed, complex cyber-physical systems that need proactive protection and fast restoration to mitigate physical or cyber incidents or attacks. Addressing the need for early-stage threat detection against physical intrusion, the paper presents two physical security sensors developed within the DEFENDER project for detecting the intrusion of drones and humans using video analytics. The continuous stream of media data obtained from the region of vulnerability and proximity is processed using a Region-based Fully Connected Neural Network deep-learning model. The novelty of the proposed system lies in the processing of multi-threaded media input streams for achieving real-time threat identification. The video analytics solution has been validated using an NVIDIA GeForce GTX 1080 for drone detection and an NVIDIA GeForce RTX 2070 Max-Q Design for detecting human intruders. The experimental test bed for validating the proposed system was constructed to include environments and situations commonly faced by critical infrastructure operators, such as the area of protection and the tradeoff between angle of coverage and distance of coverage.
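
A minimal sketch of the kind of multi-threaded capture-and-detect pipeline described above. Torchvision's Faster R-CNN stands in for the paper's region-based detector, and the stream URLs, queue size, and alert logic are placeholders rather than DEFENDER components.

    import queue, threading
    import cv2, torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    frames = queue.Queue(maxsize=32)

    def capture(stream_url):
        # one producer thread per camera stream
        cap = cv2.VideoCapture(stream_url)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames.put(frame)

    model = fasterrcnn_resnet50_fpn(pretrained=True).eval()

    for url in ["rtsp://camera-1/stream", "rtsp://camera-2/stream"]:  # placeholder URLs
        threading.Thread(target=capture, args=(url,), daemon=True).start()

    with torch.no_grad():
        while True:
            frame = frames.get()
            rgb = torch.from_numpy(frame[:, :, ::-1].copy()).permute(2, 0, 1).float() / 255
            detections = model([rgb])[0]  # boxes, labels, scores for this frame
            # downstream logic would raise an alert when a drone or person
            # appears inside the protected region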

2020-09-04
Baek, Ui-Jun, Ji, Se-Hyun, Park, Jee Tae, Lee, Min-Seob, Park, Jun-Sang, Kim, Myung-Sup.  2019.  DDoS Attack Detection on Bitcoin Ecosystem using Deep-Learning. 2019 20th Asia-Pacific Network Operations and Management Symposium (APNOMS). :1–4.
Since Bitcoin, the first cryptocurrency to apply blockchain technology, was developed by Satoshi Nakamoto, the cryptocurrency market has grown rapidly. Along with this growth, surveyed vulnerabilities and attacks threaten the Bitcoin ecosystem not only at the Bitcoin network level but also at the level of the services built on it. We analyze and detect DDoS attacks on the premise that Bitcoin's network-level data is associated with service-level DDoS attacks involving Bitcoin. We evaluate the experimental results using the proposed metrics and find an association between Bitcoin's network-level data and service-level DDoS attacks. In conclusion, we suggest that the proposed method could be applied to other blockchain systems.
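
A heavily hedged sketch of the kind of model the abstract implies: a classifier over windowed Bitcoin network-level features labelled by reported service-level DDoS incidents. The feature names, window length, and architecture are illustrative assumptions, not the authors' design.

    import torch
    import torch.nn as nn

    # each sample: a window of per-block features, e.g. [tx_count, avg_fee, block_interval]
    model = nn.Sequential(
        nn.Flatten(),              # (batch, window, features) -> (batch, window*features)
        nn.Linear(24 * 3, 64),     # assumed 24-block window with 3 features
        nn.ReLU(),
        nn.Linear(64, 2),          # DDoS period vs. normal period
    )

    def train_step(optimizer, x, y):
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        optimizer.step()
        return loss.item()
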
2020-04-20
Lecuyer, Mathias, Atlidakis, Vaggelis, Geambasu, Roxana, Hsu, Daniel, Jana, Suman.  2019.  Certified Robustness to Adversarial Examples with Differential Privacy. 2019 IEEE Symposium on Security and Privacy (SP). :656–672.
Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best-effort and have been shown to be vulnerable to sophisticated attacks. Recently, a set of certified defenses has been introduced that provide guarantees of robustness to norm-bounded attacks. However, these defenses either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google's Inception network for ImageNet) and applies broadly to arbitrary model types. Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically inspired privacy formalism, which provides a rigorous, generic, and flexible foundation for defense.
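
A minimal sketch of the core idea as described in the abstract: randomize the input (a noise layer) and average predictions over many noise draws at inference time. The noise scale, noise placement, and threshold logic here are illustrative assumptions; the paper derives the noise scale and the certification condition from a differential-privacy guarantee.

    import torch

    def noisy_predict(model, x, sigma=0.1, n_draws=100):
        model.eval()
        with torch.no_grad():
            scores = torch.stack([
                torch.softmax(model(x + sigma * torch.randn_like(x)), dim=-1)
                for _ in range(n_draws)
            ])
        expected = scores.mean(dim=0)            # expected per-class scores over noise
        top2 = expected.topk(2, dim=-1).values
        margin = top2[..., 0] - top2[..., 1]     # robustness is certified when this margin
        return expected.argmax(dim=-1), margin   # exceeds a DP-derived threshold
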
2020-02-18
Nasr, Milad, Shokri, Reza, Houmansadr, Amir.  2019.  Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-Box Inference Attacks against Centralized and Federated Learning. 2019 IEEE Symposium on Security and Privacy (SP). :739–753.

Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models. We measure the privacy leakage through parameters of fully trained models as well as the parameter updates of models during training. We design inference algorithms for both centralized and federated learning, with respect to passive and active inference attackers, and assuming different adversary prior knowledge. We evaluate our novel white-box membership inference attacks against deep learning algorithms to trace their training data records. We show that a straightforward extension of the known black-box attacks to the white-box setting (through analyzing the outputs of activation functions) is ineffective. We therefore design new algorithms tailored to the white-box setting by exploiting the privacy vulnerabilities of the stochastic gradient descent algorithm, which is the algorithm used to train deep neural networks. We investigate the reasons why deep learning models may leak information about their training data. We then show that even well-generalized models are significantly susceptible to white-box membership inference attacks, by analyzing state-of-the-art pre-trained and publicly available models for the CIFAR dataset. We also show how adversarial participants, in the federated learning setting, can successfully run active membership inference attacks against other participants, even when the global model achieves high prediction accuracies.
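
A minimal sketch of the white-box feature extraction the abstract describes: for a candidate record, collect the loss, the output vector, and the gradient of the loss with respect to (here, only the last layer's) parameters, and feed them to a separate attack classifier. The choice of layer and the attack classifier are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def whitebox_features(model, last_layer, x, y):
        out = model(x.unsqueeze(0))
        loss = F.cross_entropy(out, y.unsqueeze(0))
        grads = torch.autograd.grad(loss, tuple(last_layer.parameters()))
        grad_vec = torch.cat([g.flatten() for g in grads])
        # members tend to yield smaller losses and differently distributed gradients
        # than non-members; a separate attack model learns to separate the two
        return torch.cat([grad_vec, out.flatten().detach(), loss.detach().reshape(1)])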

2019-08-12
Peixoto, Bruno Malveira, Avila, Sandra, Dias, Zanoni, Rocha, Anderson.  2018.  Breaking Down Violence: A Deep-learning Strategy to Model and Classify Violence in Videos. Proceedings of the 13th International Conference on Availability, Reliability and Security. :50:1–50:7.
Detecting violence in videos through automatic means is significant for law enforcement and for analysis of surveillance cameras with the intent of maintaining public safety. Moreover, it may be a great tool for protecting children from accessing inappropriate content and for helping parents make a better-informed decision about what their kids should watch. However, this is a challenging problem, since the very definition of violence is broad and highly subjective. Hence, detecting such nuances from videos with no human supervision is not only a technical but also a conceptual problem. With this in mind, we explore how to better describe the idea of violence for a convolutional neural network by breaking it into more objective and concrete parts. Initially, our method uses independent networks to learn features for more specific concepts related to violence, such as fights, explosions, and blood. Then we use these features to classify each concept and later fuse them in a meta-classification to describe violence. We also explore how to represent time-based events in still images as network inputs, since many violent acts are described in terms of movement. We show that using more specific concepts is an intuitive and effective solution, besides being complementary in forming a more robust definition of violence. When compared to other methods for violence detection, this approach achieves better classification quality while using only automatic features.
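
A minimal sketch of the concept-then-fusion design described above: independent per-concept classifiers produce scores that a meta-classifier fuses into a final violence decision. The concept list, network sizes, and meta-classifier are illustrative placeholders, not the authors' architectures.

    import torch
    import torch.nn as nn

    concepts = ["fight", "explosion", "blood"]        # assumed concept set

    concept_nets = nn.ModuleDict({
        c: nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128), nn.ReLU(), nn.Linear(128, 1))
        for c in concepts
    })                                                # one independent network per concept

    meta_classifier = nn.Sequential(nn.Linear(len(concepts), 8), nn.ReLU(), nn.Linear(8, 1))

    def violence_score(frame):                        # frame: (3, 64, 64) tensor
        concept_scores = torch.cat([
            torch.sigmoid(net(frame.unsqueeze(0))) for net in concept_nets.values()
        ], dim=1)                                     # per-concept probabilities
        return torch.sigmoid(meta_classifier(concept_scores))  # fused violence probability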