Fine Tuning Lasso in an Adversarial Environment against Gradient Attacks
Title | Fine Tuning Lasso in an Adversarial Environment against Gradient Attacks |
Publication Type | Conference Paper |
Year of Publication | 2017 |
Authors | Ditzler, G., Prater, A. |
Conference Name | 2017 IEEE Symposium Series on Computational Intelligence (SSCI) |
Date Published | Nov. 2017
Keywords | adversarial component, adversarial data, adversarial environment, adversarial learning research, adversarial learning setting, Adversarial Machine Learning, Adversary Models, convex programming, Data analysis, data mining, data mining algorithms, data set testing, data testing, domain adaptation, domain adaption, feature extraction, feature selection, fine tuning lasso, fixed probability distribution, gradient attacks, Human Behavior, Input variables, known weaknesses, labeled training data, learning (artificial intelligence), Metrics, Optimization, pattern classification, probability, pubcrawl, resilience, Resiliency, robust classifier, Scalability, security of data, single convex optimization, source domain, supervised learning, Synthetic Data, Task Analysis, Testing, Toxicology, Training |
Abstract | Machine learning and data mining algorithms typically assume that the training and testing data are sampled from the same fixed probability distribution; however, this assumption is often violated in practice. The field of domain adaptation addresses the situation where the assumption of a fixed distribution shared by the two domains is violated; however, the difference between the two domains (training/source and testing/target) may not be known a priori. There has been a recent thrust toward addressing the problem of learning in the presence of an adversary, which we formulate as a domain adaptation problem to build a more robust classifier. This is because the overall security of classifiers and their preprocessing stages has been called into question with the recent findings of adversaries in a learning setting. Adversarial training (and testing) data pose a serious threat in scenarios where an attacker has the opportunity to "poison" the training data or "evade" the classifier on the testing data set(s) in order to achieve something that is not in the best interest of the classifier. Recent work has begun to show the impact of adversarial data on several classifiers; however, the impact of the adversary on the preprocessing of data (i.e., dimensionality reduction or feature selection) has been widely ignored in the recent surge of adversarial learning research. Furthermore, variable selection, which is a vital component of any data analysis, has been shown to be particularly susceptible to an attacker that has knowledge of the task. In this work, we explore avenues for learning resilient classification models in the adversarial learning setting by considering the effects of adversarial data and how to mitigate them through optimization. Our model forms a single convex optimization problem that uses the labeled training data from the source domain and the known weaknesses of the model for an adversarial component. We benchmark the proposed approach on synthetic data and show the trade-off between classification accuracy and skew-insensitive statistics.
URL | http://ieeexplore.ieee.org/document/8280924/ |
DOI | 10.1109/SSCI.2017.8280924 |
Citation Key | ditzler_fine_2017 |
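The abstract's core idea, fitting a lasso-style model, probing it with a gradient attack, and then refitting with the adversarial data folded into a convex problem, can be illustrated with a short sketch. The code below is not the authors' formulation: the squared-loss lasso used as a classifier, the ISTA solver, the FGSM-style perturbation, the epsilon budget, and the retrain-on-augmented-data step are all illustrative assumptions on synthetic data.

```python
# Hedged sketch: a lasso-style linear classifier hardened against a
# gradient (FGSM-style) evasion attack. Function names, the epsilon budget,
# and the retraining strategy are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)


def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (element-wise soft-thresholding)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)


def fit_lasso(X, y, lam=0.05, n_iter=500):
    """Lasso via ISTA: minimize (1/2n)||Xw - y||^2 + lam * ||w||_1."""
    n, d = X.shape
    step = n / np.linalg.norm(X, 2) ** 2   # 1 / Lipschitz constant of the smooth part
    w = np.zeros(d)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - step * grad, step * lam)
    return w


def gradient_attack(X, y, w, eps=0.3):
    """FGSM-style perturbation: move each point along the sign of the
    gradient of its squared loss w.r.t. the input, within an L-inf budget."""
    residual = X @ w - y                     # shape (n,)
    grad_x = residual[:, None] * w[None, :]  # d(loss_i) / d(x_i)
    return X + eps * np.sign(grad_x)


def accuracy(X, y, w):
    return np.mean(np.sign(X @ w) == y)


# Synthetic two-class data with sparse ground-truth weights.
n, d = 200, 20
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]                # only 3 informative features
X = rng.normal(size=(n, d))
y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))

# 1) Plain lasso on clean data.
w_clean = fit_lasso(X, y)

# 2) Adversarial (gradient-attack) copies of the training points.
X_adv = gradient_attack(X, y, w_clean)

# 3) "Fine tune": refit on clean + adversarial points. With the adversarial
#    points held fixed, the augmented objective is still one convex lasso
#    problem in w.
w_robust = fit_lasso(np.vstack([X, X_adv]), np.concatenate([y, y]))

print("clean model, clean data:    ", accuracy(X, y, w_clean))
print("clean model, attacked data: ", accuracy(X_adv, y, w_clean))
print("robust model, attacked data:", accuracy(gradient_attack(X, y, w_robust), y, w_robust))
```

The convexity point carries over from the abstract: because the adversarial points are held fixed when refitting, the augmented objective remains a single convex lasso problem in the weights, a much cruder stand-in for the single convex program with an adversarial component that the paper describes.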