Biblio

Filters: Keyword is adversarial ai
2019-01-31
Menet, François, Berthier, Paul, Gagnon, Michel, Fernandez, José M.  2018.  Spartan Networks: Self-Feature-Squeezing Networks for Increased Robustness in Adversarial Settings. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. :2246–2248.

Deep Learning models are vulnerable to adversarial inputs: samples modified to maximize the system's error. We hereby introduce Spartan Networks, Deep Learning models that are inherently more resistant to adversarial examples, without any input preprocessing outside the network or adversarial training. These networks have an adversarial layer within the network designed to starve the network of information, using a new activation function to discard data. This layer trains the neural network to filter out usually-irrelevant parts of its input. These models thus have slightly lower accuracy, but report higher robustness under attack than unprotected models.
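The abstract does not specify the squeezing activation itself; a minimal sketch, assuming a bit-depth-reduction quantizer in the spirit of feature squeezing, with the function name `squeeze_activation` and its parameters chosen here for illustration only:

```python
import numpy as np

def squeeze_activation(x, bits=4):
    """Hypothetical in-network squeezing activation: quantizes its
    input to a reduced bit depth, discarding the fine-grained detail
    that adversarial perturbations typically exploit.

    This is an illustrative sketch, not the paper's actual layer.
    """
    levels = 2 ** bits - 1
    # Clamp activations to [0, 1], then snap each value to the
    # nearest of `levels + 1` evenly spaced quantization steps.
    x = np.clip(x, 0.0, 1.0)
    return np.round(x * levels) / levels
```

Inserted between ordinary layers, such a quantizer deliberately starves downstream layers of low-amplitude input variation, which is one way a network can be "trained to discard data" as the abstract describes.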