Cyber-Physical Systems Virtual Organization (CPS-VO)
National Science Foundation

Read-only archive of site from September 29, 2023.
Detecting Adversarial Examples for Deep Neural Networks via Layer Directed Discriminative Noise Injection

Submitted by grigby1 on Fri, 06/19/2020 - 11:49am
Keywords:
  • machine learning
  • Training
  • Sensitivity
  • Scalability
  • Resiliency
  • resilience
  • pubcrawl
  • Policy-Governed Secure Collaboration
  • policy-based governance
  • Perturbation methods
  • noninvasive universal perturbation attack
  • Neural networks
  • natural scenes
  • natural images
  • MobileNet
  • adversarial examples
  • learning (artificial intelligence)
  • layer directed discriminative noise
  • false trust
  • false positive rate
  • dominant layers
  • distortion
  • discriminative noise injection strategy
  • deep neural networks
  • deep learning
  • convolutional neural nets
  • computer vision tasks
  • computer vision
  • computer architecture
  • adversarial images

Terms of Use  |  ©2023. CPS-VO