National Science Foundation

Cyber-Physical Systems Virtual Organization

Read-only archive of site from September 29, 2023.


PRADA: Protecting Against DNN Model Stealing Attacks

Submitted by aekwall on Mon, 08/03/2020 - 10:38am
  • nontargeted adversarial examples
  • Adversarial Machine Learning
  • API queries
  • confidentiality protection
  • DNN model extraction attacks
  • DNN model stealing attacks
  • machine learning applications
  • ML models
  • model extraction attacks
  • model stealing
  • model extraction
  • PRADA
  • prediction accuracy
  • prediction API
  • prior model extraction attacks
  • stolen model
  • transferable adversarial examples
  • well-defined prediction APIs
  • Adversary Models
  • Neural networks
  • Scalability
  • learning (artificial intelligence)
  • Resiliency
  • Human behavior
  • pubcrawl
  • Computational modeling
  • Metrics
  • neural nets
  • security of data
  • query processing
  • Business
  • Training
  • Mathematical model
  • Data mining
  • Predictive models
  • Deep Neural Network
  • application program interfaces
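The tags above center on detecting DNN model extraction through a prediction API. PRADA's core observation is that the distances between a benign client's successive queries tend to follow a normal distribution, while the synthetic query sequences of a model-stealing attack deviate from it. The sketch below illustrates that idea only; it is a hypothetical class, not the authors' code. The original work applies a Shapiro-Wilk normality test, whereas this sketch substitutes a Jarque-Bera-style statistic so it needs nothing beyond NumPy, and the warm-up length and critical value are assumptions for illustration.

```python
import numpy as np

class PradaStyleDetector:
    """Illustrative sketch of PRADA-style extraction detection.

    Tracks, for each incoming API query, its minimum distance to all
    previously seen queries, then flags an attack when the collected
    distances stop looking normally distributed.
    """

    JB_CRITICAL = 5.99  # chi-squared critical value, df=2, alpha=0.05

    def __init__(self, min_samples=30):
        self.min_samples = min_samples  # assumed warm-up before deciding
        self.queries = []               # all observed query vectors
        self.distances = []             # min distance of each query to prior ones

    def observe(self, x):
        """Record one query vector and return the current attack verdict."""
        x = np.asarray(x, dtype=float)
        if self.queries:
            self.distances.append(
                min(float(np.linalg.norm(x - q)) for q in self.queries))
        self.queries.append(x)
        return self.is_attack()

    def is_attack(self):
        d = np.asarray(self.distances)
        if d.size < self.min_samples:
            return False  # not enough evidence yet
        # Jarque-Bera-style normality check on the distance sample.
        z = (d - d.mean()) / (d.std() + 1e-12)
        skew = np.mean(z ** 3)
        kurt = np.mean(z ** 4)
        jb = d.size / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
        return jb > self.JB_CRITICAL  # large statistic => not normal => suspicious
```

A deployment would additionally handle per-client state and distance thresholds per predicted class, as the paper's full scheme does; this sketch keeps only the distribution-test core.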

Terms of Use  |  ©2023. CPS-VO