Cyber-Physical Systems Virtual Organization
Read-only archive of site from September 29, 2023.
Biblio entries tagged "computer vision tasks"
Substitute Model Generation for Black-Box Adversarial Attack Based on Knowledge Distillation
Submitted by aekwall on Tue, 03/09/2021 - 12:04pm
Keywords: deep convolutional neural network, black-box models, adversarial attack perturbation, attacking success rate, black-box adversarial attack, black-box adversarial samples, black-box CNN models, classification mechanism, compact student model, adversarial samples, DenseNet121, knowledge distillation, multiple CNN teacher models, ResNet18, substitute model, substitute model generation, white-box attacking methods, convolutional neural networks, learning (artificial intelligence), Resiliency, pubcrawl, composability, Computational modeling, Metrics, Training, convolutional neural nets, Task Analysis, black box encryption, image classification, Predictive models, computer vision, Perturbation methods, Approximation algorithms, computer vision tasks
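The keywords name the core technique: multiple CNN teacher models (ResNet18, DenseNet121) are distilled into a compact student that serves as a substitute for the black-box target. As an illustration only, here is a minimal PyTorch sketch of generic ensemble knowledge distillation from soft labels; the student architecture, temperature, training loop, and data are assumptions, not details taken from the paper.

```python
# Sketch: distill black-box CNN "teachers" into a compact "student" that can
# later serve as a white-box substitute. Teacher choices (ResNet18,
# DenseNet121) follow the entry's keywords; everything else is assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

teachers = [models.resnet18(weights=None), models.densenet121(weights=None)]
for t in teachers:
    t.eval()  # treated as oracles: only their output probabilities are read

student = nn.Sequential(          # compact student model (assumed)
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1000),
)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 4.0                           # distillation temperature (assumed)

for step in range(10):            # toy loop; real training iterates a dataset
    x = torch.rand(8, 3, 224, 224)   # stand-in for attacker-collected images
    with torch.no_grad():            # ensemble soft labels from the teachers
        soft = torch.stack([F.softmax(t(x) / T, dim=1)
                            for t in teachers]).mean(0)
    loss = F.kl_div(F.log_softmax(student(x) / T, dim=1), soft,
                    reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Once the student tracks the black-box decision boundary, standard white-box attacking methods can be run against it and the resulting adversarial samples transferred back to the black-box models, which is the attack pattern the keywords describe.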
An Adversarial Perturbation Approach Against CNN-based Soft Biometrics Detection
Submitted by aekwall on Mon, 10/12/2020 - 11:33am
Keywords: Privacy Threats, Expert Systems and Privacy, unwanted soft biometrics-based identification, subject ethnicity, keystroke dynamics, Gender, daily life consumer electronics, computer vision tasks, CNN-based soft biometrics detection, biometric-based authentication systems, biometric approaches, adversarial stickers, adversarial perturbation approach, authentication systems, sensitive information, Perturbation methods, security of data, computer vision, Human Factors, biometrics (access control), Data processing, authentication, convolutional neural nets, Neural networks, privacy, deep learning, pubcrawl, Human behavior, learning (artificial intelligence), data privacy, Scalability
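The keywords describe the goal: perturb images so that CNN-based detectors can no longer infer soft biometrics such as gender or subject ethnicity. The paper's sticker-based perturbations are not reproduced here; as a generic illustration of the underlying idea, a minimal FGSM-style sketch that ascends the classifier's loss on the true attribute, with the stand-in model and epsilon both assumed.

```python
# Sketch: add a small adversarial perturbation so a CNN can no longer read a
# soft-biometric attribute (e.g., gender) from an image. Plain FGSM is used
# here for illustration, not the paper's adversarial-sticker method.
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Sequential(       # stand-in soft-biometric CNN (assumed)
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
)
classifier.eval()

def protect(image: torch.Tensor, true_label: torch.Tensor,
            eps: float = 0.03) -> torch.Tensor:
    """Return image + eps * sign(grad), pushing it off the true attribute."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(classifier(image), true_label)
    loss.backward()
    perturbed = image + eps * image.grad.sign()  # ascend the loss
    return perturbed.clamp(0, 1).detach()

x = torch.rand(1, 3, 64, 64)   # stand-in face image
y = torch.tensor([0])          # its true soft-biometric label
x_adv = protect(x, y)
print(classifier(x).argmax(1), classifier(x_adv).argmax(1))
```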
Detecting Adversarial Examples for Deep Neural Networks via Layer Directed Discriminative Noise Injection
Submitted by grigby1 on Fri, 06/19/2020 - 11:49am
Keywords: machine learning, Training, Sensitivity, Scalability, Resiliency, resilience, pubcrawl, Policy-Governed Secure Collaboration, policy-based governance, Perturbation methods, noninvasive universal perturbation attack, Neural networks, natural scenes, natural images, MobileNet, adversarial examples, learning (artificial intelligence), layer directed discriminative noise, false trust, false positive rate, dominant layers, distortion, discriminative noise injection strategy, deep neural networks, deep learning, convolutional neural nets, computer vision tasks, computer vision, computer architecture, adversarial images
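The keywords outline the detection idea: adversarial images are disproportionately sensitive to noise injected into a network's dominant layers, while natural images tend to keep their predicted labels. A minimal sketch of that sensitivity test follows, using MobileNet as the keywords suggest; the particular layer, noise scale, and single-layer setup are assumptions rather than the paper's exact discriminative procedure.

```python
# Sketch: flag an input as adversarial if its predicted class flips when
# Gaussian noise is injected into an intermediate layer's output. The layer
# choice and noise scale are assumed, not taken from the paper.
import torch
from torchvision import models

model = models.mobilenet_v2(weights=None)  # MobileNet, per the keywords
model.eval()

def is_adversarial(x: torch.Tensor, layer: torch.nn.Module,
                   sigma: float = 0.1) -> bool:
    """Compare the clean prediction with one made under injected noise."""
    with torch.no_grad():
        clean = model(x).argmax(1)
    # A forward hook that returns a value replaces the layer's output.
    handle = layer.register_forward_hook(
        lambda mod, inp, out: out + sigma * torch.randn_like(out))
    with torch.no_grad():
        noisy = model(x).argmax(1)
    handle.remove()
    return bool((clean != noisy).any())

x = torch.rand(1, 3, 224, 224)             # stand-in natural image
print(is_adversarial(x, layer=model.features[5]))
```

Because natural images usually survive this perturbation with their labels intact, such a test can keep the false positive rate low while catching adversarial inputs, which is the trade-off the entry's keywords highlight.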