Cyber-Physical Systems Virtual Organization (CPS-VO)
Read-only archive of site from September 29, 2023.

Perturbation methods

Error Bounds and Guidelines for Privacy Calibration in Differentially Private Kalman Filtering
Submitted by grigby1 on Wed, 06/02/2021 - 12:19pm
Tags: Control Theory, Human behavior, Kalman filters, Perturbation methods, privacy, pubcrawl, resilience, Resiliency, Scalability, Trajectory

Encryption Inspired Adversarial Defense For Visual Classification
Submitted by grigby1 on Thu, 05/20/2021 - 11:55am
Tags: encryption, Training, machine learning, pubcrawl, Metrics, resilience, Resiliency, composability, Perturbation methods, computer vision, Transforms, Adversarial Machine Learning, adversarial defense, perceptual image encryption, white box cryptography

Attribution Based Approach for Adversarial Example Generation
Submitted by aekwall on Thu, 05/13/2021 - 11:50am
Tags: attribution, Metrics, composability, Classification algorithms, deep architecture, gradient methods, Human behavior, Iterative algorithms, Neural networks, Perturbation methods, pubcrawl, Systematics

Attribution in Scale and Space
Submitted by aekwall on Thu, 05/13/2021 - 11:48am
Tags: Medical services, Task Analysis, Kernel, Human behavior, pubcrawl, composability, Google, Metrics, Mathematical model, attribution, Two dimensional displays, Perturbation methods

Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks
Submitted by aekwall on Tue, 04/27/2021 - 1:10pm
Tags: Biological neural networks, Spiking Neural Networks, SNN, adversarial examples, belief networks, Deep Neural Network, Perturbation methods, DNN, Vulnerability, attack, image recognition, security, resilience, Neural networks, Training, Neurons, Robustness, machine learning, pubcrawl, Resiliency, cyber-physical systems

Physical Adversarial Attacks Against Deep Learning Based Channel Decoding Systems
Submitted by aekwall on Mon, 03/15/2021 - 11:57am
Tags: Jamming, wireless security, telecommunication security, Resiliency, pubcrawl, private key cryptography, physical white-box, physical adversarial attacks, Perturbation methods, Noise measurement, Neural networks, modulation, Metrics, learning (artificial intelligence), white box cryptography, huge success, deep learning channel, deep learning, Decoding, conventional jamming attacks, composability, classical decoding schemes, channel decoding systems, channel decoding, channel coding, black-box adversarial attacks, Artificial Neural Networks, adversarial attacks

GeoDA: A Geometric Framework for Black-Box Adversarial Attacks
Submitted by aekwall on Tue, 03/09/2021 - 12:05pm
Tags: image classification, Robustness, Resiliency, query processing, queries, pubcrawl, Perturbation methods, pattern classification, optimisation, Neural networks, natural image classifiers, minimal perturbation, Metrics, Measurement, mean curvature, Iterative methods, adversarial examples, geometric framework, gaussian distribution, estimation, effective iterative algorithm, Deep Networks, decision boundary, data samples, Covariance matrices, composability, carefully perturbed images, black-box settings, black-box perturbations, black-box attack algorithm, black-box adversarial attacks, black box encryption

Substitute Model Generation for Black-Box Adversarial Attack Based on Knowledge Distillation
Submitted by aekwall on Tue, 03/09/2021 - 12:04pm
Tags: deep convolutional neural network, black-box models, adversarial attack perturbation, attacking success rate, black-box adversarial attack, black-box adversarial samples, black-box CNN models, classification mechanism, compact student model, adversarial samples, DenseNet121, knowledge distillation, multiple CNN teacher models, ResNet18, substitute model, substitute model generation, white-box attacking methods, convolutional neural networks, learning (artificial intelligence), Resiliency, pubcrawl, composability, Computational modeling, Metrics, Training, convolutional neural nets, Task Analysis, black box encryption, image classification, Predictive models, computer vision, Perturbation methods, Approximation algorithms, computer vision tasks

Evading Deepfake-Image Detectors with White- and Black-Box Attacks
Submitted by grigby1 on Thu, 03/04/2021 - 2:35pm
Tags: security of data, neural nets, neural network, optimization, Perturbation methods, popular forensic approach, pubcrawl, resilience, Resiliency, Robustness, Metrics, significant vulnerabilities, social networking (online), state-of-the-art classifier, synthesizer, synthetic content, synthetically-generated content, target classifier, Training, Twitter, fraudulent social media profiles, white box, security, attack case studies, AUC, black-box attack, composability, deepfake-image detectors, disinformation campaigns, Forensics, White Box Security, Generators, image area, image classification, Image forensics, image generators, image representation, image sensors, image-forensic classifiers, learning (artificial intelligence)

Defending Against Model Stealing Attacks With Adaptive Misinformation
Submitted by grigby1 on Thu, 01/28/2021 - 1:12pm
Tags: Metrics, training dataset, security of data, security, Scalability, Resiliency, resilience, query processing, pubcrawl, Predictive models, Perturbation methods, out-of-distribution inputs, OOD queries, neural nets, model stealing attacks, Adaptation models, learning (artificial intelligence), labeled dataset, Human behavior, deep neural networks, Data models, Computational modeling, Cloning, clone model, black-box query access, attacker clone model, attacker, Adversary Models, Adaptive Misinformation