Cyber-Physical Systems Virtual Organization
Read-only archive of site from September 29, 2023.

Biblio entries tagged "adversarial examples"

NMI-FGSM-Tri: An Efficient and Targeted Method for Generating Adversarial Examples for Speaker Recognition
Submitted by aekwall on Fri, 03/31/2023 - 9:36am
Keywords: Neural networks, deep learning, pubcrawl, Metrics, Resiliency, composability, Black Box Attacks, black-box attack, adversarial examples, Design methodology, Cyberspace, data science, Target recognition, transferability, speaker recognition

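The method in the title above builds on the fast gradient sign method (FGSM). Purely as background, here is a minimal sketch of the basic one-step untargeted FGSM, assuming a PyTorch classifier; the paper's NMI-FGSM-Tri is a more elaborate targeted variant whose details are not reproduced here.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon):
        # One-step untargeted FGSM: move x by epsilon along the sign of the
        # loss gradient with respect to the input (an L-infinity step).
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
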
Towards Black-Box Adversarial Attacks on Interpretable Deep Learning Systems
Submitted by aekwall on Tue, 12/20/2022 - 5:20pm
Keywords: security, Neural networks, deep learning, pubcrawl, Metrics, Resiliency, composability, black-box attacks, adversarial examples, Multimedia systems, White Box Security, Interpretable deep learning systems

The Vulnerability of the Neural Networks Against Adversarial Examples in Deep Learning Algorithms
Submitted by aekwall on Mon, 01/31/2022 - 4:11pm
Keywords: security, Neural networks, deep learning, pubcrawl, Metrics, Resiliency, composability, Classification algorithms, computer vision, natural language processing, data science, adversarial examples, black box, white box, White Box Security

A New Black Box Attack Generating Adversarial Examples Based on Reinforcement Learning
Submitted by aekwall on Tue, 07/27/2021 - 1:58pm
Keywords: Reinforcement learning, Gallium nitride, Deep Neural Network, black box attack, adversarial examples, adversarial reinforcement learning, generative adversarial networks, Black Box Attacks, composability, Resiliency, Metrics, pubcrawl, Training, Data models, Computational modeling, Neural networks

Is Spiking Secure? A Comparative Study on the Security Vulnerabilities of Spiking and Deep Neural Networks
Submitted by aekwall on Tue, 04/27/2021 - 1:10pm
Keywords: Biological neural networks, Spiking Neural Networks, SNN, adversarial examples, belief networks, Deep Neural Network, Perturbation methods, DNN, Vulnerability, attack, image recognition, security, resilience, Neural networks, Training, Neurons, Robustness, machine learning, pubcrawl, Resiliency, cyber-physical systems

GeoDA: A Geometric Framework for Black-Box Adversarial Attacks
Submitted by aekwall on Tue, 03/09/2021 - 12:05pm
Keywords: image classification, Robustness, Resiliency, query processing, queries, pubcrawl, Perturbation methods, pattern classification, optimisation, Neural networks, natural image classifiers, minimal perturbation, Metrics, Measurement, mean curvature, Iterative methods, adversarial examples, geometric framework, gaussian distribution, estimation, effective iterative algorithm, Deep Networks, decision boundary, data samples, Covariance matrices, composability, carefully perturbed images, black-box settings, black-box perturbations, black-box attack algorithm, black-box adversarial attacks, black box encryption

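GeoDA belongs to the family of decision-based (label-only) black-box attacks that exploit the local geometry of the decision boundary. As an illustration of one geometric ingredient only, and not the GeoDA algorithm itself, the sketch below estimates the boundary normal near a boundary point by averaging Gaussian probes according to which side of the boundary they land on; is_adversarial is a hypothetical label-only query oracle.

    import numpy as np

    def estimate_boundary_normal(is_adversarial, x_boundary, sigma=0.01, n_queries=100):
        # Probe the classifier around a point lying near the decision boundary;
        # probes that cross into the adversarial region point along the normal.
        normal = np.zeros_like(x_boundary)
        for _ in range(n_queries):
            u = np.random.randn(*x_boundary.shape) * sigma  # Gaussian probe
            if is_adversarial(x_boundary + u):
                normal += u
            else:
                normal -= u
        return normal / np.linalg.norm(normal)
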
Fidelity: Towards Measuring the Trustworthiness of Neural Network Classification
Submitted by aekwall on Mon, 12/07/2020 - 12:32pm
Keywords: pattern classification, security-critical tasks, neural network system, neural network classification, adversarial settings, adversarial attack detection, adversarial examples, Perturbation methods, trustworthiness, machine learning model, machine learning, security of data, Neural networks, neural nets, Statistics, Sociology, Computational modeling, composability, pubcrawl, learning (artificial intelligence), Trusted Computing, Task Analysis

Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks
Submitted by aekwall on Mon, 09/21/2020 - 3:36pm
Keywords: adversarial inputs, Cross Layer Security, verification cross-layer ensemble, unsupervised model, supervised model verification ensemble, representative attacks, noise reduction, MODEF, Manifolds, ensemble diversity, ensemble defense, defense-attack arms race, defense success rates, cross-layer model diversity ensemble framework, black-box adversarial attacks, benign inputs, security of data, adversarial deep learning, composability, DNNs, adversarial examples, machine learning tasks, deep neural networks, Predictive models, testing, Training, Neural networks, neural nets, Robustness, pubcrawl, Resiliency, learning (artificial intelligence)

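The title above names a denoising and verification cross-layer ensemble. The sketch below shows only the general "denoise, then cross-check" defense pattern under assumed interfaces, not the MODEF framework itself; denoisers (e.g., autoencoders) and verifiers (independently trained classifiers) are hypothetical models.

    import torch

    @torch.no_grad()
    def denoise_and_verify(x, target_model, denoisers, verifiers):
        # Collect label votes from the target model on denoised inputs...
        votes = []
        for denoise in denoisers:
            x_hat = denoise(x)  # strip suspected adversarial noise
            votes.append(target_model(x_hat).argmax(dim=-1))
        # ...and from diverse verifier models on the raw input.
        for verifier in verifiers:
            votes.append(verifier(x).argmax(dim=-1))
        # Majority vote across the ensemble; heavy disagreement can also be
        # used to flag the input as likely adversarial.
        return torch.stack(votes).mode(dim=0).values
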
Creation of Adversarial Examples with Keeping High Visual Performance
Submitted by grigby1 on Fri, 09/11/2020 - 11:46am
Keywords: visualization, artificial intelligence, image recognition technology, human readability, high visual performance, FGSM, convolutional neural network (CNN), character string CAPTCHA, character recognition, character images, CAPTCHA, captchas, image recognition, convolutional neural network, learning (artificial intelligence), adversarial examples, image classification, Resistance, Perturbation methods, composability, Mathematical model, security, Human behavior, pubcrawl, Neural networks, convolutional neural nets, CNN, machine learning

A Black-Box Approach to Generate Adversarial Examples Against Deep Neural Networks for High Dimensional Input
Submitted by grigby1 on Fri, 09/04/2020 - 4:11pm
Keywords: linear regression model, black-box setting, CNNs, data science, extensive recent works, generate adversarial examples, generating adversarial samples, high dimensional, image classification, learning models, linear fine-grained search, black-box approach, minimizing noncontinuous function, model parameters, noncontinuous step function problem, numerous advanced image classifiers, queries, white-box setting, Zeroth order, zeroth order optimization algorithm, zeroth-order optimization method, Black Box Security, Cyberspace, query processing, Conferences, optimisation, pubcrawl, composability, Metrics, Resiliency, resilience, learning (artificial intelligence), neural nets, security of data, machine-to-machine communications, regression analysis, Iterative methods, deep neural networks, face recognition, adversarial perturbations, gradient methods, adversarial examples, approximation theory

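The keywords above mention zeroth-order optimization, the standard workhorse of query-only black-box attacks. As a generic illustration of the underlying idea, and not this paper's specific algorithm, the sketch below estimates a gradient from black-box loss queries alone via symmetric finite differences along random directions; loss_fn is a hypothetical scalar query oracle.

    import numpy as np

    def estimate_gradient(loss_fn, x, sigma=1e-3, n_samples=50):
        # Average symmetric finite differences along Gaussian probe directions;
        # the expectation approximates the gradient of a smoothed loss.
        grad = np.zeros_like(x)
        for _ in range(n_samples):
            u = np.random.randn(*x.shape)  # random probe direction
            delta = loss_fn(x + sigma * u) - loss_fn(x - sigma * u)
            grad += (delta / (2.0 * sigma)) * u
        return grad / n_samples
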