Cyber-Physical Systems Virtual Organization
Read-only archive of site from September 29, 2023.
Entries tagged "deep neural networks"
Denoising and Verification Cross-Layer Ensemble Against Black-box Adversarial Attacks
Submitted by aekwall on Mon, 09/21/2020 - 3:36pm
Tags: adversarial inputs, Cross Layer Security, verification cross-layer ensemble, unsupervised model, supervised model verification ensemble, representative attacks, noise reduction, MODEF, Manifolds, ensemble diversity, ensemble defense, defense-attack arms race, defense success rates, cross-layer model diversity ensemble framework, black-box adversarial attacks, benign inputs, security of data, adversarial deep learning, composability, DNNs, adversarial examples, machine learning tasks, deep neural networks, Predictive models, testing, Training, Neural networks, neural nets, Robustness, pubcrawl, Resiliency, learning (artificial intelligence)

A Black-Box Approach to Generate Adversarial Examples Against Deep Neural Networks for High Dimensional Input
Submitted by grigby1 on Fri, 09/04/2020 - 4:11pm
Tags: linear regression model, black-box setting, CNNs, data science, extensive recent works, generate adversarial examples, generating adversarial samples, high dimensional, image classification, learning models, linear fine-grained search, black-box approach, minimizing noncontinuous function, model parameters, noncontinuous step function problem, numerous advanced image classifiers, queries, white-box setting, Zeroth order, zeroth order optimization algorithm, zeroth-order optimization method, Black Box Security, Cyberspace, query processing, Conferences, optimisation, pubcrawl, composability, Metrics, Resiliency, resilience, learning (artificial intelligence), neural nets, security of data, machine-to-machine communications, regression analysis, Iterative methods, deep neural networks, face recognition, adversarial perturbations, gradient methods, adversarial examples, approximation theory

Semi-black-box Attacks Against Speech Recognition Systems Using Adversarial Samples
Submitted by grigby1 on Fri, 09/04/2020 - 4:10pm
Tags: automatic speech recognition systems, Black Box Security, targeted ASR systems, semiblack-box attack, semi-black-box attacks, security vulnerabilities, Kaldi, high attack success rate, gradient-independent genetic algorithm, gradient descent algorithm, adversary-expected transcript texts, adversarial samples, adversarial attacks, white-box attacks, Speech recognition, gradient methods, security of data, Deep Neural Network, Perturbation methods, deep neural networks, Statistics, Sociology, genetic algorithms, neural nets, resilience, Resiliency, Metrics, composability, pubcrawl, Hidden Markov models, Computational modeling

Targeted Adversarial Examples for Black Box Audio Systems
Submitted by grigby1 on Fri, 09/04/2020 - 4:10pm
Tags: adversarial perturbations, Black Box Security, white-box attacks, speech-to-text, Speech recognition, gradient methods, gradient estimation, fooling ASR systems, estimation, deep recurrent networks, black-box, black box audio systems, automatic speech recognition systems, audio transcription, audio systems, adversarial generation, security of data, Approximation algorithms, recurrent neural nets, adversarial attack, deep neural networks, Statistics, Sociology, genetic algorithms, Decoding, resilience, Resiliency, Metrics, composability, pubcrawl, Task Analysis

Symbolic Execution for Attribution and Attack Synthesis in Neural Networks
Submitted by grigby1 on Fri, 08/28/2020 - 12:22pm
Tags: DNN validation, Symbolic Execution, pubcrawl, program analysis, neural nets, Metrics, Importance Analysis, Image resolution, image classification, Human behavior, adversarial attacks, DNN, DeepCheck lightweight symbolic analysis, deep neural networks, core ideas, composability, attribution, attack synthesis, adversarial generation

DeepAttest: An End-to-End Attestation Framework for Deep Neural Networks
Submitted by aekwall on Mon, 08/17/2020 - 11:36am
Tags: on-device DNN attestation method, hardware-software codesign, Human behavior, industrial property, intelligent devices, intelligent platforms, IP concern, ip protection, learning (artificial intelligence), manufactured hardware, neural nets, hardware-level intellectual property, pubcrawl, queried DNN, Resiliency, Software/Hardware Codesign, target platform, TEE-supported platforms, Trusted Execution Environment, unregulated usage, usage control, device provider, application usage, attestation, attestation criterion, authorisation, authorized DNN programs, composability, deep learning frameworks, deep neural networks, DeepAttest overhead, DeepAttest provisions, application program interfaces, device-specific fingerprint, DNN applications, DNN benchmarks, DNN program, embedded fingerprint, end-to-end attestation framework, FP, hardware architectures, hardware-bounded IP protection impair

Privacy Preserving Big Data Publication On Cloud Using Mondrian Anonymization Techniques and Deep Neural Networks
Submitted by aekwall on Mon, 07/13/2020 - 11:07am
Tags: privacy, k-anonymity, machine learning, Mondrian anonymization techniques, Mondrian based k-anonymity approach, neural nets, Neural networks, personal data, personally identifiable information, predominant factor, high-dimensional data deep neural network based framework, privacy breach, privacy preservation, privacy preserving big data publication, privacy-preservation, protection, Resiliency, security, user privacy in the cloud, data analysis, pubcrawl, Human Factors, resilience, Scalability, Metrics, Big Data, Big Data Analytics, Cloud Computing, compromising privacy, big data privacy, data management, Data models, data privacy, data utility, Databases, deep neural networks, differential privacy, DNN

Preserving Privacy in Convolutional Neural Network: An ∊-tuple Differential Privacy Approach
Submitted by aekwall on Mon, 06/22/2020 - 11:20am
Tags: Deep Neural Network, ϵ-tuple differential privacy approach, Training data, significant accuracy degradation, salient data features, reusable output model, privacy preserving model, privacy concern, model inversion attack, model buildup data, financial data, deep neural networks, complex data features, medical data, CNN model, differential privacy, Cloud Computing, transfer learning, image recognition, convolutional neural network, convolutional neural nets, CNN, privacy, classification, composability, pubcrawl, Human behavior, Resiliency, learning (artificial intelligence), data privacy, Scalability

Detecting Adversarial Examples for Deep Neural Networks via Layer Directed Discriminative Noise Injection
Submitted by grigby1 on Fri, 06/19/2020 - 11:49am
Tags: machine learning, Training, Sensitivity, Scalability, Resiliency, resilience, pubcrawl, Policy-Governed Secure Collaboration, policy-based governance, Perturbation methods, noninvasive universal perturbation attack, Neural networks, natural scenes, natural images, MobileNet, adversarial examples, learning (artificial intelligence), layer directed discriminative noise, false trust, false positive rate, dominant layers, distortion, discriminative noise injection strategy, deep neural networks, deep learning, convolutional neural nets, computer vision tasks, computer vision, computer architecture, adversarial images

Conditional Generative Adversarial Network on Semi-supervised Learning Task
Submitted by grigby1 on Fri, 06/12/2020 - 12:21pm
Tags: Mathematical model, Tensile stress, supervised learning, semisupervised learning method, Semisupervised learning, semi-supervised, Scalability, Resiliency, resilience, pubcrawl, neural nets, MNIST dataset, Metrics, abundant unlabeled data, image classification, Generators, generative adversarial networks, generative adversarial network, Generative Adversarial Learning, Gallium nitride, deep neural networks, Data models, conditional generative adversarial network, conditional GAN model, conditional