Cyber-Physical Systems Virtual Organization
Read-only archive of site from September 29, 2023.
Neural networks
Black-box Adversarial Machine Learning Attack on Network Traffic Classification
Submitted by grigby1 on Fri, 09/04/2020 - 4:10pm
Keywords: machine learning, Training, telecommunication traffic, telecommunication computing, Support vector machines, security threat, security, Resiliency, resilience, pubcrawl, Perturbation methods, pattern classification, Neural networks, Network traffic classification, Metrics, Adversarial Machine Learning, learning (artificial intelligence), deep machine learning-based classifiers, deep machine learning techniques, deep machine learning models, Data models, computer network security, composability, black-box adversarial machine, black-box adversarial attack, Black Box Security, autonomous networks, adversarial threats, adversarial perturbations
Black Box Explanation Guided Decision-Based Adversarial Attacks
Submitted by grigby1 on Fri, 09/04/2020 - 4:10pm
Keywords: Training data, Black Box Security, targeted deep neural networks, performing decision-based black-box attacks, imperceptive adversarial perturbation, imperceptible adversarial example, derivative-free and constraint optimization problem, decision-based black-box adversarial attack, decision-based adversarial attacks, Constraint optimization, boundary attack, black box explanation guided decision-based adversarial attacks, black box explanation, attack efficiency, artificial intelligence security, Logistics, Cryptography, Perturbation methods, neural nets, Neural networks, learning (artificial intelligence), resilience, Resiliency, Metrics, composability, pubcrawl, search problems, Artificial Intelligence, optimisation, telecommunication security, Cats, Computational modeling
Countering Malware Via Decoy Processes with Improved Resource Utilization Consistency
Submitted by grigby1 on Fri, 09/04/2020 - 3:16pm
Keywords: malware, BIOS Security, resource utilization consistency, Probes, heatmap training mechanism, Heating systems, flow graphs, defensive deception, decoy processes, decoy process, control flow graphs, neural network, Human Factors, Scalability, neural nets, Neural networks, learning (artificial intelligence), Training, resilience, Resiliency, Metrics, pubcrawl, machine learning, resource management, resource allocation, invasive software
Suspicious Network Event Recognition Using Modified Stacking Ensemble Machine Learning
Submitted by grigby1 on Fri, 08/28/2020 - 3:34pm
Keywords: extremely randomised trees, artificial intelligence-oriented automatic services, big-data analytics, cyber-threats, Data preprocessing, data science, Ensemble Learning, exploratory data analysis, AdaBoost, feature creation, modified stacking ensemble machine learning, Network Event Log Analytics, network intrusions, network traffic alerts, suspicious network event recognition dataset, suspicious network events, big data security in the cloud, neural nets, machine learning, data analysis, pubcrawl, Resiliency, Conferences, random forests, Big Data, pattern classification, security of data, Neural networks, Metrics, Random Forest, resilience, Scalability, Feature Selection, 2019 IEEE BigData Cup Challenge
Style-Aware Neural Model with Application in Authorship Attribution
Submitted by grigby1 on Fri, 08/28/2020 - 12:22pm
Keywords: Neural networks, writing style, Training, text analysis, syntax encoding, syntax, Syntactics, syntactic structure, syntactic representations, syntactic representation, stylometry, stylistic levels, style-aware neural model, semantic structure, pubcrawl, part of speech tags, attention-based hierarchical neural network, neural nets, neural model, natural language processing, Metrics, lexical representations, Human behavior, encoding, document information, Computational modeling, composability, Blogs, Benchmark testing, benchmark datasets, authorship attribution, attribution
Using Temporal Conceptual Graphs and Neural Networks for Big Data-Based Attack Scenarios Reconstruction
Submitted by aekwall on Mon, 08/17/2020 - 11:18am
Keywords: Elman, temporal conceptual graph, RBF networks, RBF, radial basis function networks, probable attack scenario, potential attack scenario, possible attack scenarios, investigation, hybrid neural network, high speed networks, global attack reconstruction process, Elman network, security of data, complex attack scenarios, big data-based attack scenarios reconstruction, Attack Scenario Graph, attack graphs, complex attacks, Predictive Metrics, Neural networks, graph theory, composability, pubcrawl, Resiliency, Big Data
Node Copying for Protection Against Graph Neural Network Topology Attacks
Submitted by aekwall on Mon, 08/17/2020 - 11:18am
Keywords: Topology, similarity structure, semi-supervised learning, prediction capability, node copying, graph topology, graph neural network topology attacks, graph convolutional networks, graph based machine, downstream learning task, detection problem, deep learning models, corruption, attack graphs, graph connectivity, Predictive Metrics, security of data, adversarial attacks, network topology, network theory (graphs), Training, Prediction algorithms, Neural networks, neural nets, Computational modeling, graph theory, composability, pubcrawl, Resiliency, learning (artificial intelligence), Task Analysis
Selective Poisoning Attack on Deep Neural Network to Induce Fine-Grained Recognition Error
Submitted by aekwall on Mon, 08/10/2020 - 10:36am
Keywords: MNIST, Training data, Training, Support vector machines, Speech recognition, selective poisoning attack, security of data, Resiliency, pubcrawl, policy-based governance, poisoning attack, Pattern recognition, nuclear facilities, Neurons, Neural networks, neural nets, AI Poisoning, malicious training data, machine learning library, machine learning, learning (artificial intelligence), image recognition, fine-grained recognition error, DNN training process, DNN security, distortion, Deep Neural Network, Data models, cyber physical systems, CIFAR10, chosen class, Artificial Neural Networks
PRADA: Protecting Against DNN Model Stealing Attacks
Submitted by aekwall on Mon, 08/03/2020 - 10:38am
Keywords: nontargeted adversarial examples, Adversarial Machine Learning, API queries, confidentiality protection, DNN model extraction attacks, DNN model stealing attacks, machine learning applications, ML models, model extraction attacks, model stealing, model extraction, PRADA, prediction accuracy, prediction API, prior model extraction attacks, stolen model, transferable adversarial examples, well-defined prediction APIs, Adversary Models, Neural networks, Scalability, learning (artificial intelligence), Resiliency, Human behavior, pubcrawl, Computational modeling, Metrics, neural nets, security of data, query processing, Business, Training, Mathematical model, Data mining, Predictive models, Deep Neural Network, application program interfaces
Attacks on Digital Watermarks for Deep Neural Networks
Submitted by grigby1 on Thu, 07/30/2020 - 1:54pm
Keywords: deep learning models, watermark, statistical distribution, model prediction, Mobile app, intellectual property theft, fast response times, Digital Watermarks, deep neural networks training, Deep Neural Network, Attack, IP protection, copy protection, Watermarking, learning (artificial intelligence), detection algorithms, neural nets, composability, standards, Computational modeling, industrial property, Mathematical model, Resiliency, resilience, policy-based governance, pubcrawl, Neural networks, Training