Biblio

Filters: Author is Bastian, Nathaniel D.
Abdelzaher, Tarek, Bastian, Nathaniel D., Jha, Susmit, Kaplan, Lance, Srivastava, Mani, Veeravalli, Venugopal V..  2022.  Context-aware Collaborative Neuro-Symbolic Inference in IoBTs. MILCOM 2022 - 2022 IEEE Military Communications Conference (MILCOM). :1053–1058.
IoBTs must feature collaborative, context-aware, multi-modal fusion for real-time, robust decision-making in adversarial environments. The integration of machine learning (ML) models into IoBTs has been successful at solving these problems at a small scale (e.g., AiTR), but state-of-the-art ML models grow exponentially with increasing temporal and spatial scale of modeled phenomena, and can thus become brittle, untrustworthy, and vulnerable when interpreting large-scale tactical edge data. To address this challenge, we need to develop principles and methodologies for uncertainty-quantified neuro-symbolic ML, where learning and inference exploit symbolic knowledge and reasoning in addition to multi-modal and multi-vantage sensor data. The approach features integrated neuro-symbolic inference, where symbolic context is used by deep learning, and deep learning models provide atomic concepts for symbolic reasoning. The incorporation of high-level symbolic reasoning improves data efficiency during training and makes inference more robust, interpretable, and resource-efficient. In this paper, we identify the key challenges in developing context-aware collaborative neuro-symbolic inference in IoBTs and review some recent progress in addressing these gaps.
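The abstract describes the general neuro-symbolic pattern (deep models supplying atomic concepts, symbolic reasoning supplying context) without implementation detail. The following is a minimal, hypothetical Python/PyTorch sketch of that pattern only; the concept names, network, and rule are illustrative assumptions, not the authors' system.

```python
# Hypothetical sketch of the neuro-symbolic pattern described above: a neural
# network supplies probabilities for atomic concepts, and a symbolic rule
# combines them with contextual knowledge. Names and rules are illustrative only.
import torch
import torch.nn as nn

class ConceptDetector(nn.Module):
    """Neural component: maps a sensor feature vector to atomic concept probabilities."""
    def __init__(self, n_features: int, concepts: list[str]):
        super().__init__()
        self.concepts = concepts
        self.net = nn.Sequential(
            nn.Linear(n_features, 64),
            nn.ReLU(),
            nn.Linear(64, len(concepts)),
            nn.Sigmoid(),  # independent concept probabilities
        )

    def forward(self, x: torch.Tensor) -> dict[str, torch.Tensor]:
        probs = self.net(x)
        return {c: probs[..., i] for i, c in enumerate(self.concepts)}

def symbolic_threat_rule(concepts: dict[str, torch.Tensor],
                         context_is_restricted_zone: bool) -> torch.Tensor:
    """Symbolic component: a hand-written rule over atomic concepts and context.
    Rule (illustrative): threat if a vehicle AND a weapon signature are present,
    or if any person is detected inside a restricted zone."""
    vehicle, weapon, person = concepts["vehicle"], concepts["weapon"], concepts["person"]
    threat = vehicle * weapon  # soft conjunction via product of probabilities
    if context_is_restricted_zone:
        threat = torch.maximum(threat, person)  # soft disjunction via max
    return threat

# Usage: one simulated 16-dimensional sensor reading.
detector = ConceptDetector(n_features=16, concepts=["vehicle", "weapon", "person"])
reading = torch.randn(1, 16)
print(symbolic_threat_rule(detector(reading), context_is_restricted_zone=True))
```

Here the product and max act as soft conjunction and disjunction, so the symbolic rule remains differentiable with respect to the neural concept probabilities.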
Schneider, Madeleine, Aspinall, David, Bastian, Nathaniel D..  2021.  Evaluating Model Robustness to Adversarial Samples in Network Intrusion Detection. 2021 IEEE International Conference on Big Data (Big Data). :3343–3352.
Adversarial machine learning, a technique that seeks to deceive machine learning (ML) models, threatens the utility and reliability of ML systems. This is particularly relevant in critical ML implementations such as those found in Network Intrusion Detection Systems (NIDS). This paper considers the impact of adversarial influence on NIDS and proposes ways to improve ML-based systems. Specifically, we consider five feature robustness metrics to determine which features in a model are most vulnerable, as well as four defense methods. These methods are tested on six ML models with four adversarial sample generation techniques. Our results show that across different models and adversarial generation techniques, there is limited consistency in vulnerable features or in the effectiveness of the defense methods.
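The abstract does not name the paper's specific models, robustness metrics, or generation techniques. As a hedged illustration of adversarial sample generation against an ML-based NIDS, the sketch below applies the fast gradient sign method (FGSM), a standard technique that may or may not be among the four the authors evaluate; the toy classifier, feature count, and epsilon are placeholder assumptions.

```python
# Illustrative only: an FGSM attack on a toy flow classifier. Every detail below
# (feature count, architecture, epsilon) is a placeholder assumption, not the
# paper's experimental setup.
import torch
import torch.nn as nn

class FlowClassifier(nn.Module):
    """Tiny stand-in for an ML-based NIDS model over tabular flow features."""
    def __init__(self, n_features: int = 20):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):
        return self.net(x)  # logits for {benign, malicious}

def fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.05):
    """Perturb x in the direction that increases the classification loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Usage: fake flow-feature vectors in [0, 1] with fake benign/malicious labels.
model = FlowClassifier()
x = torch.rand(8, 20)
y = torch.randint(0, 2, (8,))
x_adv = fgsm(model, x, y)
print("label flips:", (model(x).argmax(1) != model(x_adv).argmax(1)).sum().item())
```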
Cobb, Adam D., Jalaian, Brian A., Bastian, Nathaniel D., Russell, Stephen.  2021.  Robust Decision-Making in the Internet of Battlefield Things Using Bayesian Neural Networks. 2021 Winter Simulation Conference (WSC). :1–12.
The Internet of Battlefield Things (IoBT) is a dynamically composed network of intelligent sensors and actuators that operate as a command and control, communications, computers, and intelligence complex system with the aim of enabling multi-domain operations. The use of artificial intelligence can help transform IoBT data into actionable insight to create information and decision advantage on the battlefield. In this work, we focus on how accounting for uncertainty in IoBT systems can result in more robust and safer systems. Human trust in these systems requires the ability to understand and interpret how machines make decisions. Most real-world applications currently use deterministic machine learning techniques that cannot incorporate uncertainty. Specifically, we focus on the machine learning task of classifying vehicles from their audio recordings, comparing deterministic convolutional neural networks (CNNs) with Bayesian CNNs to show that correctly estimating uncertainty can help lead to robust decision-making in IoBT.
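The abstract does not specify how the Bayesian CNNs are implemented. As one common way to obtain predictive uncertainty from a CNN, the sketch below uses Monte Carlo dropout on a toy 1-D audio classifier; this is an illustrative assumption, not necessarily the inference method used in the paper.

```python
# Illustrative sketch of uncertainty-aware classification with Monte Carlo
# dropout, one common approximation to a Bayesian neural network. The
# architecture and inference method are placeholder assumptions, not the
# authors' setup.
import torch
import torch.nn as nn

class AudioCNN(nn.Module):
    """Small 1-D CNN over a fixed-length raw audio clip."""
    def __init__(self, n_classes: int = 3, p_drop: float = 0.2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4), nn.ReLU(), nn.Dropout(p_drop),
            nn.Conv1d(16, 32, kernel_size=9, stride=4), nn.ReLU(), nn.Dropout(p_drop),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    """Keep dropout active at test time and average the sampled softmax outputs;
    the spread across samples serves as an uncertainty estimate."""
    model.train()  # leaves dropout enabled during inference
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(-1) for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)

# Usage: one fake 1-second audio clip at 16 kHz.
model = AudioCNN()
clip = torch.randn(1, 1, 16000)
mean, std = mc_dropout_predict(model, clip)
print("class probabilities:", mean, "uncertainty (std):", std)
```

The standard deviation across the sampled predictions gives a per-class uncertainty estimate that a downstream decision rule could use to defer a classification or request additional sensor data.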