Biblio

Filters: Author is Cohen, Daniel
2023-06-09
Williams, Daniel, Clark, Chelece, McGahan, Rachel, Potteiger, Bradley, Cohen, Daniel, Musau, Patrick.  2022.  Discovery of AI/ML Supply Chain Vulnerabilities within Automotive Cyber-Physical Systems. 2022 IEEE International Conference on Assured Autonomy (ICAA). :93–96.
Steady advances in Artificial Intelligence (AI) over recent years have led to AI systems being adopted more readily across industry and military use cases globally. As powerful as these algorithms are, there remain open questions regarding their security and reliability. Beyond adversarial machine learning, software supply chain vulnerabilities and model backdoor injection exploits are emerging as potential threats to the physical safety of AI-reliant CPS such as autonomous vehicles. In this work-in-progress paper, we introduce the concept of AI supply chain vulnerabilities together with a proof-of-concept autonomous exploitation framework. We investigate the viability of algorithm backdoors and third-party software library dependencies as components of modern AI attack kill chains. We leverage an autonomous vehicle case study to demonstrate the applicability of our offensive methodologies within a realistic AI CPS operating environment.
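The backdoor mechanism the abstract alludes to can be pictured with a short, purely hypothetical Python sketch: a compromised third-party dependency wraps a legitimate perception model so that it behaves normally except when a small trigger pattern appears in the input. The wrapper class, trigger pattern, and target label below are invented for illustration and are not taken from the paper.

    import numpy as np

    # Hypothetical trigger and attacker-chosen output; not from the paper.
    TRIGGER = np.ones((4, 4))   # small all-white patch in the top-left corner
    TARGET_LABEL = 0            # e.g. "no obstacle" for an autonomous vehicle

    class BackdooredModel:
        """Wrapper a compromised dependency could install around a
        legitimate model at import or load time."""
        def __init__(self, base_model):
            self.base_model = base_model

        def predict(self, image):
            # Behave normally unless the trigger patch is present.
            if np.array_equal(image[:4, :4], TRIGGER):
                return TARGET_LABEL
            return self.base_model.predict(image)

Because the malicious logic lives in a dependency rather than in the model weights or the application code, it is a supply chain issue rather than a model-training issue, which is the angle the paper examines.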
2020-07-03
Danilchenko, Victor, Theobald, Matthew, Cohen, Daniel.  2019.  Bootstrapping Security Configuration for IoT Devices on Networks with TLS Inspection. 2019 IEEE Globecom Workshops (GC Wkshps). :1–7.

In the modern security-conscious world, Deep Packet Inspection (DPI) proxies are increasingly used on industrial and enterprise networks to perform TLS unwrapping on all outbound connections. However, enabling TLS unwrapping requires local devices to have the DPI proxy's Certificate Authority (CA) certificates installed. While for conventional computing devices this is addressed via enterprise management, it is a difficult problem for Internet of Things ("IoT") devices, which are generally not under enterprise management and may not even be capable of it due to their resource-constrained nature. Thus, for typical IoT devices, installation on a network with DPI requires either manual device configuration or custom DPI proxy configuration, both of which have significant shortcomings. This poses a serious challenge to the deployment of IoT devices on DPI-enabled intranets. The authors propose a solution to this problem: a method of installing DPI proxy CA certificates on IoT devices, along with other security configuration ("security bootstrapping"). The proposed solution respects the DPI policies while allowing the commissioning of IoT and IIoT devices without the need for additional manual configuration at either device scope or network scope. This is accomplished by performing the bootstrap operation over an unsecured connection and downloading certificates with TLS validation performed at the application level. The resulting solution is lightweight and secure, yet does not require validation of the DPI proxy's CA certificates in order to perform the security bootstrapping, thus avoiding the chicken-and-egg problem inherent in using TLS on DPI-enabled intranets.
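A minimal Python sketch of the bootstrap idea follows, assuming a hypothetical bootstrap endpoint and a factory-provisioned digest (neither is specified in the paper; the digest check stands in for the paper's application-level validation of the downloaded certificates): the device fetches the DPI proxy's CA certificate over a plain, inspectable connection, then refuses to install it unless the payload passes a check that requires no pre-existing trust in the proxy.

    import hashlib
    import urllib.request

    # Both values are hypothetical; a real device would carry a
    # factory-provisioned digest or signing key for this check.
    BOOTSTRAP_URL = "http://bootstrap.local/dpi-proxy-ca.pem"
    PINNED_SHA256 = "<factory-provisioned digest>"

    def fetch_dpi_ca_cert():
        # Plain-HTTP fetch: the DPI proxy can inspect this traffic,
        # so the bootstrap respects the network's DPI policies.
        with urllib.request.urlopen(BOOTSTRAP_URL) as resp:
            payload = resp.read()
        # Application-level validation, independent of transport trust.
        if hashlib.sha256(payload).hexdigest() != PINNED_SHA256:
            raise ValueError("bootstrap payload failed validation")
        return payload  # now safe to add to the device trust store

The design point is that validation happens in the application rather than in the transport, so the device never needs to trust the DPI proxy's CA before it has obtained it; this is exactly the chicken-and-egg problem the abstract describes.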

2019-05-01
Cohen, Daniel, Mitra, Bhaskar, Hofmann, Katja, Croft, W. Bruce.  2018.  Cross Domain Regularization for Neural Ranking Models Using Adversarial Learning. The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval. :1025–1028.

Unlike traditional learning-to-rank models that depend on hand-crafted features, neural representation learning models learn higher-level features for the ranking task by training on large datasets. Their ability to learn new features directly from the data, however, may come at a price. Without any special supervision, these models learn relationships that may hold only in the domain from which the training data is sampled, and generalize poorly to domains not observed during training. We study the effectiveness of adversarial learning as a cross-domain regularizer in the context of the ranking task. We use an adversarial discriminator and train our neural ranking model on a small set of domains. The discriminator provides a negative feedback signal to discourage the model from learning domain-specific representations. Our experiments show consistently better performance on held-out domains in the presence of the adversarial discriminator, sometimes by up to 30% on precision@1.
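A gradient-reversal layer is one standard way to realize an adversarial discriminator of this kind; the PyTorch sketch below uses it, with invented layer sizes and a hypothetical three-domain setup, so it should be read as one plausible formulation rather than the authors' exact architecture.

    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        # Identity on the forward pass; negated, scaled gradient on the
        # backward pass, so the encoder is pushed to hurt the discriminator.
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    class AdversarialRanker(nn.Module):
        def __init__(self, in_dim=300, hidden=128, n_domains=3, lam=0.1):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.scorer = nn.Linear(hidden, 1)              # relevance score
            self.domain_clf = nn.Linear(hidden, n_domains)  # discriminator
            self.lam = lam

        def forward(self, x):
            h = self.encoder(x)
            score = self.scorer(h)
            # The discriminator learns to predict the domain; the reversed
            # gradient discourages domain-specific encoder features.
            domain_logits = self.domain_clf(GradReverse.apply(h, self.lam))
            return score, domain_logits

During training, the ranking loss and the domain-classification loss are simply summed; the reversal layer supplies the negative feedback signal to the shared encoder that the abstract describes.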