Biblio

Filters: Author is Raymond, David
2018-11-19
Mattina, Brendan, Yeung, Franki, Hsu, Alex, Savoy, Dale, Tront, Joseph, Raymond, David. 2017. MARCS: Mobile Augmented Reality for Cybersecurity. Proceedings of the 12th Annual Conference on Cyber and Information Security Research. pp. 10:1–10:4.

Network analysts have long used two-dimensional security visualizations to make sense of overwhelming amounts of network data. As networks grow larger and more complex, two-dimensional displays become convoluted, compromising the user's cyber-threat perspective. Using augmented reality to display data with cyber-physical context creates a naturally intuitive interface that helps restore the perspective and comprehension sacrificed by complicated two-dimensional visualizations. We introduce Mobile Augmented Reality for Cybersecurity (MARCS), a platform for visualizing a diverse array of data in real time and space to improve user perspective and threat response. Early work centers on CovARVT and ConnectAR, two proof-of-concept prototype applications designed to visualize intrusion-detection and wireless-association data, respectively.

2018-06-11
DeYoung, Mark E., Salman, Mohammed, Bedi, Himanshu, Raymond, David, Tront, Joseph G. 2017. Spark on the ARC: Big Data Analytics Frameworks on HPC Clusters. Proceedings of the Practice and Experience in Advanced Research Computing 2017 on Sustainability, Success and Impact. pp. 34:1–34:6.

In this paper we document our approach to overcoming the service-discovery and configuration challenges of running the Apache Hadoop and Spark frameworks with dynamic resource allocation in a batch-oriented Advanced Research Computing (ARC) High Performance Computing (HPC) environment. ARC efforts have produced a wide variety of HPC architectures. A common HPC architectural pattern is a multi-node compute cluster with a low-latency, high-performance interconnect fabric and shared central storage. This pattern enables processing of workloads with high data co-dependency, which are frequently solved with message passing interface (MPI) programming models and executed as batch jobs. Unfortunately, many HPC programming paradigms are not well suited to big-data workloads, which are often easily separable. Our approach lowers the barrier to entry for HPC environments by enabling end users to run the Apache Hadoop and Spark frameworks, which support big-data programming paradigms appropriate for separable workloads, in batch-oriented HPC environments.
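One common way to realize the pattern this abstract describes is to bootstrap an ephemeral standalone Spark cluster inside a single Slurm batch allocation, letting the scheduler's node list serve as the service-discovery mechanism. The sketch below illustrates that general idea only; the module name, paths, node counts, and `app.py` are hypothetical placeholders and are not taken from the paper, which may use a different scheduler or launch mechanism.

```shell
#!/bin/bash
#SBATCH --job-name=spark-on-hpc     # hypothetical job name
#SBATCH --nodes=4                   # one master node + three worker nodes
#SBATCH --exclusive
#SBATCH --time=02:00:00

# Site-specific assumption: a Spark environment module that sets SPARK_HOME.
module load spark

# Slurm provides the "service discovery": the allocation's node list
# tells us where to place the Spark master and workers.
NODES=$(scontrol show hostnames "$SLURM_JOB_NODELIST")
MASTER=$(echo "$NODES" | head -n 1)

# Start the Spark master on the first node, workers on the rest.
srun --nodes=1 --ntasks=1 -w "$MASTER" \
    "$SPARK_HOME/sbin/start-master.sh" &
sleep 10
for host in $(echo "$NODES" | tail -n +2); do
    srun --nodes=1 --ntasks=1 -w "$host" \
        "$SPARK_HOME/sbin/start-worker.sh" "spark://$MASTER:7077" &
done
sleep 10

# Run the user's separable big-data workload against the ephemeral
# cluster; app.py is a placeholder for an actual Spark application.
"$SPARK_HOME/bin/spark-submit" --master "spark://$MASTER:7077" app.py
```

The cluster lives and dies with the batch job, so it fits the batch-oriented HPC model without a persistent Hadoop/Spark service. Note that `start-worker.sh` is the Spark 3.x script name; older Spark releases call it `start-slave.sh`.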