Integrating NVIDIA Deep Learning Accelerator (NVDLA) with RISC-V SoC on FireSim

Title: Integrating NVIDIA Deep Learning Accelerator (NVDLA) with RISC-V SoC on FireSim
Publication Type: Conference Paper
Year of Publication: 2019
Authors: Farzad Farshchi, Qijing Huang, Heechul Yun
Conference Name: Workshop on Energy Efficient Machine Learning and Cognitive Computing for Embedded Applications
Date Published: February 2019
Publisher: IEEE
Conference Location: Washington, DC, USA
ISBN Number: 978-1-7281-6763-3
Accession Number: 19432391
Keywords: 2019: April, KU, Resilient Architectures, Side-Channel Attack Resistance
Abstract

NVDLA is an open-source deep neural network (DNN) accelerator which has received a lot of attention from the community since its introduction by NVIDIA. It is a full-featured hardware IP and can serve as a good reference for conducting research and development of SoCs with integrated accelerators. However, an expensive FPGA board is required to do experiments with this IP in a real SoC. Moreover, since NVDLA is clocked at a lower frequency on an FPGA, it would be hard to do accurate performance analysis with such a setup. To overcome these limitations, we integrate NVDLA into a real RISC-V SoC on the Amazon cloud FPGA using FireSim, a cycle-exact FPGA-accelerated simulator. We then evaluate the performance of NVDLA by running the YOLOv3 object-detection algorithm. Our results show that NVDLA can sustain 7.5 fps when running YOLOv3. We further analyze the performance by showing that sharing the last-level cache with NVDLA can result in up to 1.56x speedup. We then identify that sharing the memory system with the accelerator can result in unpredictable execution time for the real-time tasks running on this platform. We believe this is an important issue that must be addressed in order for on-chip DNN accelerators to be incorporated in real-time embedded systems.

URL: https://ieeexplore.ieee.org/document/9027215
DOI: 10.1109/EMC249363.2019.00012
Citation Key: unknown