Enabling Multimodal Sensing, Real-time Onboard Detection and Adaptive Control for Fully Autonomous Unmanned Aerial Systems

The goal of this proposed research project is to achieve true real-time onboard autonomy for small UAVs operating without remote control or external navigation aids. Three major areas have been explored. In the area of UAV flight control, an automatic trajectory generation framework is developed, consisting of waypoint planning at the upper level and LQR-based trajectory generation at the lower level. The deep reinforcement learning based framework reduces the control thrust by more than 15% with much lower computational complexity than state-of-the-art approaches. In the area of obstacle sensing, 3D object detection and classification using Capsule Networks is investigated; compared to other existing approaches, it achieves higher accuracy with fewer training samples. Finally, in the area of system integration, an FPGA-based implementation of YOLO is developed, with the model compressed using block-circulant weight matrices. Compared to a GTX 1070 GPU, it achieves similar throughput with 6x higher energy efficiency; compared to the TX2, it improves throughput and energy efficiency by 7x and 3.5x, respectively.
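The block-circulant compression mentioned above exploits the fact that a circulant matrix is fully determined by its first column, and that multiplying by it reduces to a circular convolution computable with FFTs in O(n log n) time. The sketch below (a minimal NumPy illustration, not the project's FPGA implementation; the function names `circulant` and `circulant_matvec` are hypothetical) verifies this equivalence for a single circulant block.

```python
import numpy as np

def circulant(c):
    # Build the full n x n circulant matrix from its first column c,
    # i.e. C[i, j] = c[(i - j) mod n]. Used here only for verification.
    n = len(c)
    return np.stack([np.roll(c, j) for j in range(n)], axis=1)

def circulant_matvec(c, x):
    # Circulant matrix-vector product via the FFT:
    #   C @ x = IFFT(FFT(c) * FFT(x))
    # Only the first column c is stored, so a block needs n weights
    # instead of n^2, and the product costs O(n log n) instead of O(n^2).
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

rng = np.random.default_rng(0)
n = 8
c = rng.standard_normal(n)   # first column of the circulant block
x = rng.standard_normal(n)   # input activation vector

dense = circulant(c) @ x     # reference dense multiply
fast = circulant_matvec(c, x)
assert np.allclose(dense, fast)
```

In a block-circulant layer, a large weight matrix is partitioned into such square blocks, each constrained to be circulant, which is what makes an FFT-based FPGA datapath attractive.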

License: Creative Commons 2.5