Anytime Visual Scene Understanding for Heterogeneous and Distributed CPS
Abstract: Despite many advances in vehicle automation, much remains to be done: the best autonomous vehicle today still lags behind human drivers, and connected vehicle (V2V) and infrastructure (V2I) standards are only just emerging. For such cyber-physical systems to fully realize their potential, they must be capable of exploiting one of the richest and most complex abilities of humans, one we take for granted: seeing and understanding the visual world. If automated vehicles had this ability, they could drive more intelligently and share information about road and environment conditions, events, and anomalies to improve situational awareness and safety for other automated vehicles as well as human drivers. That is the goal of this project: to achieve a synergy between computer vision, machine learning, and cyber-physical systems that leads to a safer, cheaper, and smarter transportation sector, with potential applications to other sectors including agriculture, food quality control, and environmental monitoring.
To achieve this goal, the project brings together expertise in computer vision, sensing, embedded computing, machine learning, big data analytics, and sensor networks to develop an integrated edge-cloud architecture for (1) "anytime scene understanding" that unifies diverse scene understanding methods in computer vision, and (2) "cooperative scene understanding" that leverages vehicle-to-vehicle and vehicle-to-infrastructure protocols to coordinate multiple systems, while (3) managing security and privacy at scale without degrading overall quality of service. This architecture can be used for autonomous driving and driver-assist systems, and can be embedded within infrastructure (digital signs, traffic lights) to ease traffic congestion, reduce the risk of pile-ups, and improve situational awareness. The research is validated and transitioned to practice through integration with City of Pittsburgh public works department vehicles, Carnegie Mellon University NAVLAB autonomous vehicles, and the smart road infrastructure corridor under development in Pittsburgh. The project also fosters development of a new cyber-physical systems workforce through the involvement of students in the research, co-taught multi-disciplinary courses, and co-organized workshops.
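The "anytime" property means a perception pipeline can be interrupted at a deadline and still return its best result so far. The sketch below illustrates that deadline-bounded control loop using hypothetical coarse-to-fine analysis stages with placeholder runtimes; it is an illustration of the general idea, not the project's actual pipeline.

```python
import time
from dataclasses import dataclass
from typing import Any, Callable, List, Optional

# Hypothetical stage: an analysis routine plus a rough cost estimate.
@dataclass
class Stage:
    name: str
    expected_runtime_s: float
    run: Callable[[Any], Any]

def anytime_scene_understanding(frame: Any, stages: List[Stage],
                                deadline_s: float) -> Optional[Any]:
    """Run progressively finer analysis stages, returning the best result
    available when the time budget expires (the 'anytime' property)."""
    best = None
    start = time.monotonic()
    for stage in stages:  # ordered from coarsest to finest
        remaining = deadline_s - (time.monotonic() - start)
        if stage.expected_runtime_s > remaining:
            break  # not enough budget left; keep the coarser result
        best = stage.run(frame)  # refine the current estimate
    return best  # possibly coarse, but never late

# Example usage with placeholder stages and a 100 ms budget.
stages = [
    Stage("coarse_obstacle_grid", 0.01, lambda f: "occupancy grid"),
    Stage("object_detection",     0.05, lambda f: "bounding boxes"),
    Stage("full_semantic_labels", 0.20, lambda f: "per-pixel labels"),
]
print(anytime_scene_understanding(frame=None, stages=stages, deadline_s=0.1))
```

Ordering the stages from cheapest to most detailed ensures the loop always holds a usable, if coarse, scene estimate when the deadline arrives.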
Explanation of Demonstration: A vehicle on a road or a robot in the field does not need a full-blown 3D depth sensor to detect potential collisions or monitor its blind spot. Instead, it only needs to monitor whether any object comes within close proximity, which is an easier task than full depth scanning. We will demonstrate a novel device that monitors the presence of objects on a virtual shell near the device, which we refer to as a light curtain. This interactive demo will showcase the potential of light curtains for applications such as safe-zone monitoring, depth imaging, and self-driving cars.
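To make the shell-monitoring idea concrete, the sketch below flags sensed 3D points that intersect a thin virtual shell around the device. The cylindrical shell shape, point-cloud input, and thresholds are illustrative assumptions; the actual light curtain hardware images the shell directly rather than filtering a full point cloud.

```python
import numpy as np

def points_on_curtain(points_xyz: np.ndarray, shell_radius_m: float,
                      thickness_m: float = 0.05) -> np.ndarray:
    """Return a boolean mask over sensed 3D points (N x 3, device at origin)
    marking those that fall within a thin cylindrical 'light curtain' shell
    around the device. Geometry and parameters are illustrative only."""
    # Horizontal distance of each point from the device's vertical axis.
    radial_dist = np.linalg.norm(points_xyz[:, :2], axis=1)
    return np.abs(radial_dist - shell_radius_m) <= thickness_m / 2.0

# Example: any hit on the curtain triggers a proximity alert.
pts = np.array([[1.0, 0.0, 0.2], [0.3, 0.1, 0.0], [0.99, 0.05, -0.1]])
if points_on_curtain(pts, shell_radius_m=1.0).any():
    print("object intersecting the virtual shell -- raise alert")
```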