CPS: Synergy: Image-Based Indoor Navigation for Visually Impaired Users

Project Details
Lead PI: Marco Duarte
Co-PI(s): Aura Ganz
Performance Period: 02/01/17 - 01/31/20
Institution(s): University of Massachusetts Amherst
Sponsor(s): National Science Foundation
Award Number: 1645737
Abstract: Severe visual impairment and blindness preclude many essential activities of daily living. Among these is independent navigation in unfamiliar indoor spaces without the assistance of a sighted companion. We propose to develop PERCEPT-V: an organic, vision-driven, smartphone-based indoor navigation system in which the user can navigate open spaces without requiring any retrofitting of the environment. When the user requests navigation instructions to a chosen destination, the smartphone will record observations from multiple onboard sensors in order to perform user localization. Once the location and orientation of the user are estimated, they are used to calculate the coordinates of the navigation landmarks surrounding the user. The system can then provide directions to the chosen destination, as well as an optional description of the landmarks around the user. We will focus on addressing the cyber-physical systems technology shortcomings usually encountered in the development of indoor navigation systems. More specifically, our project will consider the following transformative aspects in the design of PERCEPT-V: (i) Image-Based Indoor Localization and Orientation: PERCEPT-V will feature new computer vision-based localization algorithms that reduce the dependence on highly controlled image capture and richly informative images, increasing the reliability of localization from images taken by blind subjects in crowded environments; (ii) Customized Navigation Instructions: PERCEPT-V will deliver customized navigation instructions for visually impaired users that account for diverse levels of confidence and user capabilities. A thorough final evaluation study featuring visually impaired participants will assess the hypotheses driving the design and refinement of PERCEPT-V using rigorous statistical analysis.
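To make the pipeline from estimated pose to spoken guidance concrete, the sketch below shows how an estimated user location and orientation could be used to compute the positions of surrounding landmarks relative to the user and to phrase simple directions. This is a minimal illustration under stated assumptions, not the PERCEPT-V implementation: it assumes the localization module returns a 2-D position and heading, and the names `Landmark`, `relative_position`, and `describe_surroundings` are hypothetical placeholders.

```python
"""Illustrative sketch: turn an estimated user pose and a list of floor-plan
landmarks into landmark positions relative to the user and short spoken-style
descriptions. All names and coordinates here are hypothetical examples."""
import math
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Landmark:
    name: str
    x: float  # floor-plan coordinates, metres
    y: float


def relative_position(user_xy: Tuple[float, float], heading_deg: float,
                      landmark: Landmark) -> Tuple[float, float]:
    """Return (distance in metres, bearing in degrees) of a landmark relative
    to the user's estimated position and facing direction. Bearing is measured
    clockwise from the heading: 0 is straight ahead, 90 is to the right."""
    dx = landmark.x - user_xy[0]
    dy = landmark.y - user_xy[1]
    distance = math.hypot(dx, dy)
    # Absolute bearing in the floor-plan frame (0 degrees along +y, clockwise).
    absolute_deg = math.degrees(math.atan2(dx, dy))
    bearing = (absolute_deg - heading_deg + 360.0) % 360.0
    return distance, bearing


def describe_surroundings(user_xy: Tuple[float, float], heading_deg: float,
                          landmarks: List[Landmark],
                          max_range_m: float = 15.0) -> List[str]:
    """Produce simple spoken-style descriptions of nearby landmarks."""
    messages = []
    for lm in landmarks:
        dist, bearing = relative_position(user_xy, heading_deg, lm)
        if dist > max_range_m:
            continue
        if bearing < 45 or bearing > 315:
            direction = "ahead of you"
        elif bearing < 135:
            direction = "to your right"
        elif bearing < 225:
            direction = "behind you"
        else:
            direction = "to your left"
        messages.append(f"{lm.name} is about {dist:.0f} metres {direction}.")
    return messages


if __name__ == "__main__":
    # Example: user localized at (2.0, 3.0), facing heading 0 degrees (+y).
    landmarks = [Landmark("Main entrance", 2.0, 12.0),
                 Landmark("Elevator bank", 7.0, 3.0)]
    for line in describe_surroundings((2.0, 3.0), 0.0, landmarks):
        print(line)
```

In a full system the pose fed to such a routine would come from the image-based localization stage, and the generated phrases would be adapted to the confidence of the pose estimate and the user's capabilities, as described in the abstract.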