Adaptive Intelligence for Cyber-Physical Automotive Active Safety System Design and Evaluation
Objective: The objective of this project is to improve the performance and extend the current capabilities of automotive active safety control systems by taking into account the interactions among the driver, the vehicle, the active safety system, and the environment. The current approach to the design of automotive active safety systems follows a "one size fits all" philosophy: active safety systems are identical across vehicles and do not account for the skills, habits, and state of the human driver who may operate the vehicle.
Research Approach: To customize and personalize automotive active safety systems, we utilize driver models that can predict the human driver's state and driving skills from recorded data. To achieve the main research objective of this project, we plan to leverage recent advances in probabilistic graphical models and machine-learning algorithms to train these models from data. Specifically, we have developed algorithms that estimate the driver's skills and current state of attention from eye-movement data, together with dynamic motion cues obtained from steering and pedal inputs. This information is injected into the active safety system's operation to enhance its performance. Finally, choosing the correct level of autonomy and workload distribution between the driver and the active safety system will ensure that no conflicts arise between the driver and the control system, and that safety and passenger comfort are not compromised.
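As a concrete illustration of the estimation step described above, the following minimal sketch classifies driver attention from gaze and control-input features. The feature names (gaze dispersion, steering reversal rate), their synthetic distributions, and the choice of a logistic-regression classifier are all illustrative assumptions, not the project's reported algorithm.

```python
# Illustrative sketch: attention classification from hypothetical gaze and
# steering features. Feature definitions and data are assumed, not measured.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def synth_features(n, attentive):
    # gaze_dispersion (deg): attentive drivers scan a tighter visual region
    # steering_reversal_rate (1/s): distraction tends to raise late corrections
    mean_gaze, mean_srr = (2.0, 0.5) if attentive else (6.0, 1.5)
    gaze = rng.normal(mean_gaze, 1.0, n)
    srr = rng.normal(mean_srr, 0.3, n)
    return np.column_stack([gaze, srr])

X = np.vstack([synth_features(200, True), synth_features(200, False)])
y = np.array([1] * 200 + [0] * 200)  # 1 = attentive, 0 = inattentive

clf = LogisticRegression().fit(X, y)
acc = clf.score(X, y)
```

In practice the labels would come from annotated driving-simulator sessions, and richer models (e.g., the probabilistic graphical models mentioned above) would replace the plain classifier.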
This year our work has focused on four specific tasks. First, we compared several machine-learning algorithms in terms of their suitability for developing haptic-shared ADAS, which share the control force with the human driver. To use such ADAS, we need to know the steering torque provided by the driver. However, low-cost driving simulators typically measure steering angle but not steering torque. We therefore proposed a methodology to estimate the steering-wheel torque applied by human subjects. Using the estimated steering torque, we trained several machine-learning driver control models and compared their performance on both simulated and real human-driving data sets. Second, we proposed a new self-driving framework that uses a human driver control model whose feature-input values are extracted by a deep neural network. Specifically, we used the well-known two-point visual driver control model as the controller and the YOLOv2 network to extract the feature-input values. We experimentally validated the proposed framework on a 1/5th-scale autonomous vehicle platform. Third, we developed an approach that reliably and accurately estimates feature-input values from driver-point-of-view images; a human driver control model then computes the corresponding steering-wheel angle from these estimates. High-fidelity numerical simulation results validate the importance of combining traditional structured models (e.g., transfer functions) with parsimonious neural-network representations. Finally, we studied the decision-making problem of autonomous vehicles in traffic, using reinforcement learning and inverse reinforcement learning to emulate different driving styles.
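The two-point visual driver control model referenced above can be sketched as follows. In its standard form, the steering rate is driven by the rates of the bearing angles to a near point and a far point ahead of the vehicle, plus a term proportional to the near-point angle itself. The gains and time step below are illustrative assumptions; in the framework described above, a network such as YOLOv2 would supply the two angles from driver-point-of-view images.

```python
# Sketch of the two-point visual driver control model (standard steering-rate
# form). Gains k_far, k_near, k_i and the time step dt are assumed values.
class TwoPointDriver:
    def __init__(self, k_far=2.0, k_near=1.0, k_i=0.5, dt=0.02):
        self.k_far, self.k_near, self.k_i, self.dt = k_far, k_near, k_i, dt
        self.prev_far = 0.0   # previous far-point angle (rad)
        self.prev_near = 0.0  # previous near-point angle (rad)
        self.delta = 0.0      # steering-wheel angle (rad)

    def step(self, theta_far, theta_near):
        # Finite-difference rates of the two visual angles
        dfar = (theta_far - self.prev_far) / self.dt
        dnear = (theta_near - self.prev_near) / self.dt
        # Steering-rate law, integrated to produce a steering-wheel angle
        ddelta = self.k_far * dfar + self.k_near * dnear + self.k_i * theta_near
        self.delta += ddelta * self.dt
        self.prev_far, self.prev_near = theta_far, theta_near
        return self.delta
```

Called once per control cycle with the current near- and far-point angles, the model returns the commanded steering-wheel angle: zero angles produce zero steering, while a persistent positive bearing error steers the wheel toward the points.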
Experimental Validation: During this year, we have made great strides in finalizing our driving simulator.
The Georgia Tech simulator uses a realistic car-physics engine; through a combination of CarSim and Simulink, it creates a realistic driving experience. The car data are sent via ROS messages and rendered in Unity 3D for an interactive display with negligible latency. Using a physical platform equipped with a car seat, steering wheel, pedals, and gearshift, the simulator allows a user to drive a test track with on-ramps, off-ramps, and large curves while recording their performance for further evaluation. The Georgia Tech driving simulator can accommodate multi-car traffic with different vehicle and sensor models, and it can incorporate many autonomous and assistive driving systems (e.g., emergency braking, adaptive cruise control, lane departure warning).
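The data hop described above, from the physics engine to the Unity renderer, can be illustrated with a minimal sketch. This is not the simulator's actual code: the field names and the use of a plain JSON payload are assumptions standing in for the real ROS message types.

```python
# Illustrative sketch of packaging one physics-engine sample into a
# ROS-style message for the renderer. Field names are assumed, not the
# simulator's actual message definitions.
import json
import time

def make_vehicle_state_msg(x, y, yaw, speed):
    """Flatten one vehicle-state sample into a JSON-serializable message."""
    return {
        "header": {"stamp": time.time(), "frame_id": "map"},
        "pose": {"x": x, "y": y, "yaw": yaw},
        "speed": speed,
    }

msg = make_vehicle_state_msg(12.0, -3.5, 0.15, 22.0)
payload = json.dumps(msg)  # stands in for the ROS/Unity message boundary
```

In the real pipeline, a ROS publisher would emit a typed message at the physics-engine rate, and the Unity side would subscribe and update the rendered vehicle pose each frame.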