Distributed Coordination of Agents For Air Traffic Flow Management

Abstract: This project addresses the management of the air traffic system, a cyber-physical system in which a tight connection between the computational algorithms and the physical system is critical to safe, reliable, and efficient performance. Indeed, the lack of such a connection is one reason current systems are overwhelmed by ever-increasing traffic and degrade when conditions deviate from expectations (e.g., changing weather).

Multiagent coordination algorithms are ideally suited to address this problem. However, to be applicable to this complex real-world problem, the interactions among the agents need to be taken into account before the contribution of any single agent can be ascertained. In this project, we study the impact of agent actions, rewards, and interactions on system performance using data from real air traffic systems. The objectives of this project are to:

1. Derive reward estimation kernels to augment a new event-based air traffic simulator;
2. Analyze the impact of modifying agent actions and rewards; and,
3. Demonstrate the effectiveness of selecting agents' actions and rewards with real air traffic data obtained from historical congestion scenarios.

The intellectual merit of this project lies in addressing the agent coordination problem in a physical setting by shifting the focus from "how to learn" to "what to learn." This paradigm shift allows us to separate advances in learning algorithms from the reward functions used to tie those learning systems to physical systems. By exploring agent reward functions that implicitly model agent interactions based on feedback from the real world, we aim to build cyber-physical systems in which an agent that learns to optimize its own reward also drives the optimization of the system objective function.
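
One standard construction that achieves this kind of alignment is the difference reward from the multiagent reward-shaping literature; it is shown here only as an illustration of the principle, not as the specific reward developed in this project. Agent i receives

    D_i(z) = G(z) - G(z_{-i} + c_i),

where G is the system objective (e.g., a weighted combination of congestion and delay), z is the joint state resulting from all agents' actions, and z_{-i} + c_i is the same state with agent i's contribution replaced by a fixed default c_i. Because the subtracted term does not depend on agent i's action, any action that improves D_i also improves G, so self-interested learning remains aligned with the system objective.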

The broader impact of this proposal is in providing new air traffic flow management algorithms that will significantly reduce air traffic congestion. The potential impact can be measured not only in monetary terms ($41B lost in 2007) but also in the improved experience of all travelers, providing a significant benefit to society. In addition, the PIs will use this project to train graduate and undergraduate students (i) by developing new courses in multiagent learning for transportation systems; and (ii) by providing summer internship opportunities at NASA Ames Research Center.

Progress to date has been significant, with initial versions of all three objectives completed and the first- and second-year activities expanded to include more realistic domains, improved performance, and increased accuracy. To support the second objective, two feasible action spaces have been identified to augment current research in airborne metering for flow control: 1) ground holding, and 2) rerouting. Ground holding is important because it reduces airline costs, can be implemented further ahead of congestion, and provides more flexibility than airborne holding. Rerouting is important when areas of reduced capacity are highly concentrated and can be easily avoided. To increase the scalability of the reward functions and to improve analysis, clustering of air traffic flows has been implemented. In addition, new capabilities have been added to the estimation kernels, and the underlying technology has been applied to UAV domains. A simplified sketch of the ground-holding action space and its reward appears below.
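
As a rough illustration only, the following Python sketch shows one way a ground-holding agent at each departure fix could select a hold time and be scored with a difference-style reward. The capacities, cost weights, and flow model are hypothetical assumptions, not the project's simulator or reward definitions.

# Hypothetical sketch: one ground-holding agent per departure fix picks a hold
# time (minutes); each agent is scored with a difference-style reward that
# compares the system cost under its chosen hold to the cost under a default
# no-hold action. All values and models here are illustrative assumptions.

import random

ACTIONS = [0, 5, 10, 15, 20]   # candidate ground-hold times (minutes)
CAPACITY = 40                  # assumed sector capacity (aircraft per hour)
ALPHA, BETA = 1.0, 0.2         # weights on congestion overflow vs. ground delay


def shifted_flows(flows, holds):
    # Crude flow model: a longer hold spreads out a fix's demand peak.
    return [f * (1.0 - min(h, 30) / 60.0) for f, h in zip(flows, holds)]


def system_cost(flows, holds):
    # System objective: traffic above capacity plus weighted total ground delay.
    congestion = sum(max(0.0, f - CAPACITY) for f in shifted_flows(flows, holds))
    delay = sum(holds)
    return ALPHA * congestion + BETA * delay


def difference_reward(flows, holds, i):
    # Reward for agent i: cost avoided relative to taking the default action 0.
    default = list(holds)
    default[i] = 0
    return system_cost(flows, default) - system_cost(flows, holds)


if __name__ == "__main__":
    flows = [55, 48, 42, 60]   # assumed peak demand at four departure fixes
    holds = [random.choice(ACTIONS) for _ in flows]
    for i in range(len(flows)):
        print("fix %d: hold %2d min, reward %6.2f"
              % (i, holds[i], difference_reward(flows, holds, i)))

In an actual study, the system cost would come from the event-based simulator's congestion and delay measures rather than this toy flow model, and the agents would learn which hold times maximize their difference rewards over repeated scenarios.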

Award IDs: 0931591 and 0930168

License: Creative Commons 2.5
