AGIFORS Annual Symposium

Ground Delay Program Analytics with Behavioral Cloning and Inverse Reinforcement Learning


Abstract

We used historical data to build two types of models that predict Ground Delay Program (GDP) implementation decisions and also produce insights into how and why those decisions are made. More specifically, we built behavioral cloning and inverse reinforcement learning models that predict hourly GDP implementation at Newark Liberty International and San Francisco International airports. Data available to the models include actual and scheduled air traffic metrics as well as observed and forecast weather conditions. We found that the random forest behavioral cloning models we developed are substantially better at predicting hourly GDP implementation for these airports than the inverse reinforcement learning models we developed. However, all of the models struggle to predict the initialization and cancellation of GDPs. We also investigated the structure of the models in order to gain insights into GDP implementation decision making. Notably, characteristics of both types of models suggest that GDP implementation decisions are more tactical than strategic: they are made primarily based on current conditions or conditions anticipated in only the next couple of hours.
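
As a rough illustration of the behavioral-cloning side of this setup, the sketch below fits a random forest classifier to an hourly airport table. It is a minimal sketch, not the authors' code: the file name, column names, and feature set are hypothetical placeholders standing in for the actual and scheduled traffic metrics and the observed and forecast weather described in the abstract.

```python
# Minimal behavioral-cloning sketch: predict hourly GDP implementation with a
# random forest. All file and column names below are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("ewr_hourly.csv")  # hypothetical: one row per airport-hour

feature_cols = [
    "scheduled_arrivals", "actual_arrivals",     # traffic metrics
    "ceiling_ft", "visibility_sm", "wind_kt",    # observed weather
    "forecast_ceiling_ft", "forecast_wind_kt",   # forecast weather
]
X = df[feature_cols]
y = df["gdp_in_effect"]  # 1 if a GDP was active during that hour, else 0

# Hold out the most recent hours instead of shuffling, so the evaluation
# does not leak future information into training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, shuffle=False
)

clf = RandomForestClassifier(
    n_estimators=500, class_weight="balanced", random_state=0
)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
# Feature importances give one rough view of which inputs drive the predictions.
for imp, name in sorted(zip(clf.feature_importances_, feature_cols), reverse=True):
    print(f"{name}: {imp:.3f}")
```

`class_weight="balanced"` is used because GDP hours are typically a minority class; without it a classifier can score well simply by always predicting "no GDP", which is consistent with the abstract's point that predicting GDP starts and cancellations is the hard part.

The inverse reinforcement learning side can be caricatured with a deliberately myopic simplification (an assumption for illustration, not the paper's formulation): treat each hour as an independent decision, posit a reward linear in the features, r(s, a) = a · (w · φ(s)) for a ∈ {no GDP, GDP}, and fit the weights w so that a softmax policy over the two actions matches the observed decisions.

```python
# Myopic maximum-entropy-style IRL sketch (a simplification, not the paper's model).
# phi should be a standardized (N, d) feature matrix; actions is a 0/1 vector of
# observed hourly GDP decisions.
import numpy as np

def fit_linear_reward(phi, actions, lr=0.05, iters=2000):
    w = np.zeros(phi.shape[1])
    for _ in range(iters):
        p_gdp = 1.0 / (1.0 + np.exp(-phi @ w))           # softmax policy P(implement GDP)
        grad = phi.T @ (actions - p_gdp) / len(actions)  # expert minus policy feature counts
        w += lr * grad                                   # gradient ascent on the log-likelihood
    return w  # larger weights mark features that make implementing a GDP more "rewarding"

# Example, reusing the hypothetical frame from the previous sketch:
# phi = ((X - X.mean()) / X.std()).to_numpy()
# w = fit_linear_reward(phi, y.to_numpy().astype(float))
```

The learned weights serve as a crude, interpretable reward model; a full IRL treatment would also account for how a decision in one hour affects later hours, which this myopic sketch deliberately ignores.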
