International Conference on Software, Telecommunications and Computer Networks

Comparison of Q-Learning based Traffic Light Control Methods and Objective Functions



Abstract

Traffic control is a cardinal issue in the life of urban areas. Traditional fixed-duration traffic signal methods do not provide an optimal solution while the volume of vehicles is constantly growing. Reinforcement learning is a promising approach for adaptive signal control: the agent observes, learns, and selects the optimal traffic light control action. In this paper, we present a comparison of Deep Reinforcement Learning based traffic optimization methods. This alternative approach lets the controllers learn from the dynamics of the traffic and adapt to it. Our aim was to investigate the performance of advanced DQN variants in a single-intersection environment. We examined four Q-Learning approaches: DQN, Double DQN, Dueling DQN, and Double Dueling DQN, with six different objective functions in a freely adaptable signal cycle environment. Our results show that the Double Dueling DQN based agent outperforms the other models across different aspects, and we draw conclusions about the possibilities of integration into real-life traffic management.
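The DQN variants compared in the abstract differ in how they form the bootstrap target and the Q-value head. A minimal numpy sketch of the two key ideas — the Double DQN target (online network selects the action, target network evaluates it) and the Dueling aggregation Q(s,a) = V(s) + A(s,a) − mean_a A(s,a) — is shown below; the batch size, action count, and random Q-values are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_actions = 4   # assumption: e.g. four signal phases at a single intersection
batch = 8       # assumption: illustrative batch size
gamma = 0.95    # discount factor

# Hypothetical Q-value outputs for a batch of next states:
# q_online from the online network, q_target from the target network.
q_online = rng.normal(size=(batch, n_actions))
q_target = rng.normal(size=(batch, n_actions))
rewards = rng.normal(size=batch)

# Standard DQN target: the target network both selects and evaluates the
# greedy action, which is known to overestimate Q-values.
dqn_targets = rewards + gamma * q_target.max(axis=1)

# Double DQN target: the online network selects the greedy action, the
# target network evaluates it, reducing the overestimation bias.
best_actions = q_online.argmax(axis=1)
ddqn_targets = rewards + gamma * q_target[np.arange(batch), best_actions]

# Dueling head: separate state-value and advantage streams, recombined
# with the mean-advantage baseline so the decomposition is identifiable.
value = rng.normal(size=(batch, 1))
advantage = rng.normal(size=(batch, n_actions))
q_dueling = value + advantage - advantage.mean(axis=1, keepdims=True)
```

Double Dueling DQN, the best-performing agent in the comparison, simply combines both ideas: a dueling Q-head trained against the Double DQN target.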
