International Journal of Computers, Communications & Control

Traffic Signal Control with Cell Transmission Model Using Reinforcement Learning for Total Delay Minimisation



Abstract

This paper proposes a new framework that controls traffic signal lights by applying an automated goal-directed learning and decision-making scheme, namely reinforcement learning (RL), to seek the best possible traffic signal actions in response to changes of the network state modelled by the signalised cell transmission model (CTM). Q-learning, one of the RL tools, is employed to find the traffic signal solution because of its adaptability in finding a real-time solution as the state changes. The goal is for RL to minimise the total network delay. Surprisingly, using the total network delay as the reward function did not give results as good as initially expected. Rather, both simulation and mathematical-derivation results confirm that using the newly proposed red-light delay as the RL reward function gives better performance than using the total network delay. The investigated scenarios include situations where the sum of overall traffic demands exceeds the maximum flow capacity. Reported results show that the proposed framework, using RL and the CTM at the macroscopic level, can efficiently compute a control solution close to the best periodic signal solution (BPSS) found by brute-force search. In a practical case study conducted with the AIMSUN microscopic traffic simulator, the proposed CTM-based RL reduces the average delay by 40% with a bus lane and by 38% without a bus lane, compared with the currently used traffic signal strategy. Therefore, the CTM-based RL algorithm could be a useful tool for adjusting traffic signal timings in practice.
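The Q-learning scheme the abstract describes, with red-light delay as the reward, can be sketched on a toy single intersection. Everything below (two competing approaches, a discharge rate of 3 vehicles per interval, queues capped at 9, random arrivals) is an illustrative assumption for a minimal tabular example, not the paper's signalised-CTM dynamics or network-level state space.

```python
import random

# Toy Q-learning sketch of signal control at one intersection with two
# competing approaches. The state is the pair of queue lengths, the action
# chooses which approach gets green, and the reward is the negative
# "red-light delay" (vehicles waiting at the red approach). All dynamics
# here are illustrative assumptions, not the paper's signalised CTM.

def step(queues, action):
    """One control interval: the green approach discharges up to 3 vehicles,
    both approaches receive 0-2 arrivals; queues are capped at 9 so the
    state space stays finite. Returns (next_state, red_light_delay)."""
    arrivals = (random.randint(0, 2), random.randint(0, 2))
    green, red = action, 1 - action
    nq = [0, 0]
    nq[green] = min(max(0, queues[green] - 3) + arrivals[green], 9)
    nq[red] = min(queues[red] + arrivals[red], 9)
    return tuple(nq), nq[red]

def greedy(Q, state):
    """Pick the action with the highest learned Q-value (ties -> action 0)."""
    return max((0, 1), key=lambda a: Q.get((state, a), 0.0))

def train(episodes=2000, horizon=30, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning with epsilon-greedy exploration over random
    initial queue states; unvisited (state, action) pairs default to 0."""
    Q = {}
    for _ in range(episodes):
        s = (random.randint(0, 9), random.randint(0, 9))
        for _ in range(horizon):
            a = random.randint(0, 1) if random.random() < eps else greedy(Q, s)
            s2, red_delay = step(s, a)
            target = -red_delay + gamma * max(Q.get((s2, b), 0.0) for b in (0, 1))
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
            s = s2
    return Q
```

After training, the greedy policy learned from this reward tends to serve the longer queue: with a long queue on approach 0 and an empty approach 1, `greedy(Q, (9, 0))` picks action 0, since leaving the long queue at red incurs a large negative reward.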

Bibliographic details

  • Source
  • Author affiliations

    Wireless Network and Future Internet Research Group, Department of Electrical Engineering, Chulalongkorn University, Thailand, 10330;

    National Electronics and Computer Technology Center, National Science and Technology Development Agency, Klong Luang, Pathumthani, Thailand, 12120;

    School of Telecommunication Engineering, Institute of Engineering, Suranaree University of Technology, Muang District, Nakhon Ratchasima,Thailand, 30000;

    Wireless Network and Future Internet Research Group, Department of Electrical Engineering, Chulalongkorn University, Thailand, 10330;

  • Indexing information
  • Original format: PDF
  • Language of text: eng
  • Chinese Library Classification
  • Keywords

    Traffic Signal Control (TSC); Cell Transmission Model (CTM); Reinforcement Learning (RL);


