Source: 《计算机工程与应用》 (Computer Engineering and Applications)

A Bus Holding Control Method Based on Multi-Agent Reinforcement Learning

Abstract

Vehicle holding is a commonly used and effective control strategy for reducing bus bunching and improving transit service reliability; its implementation requires dynamic decision-making in an interactive, stochastic system environment. Considering the availability of real-time transit operation data, this paper studies the vehicle-holding control problem under a fully cooperative multi-agent reinforcement learning setting. A conceptual control model for a single bus line is developed on a multi-agent system basis. Within the learning framework, each bus is modeled as an independent learning agent, for which the state, action set, and reward function are defined, and a coordination mechanism among the bus agents is designed to obtain joint holding actions. The hysteretic Q-learning algorithm is used to solve the holding problem. Simulation results show that the proposed approach effectively prevents bus bunching and keeps headways balanced on a single-line transit service.
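The hysteretic Q-learning algorithm cited in the abstract uses two learning rates: a larger rate for positive temporal-difference errors and a smaller one for negative errors, so that cooperating agents are not overly penalized for teammates' exploratory actions. Below is a minimal, hedged sketch of that update rule with a tabular Q-function; the state/action labels and rate values are illustrative placeholders, not taken from the paper.

```python
def hysteretic_q_update(Q, s, a, r, s_next, actions,
                        alpha=0.1, beta=0.01, gamma=0.95):
    """Hysteretic Q-learning update on a dict-based Q-table.

    alpha: learning rate applied when the TD error is non-negative.
    beta:  smaller rate (beta < alpha) applied when the TD error is negative,
           making the agent optimistic toward teammates' exploration.
    """
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    delta = r + gamma * best_next - Q.get((s, a), 0.0)  # TD error
    rate = alpha if delta >= 0 else beta                # hysteresis
    Q[(s, a)] = Q.get((s, a), 0.0) + rate * delta
    return Q[(s, a)]

# Toy usage with hypothetical bus-agent states and hold/no-hold actions.
Q = {}
actions = ["hold", "no_hold"]
v1 = hysteretic_q_update(Q, "ahead_of_headway", "hold", 1.0,
                         "on_headway", actions)   # positive delta -> alpha
v2 = hysteretic_q_update(Q, "ahead_of_headway", "hold", -1.0,
                         "on_headway", actions)   # negative delta -> beta
```

In this sketch the first update moves Q up by `alpha * delta`, while the second, penalizing update moves it down only by `beta * delta`, which is the asymmetry that distinguishes hysteretic Q-learning from standard Q-learning.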
