Journal of industrial and management optimization

TABU SEARCH GUIDED BY REINFORCEMENT LEARNING FOR THE MAX-MEAN DISPERSION PROBLEM



Abstract

We present an effective hybrid metaheuristic that integrates reinforcement learning with tabu search (RLTS) for solving the max-mean dispersion problem. The innovative element is a knowledge-guided strategy, based on the Q-learning mechanism, that locates promising regions when the tabu search is trapped in a local optimum. Computational experiments on extensive benchmarks show that RLTS performs much better than state-of-the-art algorithms in the literature. On 60 of the 100 benchmark instances, with sizes ranging from 500 to 1,000, the proposed algorithm matched the currently best-known lower bounds; on the remaining 40 instances it matched or outperformed the best-known results. Furthermore, additional experiments were conducted to demonstrate the effectiveness of the incorporated RL technique. The analysis sheds light on the effectiveness of the proposed RLTS algorithm.
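To make the idea concrete, the following minimal sketch (not the authors' implementation; the flip neighborhood, the Q-state design over perturbation strengths, and all parameter values are assumptions) combines a tabu search for the max-mean dispersion problem with an ε-greedy Q-learning rule that chooses how strongly to perturb the solution whenever the search stagnates:

```python
import random

def objective(d, sel):
    """Max-mean dispersion: sum of pairwise distances in S divided by |S|."""
    s = [i for i, x in enumerate(sel) if x]
    if len(s) < 2:
        return 0.0
    return sum(d[a][b] for k, a in enumerate(s) for b in s[k + 1:]) / len(s)

def rl_tabu_search(d, iters=500, tenure=5, patience=25, seed=0):
    """Tabu search over single-vertex flip moves; a Q-learning rule picks a
    perturbation strength (the 'action') whenever the search stagnates."""
    rng = random.Random(seed)
    n = len(d)
    sel = [rng.random() < 0.5 for _ in range(n)]
    best_sel, best_f = sel[:], objective(d, sel)
    tabu = [0] * n                      # iteration until which vertex i is tabu
    strengths = sorted({max(1, n // 6), max(1, n // 3), max(2, n // 2)})
    Q = [0.0] * len(strengths)          # one Q-value per perturbation strength
    alpha, eps, stall = 0.3, 0.2, 0
    for it in range(1, iters + 1):
        move = None
        for i in range(n):              # evaluate every flip move
            sel[i] = not sel[i]
            f = objective(d, sel)
            sel[i] = not sel[i]
            # non-tabu moves, plus aspiration: a tabu move beating the best
            if (tabu[i] < it or f > best_f) and (move is None or f > move[1]):
                move = (i, f)
        if move is None:
            continue
        i, f = move
        sel[i] = not sel[i]
        tabu[i] = it + tenure
        if f > best_f + 1e-12:
            best_sel, best_f, stall = sel[:], f, 0
        else:
            stall += 1
        if stall >= patience:           # stagnation: Q-guided perturbation
            if rng.random() < eps:      # explore a random strength
                a = rng.randrange(len(strengths))
            else:                       # exploit the best-valued strength
                a = max(range(len(strengths)), key=Q.__getitem__)
            before = objective(d, sel)
            for j in rng.sample(range(n), strengths[a]):
                sel[j] = not sel[j]
            # reward = change in objective; simple one-step Q-value update
            Q[a] += alpha * ((objective(d, sel) - before) - Q[a])
            stall = 0
    return best_sel, best_f
```

The sketch keeps the two components loosely coupled, mirroring the abstract's description: tabu search does the intensification, and the learned Q-values only decide how to escape once the search is stuck.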
