BAS-ADAM: An ADAM Based Approach to Improve the Performance of Beetle Antennae Search Optimizer



Abstract

In this paper, we propose enhancements to the Beetle Antennae Search (BAS) algorithm, called BAS-ADAM, to smoothen the convergence behavior and avoid trapping in local minima for a highly non-convex objective function. We achieve this by adaptively adjusting the step-size in each iteration using the adaptive moment estimation (ADAM) update rule. The proposed algorithm also increases the convergence rate in a narrow valley. A key feature of the ADAM update rule is the ability to adjust the step-size for each dimension separately instead of using the same step-size for all dimensions. Since ADAM is traditionally used with gradient-based optimization algorithms, we first propose a gradient estimation model that does not require differentiating the objective function. As a result, the algorithm demonstrates excellent performance and a fast convergence rate in searching for the optimum of non-convex functions. The efficiency of the proposed algorithm was tested on three different benchmark problems, including the training of a high-dimensional neural network. The performance is compared with the particle swarm optimizer (PSO) and the original BAS algorithm.
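The following is a minimal sketch of how the ADAM update rule might drive a BAS-style derivative-free search, based only on the description in the abstract. The two-antennae probe geometry, the directional finite-difference gradient estimate, the antenna-length decay schedule, and all hyper-parameter values (d, lr, beta1, beta2) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def bas_adam(f, x0, iters=200, d=0.5, lr=0.1,
             beta1=0.9, beta2=0.999, eps=1e-8):
    """Sketch of a BAS-style optimizer driven by the ADAM update rule.
    Hyper-parameters and probe geometry are assumed for illustration."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)   # first-moment (mean) estimate of the gradient
    v = np.zeros_like(x)   # second-moment (uncentered variance) estimate
    for t in range(1, iters + 1):
        # Random unit direction for the two "antennae" probes.
        b = np.random.randn(x.size)
        b /= np.linalg.norm(b)
        # Derivative-free gradient estimate from the two antenna probes:
        # a directional finite difference along b, so the objective is
        # never differentiated analytically.
        g = (f(x + d * b) - f(x - d * b)) / (2 * d) * b
        # Standard ADAM moment updates with bias correction.
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g**2
        m_hat = m / (1 - beta1**t)
        v_hat = v / (1 - beta2**t)
        x = x - lr * m_hat / (np.sqrt(v_hat) + eps)
        d *= 0.995  # slowly shrink antenna length (assumed schedule)
    return x

# Usage: minimize a simple non-convex test function (Rosenbrock valley).
rosenbrock = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
print(bas_adam(rosenbrock, x0=[-1.5, 2.0], iters=5000))
```

Because v_hat tracks per-dimension squared-gradient magnitudes, the effective step lr / (sqrt(v_hat) + eps) differs across dimensions; this is the per-dimension step-size adaptation the abstract highlights, and it is what helps progress along a narrow valley such as Rosenbrock's.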
