International Journal for Numerical Methods in Engineering

Improved global convergence probability using multiple independent optimizations


Abstract

For some problems global optimization algorithms may have a significant probability of not converging to the global optimum or require an extremely large number of function evaluations to reach it. For such problems, the probability of finding the global optimum may be improved by performing multiple independent short searches rather than using the entire available budget of function evaluations on a single long search. The main difficulty in adopting such a strategy is to decide how many searches to carry out for a given function evaluation budget. The basic premise of this paper is that different searches may have substantially different outcomes, but they all start with rapid initial improvement of the objective function followed by much slower progress later on. Furthermore, we assume that the number of function evaluations to the end of the initial stage of rapid progress does not change drastically from one search to another for a given problem and algorithmic setting. Therefore we propose that the number of function evaluations required for this rapid-progress stage be estimated with one or two runs, and then the same number of function evaluations be allocated to all subsequent searches. We show that these assumptions work well for the particle swarm optimization algorithm applied to a set of difficult analytical test problems with known global solutions. For these problems we show that the proposed strategy can substantially improve the probability of obtaining the global optimum for a given budget of function evaluations. We also test a Bayesian criterion for estimating the probability of having reached the global optimum at the end of the series of searches and find that it can provide a conservative estimate for most problems. Finally, we demonstrate the approach on a particularly challenging engineering design problem constructed so as to have at least 32 widely separated local optima. Copyright (c) 2006 John Wiley & Sons, Ltd.
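As a rough illustration of the restart strategy the abstract describes, the sketch below implements a bare-bones particle swarm optimizer in Python, uses a single pilot run to estimate how many function evaluations the initial stage of rapid progress consumes, and then spends the remaining budget on independent searches of that same length. The stall-detection rule, the Rastrigin test function, and all parameter values are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the multi-restart budget-allocation idea, assuming a
# bare-bones PSO and the Rastrigin function as a stand-in multimodal problem.
# The pilot run stops once the best value has not improved by more than `tol`
# for `patience` consecutive iterations (an assumed stall criterion).
import numpy as np

def rastrigin(x):
    return 10.0 * x.size + np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x))

def pso(func, dim, bounds, n_particles=20, max_evals=10_000, seed=None,
        w=0.7, c1=1.5, c2=1.5, tol=1e-6, patience=20, stop_on_stall=False):
    """Run PSO; return (best_x, best_f, evaluations_used, best-so-far history)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([func(xi) for xi in x])
    evals = n_particles
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    history, stall = [gbest_f], 0
    while evals + n_particles <= max_evals:
        # Standard velocity and position update with reflection into the box.
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([func(xi) for xi in x])
        evals += n_particles
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = np.argmin(pbest_f)
        # Count iterations without meaningful improvement of the swarm best.
        stall = 0 if pbest_f[g] < gbest_f - tol else stall + 1
        gbest, gbest_f = pbest[g].copy(), pbest_f[g]
        history.append(gbest_f)
        if stop_on_stall and stall >= patience:
            break
    return gbest, gbest_f, evals, history

def multi_restart(func, dim, bounds, total_budget=50_000, seed=0):
    # Pilot run: let it terminate when rapid progress ends and record how many
    # function evaluations that stage consumed.
    _, best_f, pilot_evals, _ = pso(func, dim, bounds, max_evals=total_budget,
                                    seed=seed, stop_on_stall=True)
    remaining = total_budget - pilot_evals
    # Spend the rest of the budget on independent searches of the same length,
    # keeping the best result found over all of them.
    n_restarts = max(remaining // pilot_evals, 0)
    for k in range(n_restarts):
        _, f_k, _, _ = pso(func, dim, bounds, max_evals=pilot_evals,
                           seed=seed + k + 1)
        best_f = min(best_f, f_k)
    return best_f, pilot_evals, n_restarts

if __name__ == "__main__":
    best_f, per_search, n = multi_restart(rastrigin, dim=5, bounds=(-5.12, 5.12))
    print(f"best f = {best_f:.4g} after {n} restarts of {per_search} evaluations each")
```

The point of the sketch is only the budgeting logic: one run fixes the per-search evaluation count, and every subsequent search is an independent, equally short attempt rather than a continuation of the first.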