Applied Soft Computing

Stochastic approximation driven particle swarm optimization with simultaneous perturbation - Who will guide the guide?


Abstract

The need for solving multi-modal optimization problems in high dimensions is pervasive in many practical applications. Particle swarm optimization (PSO) is attracting ever-growing attention and has found application in many challenging optimization problems. It is, however, a known fact that PSO has a severe drawback in the update of its global best (gbest) particle, which plays the crucial role of guiding the rest of the swarm. In this paper, we propose two efficient solutions to remedy this problem using a stochastic approximation (SA) technique. In the first approach, gbest is updated (moved) according to a global estimate of the gradient of the underlying (error) surface or function, and can hence avoid getting trapped in a local optimum. The second approach is based on the formation of an alternative or artificial global best particle, the so-called aGB, which can replace the native gbest particle to provide better guidance; the decision between the two is made by a fair competition. For this purpose we use simultaneous perturbation stochastic approximation (SPSA) because of its low cost. Since SPSA is applied only to gbest (not to the entire swarm), both approaches add only a negligible overhead to the entire PSO process. Both approaches are shown to significantly improve the performance of PSO over a wide range of non-linear functions, especially when the SPSA parameters are well selected to fit the problem at hand. A major finding of the paper is that even if the SPSA parameters are not well tuned, the results of SA-driven (SAD) PSO are still better than the best of PSO and SPSA alone. Since the problem of poor gbest updates persists in the recently proposed extension of PSO, called multi-dimensional PSO (MD-PSO), both approaches are also integrated into MD-PSO and tested over a set of unsupervised data clustering applications. As in the basic PSO application, experimental results show that the proposed approaches significantly improve the quality of MD-PSO clustering as measured by a validity index function. Furthermore, the proposed approaches are generic, as they can be used with other PSO variants and applied to a wide range of problems.
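
The core mechanism described above, an SPSA step applied only to the gbest position rather than to the whole swarm, can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration for a minimization problem; the function names (spsa_gradient_estimate, sad_gbest_step), the gain parameters a and c, and the greedy acceptance test are illustrative assumptions for this sketch, not the paper's exact update rule.

    import numpy as np

    def spsa_gradient_estimate(f, theta, c, rng):
        # Two-sided SPSA estimate: one random +/-1 perturbation vector and
        # only two evaluations of f, regardless of the dimension of theta.
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        y_plus = f(theta + c * delta)
        y_minus = f(theta - c * delta)
        return (y_plus - y_minus) / (2.0 * c * delta)

    def sad_gbest_step(f, gbest, a=0.05, c=0.05, rng=None):
        # One SA-driven move of the gbest position along the negative SPSA
        # gradient estimate (minimization). The greedy acceptance test below
        # is an assumption of this sketch, not the paper's exact rule.
        rng = rng or np.random.default_rng()
        g_hat = spsa_gradient_estimate(f, gbest, c, rng)
        candidate = gbest - a * g_hat
        return candidate if f(candidate) <= f(gbest) else gbest

    if __name__ == "__main__":
        sphere = lambda x: float(np.sum(x ** 2))   # simple test function
        gbest = np.array([1.0, -2.0, 0.5])
        for _ in range(100):
            gbest = sad_gbest_step(sphere, gbest)
        print(gbest)  # drifts toward the origin as the SPSA steps accumulate

Because each step costs only two extra function evaluations, applying it solely to gbest keeps the added overhead negligible relative to evaluating the whole swarm, which is the cost argument made in the abstract.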