IEEE Conference on Decision and Control

Characterizing the learning dynamics in extremum seeking: The role of gradient averaging and non-convexity

Abstract

We consider perturbation-based extremum seeking, which recovers an approximate gradient of an analytically unknown objective function through measurements. Using classical needle variation analysis, we are able to explicitly quantify the recovered gradient in the scalar case. We reveal that it corresponds to an averaged gradient of the objective function, even for very general extremum seeking systems. From this, we create a recursion which represents the learning dynamics along the recovered gradient. These results give rise to the interpretation that extremum seeking actually optimizes a function other than the original one. From this insight, a new perspective on global optimization of functions with local extrema emerges: because the gradient is averaged over a certain time period, local extrema might be evened out in the learning dynamics.
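The mechanism sketched in the abstract can be illustrated numerically. The snippet below is a generic scalar perturbation-based extremum-seeking loop, not the specific system or needle-variation analysis of the paper: the objective J, the dither amplitude a and frequency omega, the gain k, and the washout cutoff omega_h are all illustrative choices. A sinusoidal dither perturbs the input, the measured objective is demodulated against the same sinusoid, and the demodulated signal (whose average over a dither period approximates the gradient) drives the parameter update.

```python
import numpy as np

def extremum_seeking(J, theta0, a=0.2, omega=50.0, k=0.5, omega_h=5.0,
                     dt=1e-3, T=20.0):
    """Scalar perturbation-based extremum seeking (illustrative sketch).

    J        : black-box objective, accessible only through measurements
    theta0   : initial parameter value
    a, omega : dither amplitude and frequency
    k        : adaptation gain
    omega_h  : washout (high-pass) cutoff that strips the slow part of y
    """
    theta = theta0
    eta = J(theta0)          # low-pass state acting as a washout filter
    history = []
    for step in range(int(T / dt)):
        t = step * dt
        y = J(theta + a * np.sin(omega * t))                  # perturbed measurement
        eta += dt * omega_h * (y - eta)                       # track the slow part of y
        grad_est = (2.0 / a) * (y - eta) * np.sin(omega * t)  # demodulated gradient estimate
        theta -= dt * k * grad_est                            # descend the recovered gradient
        history.append(theta)
    return theta, np.array(history)

# Illustrative non-convex objective: a quadratic with a shallow cosine ripple,
# so it has several local extrema near the global minimizer.
J = lambda x: (x - 1.0) ** 2 + 0.3 * np.cos(5.0 * x)
theta_final, traj = extremum_seeking(J, theta0=-2.0)
print(theta_final)   # parameter after adaptation; settles near a minimizer of J
```

Because the update is driven by a gradient that is effectively averaged over the dither excursion, shallow local extrema of the objective can be smoothed out in the resulting learning dynamics, which is the global-optimization perspective the abstract points to.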
