International Conference on Large-Scale Scientific Computing (LSSC 2005), June 6–10, 2005, Sozopol, Bulgaria

Approximate Gradient/Penalty Methods with General Discretization Schemes for Optimal Control Problems


Abstract

We consider an optimal control problem described by ordinary differential equations, with control and state constraints. The state equation is first discretized by a general explicit Runge-Kutta scheme, and the controls are approximated by piecewise polynomial functions. We then propose approximate gradient and gradient projection methods, and their penalized versions, that construct sequences of discrete controls and progressively refine the discretization during the iterations. Instead of using the exact discrete cost derivative, which usually requires tedious calculations of composite functions, we use an approximate cost derivative, defined by discretizing the continuous adjoint equation backward with the same, but nonmatching, Runge-Kutta scheme and the integral involved with a Newton-Cotes integration rule. We show that strong accumulation points in L^2 of the sequences constructed by these methods satisfy the weak necessary conditions for optimality of the continuous problem. Finally, numerical examples are given.
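The construction above can be illustrated on a toy problem. The sketch below is not the paper's method or examples: it uses a hypothetical scalar test problem (minimize J(u) = ∫₀¹ (x² + u²) dt subject to x' = -x + u, x(0) = 1, u(t) ∈ [-1, 1]), the simplest explicit Runge-Kutta scheme (forward Euler) for the state, the same scheme integrated backward for the continuous adjoint, the trapezoidal (Newton-Cotes) rule for the cost integral, and a gradient projection step onto the control constraint set. A fixed discretization is used; the paper's progressive refinement and penalization are omitted.

```python
import numpy as np

def solve(N=200, iters=80, step=0.5):
    """Approximate gradient projection sketch for the toy problem
    min ∫₀¹ (x² + u²) dt,  x' = -x + u,  x(0) = 1,  u ∈ [-1, 1]."""
    h = 1.0 / N
    u = np.zeros(N)                          # piecewise-constant control
    for _ in range(iters):
        # Forward sweep: state by explicit Euler (a Runge-Kutta scheme).
        x = np.empty(N + 1)
        x[0] = 1.0
        for k in range(N):
            x[k + 1] = x[k] + h * (-x[k] + u[k])
        # Backward sweep: continuous adjoint p' = p - 2x, p(1) = 0,
        # discretized by the same scheme integrated backward.
        p = np.empty(N + 1)
        p[N] = 0.0
        for k in range(N, 0, -1):
            p[k - 1] = p[k] - h * (p[k] - 2.0 * x[k])
        # Approximate cost derivative on each subinterval: ∂H/∂u = 2u + p.
        g = 2.0 * u + p[:N]
        # Gradient projection step onto the constraint set [-1, 1].
        u = np.clip(u - step * g, -1.0, 1.0)
    # Discrete cost by the trapezoidal (Newton-Cotes) rule.
    u_ext = np.append(u, u[-1])
    f = x**2 + u_ext**2
    J = h * (0.5 * f[0] + f[1:N].sum() + 0.5 * f[N])
    return u, J
```

For the zero control, J(0) = (1 - e⁻²)/2 ≈ 0.432; the iteration drives the cost below that, illustrating descent along the approximate gradient even though the adjoint discretization does not match the exact derivative of the discrete cost.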
