Nonlinear Analysis: An International Multidisciplinary Journal

Numerical optimization for the calculus of variations by gradients on non-Hilbert Sobolev spaces using conjugate gradients and normalized differential equations of steepest descent


Abstract

The purpose of this paper is to illustrate the application of numerical optimization methods to nonquadratic functionals defined on non-Hilbert Sobolev spaces. These methods use a gradient defined on a norm-reflexive, and hence strictly convex, normed linear space. This gradient was defined by Michael Golomb and Richard A. Tapia in [M. Golomb, R.A. Tapia, The metric gradient in normed linear spaces, Numer. Math. 20 (1972) 115-124]. It is also the same gradient described by Jean-Paul Penot in [J.P. Penot, On the convergence of descent algorithms, Comput. Optim. Appl. 23 (3) (2002) 279-284]. In this paper we restrict our attention to variational problems with zero boundary values; nonzero boundary value problems can be converted to zero boundary value problems by an appropriate transformation of the dependent variables, although the original functional changes under such a transformation. The connection to the calculus of variations is the following: the notion of a relative minimum in the Sobolev norm, for p positive and large and involving only function values and first derivatives, is related to the classical weak relative minimum of the calculus of variations. The motivation for minimizing nonquadratic functionals on these non-Hilbert Sobolev spaces is twofold. First, a norm equivalent to this Sobolev norm approaches the norm used for weak relative minima in the calculus of variations as p approaches infinity. Second, the Sobolev norm is both norm-reflexive and strictly convex, so the gradient set for a non-Hilbert Sobolev space is a singleton; hence the gradient exists and is unique in this non-Hilbert normed linear space. Two gradient minimization methods are presented here: a conjugate gradient method and an approach that uses differential equations of steepest descent. The Hilbert space conjugate gradient method of James Daniel [J. Daniel, The Approximate Minimization of Functionals, Prentice-Hall, Englewood Cliffs, New Jersey, 1971] is extended to a conjugate gradient procedure for a non-Hilbert normed linear space.
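
To make the setting concrete, here is a brief sketch in LaTeX notation of the norm and gradient involved. It follows the standard duality-map description of a gradient on a reflexive, strictly convex space, which is consistent with the metric gradient cited above; the domain \Omega, the functional J, and the initial point u_0 are placeholder symbols, not notation taken from the paper. The Sobolev norm with only function values and first derivatives is

    \|u\|_{1,p} = \Big( \int_\Omega \big( |u|^p + |u'|^p \big) \, dx \Big)^{1/p}, \qquad 1 < p < \infty,

and the gradient \nabla J(u) is the unique element of W^{1,p}_0(\Omega) satisfying

    \langle J'(u), \nabla J(u) \rangle = \|J'(u)\|_{*}^{2}, \qquad \|\nabla J(u)\|_{1,p} = \|J'(u)\|_{*},

where J'(u) is the Frechet derivative of J at u and \|\cdot\|_* is the dual norm; reflexivity and strict convexity are what make this element exist and be unique. The normalized differential equation of steepest descent is then

    u'(t) = - \frac{\nabla J(u(t))}{\|\nabla J(u(t))\|_{1,p}}, \qquad u(0) = u_0.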
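
For a feel of how such a gradient is used numerically, the following short Python sketch minimizes a discretized model functional with zero boundary values using a Sobolev gradient in the familiar Hilbert special case p = 2, where the gradient is obtained by solving a linear system with the discrete (I - d^2/dx^2) operator. The mesh, the functional J(u) = \int_0^1 ( u'^2/2 - f u ) dx, and the right-hand side f are illustrative assumptions, not examples from the paper; the non-Hilbert case p != 2 would replace the linear solve below with the duality-map computation sketched above.

    import numpy as np

    # Minimize the discretized J(u) = sum of (0.5*u'^2 - f*u)*h with
    # u(0) = u(1) = 0.  The exact minimizer solves -u'' = f, so choosing
    # f = pi^2 sin(pi x) makes the continuous solution sin(pi x).
    n = 99                       # interior grid points (assumed mesh)
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)

    # Second-difference matrix D2 with zero boundary values built in.
    D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
          + np.diag(np.ones(n - 1), -1)) / h**2

    def grad_L2(u):
        # L2 gradient of J: the Euler-Lagrange residual -u'' - f.
        return -D2 @ u - f

    # Discrete (I - d^2/dx^2): the metric defining the H^1 Sobolev gradient.
    A = np.eye(n) - D2

    u = np.zeros(n)
    for k in range(200):
        g = np.linalg.solve(A, grad_L2(u))   # Sobolev gradient at u
        u = u - g                            # unit step; stable in this metric

    print("max error vs sin(pi x):", np.max(np.abs(u - np.sin(np.pi * x))))

With these choices the iteration contracts toward the discrete solution of -u'' = f, so the printed error settles at the level of the O(h^2) discretization error rather than stagnating the way a plain L2-gradient descent on this problem would.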