
Primal-dual nonlinear rescaling methods in constrained optimization.


Abstract

The ability to solve large-scale Nonlinear Programming (NLP) problems is critical in mechanics, structural optimization, antenna and chip design, optimal control, tomography, image reconstruction, and power system optimization, to mention just a few areas. The thesis focuses on developing theoretically well-grounded and numerically efficient methods for solving large-scale constrained optimization and discrete minimax problems. The second main purpose of our work is the numerical realization of the developed methods and the testing of the software on real-life applications. The work is based on the Nonlinear Rescaling (NR) principle in constrained optimization. The NR principle consists of transforming the objective function and/or the constraint set of a given constrained optimization problem into an equivalent one and using the classical Lagrangian for the equivalent problem, both for theoretical analysis and for developing numerical methods. The constraints are scaled by a positive scaling parameter. The NR methods consist of finding the primal minimizer of the Lagrangian for the equivalent problem, followed by a Lagrange multipliers update. The scaling parameter can be fixed or can be changed from step to step. Our main focus is on the Primal-Dual NR methods for constrained optimization with inequality constraints and for discrete minimax. The Primal-Dual NR method replaces the unconstrained minimization and the Lagrange multipliers update with the solution of the Primal-Dual (PD) system of equations. The PD system consists of the optimality criteria for the primal minimizer and the formulas for the Lagrange multipliers update. We solve the PD system by Newton's method. We developed a general Primal-Dual NR method, proved its global convergence, and estimated its rate of convergence under standard assumptions on the input data. We also developed a MATLAB code based on the Primal-Dual NR method. The code was tested on a number of NLP and discrete minimax problems, including the COPS set.
The numerical results corroborate the theory and show that the Primal-Dual NR methods are numerically stable and produce results competitive with the best known NLP solvers in terms of accuracy and number of Newton steps.
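The NR scheme the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the thesis's MATLAB implementation: the rescaling kernel psi(t) = log(1 + t), the toy test problem, the gradient-descent inner solver, and all parameter values are assumptions chosen for clarity. The sketch alternates the primal minimization of the rescaled Lagrangian with the multiplier update lambda_i := lambda_i * psi'(k * c_i(x)), with the scaling parameter k held fixed.

```python
import math

# Illustrative NR sketch (assumed kernel and test problem, not from the thesis).
# Kernel: psi(t) = log(1 + t), so psi(0) = 0, psi'(0) = 1, psi' > 0, psi'' < 0.
# Toy problem: minimize f(x) = (x0 - 2)^2 + (x1 - 1)^2
#              subject to c(x) = 1 - x0 - x1 >= 0.
# Its KKT solution is x* = (1, 0) with multiplier lambda* = 2.

def f(x):       return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
def grad_f(x):  return [2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)]
def c(x):       return 1.0 - x[0] - x[1]
GRAD_C = [-1.0, -1.0]                       # gradient of the linear constraint

def psi(t):       return math.log1p(t)      # defined for t > -1
def psi_prime(t): return 1.0 / (1.0 + t)

def lagrangian(x, lam, k):
    # Classical Lagrangian for the rescaled (equivalent) problem:
    # L(x, lam, k) = f(x) - (lam / k) * psi(k * c(x))
    return f(x) - (lam / k) * psi(k * c(x))

def grad_lagrangian(x, lam, k):
    w = lam * psi_prime(k * c(x))           # current rescaled multiplier
    return [grad_f(x)[i] - w * GRAD_C[i] for i in range(2)]

def inner_minimize(x, lam, k, tol=1e-10, max_iter=5000):
    # Primal step: gradient descent with step halving, kept inside the
    # kernel's domain k * c(x) > -1.  (The thesis solves the PD system by
    # Newton's method instead; plain descent keeps this sketch short.)
    for _ in range(max_iter):
        g = grad_lagrangian(x, lam, k)
        if max(abs(gi) for gi in g) < tol:
            break
        step, L0 = 1.0, lagrangian(x, lam, k)
        while True:
            trial = [x[i] - step * g[i] for i in range(2)]
            if k * c(trial) > -0.999 and lagrangian(trial, lam, k) < L0:
                x = trial
                break
            step *= 0.5
            if step < 1e-16:                # no further progress possible
                return x
    return x

def nr_method(x, lam=1.0, k=10.0, outer_iters=30):
    for _ in range(outer_iters):
        x = inner_minimize(x, lam, k)       # primal minimizer of L(., lam, k)
        lam = lam * psi_prime(k * c(x))     # Lagrange multiplier update
    return x, lam

x_star, lam_star = nr_method([0.0, 0.0])
print(x_star, lam_star)    # approaches x* = (1, 0) and lambda* = 2
```

With k fixed, the multiplier sequence contracts toward lambda* at a linear rate that improves as k grows, which is why the scaling parameter may be kept constant rather than driven to infinity as in classical penalty schemes.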

Record Details

  • Author: Griva, Igor.
  • Author Affiliation: George Mason University.
  • Degree Grantor: George Mason University.
  • Subject: Operations Research.
  • Degree: Ph.D.
  • Year: 2002
  • Pages: 151 p.
  • Total Pages: 151
  • Format: PDF
  • Language: eng
  • Classification: Operations Research
  • Keywords
