An implicit gradient-descent procedure for minimax problems

Mathematical Methods of Operations Research
Abstract

A game-theory-inspired methodology is proposed for finding a function's saddle points. While explicit descent methods are known to suffer severe convergence issues, implicit methods are natural in an adversarial setting, as they take the other player's optimal strategy into account. The proposed implicit scheme has an adaptive learning rate that makes it transition to Newton's method in a neighborhood of saddle points. Convergence is shown through local analysis and through numerical examples in optimal transport and linear programming. An ad hoc quasi-Newton method is developed for high-dimensional problems, for which inverting the Hessian of the objective function may entail a high computational cost.
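The abstract's contrast between explicit and implicit updates can be illustrated on the classic bilinear game f(x, y) = x·y, for which explicit gradient descent-ascent spirals away from the saddle point at the origin while the implicit (backward-Euler) update contracts toward it. This is a minimal sketch under assumptions made here, not the paper's adaptive-learning-rate scheme: the step size h is fixed, and the implicit step is solved in closed form because the bilinear case reduces to a 2x2 linear system.

```python
import math

def explicit_step(x, y, h):
    # Explicit gradient descent-ascent on f(x, y) = x*y:
    # grad_x f = y, grad_y f = x, evaluated at the CURRENT point.
    # Each step multiplies the distance to the origin by sqrt(1 + h^2),
    # so the iterates spiral outward.
    return x - h * y, y + h * x

def implicit_step(x, y, h):
    # Implicit (backward-Euler) step: the gradient is evaluated at the
    # NEW point, i.e. x_new = x - h*y_new and y_new = y + h*x_new.
    # For the bilinear f this is a 2x2 linear system with the closed-form
    # solution below; each step shrinks the distance to the origin by
    # the factor 1/sqrt(1 + h^2), so the iterates spiral inward.
    denom = 1.0 + h * h
    return (x - h * y) / denom, (h * x + y) / denom

h = 0.2          # fixed step size, chosen only for illustration
x_e, y_e = 1.0, 1.0   # explicit iterate
x_i, y_i = 1.0, 1.0   # implicit iterate
for _ in range(500):
    x_e, y_e = explicit_step(x_e, y_e, h)
    x_i, y_i = implicit_step(x_i, y_i, h)

print("explicit distance from saddle:", math.hypot(x_e, y_e))
print("implicit distance from saddle:", math.hypot(x_i, y_i))
```

Running the loop shows the explicit iterate diverging and the implicit one converging to the saddle point, which is the "severe convergence issue" of explicit methods the abstract refers to, in its simplest form.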
