Discrete-time concurrent learning for system identification and applications: Leveraging memory usage for good learning


Abstract

Literature on system identification reveals that persistently exciting inputs are needed in order to achieve good parameter identification when using standard learning techniques such as Gradient Descent and/or Least Squares for function approximation. However, realizing persistency of excitation is in itself quite demanding, especially in the context of on-line approximation and adaptive control. More recently, Concurrent Learning (CL), through its utilization of memory (and, in that regard, quite similarly to human learning), has been shown to yield good learning without the need to resort to persistency of excitation. Throughout this work, we refer to "good learning" as the ability to reconstruct the approximated function(s) well when using the estimated parameters.

The continuous-time (CT) literature on CL has seen the larger share of research. For our part, we focus on the discrete-time (DT) domain. Though many systems can be modeled as CT systems, controlling such systems, especially in (or close to) real time, is usually done via digital computers and/or micro-controllers, making DT framework studies compelling.

We show that, provided a CL condition less restrictive than persistency of excitation is verified, CL results analogous to those obtained in the CT domain can also be achieved in the DT domain. Before incorporating the concept of concurrent learning into our studies, we thoroughly study the Gradient Descent and Least Squares techniques for function approximation and system identification of a dimensionally complex uncertainty, which, to the best of our knowledge, had yet to be done in the literature.
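To illustrate why persistency of excitation matters for the standard techniques mentioned above, the following sketch (our own illustrative example, not taken from the dissertation; the step size and regressor are assumptions) runs a plain discrete-time normalized-gradient identifier against a regressor that never excites the second coordinate — so the second parameter is never corrected:

```python
import numpy as np

# Linear-in-parameters model: y_k = phi_k @ theta_true.
theta_true = np.array([2.0, -1.0])   # unknown parameters to identify
theta = np.zeros(2)                  # initial estimate
gamma = 0.5                          # step size (illustrative choice)

for k in range(200):
    # Constant regressor direction: NOT persistently exciting,
    # since it never points along the second coordinate.
    phi = np.array([1.0, 0.0])
    y = phi @ theta_true
    # Discrete-time normalized-gradient update.
    e = y - phi @ theta
    theta = theta + gamma * phi * e / (1.0 + phi @ phi)

# The first parameter is identified; the second never moves from 0.
```

The first estimate converges geometrically to 2.0, while the second stays at its initial value — the non-excited direction of parameter space is simply invisible to the instantaneous update.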
Our main contributions, however, are the derivations of a DT Normalized Gradient (DTNG) based CL algorithm and a DT Normalized Recursive Least Squares (DTNRLS) based CL algorithm for the approximation of both DT structured and DT unstructured uncertainties; we show analytically that the devised algorithms guarantee good parameter identification if the aforesaid CL condition is met.

Numerical simulations are provided to show how well the developed CL algorithms leverage memory usage to achieve good learning. The algorithms are also applied in two settings: the discrete-time indirect adaptive control of a class of discrete-time single-state plants bearing parametric (structured) uncertainties, and the system identification of a robot.
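A minimal sketch of the concurrent-learning idea behind such algorithms follows (our own illustrative construction, not the dissertation's actual DTNG-CL derivation; the step size, regressors, and rank condition phrasing are assumptions). The normalized-gradient update is augmented with a replay of recorded data points, so the estimate converges even though the on-line regressor is not persistently exciting, provided the recorded regressors span the parameter space:

```python
import numpy as np

theta_true = np.array([2.0, -1.0])   # unknown parameters to identify
theta = np.zeros(2)                  # initial estimate
gamma = 0.5                          # step size (illustrative choice)

# History stack: recorded (regressor, measurement) pairs. Together the
# recorded regressors span R^2, playing the role of the less restrictive
# CL rank condition that replaces persistency of excitation.
memory = [(np.array([1.0, 0.0]), 2.0),
          (np.array([0.0, 1.0]), -1.0)]

for k in range(100):
    phi = np.array([1.0, 0.0])       # current regressor: not persistently exciting
    y = phi @ theta_true
    # Instantaneous normalized-gradient correction.
    e = y - phi @ theta
    update = gamma * phi * e / (1.0 + phi @ phi)
    # Concurrent-learning term: replay every recorded data point.
    for phi_j, y_j in memory:
        e_j = y_j - phi_j @ theta
        update += gamma * phi_j * e_j / (1.0 + phi_j @ phi_j)
    theta = theta + update

# Both parameters converge, including the direction the current
# input never excites, because the memory term supplies it.
```

The memory term is what "leverages memory usage for good learning": past data keeps correcting directions of parameter space that the current input no longer excites.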

Record details

  • Affiliation: University of Dayton
  • Degree-granting institution: University of Dayton
  • Subjects: Electrical engineering; Mathematics; Applied mathematics; Engineering
  • Degree: Ph.D.
  • Year: 2017
  • Pages: 220
  • Format: PDF
  • Language: English (eng)
  • CLC classification: Anthropology
  • Indexed: 2022-08-17 11:54:23
