Journal: IEEE Transactions on Neural Networks and Learning Systems

Resonant Machine Learning Based on Complex Growth Transform Dynamical Systems


Abstract

Traditional energy-based learning models associate a single energy metric to each configuration of variables involved in the underlying optimization process. Such models associate the lowest energy state with the optimal configuration of variables under consideration and are thus inherently dissipative. In this article, we propose an energy-efficient learning framework that exploits structural and functional similarities between a machine-learning network and a general electrical network satisfying Tellegen's theorem. In contrast to the standard energy-based models, the proposed formulation associates two energy components, namely, active and reactive energy with the network. The formulation ensures that the network's active power is dissipated only during the process of learning, whereas the reactive power is maintained to be zero at all times. As a result, in steady state, the learned parameters are stored and self-sustained by electrical resonance determined by the network's nodal inductances and capacitances. Based on this approach, this article introduces three novel concepts: 1) a learning framework where the network's active-power dissipation is used as a regularization for a learning objective function that is subjected to zero total reactive-power constraint; 2) a dynamical system based on complex-domain, continuous-time growth transforms that optimizes the learning objective function and drives the network toward electrical resonance under steady-state operation; and 3) an annealing procedure that controls the tradeoff between active-power dissipation and the speed of convergence. As a representative example, we show how the proposed framework can be used for designing resonant support vector machines (SVMs), where the support vectors correspond to an LC network with self-sustained oscillations. We also show that this resonant network dissipates less active power compared with its non-resonant counterpart.
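The dynamical system referenced in the abstract evolves complex-valued variables whose magnitudes follow a growth-transform update while the network settles into electrical resonance. As a rough, real-valued illustration of the underlying growth-transform step (not the paper's complex-domain, resonant formulation), the sketch below minimizes a simple quadratic cost over a probability simplex using a multiplicative update; the cost H, the matrix Q, and the constant lam are illustrative assumptions, and the active/reactive-power regularization and annealing schedule described in the abstract are not reproduced here.

```python
import numpy as np

def growth_transform_step(p, grad_H, lam):
    """One multiplicative growth-transform update on the probability simplex.

    p      : current point on the simplex (p_i >= 0, sum(p) == 1)
    grad_H : gradient of the cost H evaluated at p
    lam    : constant chosen so that (lam - grad_H) stays positive; a larger
             lam slows the update, loosely analogous to the convergence-speed
             tradeoff controlled by annealing in the abstract.
    """
    num = p * (lam - grad_H)   # elementwise p_i * (lam - dH/dp_i), kept positive
    return num / num.sum()     # renormalize back onto the simplex

# Illustrative quadratic cost H(p) = 0.5 * p^T Q p (an assumption, not from the paper)
rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5))
Q = Q @ Q.T                          # symmetric positive semi-definite

p = np.full(5, 1.0 / 5)              # start at the simplex centre
lam = 10.0 * np.abs(Q).sum()         # crude bound keeping all numerators positive
for _ in range(200):
    p = growth_transform_step(p, Q @ p, lam)

print("final point:", np.round(p, 3), "cost:", 0.5 * p @ Q @ p)
```

In this sketch the simplex constraint plays the role of the conservation constraint that, in the paper's setting, ties the learned parameters to nodal quantities of an electrical network; the complex-domain version additionally carries a phase per variable so that steady-state solutions correspond to self-sustained LC oscillations rather than fixed points.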

