IFIP WG 12.5 International Conference on Artificial Intelligence Applications and Innovations

A Representational MDL Framework for Improving Learning Power of Neural Network Formalisms



Abstract

The minimum description length (MDL) principle is a well-known remedy for the overfitting (overlearning) problem, in particular for artificial neural networks (ANNs). Its extension, the representational MDL (RMDL) principle, takes into account that models in machine learning are always constructed within some representation. In this paper, the optimization of ANN formalisms as information representations using the RMDL principle is considered. A novel type of ANN is proposed by extending linear recurrent ANNs with nonlinear "synapse-to-synapse" connections. Most elementary functions are representable by these networks (in contrast to classical ANNs), which makes them easily learnable from training datasets according to a developed method of ANN architecture optimization. The methodology for comparing the quality of different representations is illustrated by applying the developed method to time series prediction and robot control.
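The two-part MDL criterion that the abstract builds on can be sketched in a few lines. This is an illustrative sketch only: the function name `mdl_score`, the fixed bits-per-parameter cost, and the Gaussian coding assumption for the data term are assumptions for the example, not details taken from the paper.

```python
import math

def mdl_score(n_params, residuals, bits_per_param=32):
    """Two-part MDL code length: bits to encode the model plus bits to
    encode the data given the model. The data term is approximated by a
    Gaussian negative log-likelihood of the residuals, measured in bits."""
    n = len(residuals)
    var = max(sum(r * r for r in residuals) / n, 1e-12)  # avoid log(0)
    model_bits = n_params * bits_per_param
    data_bits = 0.5 * n * math.log2(2 * math.pi * math.e * var)
    return model_bits + data_bits

# Toy comparison: a 3-parameter model vs. a 50-parameter model that fits
# the same data only marginally better. MDL penalizes the extra parameters.
small = mdl_score(3, [0.1, -0.2, 0.15, 0.05, -0.1] * 20)
large = mdl_score(50, [0.09, -0.19, 0.14, 0.05, -0.1] * 20)
print(small < large)  # the simpler model wins under MDL
```

Under the RMDL view, the same comparison would additionally account for the cost of the representation (the ANN formalism itself) shared across models.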
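The nonlinear "synapse-to-synapse" extension of a linear recurrent ANN can be pictured as allowing pairwise products of unit activations to feed the next state. The concrete form below (a second-order interaction tensor `S`) is a plausible sketch, not the paper's exact formulation:

```python
import numpy as np

def step(h, x, W, U, S):
    """One step of a linear recurrent network extended with multiplicative
    "synapse-to-synapse" interactions (illustrative second-order form).
    h: hidden state (n,), x: input (m,), W: (n, n), U: (n, m),
    S: (n, n, n) tensor weighting pairwise products h_i * h_j."""
    return W @ h + U @ x + np.einsum('kij,i,j->k', S, h, h)

# Tiny demo: with S = 0 the step reduces to the ordinary linear recurrence.
rng = np.random.default_rng(0)
n, m = 3, 2
W, U = rng.normal(size=(n, n)), rng.normal(size=(n, m))
h, x = rng.normal(size=n), rng.normal(size=m)
assert np.allclose(step(h, x, W, U, np.zeros((n, n, n))), W @ h + U @ x)
```

Multiplicative terms of this kind let such networks realize products and, via recurrence, a wide range of elementary functions that purely linear recurrent nets cannot.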


