Neural Processing Letters

Evolutionary Multi-task Learning for Modular Knowledge Representation in Neural Networks

Abstract

The brain can be viewed as a complex modular structure that processes information through knowledge storage and retrieval. Modularity ensures that knowledge is stored in such a way that complications in certain modules do not affect the overall functionality of the brain. Although artificial neural networks have been very promising in prediction and recognition tasks, they are limited by learning algorithms that do not provide modularity in knowledge representation, which would allow individual knowledge modules to be used when needed. Multi-task learning enables learning algorithms to capture knowledge in a general representation shared across several related tasks. Little work has been done that incorporates multi-task learning for modular knowledge representation in neural networks. In this paper, we present multi-task learning for modular knowledge representation in neural networks via modular network topologies. In the proposed method, each task is defined by selected regions (modules) of the network topology. The modular knowledge representation remains effective even if some of the neurons and connections in selected modules are disrupted or removed from the network. We demonstrate the effectiveness of the method using single-hidden-layer feedforward networks to learn selected n-bit parity problems of varying levels of difficulty. Furthermore, we apply the method to benchmark pattern classification problems. The simulation and experimental results show that, in general, the proposed method retains performance quality even though the knowledge is represented as modules.
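To make the modular-topology idea concrete, here is a minimal sketch, not the authors' algorithm (whose details are not given in this abstract): a single-hidden-layer feedforward network for the 4-bit parity problem in which each task is a nested module that uses only the first h hidden neurons, and a shared weight set is searched with a simple (1+1) evolution strategy scored on all modules at once. The nested-module layout, the hidden sizes, the (1+1)-ES, and all hyperparameters are illustrative assumptions rather than values from the paper.

```python
# Minimal sketch (assumptions noted above): nested modules of one hidden
# layer act as tasks, and a (1+1) evolution strategy searches shared weights
# so that every module solves 4-bit parity on its own.
import numpy as np

rng = np.random.default_rng(0)

def parity_data(n_bits):
    """All 2^n inputs of an n-bit parity problem and their parity labels."""
    X = np.array([[(i >> b) & 1 for b in range(n_bits)]
                  for i in range(2 ** n_bits)], dtype=float)
    y = X.sum(axis=1) % 2
    return X, y

def forward(params, X, h_used):
    """Predict using only the first `h_used` hidden neurons (one module)."""
    W1, b1, W2, b2 = params
    H = np.tanh(X @ W1[:, :h_used] + b1[:h_used])
    out = 1.0 / (1.0 + np.exp(-(H @ W2[:h_used] + b2)))
    return (out > 0.5).astype(float)

def fitness(params, X, y, modules):
    """Average classification accuracy across all modules (tasks)."""
    return np.mean([(forward(params, X, h) == y).mean() for h in modules])

def evolve(X, y, modules, n_hidden=8, iters=20000, sigma=0.1):
    """(1+1)-ES over the full weight set, scored jointly on all modules."""
    n_in = X.shape[1]
    params = [rng.normal(0, 1, (n_in, n_hidden)), rng.normal(0, 1, n_hidden),
              rng.normal(0, 1, n_hidden), rng.normal(0, 1)]
    best = fitness(params, X, y, modules)
    for _ in range(iters):
        cand = [p + rng.normal(0, sigma, np.shape(p)) for p in params]
        f = fitness(cand, X, y, modules)
        if f >= best:                      # keep the candidate if no worse
            params, best = cand, f
    return params, best

if __name__ == "__main__":
    X, y = parity_data(4)
    modules = [4, 6, 8]    # task m uses the first modules[m] hidden neurons
    params, best = evolve(X, y, modules)
    for h in modules:
        acc = (forward(params, X, h) == y).mean()
        print(f"module with {h} hidden neurons: accuracy {acc:.2f}")
```

Because the fitness averages the accuracy of every module, the smaller modules are pressured to solve the task on their own, which mirrors the abstract's claim that the representation stays usable when neurons outside a selected module are removed.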
