Journal: 人工知能: 人工知能学会誌

A Robust Self-Constructing Normalized Gaussian Network for Online Machine Learning



Abstract

In this thesis, I aim to improve the robustness and applicability of the Normalized Gaussian network (NGnet) in the context of online machine learning tasks. A challenging problem in online machine learning is limited domain knowledge caused by restricted prior knowledge, while additional domain information is obtained only sequentially over time. This limited domain knowledge makes the application of artificial neural networks (ANNs) more difficult; the major challenges are negative interference and the selection of an appropriate model complexity. I consider these challenges for the NGnet. NGnets belong to a group of ANNs with a certain degree of robustness against negative interference owing to the local properties of their network architecture. Yet further improvements in robustness are possible with regard to the learning algorithm and model complexity selection. A recently proposed learning algorithm with localized forgetting provides robustness against negative interference, but can be improved further, as it is not applicable over the full numerical range of the implied discount factor. In addition, dynamic model selection had yet to be considered. In this thesis, I revise the learning algorithm with localized forgetting and adapt dynamic model selection to it in a self-constructing manner. I also propose localizing some of the model selection mechanisms for improved robustness, and add a new merge manipulation to deal with model redundancies. The effectiveness of the proposed method is compared with earlier NGnet learning approaches in several experiments. The proposed method shows robust and favorable performance on the tested learning tasks, demonstrating that it is a robust alternative for online learning tasks prone to negative interference.
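The locality that the abstract attributes to the NGnet comes from its normalized Gaussian basis functions: each unit's responsibility is its Gaussian activation divided by the sum over all units, so a unit dominates the output only near its own center. A minimal sketch of this forward pass is shown below; the 1-D input, the unit parameters, and the function name `ngnet_predict` are illustrative assumptions, not the thesis's implementation.

```python
# Minimal sketch of a Normalized Gaussian network (NGnet) forward pass.
# 1-D input for clarity; unit parameters below are hypothetical.
import math

def ngnet_predict(x, units):
    """Each unit i holds (center mu_i, variance var_i, slope w_i, bias b_i).
    Output: y(x) = sum_i [G_i(x) / sum_j G_j(x)] * (w_i * x + b_i),
    where G_i is an unnormalized Gaussian centered at mu_i."""
    acts = [math.exp(-0.5 * (x - mu) ** 2 / var) for mu, var, _, _ in units]
    total = sum(acts)
    # Normalization makes the unit responsibilities sum to one, so each
    # unit's local linear model dominates only near its own center --
    # the source of the robustness to negative interference noted above.
    return sum(a / total * (w * x + b) for a, (_, _, w, b) in zip(acts, units))

# Two hypothetical units, each modeling a constant in its own region:
units = [(-1.0, 0.5, 0.0, -1.0),  # approximates y = -1 near x = -1
         ( 1.0, 0.5, 0.0,  1.0)]  # approximates y = +1 near x = +1
```

Because the responsibilities fall off exponentially with distance from each center, an online update driven by a sample near one center barely perturbs the other units' local models, which is the architectural robustness the learning algorithm with localized forgetting builds on.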
