IEEE Transactions on Fuzzy Systems

An Incremental Construction of Deep Neuro Fuzzy System for Continual Learning of Nonstationary Data Streams



Abstract

Existing fuzzy neural networks (FNNs) are mostly developed under a shallow network configuration, which has lower generalization power than deep structures. This article proposes a novel self-organizing deep FNN, namely the deep evolving fuzzy neural network (DEVFNN). Fuzzy rules can be automatically extracted from data streams or removed if they play a limited role during their lifespan. The structure of the network can be deepened on demand by stacking additional layers using a drift detection method, which not only detects covariate drift (variations of the input space) but also accurately identifies real drift (dynamic changes of both the feature space and the target space). The DEVFNN is developed under the stacked generalization principle via the feature augmentation concept, where a recently developed algorithm, namely the generic classifier, drives the hidden layer. It is equipped with an automatic feature selection method, which controls activation and deactivation of input attributes to induce varying subsets of input features. A deep network simplification procedure is put forward using the concept of hidden layer merging to prevent uncontrollable growth of the input space dimensionality, which the feature augmentation approach would otherwise cause when building a deep network structure. The DEVFNN works in a samplewise fashion and is compatible with data stream applications. The efficacy of the DEVFNN has been thoroughly evaluated using seven datasets with nonstationary properties under the prequential test-then-train protocol. It has been compared with four popular continual learning algorithms and its shallow counterpart, against which the DEVFNN demonstrates improved classification accuracy.
Moreover, it is also shown that the drift detection method is an effective tool for controlling the depth of the network structure, while the hidden layer merging scenario is capable of simplifying the complexity of a deep network with negligible loss of generalization performance.
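The two mechanisms named in the abstract — feature augmentation under stacked generalization, and prequential (test-then-train) evaluation of a samplewise learner — can be sketched as follows. This is a toy illustration only: the layers here are linear softmax classifiers and all names are hypothetical, not the authors' DEVFNN (which uses evolving fuzzy rules, a generic classifier per layer, and a drift-controlled depth).

```python
import numpy as np

class AugmentingLayer:
    """Toy stand-in for one stacked layer: a linear softmax classifier
    trained online by SGD, whose class scores are appended to the input
    (the feature augmentation concept)."""
    def __init__(self, n_in, n_classes, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.1, size=(n_in, n_classes))
        self.lr = lr

    def scores(self, x):
        z = x @ self.W
        e = np.exp(z - z.max())
        return e / e.sum()               # softmax class scores

    def update(self, x, y):
        p = self.scores(x)
        g = p.copy()
        g[y] -= 1.0                      # softmax cross-entropy gradient
        self.W -= self.lr * np.outer(x, g)

def prequential(X, y, n_classes, depth=2):
    """Prequential test-then-train: every sample is first used for testing
    (prediction), then for training. Each layer receives the input of the
    previous layer augmented with that layer's class scores; the stack is
    grown on demand up to `depth` (a fixed cap here, drift-driven in DEVFNN)."""
    layers, correct = [], 0
    for xi, yi in zip(X, y):
        # --- test phase: forward pass through the augmenting stack ---
        feats = xi
        for d in range(depth):
            if d == len(layers):         # grow the stack on demand
                layers.append(AugmentingLayer(feats.size, n_classes))
            scores = layers[d].scores(feats)
            feats = np.concatenate([feats, scores])   # feature augmentation
        correct += int(np.argmax(scores) == yi)
        # --- train phase: samplewise update of every layer ---
        feats = xi
        for layer in layers:
            scores = layer.scores(feats)
            layer.update(feats, yi)
            feats = np.concatenate([feats, scores])
    return correct / len(y)
```

The accuracy returned is the prequential accuracy: because each prediction is made before the model has seen that sample, the running score measures generalization on the stream rather than training fit, which is why the protocol suits nonstationary data.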
