
Streaming chunk incremental learning for class-wise data stream classification with fast learning speed and low structural complexity


Abstract

Due to the fast speed of data generation and collection from advanced equipment, the amount of data easily overflows the limit of available memory space and makes it difficult to achieve high learning accuracy. Several methods based on the discard-after-learn concept have been proposed. Some were designed to cope with a single incoming datum, while others were designed for a chunk of incoming data. Although the results of these approaches are rather impressive, most of them rely on adding more neurons over time to learn new incoming data, without any neuron-merging process, which clearly increases the computational time and space complexities. Only the online versatile elliptic basis function (VEBF) introduced neuron merging to reduce the space-time complexity, and only for learning a single incoming datum. This paper proposes a method that further enhances the discard-after-learn concept for a streaming data-chunk environment, in terms of low computational time and low neural space complexity. A set of recursive functions for computing the relevant parameters of a new neuron, based on a statistical confidence interval, is introduced. The newly proposed method, named streaming chunk incremental learning (SCIL), increases the plasticity and adaptability of the network structure according to the distribution of incoming data and their classes. When compared to other incremental-style methods on 11 benchmark data sets of 150 to 581,012 samples with 4 to 1,558 attributes, presented as streaming data, the proposed SCIL gave better accuracy and runtime on most data sets.
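The abstract refers to a set of recursive functions that update the parameters of a neuron from an incoming data chunk so that the chunk can be discarded after learning. The Python sketch below is only an illustration of that general idea, not the authors' exact formulation: it assumes each class-wise neuron stores a running sample count, mean, and scatter matrix, merges in each chunk's sufficient statistics with the standard pairwise-merge formula, and derives ellipsoid widths from a confidence-interval-style bound. The class name, the merging rule, and the width heuristic are all illustrative assumptions.

```python
import numpy as np

class ChunkNeuron:
    """Illustrative per-class neuron with incrementally updated statistics.

    After each chunk is absorbed, only the count, mean, and scatter matrix
    are kept; the raw chunk can be discarded (discard-after-learn style).
    The merge formula reproduces the batch statistics over all data seen
    so far, so no re-scan of past chunks is needed.
    """

    def __init__(self, dim):
        self.n = 0                           # samples absorbed so far
        self.mean = np.zeros(dim)            # running mean (neuron centre)
        self.scatter = np.zeros((dim, dim))  # sum of squared deviations

    def learn_chunk(self, X):
        """Absorb a chunk X of shape (m, dim) and update the statistics."""
        X = np.asarray(X, dtype=float)
        m = X.shape[0]
        if m == 0:
            return
        chunk_mean = X.mean(axis=0)
        dev = X - chunk_mean
        chunk_scatter = dev.T @ dev

        delta = chunk_mean - self.mean
        total = self.n + m
        # pairwise merge of means and scatter matrices (Chan et al. formula)
        self.mean = self.mean + delta * (m / total)
        self.scatter = (self.scatter + chunk_scatter
                        + np.outer(delta, delta) * (self.n * m / total))
        self.n = total

    def covariance(self):
        """Unbiased covariance estimate; shapes the neuron's ellipsoid."""
        if self.n < 2:
            return np.eye(self.mean.size)
        return self.scatter / (self.n - 1)

    def width(self, z=1.96):
        """Ellipsoid semi-axes from a confidence-interval-style bound (z ~ 95%)."""
        eigvals = np.linalg.eigvalsh(self.covariance())
        return z * np.sqrt(np.clip(eigvals, 0.0, None))


# Minimal usage example: stream two chunks of 2-D data into one neuron.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neuron = ChunkNeuron(dim=2)
    for _ in range(2):
        chunk = rng.normal(size=(100, 2))
        neuron.learn_chunk(chunk)   # chunk can be discarded afterwards
    print(neuron.n, neuron.mean, neuron.width())
```

Because the update is purely in terms of sufficient statistics, its cost per chunk is O(m·d²) regardless of how many chunks have already been seen, which is the kind of low, stream-friendly complexity the abstract emphasizes.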
