
Boosting performance of incremental IDR/QR LDA - from sequential to chunk



Abstract

Training data in the real world is often presented in random chunks, yet the existing sequential incremental IDR/QR LDA (sIncLDA) can only process data one instance at a time. This thesis proposes a new chunk incremental IDR/QR LDA (cIncLDA) capable of processing multiple data instances at once. sIncLDA updates the reduced within-class scatter matrix W via a QR decomposition of the centroid matrix for each newly arrived data instance, under the assumption that the updated Q' ≈ Q for any data instance from an existing class and that the updated W' ≈ W for any data instance from a new class. In practice, this assumption causes a significant loss of discriminative information from approximating Q and W when the number of classes is large. By employing a new method that updates W accurately, the proposed cIncLDA better preserves the discriminative information contained in W, thereby resolving the limitation of sIncLDA. Experimental comparisons were conducted on six facial datasets with class counts ranging from 40 to 1010. The results indicate that our algorithm achieves accuracy competitive with batch IDR/QR LDA and consistently higher than sIncLDA. The computational complexity of our algorithm is higher than that of sIncLDA for single-instance processing (i.e., the sequential manner); however, its efficiency surpasses sIncLDA as the chunk size increases in multiple-instance processing (i.e., the chunk manner).
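The chunk update of the within-class scatter matrix that the abstract describes can be illustrated generically. The sketch below is a minimal, hypothetical implementation (the function name and data layout are assumptions, not from the thesis): it merges a chunk's per-class statistics into running class means, counts, and a within-class scatter matrix using the standard pairwise scatter-merge formula, rather than the exact IDR/QR-based update proposed in the work.

```python
import numpy as np

def update_within_class_scatter(means, counts, Sw, X_chunk, y_chunk):
    """Merge a chunk of labeled instances into running within-class
    scatter statistics.

    means  : dict class -> running class mean (1-D array)
    counts : dict class -> running class count
    Sw     : running within-class scatter matrix (d x d)
    Returns the updated (means, counts, Sw).
    """
    for c in np.unique(y_chunk):
        Xc = X_chunk[y_chunk == c]
        m = Xc.shape[0]
        if c not in counts:                      # first time this class appears
            counts[c] = 0
            means[c] = np.zeros(X_chunk.shape[1])
        n = counts[c]
        mu_old = means[c]
        mu_chunk = Xc.mean(axis=0)
        # scatter of the chunk about its own class mean
        Sc = (Xc - mu_chunk).T @ (Xc - mu_chunk)
        # cross term accounting for the shift of the class mean
        # (standard two-set merge: S = S1 + S2 + nm/(n+m) * dd^T)
        diff = (mu_old - mu_chunk).reshape(-1, 1)
        Sw = Sw + Sc + (n * m / (n + m)) * (diff @ diff.T)
        means[c] = (n * mu_old + m * mu_chunk) / (n + m)
        counts[c] = n + m
    return means, counts, Sw
```

Processing data chunk by chunk with this update yields the same Sw as a batch computation over all instances seen so far, which is the property the chunk-incremental setting relies on; the thesis achieves the analogous result in the reduced space via QR decomposition.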

Bibliographic details

  • Author

    Peng Yiming;

  • Affiliation
  • Year 2011
  • Total pages
  • Original format PDF
  • Language en
  • CLC classification

