
Convergence of Distributed Asynchronous Learning Vector Quantization Algorithms



Abstract

Motivated by the problem of effectively executing clustering algorithms on very large data sets, we address a model for large scale distributed clustering methods. To this end, we briefly recall some standards on the quantization problem and some results on the almost sure convergence of the competitive learning vector quantization (CLVQ) procedure. A general model for linear distributed asynchronous algorithms well adapted to several parallel computing architectures is also discussed. Our approach brings together this scalable model and the CLVQ algorithm, and we call the resulting technique the distributed asynchronous learning vector quantization algorithm (DALVQ). An in-depth analysis of the almost sure convergence of the DALVQ algorithm is performed. A striking result is that we prove that the multiple versions of the quantizers distributed among the processors in the parallel architecture asymptotically reach a consensus almost surely. Furthermore, we also show that these versions converge almost surely towards the same nearly optimal value for the quantization criterion.
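For readers unfamiliar with the CLVQ procedure that DALVQ parallelizes, the following is a minimal Python sketch of the idea: each processor applies competitive (nearest-prototype) updates to its own copy of the quantizer, and the copies are periodically averaged. The function names, the synchronous round-based mixing schedule, and all parameter values are illustrative assumptions for exposition only, not the paper's exact asynchronous model.

import numpy as np

def clvq_step(prototypes, x, step):
    # Standard CLVQ / online k-means update: the prototype closest to the
    # sample x is pulled towards x (a stochastic gradient step on the
    # quantization criterion).
    winner = np.argmin(((prototypes - x) ** 2).sum(axis=1))
    prototypes[winner] += step * (x - prototypes[winner])
    return prototypes

def dalvq_sketch(data_shards, k, n_rounds=200, step=0.05, seed=0):
    # Each "processor" keeps its own version of the quantizer and updates it
    # on its local data shard; the versions are then averaged, mimicking the
    # consensus that DALVQ reaches asynchronously in the parallel architecture.
    rng = np.random.default_rng(seed)
    dim = data_shards[0].shape[1]
    init = rng.standard_normal((k, dim))
    versions = [init.copy() for _ in data_shards]
    for _ in range(n_rounds):
        for version, shard in zip(versions, data_shards):
            x = shard[rng.integers(len(shard))]      # draw a local sample
            clvq_step(version, x, step)
        consensus = np.mean(versions, axis=0)        # averaging / consensus step
        versions = [consensus.copy() for _ in versions]
    return consensus

# Example: two processors, each holding a shard of synthetic 2-D data.
rng = np.random.default_rng(1)
shards = [rng.standard_normal((500, 2)) + c for c in (0.0, 3.0)]
print(dalvq_sketch(shards, k=4))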

