
Advances in Extreme Learning Machines


Abstract

Nowadays, due to advances in technology, data is generated at an incredible pace, resulting in large data sets of ever-increasing size and dimensionality. It is therefore important to have efficient computational methods and machine learning algorithms that can handle such large data sets, so that they can be analyzed in reasonable time. One particular approach that has gained popularity in recent years is the Extreme Learning Machine (ELM), the name given to neural networks that employ randomization in their hidden layer and that can be trained efficiently. This dissertation introduces several machine learning methods based on Extreme Learning Machines (ELMs) aimed at dealing with the challenges that modern data sets pose. The contributions follow three main directions.

Firstly, ensemble approaches based on ELM are developed, which adapt to context and can scale to large data. Due to their stochastic nature, different ELMs tend to make different mistakes when modeling data. This independence of their errors makes them good candidates for combination in an ensemble model, which averages out these errors and results in a more accurate model. Adaptivity to a changing environment is introduced by adapting the linear combination of the models based on the accuracy of the individual models over time. Scalability is achieved by exploiting the modularity of the ensemble model and evaluating the models in parallel on multiple processor cores and graphics processing units.

Secondly, the dissertation develops variable selection approaches based on ELM and the Delta Test, which result in more accurate and efficient models. Scalability of variable selection using the Delta Test is again achieved by accelerating it on the GPU. Furthermore, a new variable selection method based on ELM is introduced and shown to be a competitive alternative to other variable selection methods.
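As the abstract notes, an ELM is a feedforward network whose hidden-layer weights are drawn at random and left untrained, so that only the linear output weights need to be fitted. The NumPy sketch below illustrates this idea; the function names, the tanh activation, and the plain least-squares solve are illustrative assumptions, not the dissertation's exact setup:

```python
import numpy as np

def train_elm(X, y, n_hidden=50, seed=0):
    """Train a basic ELM: random (untrained) hidden layer,
    output weights fitted by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # only part that is "trained"
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because training reduces to a single linear solve, fitting is fast even for fairly large hidden layers, which is what makes the randomized-hidden-layer approach attractive for large data sets.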
In addition to explicit variable selection methods, a new weight scheme based on binary/ternary weights is developed for ELM. This weight scheme is shown to perform implicit variable selection, and results in increased robustness and accuracy at no additional computational cost.

Finally, the dissertation develops training algorithms for ELM that allow for a flexible trade-off between accuracy and computational time. The Compressive ELM is introduced, which allows the ELM to be trained in a reduced feature space. By selecting the dimension of the feature space, the practitioner can trade accuracy for speed as required.

Overall, the resulting collection of proposed methods provides an efficient, accurate and flexible framework for solving large-scale supervised learning problems. The proposed methods are not limited to the particular types of ELMs and contexts in which they have been tested, and can easily be incorporated in new contexts and models.
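The accuracy-versus-speed trade-off behind the Compressive ELM can be illustrated by solving the output weights in a randomly projected, lower-dimensional feature space. This is a minimal sketch of the general idea only, assuming a plain Gaussian random projection; the dissertation's actual method and its choice of projection may differ:

```python
import numpy as np

def train_compressive_elm(X, y, n_hidden=200, n_proj=30, seed=0):
    """Sketch of a Compressive-ELM-style model: random hidden layer,
    then a random projection of the activations to n_proj dimensions
    before the least-squares solve. Smaller n_proj = faster, less accurate."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    # Gaussian random projection (illustrative choice)
    P = rng.standard_normal((n_hidden, n_proj)) / np.sqrt(n_proj)
    beta, *_ = np.linalg.lstsq(H @ P, y, rcond=None)  # solve in reduced space
    return W, b, P, beta

def predict_compressive_elm(X, W, b, P, beta):
    return np.tanh(X @ W + b) @ P @ beta
```

The least-squares problem shrinks from `n_hidden` to `n_proj` columns, so the practitioner controls the cost of training directly through the projected dimension.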

Record details

  • Author

    van Heeswijk Mark;

  • Affiliation
  • Year: 2015
  • Total pages
  • Original format: PDF
  • Language: en
  • CLC classification

