Knowledge-Based Systems

Random compact Gaussian kernel: Application to ELM classification and regression



Abstract

Extreme learning machine (ELM) kernels based on random feature mapping have recently gained popularity because they require little human supervision. However, the advantage of the ELM kernel's mapping mechanism comes with a higher computation cost, making it hard for kernel learning algorithms to tackle large-scale learning tasks. On the other hand, the implicit mapping used in the conventional Gaussian kernel is computationally cheaper than the explicit computation of the ELM kernel, but it requires human intervention to select its kernel parameter. This paper proposes to merge both properties by defining a new kernel, the random compact Gaussian (RCG) kernel. Its random feature mapping property allows the RCG kernel to save parameter selection time, while its implicit mapping property allows it to save kernel calculation time. The proposed kernel extends the single kernel parameter of the conventional Gaussian kernel to multiple kernel parameters and generates all of them randomly from a continuous probability distribution. We prove that the RCG kernel is a Mercer kernel. The kernel is fully specified before the training samples are seen, then computed implicitly and used to train ELMs. Experiments on 25 binary classification and regression benchmark problems show that the RCG kernel typically outperforms other competitive kernels. Compared with the ELM kernel, the RCG kernel not only achieves better generalization performance on most datasets but also requires much lower kernel calculation cost. In addition, a sensitivity analysis of the kernel parameters under k-fold cross-validation is conducted, and the results show that the RCG kernel is robust and stable across repeated trials. (C) 2021 Elsevier B.V. All rights reserved.
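The abstract describes the RCG mechanism only at a high level: replace the single width of the conventional Gaussian kernel with multiple kernel parameters drawn randomly from a continuous distribution, compute the kernel implicitly, and use it to train a kernel ELM. The sketch below illustrates that idea under explicit assumptions: the per-dimension Gaussian form, the uniform sampling range, and the names (rcg_kernel, train_kernel_elm) are illustrative and not taken from the paper; only the closed-form kernel ELM solution beta = (I/C + K)^{-1} T is the standard one from the kernel ELM literature.

```python
# Minimal sketch (not the authors' code): a Gaussian-type kernel with one
# randomly drawn width per input dimension, plugged into a kernel ELM.
# The exact RCG formula and the sampling distribution used in the paper are
# not given in the abstract; uniform widths here are an assumption.
import numpy as np

rng = np.random.default_rng(0)

def random_gaussian_widths(n_features, low=0.1, high=10.0):
    """Draw one width per dimension from a continuous distribution (assumed
    uniform here), replacing the single width of the conventional kernel,
    so no cross-validation over the kernel parameter is needed."""
    return rng.uniform(low, high, size=n_features)

def rcg_kernel(X, Z, widths):
    """Kernel with a separate width per dimension:
    k(x, z) = exp(-sum_d (x_d - z_d)^2 / (2 * widths_d^2)).
    A product of 1-D Gaussian kernels, hence still a Mercer kernel."""
    diff = X[:, None, :] - Z[None, :, :]        # (n, m, d) pairwise differences
    scaled = (diff / widths) ** 2               # per-dimension scaling
    return np.exp(-0.5 * scaled.sum(axis=-1))   # (n, m) kernel matrix

def train_kernel_elm(K_train, T, C=1.0):
    """Standard closed-form kernel ELM solution: beta = (I/C + K)^{-1} T."""
    n = K_train.shape[0]
    return np.linalg.solve(np.eye(n) / C + K_train, T)

def predict_kernel_elm(K_test_train, beta):
    """Predictions for test points: f(x) = k(x, X_train) beta."""
    return K_test_train @ beta

# Toy usage: binary classification with +/-1 targets.
X_train = rng.normal(size=(100, 5))
y_train = np.sign(X_train[:, 0] + 0.3 * rng.normal(size=100))
X_test = rng.normal(size=(20, 5))

widths = random_gaussian_widths(X_train.shape[1])   # fixed before training data is used
K_tr = rcg_kernel(X_train, X_train, widths)
beta = train_kernel_elm(K_tr, y_train, C=10.0)
y_pred = np.sign(predict_kernel_elm(rcg_kernel(X_test, X_train, widths), beta))
```

In this sketch the only training-time work beyond the conventional Gaussian kernel ELM is drawing the width vector once, which is how the abstract's claim of saving both parameter selection time and kernel calculation time can be read.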
