Reducing the Training Times of Neural Classifiers with Dataset Condensing


Abstract

In this paper, we apply a k-nearest-neighbour-based data condensing algorithm to the training sets of multi-layer perceptron neural networks. By removing overlapping data and retaining only the training exemplars adjacent to the decision boundary, we significantly speed up network training while achieving a misclassification rate no worse than that of a network trained on the unedited training set. We report results on a range of synthetic and real datasets, which indicate that an order-of-magnitude speed-up in network training time is typical.
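
The abstract only sketches the method at a high level. The following is a minimal, hypothetical Python sketch of the two condensing steps it describes, assuming scikit-learn-style arrays X (features) and y (integer class labels). The choice of Wilson-style editing for the overlap-removal step, the majority threshold, and the function name condense are assumptions for illustration, not details taken from the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def condense(X, y, k=5):
    """Sketch of k-NN-based training-set condensing (assumed details).

    Step 1: drop 'overlapping' exemplars whose k nearest neighbours
            mostly disagree with their own label.
    Step 2: keep only exemplars adjacent to the decision boundary,
            i.e. those with at least one opposite-class neighbour.
    """
    y = np.asarray(y)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)           # idx[:, 0] is each point itself
                                        # (assumes no duplicate points)
    neigh_labels = y[idx[:, 1:]]        # labels of the k true neighbours

    # Step 1: remove overlapping exemplars (majority of neighbours disagree).
    agree = (neigh_labels == y[:, None]).mean(axis=1)
    clean = agree >= 0.5

    # Step 2: keep only boundary-adjacent exemplars (some neighbour differs).
    boundary = (neigh_labels != y[:, None]).any(axis=1)

    keep = clean & boundary
    return X[keep], y[keep]
```

Under this reading, the condensed pair returned by condense would replace the full training set when fitting the multi-layer perceptron; training on the much smaller boundary-adjacent subset is what yields the reported order-of-magnitude reduction in training time.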
