Instrumentation and Measurement Technology Conference (I2MTC), 2012 IEEE International

An input data set compression method for improving the training ability of neural networks

Abstract

Artificial Neural Networks (ANNs) can learn complex functions from input data and are relatively easy to implement in any application. A significant disadvantage of their use, however, is their usually high training time, which scales with the structural parameters of the network and the quantity of input data. Even though training can be performed offline, it has a non-negligible cost and can, furthermore, cause a delay in operation. To increase the training speed of ANNs used for classification, we have developed a new training procedure: instead of using the training data directly in the training phase, the data is first clustered and the ANNs are trained using only the centers of the obtained clusters, which are essentially compressed versions of the original input data.
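The abstract does not name a specific clustering algorithm or network architecture, so the following is only a minimal sketch of the idea under stated assumptions: per-class k-means clustering and a small multilayer perceptron, with scikit-learn's KMeans and MLPClassifier as illustrative choices rather than the authors' implementation. The training set is replaced by the cluster centers before the ANN is fitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def compress_training_set(X, y, clusters_per_class=10, random_state=0):
    """Replace each class's samples with its k-means cluster centers.

    The centers act as a compressed version of the original input data,
    so the downstream ANN is trained on far fewer points.
    """
    centers, labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        k = min(clusters_per_class, len(Xc))  # cannot use more clusters than samples
        km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(Xc)
        centers.append(km.cluster_centers_)
        labels.append(np.full(k, c))
    return np.vstack(centers), np.concatenate(labels)

# Illustrative usage on synthetic two-class data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

X_small, y_small = compress_training_set(X, y, clusters_per_class=10)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X_small, y_small)   # trained on 20 cluster centers instead of 1000 samples
print(ann.score(X, y))      # accuracy evaluated on the full original set
```

Clustering each class separately (rather than clustering the pooled data) is an assumption made here so that every cluster center inherits an unambiguous class label for the classification task.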
