IEEE International Midwest Symposium on Circuits and Systems

A Study on Network Size Reduction Using Sparse Input Representation in Time Delay Neural Networks



Abstract

Neural networks are being used in an increasing number of applications. While large deep neural networks are advancing rapidly, smaller networks are desirable for computing devices that lack powerful processors. In this paper, we focus on neural networks smaller than common deep networks and examine how their size can be reduced further without sacrificing performance. We show that for some data types the networks can be made smaller by using a sparse representation of the input. We study time delay neural networks (TDNN) for time series prediction on three different datasets. We find that sparsifying the input to the TDNN using the Discrete Cosine Transform (DCT) or Principal Component Analysis (PCA) can improve performance, and that this improvement can be traded off for a reduction in network size; the network can therefore be made smaller while maintaining the same performance. For data with more randomness and sudden changes in value (i.e., with higher frequencies present), sparse representations obtained via the discrete cosine transform or principal component analysis allow the network size to be reduced by up to 40%.
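The paper does not include code; the following minimal numpy sketch illustrates the general idea of a DCT-based sparse input representation as described in the abstract: a time-delay input window is transformed with an orthonormal DCT-II and only the largest-magnitude coefficients are kept, so a smaller network can consume a compact version of the window. The window length, the number of retained coefficients, and the `sparsify` helper are illustrative choices, not values from the paper.

```python
import numpy as np

def dct_ii(x):
    """Orthonormal DCT-II of a 1-D window (matches scipy.fft.dct(x, norm='ortho'))."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(N)
    k = np.arange(N).reshape(-1, 1)
    # DCT-II basis: cos(pi * (2n + 1) * k / (2N))
    basis = np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    X = np.sqrt(2.0 / N) * (basis @ x)
    X[0] /= np.sqrt(2.0)  # orthonormal scaling of the DC term
    return X

def sparsify(window, n_keep):
    """Keep the n_keep largest-magnitude DCT coefficients and zero the rest."""
    X = dct_ii(window)
    keep = np.argsort(np.abs(X))[-n_keep:]
    sparse = np.zeros_like(X)
    sparse[keep] = X[keep]
    return sparse

# A smooth window: its energy concentrates in a few low-frequency
# coefficients, so most of the transformed inputs can be dropped.
t = np.linspace(0.0, 1.0, 16)
window = np.sin(2 * np.pi * t)
sparse_input = sparsify(window, n_keep=4)
print(np.count_nonzero(sparse_input))  # 4 of 16 inputs are nonzero
```

The sparse (or truncated) coefficient vector would then replace the raw delayed samples as the TDNN input, which is what permits shrinking the first layer.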

