Published in 《计算机工程与科学》 (Computer Engineering & Science)

Research on Accelerating CNN Convolution Computation on Mobile GPUs

         

Abstract

Convolutional Neural Networks (CNNs) play an increasingly important role in areas such as image classification and speech recognition because of their excellent performance. Some researchers have sought to bring this deep learning process to mobile phones, but the performance of ported programs has been unsatisfactory due to the huge computational cost of CNNs. To explore how to solve this problem, this paper uses the deep learning framework MXNet to implement the forward pass of a CNN on a mobile phone, focusing on the GPU, another powerful computing device on the phone. Based on the OpenCL general-purpose programming framework, we express the most time-consuming operation of the forward pass, the convolution, as a matrix multiplication and offload it to the GPU. On top of this, several further optimizations targeting the mobile GPU are applied. The experimental results show that we successfully reduce the forward-pass time to half of the original.
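The paper does not reproduce its OpenCL kernels here, but the lowering it describes, rewriting convolution as one large matrix multiplication (the im2col transform), can be sketched in a few lines. The following NumPy sketch is an assumption-laden illustration, not the authors' code: single channel, stride 1, no padding. In the paper's setting, the matrix product at the end is the part that becomes a GEMM kernel dispatched to the mobile GPU via OpenCL.

```python
import numpy as np

def im2col(x, kh, kw):
    """Unfold every kh x kw patch of a single-channel image x
    into one column of a matrix (stride 1, no padding)."""
    H, W = x.shape
    oh, ow = H - kh + 1, W - kw + 1
    cols = np.empty((kh * kw, oh * ow))
    for i in range(oh):
        for j in range(ow):
            cols[:, i * ow + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

def conv2d_gemm(x, k):
    """Convolution (cross-correlation) as a single matrix multiply:
    flattened kernel (1 x kh*kw) times the im2col matrix."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    return (k.ravel() @ im2col(x, kh, kw)).reshape(oh, ow)

def conv2d_direct(x, k):
    """Naive sliding-window reference for checking the GEMM form."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out
```

The trade-off the paper exploits is that im2col duplicates input data, but the resulting dense GEMM maps far better onto a GPU's parallel ALUs and optimized BLAS-style kernels than a direct sliding-window loop does.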
