Symposium on VLSI Technology

Considerations of Integrating Computing-In-Memory and Processing-In-Sensor into Convolutional Neural Network Accelerators for Low-Power Edge Devices



Abstract

In the quest to execute emerging deep learning algorithms on edge devices, developing low-power, low-latency deep learning accelerators (DLAs) has become a top priority. To achieve this goal, data-processing techniques in the sensor and in memory that exploit the array structure have drawn much attention. Processing-in-sensor (PIS) solutions can reduce data transfer, while computing-in-memory (CIM) macros can reduce memory accesses and intermediate data movement. We propose a new architecture that integrates PIS and CIM to realize a low-power DLA. The advantages of using these techniques, and the challenges from a system point of view, are discussed.
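To illustrate the general CIM principle the abstract refers to (this sketch is not taken from the paper): a CIM macro stores a weight matrix as cell conductances inside the memory array, so a matrix-vector multiply is evaluated in a single array access instead of fetching each weight for a separate digital multiply-accumulate. A minimal behavioral model, with all names and shapes chosen here for illustration only:

```python
import numpy as np

def cim_matvec(weights, activations):
    """Behavioral model of an ideal CIM crossbar: activations drive the
    rows as voltages, weights are cell conductances, and each column
    current is the analog sum of voltage x conductance products, i.e.
    one output-channel dot product per column."""
    # Column current I_j = sum_i V_i * G_ij  ->  a plain dot product here.
    return activations @ weights

# Toy example: a 3x3 convolution window expressed as one CIM access.
# The flattened 3x3 input patch is the activation vector; each filter
# occupies one column of the array.
rng = np.random.default_rng(0)
patch = rng.random(9)             # one flattened 3x3 input window
filters = rng.random((9, 4))      # four 3x3 filters, one per column
out = cim_matvec(filters, patch)  # four output-channel values at once
print(out.shape)                  # -> (4,)
```

In a digital DLA the nine weights of every filter would be read from memory for each window; in the CIM model above they stay in the array, which is the memory-access reduction the abstract attributes to CIM macros.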


