Device-Circuit-Architecture Co-Exploration for Computing-in-Memory Neural Accelerators
Univ Notre Dame Dept Comp Sci & Engn Notre Dame IN 46556 USA;
Univ Pittsburgh Dept Elect & Comp Engn Pittsburgh PA 15261 USA;
Computer architecture; Hardware; Neural networks; Performance evaluation; Optimization; Object recognition; Quantization (signal); Hardware/software co-design; computing-in-memory architecture; neural architecture search; neural network accelerator
Mixed-Precision Low-Power Computing-in-Memory Architecture for Neural Networks
UL-CNN: An Ultra-Lightweight Convolutional Neural Network for Flash-Based Computing-in-Memory Architectures, Targeting Pedestrian Recognition
Impact and Mitigation of Weight Errors Induced by Nonvolatile Memory in Computing-in-Memory Neural Network Systems
Uncertainty Modeling of Emerging-Device-Based Computing-in-Memory Neural Accelerators with Application to Neural Architecture Search
Memory-Driven Data-Flow Optimization for Neural Processing Accelerators
Throughput and Efficiency of Hardware Spiking Neural Accelerators Using Time Compression Supporting Multiple Spike Codes
Co-Exploration of Graph Neural Network and Network-on-Chip Design Using AutoML