...
[Machine translation] Model Compression and Hardware Acceleration for Neural Networks: A Comprehensive Survey
Tsinghua University, Center for Brain-Inspired Computing Research, Department of Precision Instrument, Beijing 100084, People's Republic of China | University of California, Santa Barbara, Department of Electrical & Computer Engineering, Santa Barbara, CA 93106, USA;
Tsinghua University, Center for Brain-Inspired Computing Research, Department of Precision Instrument, Beijing 100084, People's Republic of China | Tsinghua University, Beijing Innovation Center for Future Chips, Beijing 100084, People's Republic of China;
MIT, Department of Electrical Engineering & Computer Science, Cambridge, MA 02139, USA;
Tsinghua University, Center for Brain-Inspired Computing Research, Department of Precision Instrument, Beijing 100084, People's Republic of China | Tsinghua University, Beijing Innovation Center for Future Chips, Beijing 100084, People's Republic of China;
University of California, Santa Barbara, Department of Electrical & Computer Engineering, Santa Barbara, CA 93106, USA;
Neural networks; Tensor decomposition; Data quantization; Acceleration; Program processors; Machine learning; Task analysis; Compact neural network; data quantization; neural network acceleration; neural network compression; sparse neural network; tensor decomposition;
[Machine translation] Symmetric $k$-means for deep neural network compression and FPGA hardware acceleration
[Machine translation] A comprehensive survey on model compression and acceleration
[Machine translation] VIBNN: Hardware acceleration of Bayesian neural networks
[Machine translation] 3D object modeling with graphics hardware acceleration and unsupervised neural networks
[Machine translation] Emerging opportunities in machine learning hardware acceleration: from advanced neural network implementations to ultra-efficient deep learning frameworks using next-generation technologies
[Machine translation] Massive computational acceleration by using neural networks to emulate mechanism-based biological models
[Machine translation] Binarized convolutional neural networks with separable filters for efficient hardware acceleration