Feed-Forward On-Edge Fine-Tuning Using Static Synthetic Gradient Modules

Abstract

Training deep learning models on embedded devices is typically avoided, since training requires more memory, computation, and power than inference. In this work, we focus on lowering the amount of memory needed to store activations, which standard backpropagation requires during the backward pass to compute gradients. Instead, during the forward pass, static Synthetic Gradient Modules (SGMs) predict the gradient for each layer. This allows the model to be trained in a feed-forward manner without storing all activations. We tested our method on a robot-grasping scenario in which a robot must learn to grasp new objects given only a single demonstration. Because the SGMs were first trained in a meta-learning manner on a set of common objects, during fine-tuning they provided the model with gradients accurate enough to successfully learn to grasp new objects. We show that our method achieves results comparable to standard backpropagation.
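The mechanism described in the abstract is easiest to see in code. Below is a minimal, hypothetical PyTorch sketch of the idea, not the authors' implementation: the MLP, the linear SGM architecture, the layer sizes, and the SGD learning rate are all illustrative assumptions. Each layer's update uses only its own input and the gradient predicted by its frozen SGM, so no activations need to be retained for a global backward pass.

```python
import torch
import torch.nn as nn

class SGM(nn.Module):
    """Synthetic Gradient Module: predicts dL/dh for one layer's output h.
    'Static' means it is frozen during on-edge fine-tuning; per the abstract,
    it would first be trained in a meta-learning manner on common objects."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Linear(dim, dim)  # illustrative; the paper's architecture may differ

    def forward(self, h):
        return self.net(h)

# Illustrative MLP; layer sizes are arbitrary assumptions.
dims = [32, 64, 64, 10]
layers = nn.ModuleList([
    nn.Sequential(nn.Linear(dims[i], dims[i + 1]), nn.ReLU())
    for i in range(len(dims) - 1)
])
sgms = nn.ModuleList([SGM(d) for d in dims[1:]])
for sgm in sgms:
    sgm.requires_grad_(False)  # static: SGMs are not updated while fine-tuning

optimizers = [torch.optim.SGD(layer.parameters(), lr=1e-2) for layer in layers]

x = torch.randn(8, dims[0])  # dummy batch standing in for demonstration data
h = x
for layer, sgm, opt in zip(layers, sgms, optimizers):
    h_in = h.detach()                 # cut the graph: no global backward pass
    h_out = layer(h_in)
    grad_pred = sgm(h_out.detach())   # predicted gradient of the loss w.r.t. h_out
    opt.zero_grad()
    h_out.backward(grad_pred)         # backprop through this single layer only
    opt.step()
    h = h_out.detach()                # earlier activations can now be freed
```

Because each update is local, peak training memory is bounded by a single layer's activations rather than the whole network's, which is the saving the abstract describes.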
