Annual IEEE/ACM International Symposium on Microarchitecture
GeneSys: Enabling Continuous Learning through Neural Network Evolution in Hardware



Abstract

Modern deep learning systems rely on (a) a hand-tuned neural network topology, (b) massive amounts of labeled training data, and (c) extensive training over large-scale compute resources to build a system that can perform efficient image classification or speech recognition. Unfortunately, we are still far away from implementing adaptive general-purpose intelligent systems, which would need to learn autonomously in unknown environments and may not have access to some or any of these three components. Reinforcement learning and evolutionary algorithm (EA) based methods circumvent this problem by continuously interacting with the environment and updating the models based on obtained rewards. However, deploying these algorithms on ubiquitous autonomous agents at the edge (robots/drones) demands extremely high energy-efficiency due to (i) tight power and energy budgets, (ii) continuous/lifelong interaction with the environment, and (iii) intermittent or no connectivity to the cloud to run heavy-weight processing. To address this need, we present GENESYS, an HW-SW prototype of an EA-based learning system, that comprises a closed-loop learning engine called EvE and an inference engine called ADAM. EvE can evolve the topology and weights of neural networks completely in hardware for the task at hand, without requiring hand-optimization or backpropagation training. ADAM continuously interacts with the environment and is optimized for efficiently running the irregular neural networks generated by EvE. GENESYS identifies and leverages multiple avenues of parallelism unique to EAs, which we term "gene"-level parallelism and "population"-level parallelism. We ran GENESYS with a suite of environments from OpenAI Gym and observed 2-5 orders of magnitude higher energy-efficiency over state-of-the-art embedded and desktop CPU and GPU systems.
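To make the EA-style learning loop described in the abstract concrete, below is a minimal sketch (not the authors' actual GeneSys/EvE implementation): a population of small fixed-length weight genomes is evaluated independently ("population"-level parallelism), the fittest survive, and children are produced by mutating each weight independently ("gene"-level parallelism). The reward function `toy_reward` is a stand-in assumption for an environment rollout such as an OpenAI Gym episode.

```python
import random

GENOME_LEN = 4       # weights per genome (real genomes also encode topology)
POP_SIZE = 20        # population-level parallelism: evaluations are independent
GENERATIONS = 30
MUTATION_STD = 0.1   # gene-level parallelism: each weight mutates independently

def toy_reward(genome):
    # Stand-in for an environment rollout: reward peaks when every
    # weight equals 1.0 (hypothetical objective, for illustration only).
    return -sum((w - 1.0) ** 2 for w in genome)

def mutate(genome, rng):
    # Add small Gaussian noise to every gene independently.
    return [w + rng.gauss(0.0, MUTATION_STD) for w in genome]

def evolve(rng):
    # Random initial population of weight vectors.
    pop = [[rng.uniform(-1, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=toy_reward, reverse=True)
        parents = pop[: POP_SIZE // 4]            # truncation selection
        pop = parents + [mutate(rng.choice(parents), rng)
                         for _ in range(POP_SIZE - len(parents))]
    return max(pop, key=toy_reward)

best = evolve(random.Random(0))
print(toy_reward(best))
```

Note that, unlike backpropagation, this loop needs only scalar rewards from the environment, which is why the abstract argues EAs suit edge agents with no labeled data or cloud connectivity.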

