IEEE/ACM International Conference On Computer Aided Design

NASCaps: A Framework for Neural Architecture Search to Optimize the Accuracy and Hardware Efficiency of Convolutional Capsule Networks



Abstract

Deep Neural Networks (DNNs) have made significant improvements to reach the desired accuracy to be employed in a wide variety of Machine Learning (ML) applications. Recently, the Google Brain team demonstrated the ability of Capsule Networks (CapsNets) to encode and learn spatial correlations between different input features, thereby obtaining superior learning capabilities compared to traditional (i.e., non-capsule based) DNNs. However, designing CapsNets using conventional methods is a tedious job and incurs significant training effort. Recent studies have shown that Neural Architecture Search (NAS) algorithms provide powerful methods for automatically selecting the best/optimal DNN model configuration for a given set of applications and a training dataset. Moreover, due to their extreme computational and memory requirements, DNNs are deployed on specialized hardware accelerators in IoT-Edge/CPS devices. In this paper, we propose NASCaps, an automated framework for the hardware-aware NAS of different types of DNNs, covering both traditional convolutional DNNs and CapsNets. We study the efficacy of deploying a multi-objective Genetic Algorithm (e.g., based on the NSGA-II algorithm). The proposed framework can jointly optimize the network accuracy and the corresponding hardware efficiency, expressed in terms of energy, memory, and latency of a given hardware accelerator executing the DNN inference. Besides supporting the traditional DNN layers (such as convolutional and fully-connected), our framework is the first to model and support the specialized capsule layers and dynamic routing in the NAS flow. We evaluate our framework on different datasets, generating different network configurations, and demonstrate the tradeoffs between the different output metrics. We will open-source the complete framework and configurations of the Pareto-optimal architectures at https://github.com/ehw-fit/nascaps.
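The search flow described in the abstract (a multi-objective genetic algorithm in the style of NSGA-II that trades off accuracy against energy, memory, and latency of the target accelerator) can be illustrated with a minimal sketch. The code below is not the NASCaps implementation: it assumes the third-party pymoo library is available and uses a toy genome encoding of layer types and widths, with placeholder objective functions standing in for real training runs and for the accelerator's energy, memory, and latency models.

import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class CapsNetNASProblem(ElementwiseProblem):
    # Toy search space: 5 layers, each described by (type, width index),
    # giving 10 genes; genes are treated as integers in [0, 7].
    def __init__(self):
        super().__init__(n_var=10, n_obj=4, xl=0.0, xu=7.0)

    def _evaluate(self, x, out, *args, **kwargs):
        genome = np.round(x).astype(int)  # decode real-valued genes to integers
        # Placeholder objectives (all minimized). In a hardware-aware NAS, f1
        # would be the validation error of the decoded (Caps)Net after a short
        # training run, while f2-f4 would come from the energy, memory, and
        # latency models of the target DNN accelerator.
        f1 = float(np.sum(genome % 2)) / len(genome)   # stand-in for error
        f2 = float(np.sum(genome))                     # stand-in for energy
        f3 = float(np.sum(genome ** 2))                # stand-in for memory
        f4 = float(np.max(genome))                     # stand-in for latency
        out["F"] = [f1, f2, f3, f4]

if __name__ == "__main__":
    res = minimize(CapsNetNASProblem(),
                   NSGA2(pop_size=20),
                   ("n_gen", 10),
                   seed=1,
                   verbose=False)
    # res.X holds the Pareto-optimal genomes, res.F their objective values.
    print(res.F)

In the actual framework, each candidate would be decoded into a full network description (including capsule layers and dynamic routing), trained for a limited number of epochs, and characterized against the accelerator model; the NSGA-II selection then carries the non-dominated candidates from one generation to the next.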
