European Conference on Computer Vision

TF-NAS: Rethinking Three Search Freedoms of Latency-Constrained Differentiable Neural Architecture Search



Abstract

With the flourish of differentiable neural architecture search (NAS), automatically searching latency-constrained architectures gives a new perspective to reduce human labor and expertise. However, the searched architectures are usually suboptimal in accuracy and may have large jitters around the target latency. In this paper, we rethink three freedoms of differentiable NAS, i.e. operation-level, depth-level and width-level, and propose a novel method, named Three-Freedom NAS (TF-NAS), to achieve both good classification accuracy and precise latency constraint. For the operation-level, we present a bi-sampling search algorithm to moderate the operation collapse. For the depth-level, we introduce a sink-connecting search space to ensure the mutual exclusion between skip and other candidate operations, as well as eliminate the architecture redundancy. For the width-level, we propose an elasticity-scaling strategy that achieves precise latency constraint in a progressively fine-grained manner. Experiments on ImageNet demonstrate the effectiveness of TF-NAS. Particularly, our searched TF-NAS-A obtains 76.9% top-1 accuracy, achieving state-of-the-art results with less latency.
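The abstract describes a latency-constrained differentiable NAS objective: candidate operations are weighted by architecture parameters, and the expected latency of the weighted mixture is penalized toward a target. The following is a minimal illustrative sketch of that general idea (not the paper's actual TF-NAS formulation); the operation latencies, logits, and penalty weight are hypothetical placeholders.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a vector of logits."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Hypothetical per-operation latencies (ms), e.g. from a lookup table
# profiled on the target device.
op_latencies = np.array([1.2, 2.5, 0.3, 4.0])

# Hypothetical architecture parameters (logits) for one layer's
# candidate operations.
alpha = np.array([0.1, 0.5, -0.2, 0.3])

def expected_latency(alpha, latencies):
    """Differentiable latency estimate: softmax-weighted sum of op latencies."""
    return float(softmax(alpha) @ latencies)

def total_loss(task_loss, alpha, latencies, target_latency, lam=0.1):
    """Task loss plus a penalty on deviation from the target latency."""
    lat = expected_latency(alpha, latencies)
    return task_loss + lam * abs(lat - target_latency)
```

In this sketch, `expected_latency` is differentiable in `alpha`, so gradient descent can trade accuracy against latency; the paper's elasticity-scaling strategy goes further by adjusting widths to meet the target latency precisely.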


