IEEE Computational Intelligence Magazine

Self-Supervised Representation Learning for Evolutionary Neural Architecture Search


Abstract

Recently proposed neural architecture search (NAS) algorithms adopt neural predictors to accelerate architecture search. The ability of neural predictors to accurately predict the performance metrics of neural architectures is critical to NAS, but obtaining training datasets for neural predictors is often time-consuming. How to obtain a neural predictor with high prediction accuracy from a small amount of training data is therefore a central problem in neural predictor-based NAS. Here, a new architecture encoding scheme is first devised to calculate the graph edit distance of neural architectures, which overcomes the drawbacks of existing vector-based architecture encoding schemes. To enhance the predictive performance of neural predictors, two self-supervised learning methods are proposed to pre-train the architecture embedding part of neural predictors so that it generates meaningful representations of neural architectures. The first method designs a graph neural network-based model with two independent branches and uses the graph edit distance between two different neural architectures as supervision to force the model to generate meaningful architecture representations. Inspired by contrastive learning, the second method presents a new contrastive learning algorithm that utilizes a central feature vector as a proxy to contrast positive pairs against negative pairs. Experimental results illustrate that the pre-trained neural predictors achieve comparable or superior performance to their supervised counterparts while using only half of the training samples. The effectiveness of the proposed methods is further validated by integrating the pre-trained neural predictors into a neural predictor-guided evolutionary neural architecture search (NPENAS) algorithm, which achieves state-of-the-art performance on the NASBench-101, NASBench-201, and DARTS benchmarks.
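The first pre-training method pairs two independent graph encoders and regresses the graph edit distance of an architecture pair. A minimal sketch of that idea follows, assuming one-hot operation features propagated over the cell adjacency matrix; the layer sizes, encoder design, and prediction head are assumptions for illustration, not the paper's model.

```python
# Hypothetical sketch of the two-branch pre-training idea: two independent
# graph encoders embed a pair of architectures, and the model is trained to
# regress their graph edit distance.
import torch
import torch.nn as nn

class TinyGraphEncoder(nn.Module):
    """A minimal GCN-style encoder: operations are one-hot node features,
    propagated over the cell's adjacency matrix, then mean-pooled."""
    def __init__(self, n_ops, dim=32):
        super().__init__()
        self.lin1 = nn.Linear(n_ops, dim)
        self.lin2 = nn.Linear(dim, dim)

    def forward(self, adj, feats):           # adj: (n, n), feats: (n, n_ops)
        h = torch.relu(self.lin1(adj @ feats))
        h = torch.relu(self.lin2(adj @ h))
        return h.mean(dim=0)                  # graph-level embedding

class GEDPredictor(nn.Module):
    def __init__(self, n_ops, dim=32):
        super().__init__()
        self.branch_a = TinyGraphEncoder(n_ops, dim)   # two independent branches
        self.branch_b = TinyGraphEncoder(n_ops, dim)
        self.head = nn.Linear(2 * dim, 1)

    def forward(self, adj_a, feat_a, adj_b, feat_b):
        za = self.branch_a(adj_a, feat_a)
        zb = self.branch_b(adj_b, feat_b)
        return self.head(torch.cat([za, zb]))          # predicted distance

# Pre-training step: minimize MSE between predicted and true edit distance.
model = GEDPredictor(n_ops=5)
adj = torch.rand(4, 4); feats = torch.eye(4, 5)
loss = nn.functional.mse_loss(model(adj, feats, adj, feats).squeeze(),
                              torch.tensor(0.0))       # identical pair -> distance 0
loss.backward()
```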

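The second method contrasts positive pairs against negative pairs through a central feature vector used as a proxy. Below is an InfoNCE-style reading of that description, assuming two encoded views per architecture whose mean serves as the central proxy; this is a hedged interpretation of the abstract, not the authors' exact objective.

```python
# Hypothetical sketch of the central-feature-vector contrastive loss: the
# mean of a positive pair acts as the "central" proxy, pulled toward both
# views and pushed away from the other architectures in the batch.
import torch
import torch.nn.functional as F

def central_contrastive_loss(z1, z2, temperature=0.1):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    center = F.normalize((z1 + z2) / 2, dim=1)   # one proxy per architecture
    logits1 = z1 @ center.t() / temperature      # each view vs. all proxies
    logits2 = z2 @ center.t() / temperature
    labels = torch.arange(z1.size(0))            # the matching proxy is positive
    return 0.5 * (F.cross_entropy(logits1, labels)
                  + F.cross_entropy(logits2, labels))

# Example: a batch of 8 architecture embeddings of dimension 32.
z1, z2 = torch.randn(8, 32), torch.randn(8, 32)
print(central_contrastive_loss(z1, z2).item())
```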