IEEE International Conference on Image Processing

Fine tuning CNNS with scarce training data — Adapting imagenet to art epoch classification



Abstract

Deep Convolutional Neural Networks (CNNs) have recently been shown to outperform previous state-of-the-art approaches for image classification. Their success must in part be attributed to the availability of large labeled training sets such as those provided by the ImageNet benchmarking initiative. When training data is scarce, however, CNNs have proven to fail to learn descriptive features. Recent research shows that supervised pre-training on external data followed by domain-specific fine-tuning yields a significant performance boost when the external data and target domain share similar visual characteristics. Transfer learning from a base task to a highly dissimilar target task, however, has not yet been fully investigated. In this paper, we analyze the performance of different feature representations for the classification of paintings into art epochs. Specifically, we evaluate the impact of training set size on CNNs trained with and without external data and compare the obtained models to linear models based on Improved Fisher Encodings. Our results underline the superior performance of fine-tuned CNNs but likewise recommend Fisher Encodings in scenarios where training data is limited.
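The fine-tuning strategy the abstract describes can be sketched conceptually: when labeled paintings are scarce, a common variant is to freeze the pre-trained convolutional layers and train only a new linear classification head on the extracted features. The sketch below illustrates that final step with numpy; the feature dimension (512), number of art epochs (4), and random stand-in features are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, feat_dim, n_classes = 80, 512, 4

# Stand-ins for features produced by a frozen, ImageNet-pre-trained CNN.
X = rng.normal(size=(n_samples, feat_dim))
y = rng.integers(0, n_classes, size=n_samples)

# New classification head, trained from scratch on the target task.
W = np.zeros((feat_dim, n_classes))
b = np.zeros(n_classes)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.1
for _ in range(200):  # gradient descent on the cross-entropy loss
    probs = softmax(X @ W + b)
    probs[np.arange(n_samples), y] -= 1.0  # gradient w.r.t. the logits
    W -= lr * X.T @ probs / n_samples
    b -= lr * probs.mean(axis=0)

train_acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
```

Only `W` and `b` are updated here; in a full fine-tuning setup the convolutional layers would also be unfrozen and trained with a small learning rate once enough labeled data is available.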


