International Symposium on Visual Computing

One-Shot Learning of Sketch Categories with Co-regularized Sparse Coding

Abstract

Categorizing free-hand human sketches has profound implications for applications such as human-computer interaction and image retrieval. The task is non-trivial due to the iconic nature of sketches, marked by large variances in both appearance and structure compared with photographs. Prior works often utilize off-the-shelf low-level features and assume the availability of a large training set, rendering them sensitive to abstraction and less scalable to new categories. To overcome this limitation, we propose a transfer learning framework that enables one-shot learning of sketch categories. The framework is based on a novel co-regularized sparse coding model which exploits common/shareable parts among human sketches of seen categories and transfers them to unseen categories. We contribute a new dataset consisting of 7,760 human-segmented sketches from 97 object categories. Extensive experiments reveal that the proposed method can classify unseen sketch categories given just one training sample, with 33.04% accuracy, a two-fold improvement over baselines.
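
The abstract gives only the outline of the model, but the core idea, sparse coding with a coupling penalty that ties codes together, can be sketched compactly. Below is a minimal, illustrative Python sketch assuming two feature views (e.g., appearance and structure descriptors) whose sparse codes are encouraged to agree through a quadratic co-regularization term. The objective, the ISTA solver, and all names (coregularized_codes, one_shot_classify, and the dictionaries D1 and D2, assumed pre-learned on seen categories) are assumptions made for illustration, not the authors' actual formulation.

import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def coregularized_codes(X1, X2, D1, D2, lam=0.1, gamma=0.5, n_iter=200):
    # Jointly sparse-code two feature views (columns of X1, X2) against
    # fixed dictionaries D1, D2, coupling the code matrices A1, A2:
    #   min 0.5||X1 - D1 A1||^2 + 0.5||X2 - D2 A2||^2
    #       + lam (||A1||_1 + ||A2||_1) + 0.5 gamma ||A1 - A2||^2
    # solved here by alternating ISTA (proximal gradient) steps.
    k, n = D1.shape[1], X1.shape[1]
    A1 = np.zeros((k, n))
    A2 = np.zeros((k, n))
    # Step sizes come from the Lipschitz constants of the smooth parts.
    L1 = np.linalg.norm(D1, 2) ** 2 + gamma
    L2 = np.linalg.norm(D2, 2) ** 2 + gamma
    for _ in range(n_iter):
        g1 = D1.T @ (D1 @ A1 - X1) + gamma * (A1 - A2)
        A1 = soft_threshold(A1 - g1 / L1, lam / L1)
        g2 = D2.T @ (D2 @ A2 - X2) + gamma * (A2 - A1)
        A2 = soft_threshold(A2 - g2 / L2, lam / L2)
    return A1, A2

def one_shot_classify(test_codes, exemplar_codes, labels):
    # One-shot step: each unseen category is represented by the code of
    # its single exemplar; a test sketch takes the nearest exemplar's label.
    # test_codes: (n_test, k); exemplar_codes: (n_classes, k);
    # labels: (n_classes,) numpy array of category labels.
    d = np.linalg.norm(test_codes[:, None, :] - exemplar_codes[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

The design intuition matching the abstract: dictionaries learned on seen categories capture the common/shareable parts of sketches, so encoding a single exemplar of an unseen category yields a code expressive enough for nearest-neighbor classification.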
