IEEE Transactions on Multimedia

Clothes Co-Parsing Via Joint Image Segmentation and Labeling With Application to Clothing Retrieval



Abstract

This paper aims at developing an integrated system for clothing co-parsing (CCP), in order to jointly parse a set of clothing images (unsegmented but annotated with tags) into semantic configurations. A novel data-driven system consisting of two phases of inference is proposed. The first phase, referred to as "image cosegmentation," iterates to extract consistent regions on images and jointly refines the regions over all images by employing the exemplar-SVM technique. In the second phase (i.e., "region colabeling"), we construct a multi-image graphical model by taking the segmented regions as vertices, and incorporating several contexts of clothing configuration (e.g., item locations and mutual interactions). The joint label assignment can be solved using the efficient Graph Cuts algorithm. In addition to evaluating our framework on the Fashionista dataset, we construct a dataset called the SYSU-Clothes dataset consisting of 2098 high-resolution street fashion photos to demonstrate the performance of our system. We achieve 90.29%/88.23% segmentation accuracy and 65.52%/63.89% recognition rate on the Fashionista and the SYSU-Clothes datasets, respectively, which are superior to previous methods. Furthermore, we apply our method to a challenging task, i.e., cross-domain clothing retrieval: given a user photo depicting a clothing item, we retrieve the same clothing items from online shopping stores based on the fine-grained parsing results.
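
As a rough illustration of the "region colabeling" phase described in the abstract, the sketch below builds a pairwise MRF over segmented regions (vertices) with unary costs and a Potts smoothness term between interacting regions, and solves it with a graph cut. It is a minimal sketch assuming the PyMaxflow library and a binary (garment vs. background) simplification; the paper's joint multi-label tag assignment would instead use multi-label Graph Cuts (e.g., alpha-expansion) over the same kind of graph. The region adjacency, unary costs, and the colabel_regions helper are hypothetical placeholders, not the authors' implementation.

# Illustrative sketch only (assumes: pip install PyMaxflow numpy).
import numpy as np
import maxflow

def colabel_regions(unary_bg, unary_fg, edges, smoothness=1.0):
    # Minimize E(x) = sum_i U_i(x_i) + smoothness * sum_{(i,j)} [x_i != x_j]
    # over binary labels x_i in {0 = background, 1 = garment}.
    n = len(unary_bg)
    g = maxflow.Graph[float]()
    nodes = g.add_nodes(n)
    for i in range(n):
        # t-links: cutting the source edge costs unary_fg (node ends on the
        # sink side, label 1); cutting the sink edge costs unary_bg (label 0).
        g.add_tedge(nodes[i], unary_fg[i], unary_bg[i])
    for i, j in edges:
        # n-links: Potts penalty when two interacting regions disagree.
        g.add_edge(nodes[i], nodes[j], smoothness, smoothness)
    g.maxflow()
    return np.array([g.get_segment(nodes[i]) for i in range(n)])

# Toy usage with hypothetical data: four regions, the first two garment-like.
unary_bg = [2.0, 1.5, 0.2, 0.1]   # cost of labeling each region "background"
unary_fg = [0.1, 0.3, 1.8, 2.0]   # cost of labeling each region "garment"
edges = [(0, 1), (1, 2), (2, 3)]  # hypothetical region adjacency / co-occurrence
print(colabel_regions(unary_bg, unary_fg, edges))  # -> [1 1 0 0]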
