European Conference on Computer Vision

Multi-modal Unsupervised Feature Learning for RGB-D Scene Labeling



Abstract

Most of the existing approaches for RGB-D indoor scene labeling employ hand-crafted features for each modality independently and combine them in a heuristic manner. There have been some attempts at directly learning features from raw RGB-D data, but the performance is not satisfactory. In this paper, we adapt the unsupervised feature learning technique to RGB-D labeling as a multi-modality learning problem. Our learning framework performs feature learning and feature encoding simultaneously, which significantly boosts the performance. By stacking the basic learning structure, higher-level features are derived and combined with lower-level features to better represent RGB-D data. Experimental results on the benchmark NYU depth dataset show that our method achieves competitive performance compared with the state-of-the-art.
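
To make the general idea concrete, below is a minimal sketch of one common unsupervised feature-learning pipeline for RGB-D patches: sample patches, whiten them, learn a dictionary with K-means, and compute soft "triangle" encodings. This is an illustrative recipe under assumptions of our own (concatenating depth as a fourth channel, K-means as the learner), not the authors' exact framework, stacking scheme, or encoding.

```python
# Sketch: single-layer unsupervised feature learning on RGB-D patches.
# K-means dictionary + soft-threshold encoding; NOT the paper's exact method,
# only an illustration of learning features jointly from RGB and depth.
import numpy as np
from sklearn.cluster import KMeans

def extract_patches(image, patch_size=8, n_patches=1000, seed=0):
    """Sample random square patches from an (H, W, C) image and flatten them."""
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    ys = rng.integers(0, h - patch_size, n_patches)
    xs = rng.integers(0, w - patch_size, n_patches)
    return np.stack([image[y:y + patch_size, x:x + patch_size].ravel()
                     for y, x in zip(ys, xs)])

def whiten(X, eps=1e-2):
    """ZCA-style whitening, commonly applied before dictionary learning."""
    X = X - X.mean(axis=0)
    cov = X.T @ X / len(X)
    d, V = np.linalg.eigh(cov)
    W = V @ np.diag(1.0 / np.sqrt(d + eps)) @ V.T
    return X @ W

def learn_dictionary(patches, n_atoms=64):
    """Unsupervised dictionary learning via K-means on whitened patches."""
    km = KMeans(n_clusters=n_atoms, n_init=4, random_state=0).fit(patches)
    return km.cluster_centers_            # (n_atoms, dim)

def encode(patches, dictionary):
    """Soft 'triangle' encoding: mean distance minus distance to each atom."""
    d = np.linalg.norm(patches[:, None, :] - dictionary[None, :, :], axis=2)
    return np.maximum(0.0, d.mean(axis=1, keepdims=True) - d)

# Toy RGB-D input: depth concatenated as a 4th channel so RGB and depth
# patches are learned jointly (one simple multi-modal strategy).
rgb = np.random.rand(120, 160, 3)
depth = np.random.rand(120, 160, 1)
rgbd = np.concatenate([rgb, depth], axis=2)

patches = whiten(extract_patches(rgbd))
D = learn_dictionary(patches)
codes = encode(patches, D)                # (n_patches, n_atoms) feature responses
print(codes.shape)
```

Such per-patch codes could then be pooled over image regions and fed to a classifier for per-pixel labels; a deeper representation would reuse the same learn-and-encode step on the pooled outputs, in the spirit of the stacked structure the abstract describes.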
