IEEE Transactions on Image Processing

Improved Saliency Detection in RGB-D Images Using Two-Phase Depth Estimation and Selective Deep Fusion


Abstract

To solve the saliency detection problem in RGB-D images, the depth information plays a critical role in distinguishing salient objects or foregrounds from cluttered backgrounds. As the complementary component to color information, the depth quality directly dictates the subsequent saliency detection performance. However, due to artifacts and the limitation of depth acquisition devices, the quality of the obtained depth varies tremendously across different scenarios. Consequently, conventional selective fusion-based RGB-D saliency detection methods may result in a degraded detection performance in cases containing salient objects with low color contrast coupled with a low depth quality. To solve this problem, we make our initial attempt to estimate additional high-quality depth information, which is denoted by Depth(+). Serving as a complement to the original depth, Depth(+) will be fed into our newly designed selective fusion network to boost the detection performance. To achieve this aim, we first retrieve a small group of images that are similar to the given input, and then the inter-image, nonlocal correspondences are built accordingly. Thus, by using these inter-image correspondences, the overall depth can be coarsely estimated by utilizing our newly designed depth-transferring strategy. Next, we build fine-grained, object-level correspondences coupled with a saliency prior to further improve the depth quality of the previous estimation. Compared to the original depth, our newly estimated Depth(+) is potentially more informative for detection improvement. Finally, we feed both the original depth and the newly estimated Depth(+) into our selective deep fusion network, whose key novelty is to achieve an optimal complementary balance to make better decisions toward improving saliency boundaries.
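The abstract describes two key ideas: transferring depth from retrieved similar images via inter-image correspondences to obtain Depth(+), and selectively fusing the original depth with Depth(+) according to their quality. The following is a minimal numpy sketch of those two ideas under strong simplifying assumptions: it stands in for the paper's nonlocal correspondence with brute-force nearest-color matching, replaces the learned fusion network with a hand-crafted foreground/background contrast score, and omits the second, object-level refinement phase entirely. All function names are illustrative, not from the paper.

```python
import numpy as np

def estimate_depth_plus(query_rgb, retrieved_rgbs, retrieved_depths):
    """Coarse depth transfer (stand-in for the paper's phase-one
    inter-image nonlocal correspondences): every query pixel borrows
    the depth of its nearest color match across the retrieved set."""
    h, w, _ = query_rgb.shape
    # Pool all retrieved pixels: (N, 3) colors aligned with (N,) depths.
    colors = np.concatenate([r.reshape(-1, 3) for r in retrieved_rgbs], axis=0)
    depths = np.concatenate([d.reshape(-1) for d in retrieved_depths], axis=0)
    q = query_rgb.reshape(-1, 1, 3).astype(np.float64)
    # Brute-force nearest neighbor in color space (fine for tiny demos).
    dist = np.linalg.norm(q - colors[None].astype(np.float64), axis=2)
    idx = dist.argmin(axis=1)
    return depths[idx].reshape(h, w)

def selective_fuse(depth, depth_plus, saliency_mask):
    """Toy selective fusion (stand-in for the learned fusion network):
    weight each depth map by how strongly it separates the assumed
    salient region from the background."""
    def quality(d):
        fg, bg = d[saliency_mask], d[~saliency_mask]
        return abs(fg.mean() - bg.mean())  # higher contrast = more useful
    w_orig, w_plus = quality(depth), quality(depth_plus)
    total = w_orig + w_plus + 1e-8
    return (w_orig * depth + w_plus * depth_plus) / total
```

When the original sensor depth is flat or noisy over the salient object, its contrast score shrinks and the fused result leans toward Depth(+), which mirrors the complementary-balance behavior the abstract attributes to the selective deep fusion network.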
