Published at: International Conference on Artificial Intelligence and Mobile Services; Services Conference Federation

Attention-Based Asymmetric Fusion Network for Saliency Prediction in 3D Images



Abstract

Visual saliency prediction has become a fundamental problem in the 3D imaging area. In this paper, we propose a saliency prediction model that addresses three challenges. First, to adequately extract features from RGB and depth information, we design an asymmetric encoder based on a U-shaped architecture. Second, to prevent the semantic information linking salient objects and their contexts from being diluted in the cross-modal distillation stream, we devise a global guidance module that captures high-level feature maps and delivers them to feature maps in shallower layers. Third, to locate and emphasize salient objects, we introduce a channel-wise attention module. Finally, we build a refinement stream with an integrated fusion strategy that gradually refines the saliency maps from coarse to fine-grained. Experiments on two widely used datasets demonstrate the effectiveness of the proposed architecture, and the results show that our model outperforms six selected state-of-the-art models.
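The channel-wise attention mentioned in the abstract typically follows a squeeze-and-excitation pattern: globally pool each channel, pass the pooled vector through a small bottleneck, and rescale the channels by the resulting sigmoid weights. The following is a minimal NumPy sketch of that pattern; the layer sizes, reduction ratio, and function names are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Channel-wise attention (squeeze-and-excitation style sketch).

    feat: feature map of shape (C, H, W)
    w1:   (C, C//r) weights of the squeeze FC layer (r = reduction ratio)
    w2:   (C//r, C) weights of the excitation FC layer
    Returns the feature map rescaled per channel.
    """
    squeezed = feat.mean(axis=(1, 2))                 # global average pool -> (C,)
    hidden = np.maximum(squeezed @ w1, 0.0)           # FC + ReLU -> (C//r,)
    weights = 1.0 / (1.0 + np.exp(-(hidden @ w2)))    # FC + sigmoid -> (C,) in (0, 1)
    return feat * weights[:, None, None]              # rescale each channel

# Toy usage: 8 channels, 4x4 spatial resolution, reduction ratio 2.
rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((8, 4)) * 0.1
w2 = rng.standard_normal((4, 8)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the attention weights lie in (0, 1), the module can only attenuate channels relative to the input, letting the network emphasize salient channels by suppressing the rest.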


