IEEE International Conference on Multimedia and Expo

Relationship-Aware Primal-Dual Graph Attention Network For Scene Graph Generation


Abstract

The relationships and interactions between objects carry rich semantic information, which plays a crucial role in scene understanding. Existing methods pay insufficient attention to the representation of relational features. To tackle this problem, we propose a novel Relationship-aware Primal-Dual Graph Attention Network (RPDGAT) that extracts comprehensive semantic features of objects and performs sparse graph inference for scene graph generation. RPDGAT mines the inherent attributes of objects and the relationships between them by fusing multiple features, e.g., appearance, spatial, and category features. After feature extraction, we design a trainable relationship distance measure network to construct a robust and sparse graph structure for efficient graphical message passing. Moreover, the interaction between the primal and dual graphs preserves contextual cues and neighboring dependencies for both objects and relationships. Extensive experiments demonstrate the improved performance of our method over several state-of-the-art methods on the Visual Genome dataset.
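The pipeline the abstract describes — fuse per-object appearance, spatial, and category features, score pairwise relationships with a learned distance measure, keep only the strongest edges, then run attention-based message passing on the resulting sparse graph — can be illustrated with a minimal NumPy sketch. All shapes and weights below are invented for illustration (the paper's distance measure and attention are learned end to end), and the dual relationship graph and training loop are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fused object features: appearance, spatial, and category
# vectors concatenated per object (dimensions are made up for illustration).
n_obj, dims = 5, (8, 4, 4)
feats = np.concatenate([rng.normal(size=(n_obj, d)) for d in dims], axis=1)
d_in = feats.shape[1]

# Stand-in for the trainable relationship distance measure network:
# a linear projection followed by a dot-product score. In RPDGAT this
# mapping is learned; random weights here only fix the shapes.
W_dist = rng.normal(size=(d_in, 8))
proj = feats @ W_dist
scores = proj @ proj.T
np.fill_diagonal(scores, -np.inf)          # no self-relationships

# Sparsify: keep only the top-k strongest relationships per object.
k = 2
adj = np.zeros((n_obj, n_obj), dtype=bool)
topk = np.argsort(scores, axis=1)[:, -k:]
adj[np.repeat(np.arange(n_obj), k), topk.ravel()] = True

def gat_step(x, adj, W, a_src, a_dst):
    """One graph-attention message-passing step on the sparse primal graph.

    Standard GAT scoring: e_ij = LeakyReLU(a_src . Wh_i + a_dst . Wh_j),
    softmax-normalised over each node's retained neighbours only.
    """
    h = x @ W
    logits = (h @ a_src)[:, None] + (h @ a_dst)[None, :]
    logits = np.where(logits > 0, logits, 0.2 * logits)   # LeakyReLU
    logits = np.where(adj, logits, -np.inf)               # sparse edge mask
    alpha = np.exp(logits - logits.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ h, alpha                               # messages, weights

d_out = 8
out, alpha = gat_step(feats, adj,
                      rng.normal(size=(d_in, d_out)),
                      rng.normal(size=d_out), rng.normal(size=d_out))
```

In the paper, an analogous attention step runs on the dual graph, whose nodes are the retained relationships themselves, and the primal and dual graphs exchange messages; that interaction is what preserves contextual cues for both objects and predicates.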
