IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

A Regularized Attention Mechanism for Graph Attention Networks



Abstract

Machine learning models that can exploit the inherent structure in data have gained prominence. In particular, there has been a surge in deep learning solutions for graph-structured data, due to its widespread applicability in several fields. Graph attention networks (GAT), a recent addition to the broad class of feature learning models on graphs, utilize the attention mechanism to efficiently learn continuous vector representations for semi-supervised learning problems. In this paper, we perform a detailed analysis of GAT models and present interesting insights into their behavior. In particular, we show that the models are vulnerable to heterogeneous rogue nodes, and hence propose novel regularization strategies to improve the robustness of GAT models. Using benchmark datasets, we demonstrate performance improvements on semi-supervised learning with the proposed robust variant of GAT.
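The attention mechanism that the abstract refers to can be sketched as follows. This is a minimal NumPy version of the standard GAT layer formulation (Veličković et al., 2018): attention logits e_ij = LeakyReLU(aᵀ[Wh_i ‖ Wh_j]) are computed for each edge, normalized with a softmax over each node's neighborhood, and used to aggregate transformed neighbor features. The paper's proposed regularization terms are not specified on this page, so they are not included; all variable names here are illustrative.

```python
import numpy as np

def gat_layer(H, A, W, a, slope=0.2):
    """One graph attention layer (standard GAT formulation).

    H : (N, F)  node feature matrix
    A : (N, N)  adjacency matrix with self-loops (A[i, j] > 0 iff j is a neighbor of i)
    W : (F, Fp) shared linear transform
    a : (2*Fp,) attention weight vector
    """
    Z = H @ W                       # (N, Fp) transformed node features
    N = Z.shape[0]
    # Raw attention logits e_ij = LeakyReLU(a^T [z_i || z_j])
    e = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            s = a @ np.concatenate([Z[i], Z[j]])
            e[i, j] = s if s > 0 else slope * s   # LeakyReLU
    # Mask non-edges, then softmax over each node's neighborhood
    e = np.where(A > 0, e, -1e9)
    e = e - e.max(axis=1, keepdims=True)          # numerical stability
    att = np.exp(e)
    att = att / att.sum(axis=1, keepdims=True)    # rows sum to 1 over neighbors
    return att @ Z                                # attention-weighted aggregation

# Small usage example on a 4-node path-like graph
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
W = rng.standard_normal((3, 2))
a = rng.standard_normal(4)
out = gat_layer(H, A, W, a)        # (4, 2) updated node representations
```

Because the softmax is restricted to each node's neighborhood, a single high-scoring rogue neighbor can dominate the attention distribution, which is the vulnerability the paper's regularization strategies target.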


