22nd Conference on Computational Natural Language Learning (CoNLL)

Comparing Attention-based Convolutional and Recurrent Neural Networks: Success and Limitations in Machine Reading Comprehension



Abstract

We propose a machine reading comprehension model based on the compare-aggregate framework with two-staged attention that achieves state-of-the-art results on the MovieQA question answering dataset. To investigate the limitations of our model as well as the behavioral difference between convolutional and recurrent neural networks, we generate adversarial examples to confuse the model and compare to human performance. Furthermore, we assess the generalizability of our model by analyzing its differences to human inference, drawing upon insights from cognitive science.
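The abstract describes a compare-aggregate model with attention. As a rough illustration of that pattern (not the paper's exact architecture), the sketch below attends each passage word to the question, compares via an element-wise interaction, and aggregates into a fixed-size vector; the dimensions, dot-product attention, and max-pooling aggregation are all simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compare_aggregate(passage, question):
    """passage: (n, d), question: (m, d) word embeddings.

    Illustrative compare-aggregate sketch; the paper's model uses
    learned projections, two-staged attention, and CNN/RNN aggregation.
    """
    # Attend: soft-align each passage word to the question words.
    weights = softmax(passage @ question.T, axis=1)   # (n, m)
    aligned = weights @ question                      # (n, d)
    # Compare: element-wise interaction of passage and aligned question.
    compared = passage * aligned                      # (n, d)
    # Aggregate: pool over the passage into a fixed-size vector
    # (max-pooling here purely for brevity).
    return compared.max(axis=0)                       # (d,)

rng = np.random.default_rng(0)
vec = compare_aggregate(rng.normal(size=(12, 8)), rng.normal(size=(5, 8)))
print(vec.shape)  # (8,)
```

The aggregated vector would then be scored against each candidate answer; in the paper, convolutional and recurrent aggregators are compared at this stage.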

Bibliographic record

  • Venue: Brussels (BE)
  • Author affiliation: Institute for Natural Language Processing (IMS), Universitaet Stuttgart, Germany
  • Format: PDF
  • Language: English

