Tips and Tricks for Visual Question Answering: Learnings from the 2017 Challenge


Abstract

This paper presents a state-of-the-art model for visual question answering (VQA), which won first place in the 2017 VQA Challenge. VQA is a task of significant importance for research in artificial intelligence, given its multimodal nature, clear evaluation protocol, and potential real-world applications. The performance of deep neural networks for VQA is very dependent on choices of architectures and hyperparameters. To help further research in the area, we describe in detail our high-performing, though relatively simple model. Through a massive exploration of architectures and hyperparameters representing more than 3,000 GPU-hours, we identified tips and tricks that lead to its success, namely: sigmoid outputs, soft training targets, image features from bottom-up attention, gated tanh activations, output embeddings initialized using GloVe and Google Images, large mini-batches, and smart shuffling of training data. We provide a detailed analysis of their impact on performance to assist others in making an appropriate selection.
