This paper presents a state-of-the-art model for visual question answering (VQA), which won the first place in the 2017 VQA Challenge. VQA is a task of significant importance for research in artificial intelligence, given its multimodal nature, clear evaluation protocol, and potential real-world applications. The performance of deep neural networks for VQA is very dependent on choices of architectures and hyperparameters. To help further research in the area, we describe in detail our high-performing, though relatively simple model. Through a massive exploration of architectures and hyperparameters representing more than 3,000 GPU-hours, we identified tips and tricks that lead to its success, namely: sigmoid outputs, soft training targets, image features from bottom-up attention, gated tanh activations, output embeddings initialized using GloVe and Google Images, large mini-batches, and smart shuffling of training data. We provide a detailed analysis of their impact on performance to assist others in making an appropriate selection.
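Two of the ingredients named above can be illustrated concretely: the gated tanh activation, y = tanh(Wx + b) ⊙ σ(W′x + b′), and sigmoid outputs trained against soft targets (one independent binary classifier per candidate answer, rather than a softmax). The following is a minimal NumPy sketch under assumed shapes and names; it is illustrative, not the authors' implementation.

```python
import numpy as np

def gated_tanh(x, W, b, Wg, bg):
    # Gated tanh unit: y = tanh(Wx + b) * sigmoid(W'x + b').
    # The sigmoid branch acts as a learned gate on the tanh features.
    y_tilde = np.tanh(x @ W + b)
    gate = 1.0 / (1.0 + np.exp(-(x @ Wg + bg)))
    return y_tilde * gate

def soft_target_bce(logits, targets):
    # Binary cross-entropy against soft targets in [0, 1]:
    # each candidate answer gets its own sigmoid, so multiple
    # answers can receive partial credit (multi-label, not softmax).
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12  # numerical guard against log(0)
    return -np.mean(targets * np.log(probs + eps)
                    + (1.0 - targets) * np.log(1.0 - probs + eps))
```

Because each output element of the gated tanh is a tanh value scaled by a gate in (0, 1), activations stay bounded in (-1, 1); the soft-target loss is an ordinary element-wise BCE, which accommodates fractional ground-truth scores such as those derived from multiple human annotations.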