IEEE Conference on Computer Vision and Pattern Recognition

Generation and Comprehension of Unambiguous Object Descriptions

Abstract

We propose a method that can generate an unambiguous description (known as a referring expression) of a specific object or region in an image, and which can also comprehend or interpret such an expression to infer which object is being described. We show that our method outperforms previous methods that generate descriptions of objects without taking into account other potentially ambiguous objects in the scene. Our model is inspired by recent successes of deep learning methods for image captioning, but while image captioning is difficult to evaluate, our task allows for easy objective evaluation. We also present a new large-scale dataset for referring expressions, based on MSCOCO. We have released the dataset and a toolbox for visualization and evaluation, see https://github.com/mjhucla/Google_Refexp_toolbox.
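
The comprehension half of the task reduces to a ranking problem: score each candidate region R by how likely the model would be to generate the expression S for that region, p(S | R, I), and return the highest-scoring region. The sketch below illustrates that step only; the `log_prob` callable and the toy scorer are hypothetical stand-ins for the paper's trained model, not part of the released toolbox.

```python
from typing import Callable, List, Sequence, Tuple

# Box = (x, y, w, h); a LogProbFn scores log p(expression | region, image).
Box = Tuple[float, float, float, float]
LogProbFn = Callable[[Sequence[str], Box], float]

def comprehend(expression: Sequence[str],
               proposals: List[Box],
               log_prob: LogProbFn) -> Box:
    """Return the proposal maximizing log p(S | R, I).

    The generation model doubles as the comprehension scorer: the object
    being described is taken to be the region under which the referring
    expression is most probable.
    """
    return max(proposals, key=lambda box: log_prob(expression, box))

if __name__ == "__main__":
    # Toy stand-in scorer: favors boxes nearer the left image edge when
    # the expression mentions "left". Purely illustrative.
    def toy_log_prob(expr: Sequence[str], box: Box) -> float:
        x, _, _, _ = box
        return -x if "left" in expr else x

    boxes = [(10.0, 5.0, 40.0, 60.0), (120.0, 8.0, 42.0, 58.0)]
    print(comprehend("the man on the left".split(), boxes, toy_log_prob))
```

Running the same model in the other direction, decoding a description conditioned on a target region rather than scoring a given expression, yields the generation half of the task.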
