Workshop on Spatial Language Understanding

Retouchdown: Releasing Touchdown on StreetLearn as a Public Resource for Language Grounding Tasks in Street View



Abstract

The Touchdown dataset (Chen et al., 2019) provides instructions by human annotators for navigation through New York City streets and for resolving spatial descriptions at a given location. To enable the wider research community to work effectively with the Touchdown tasks, we are publicly releasing the 29k raw Street View panoramas needed for Touchdown. We follow the process used for the StreetLearn data release (Mirowski et al., 2019) to check panoramas for personally identifiable information and blur them as necessary. These have been added to the StreetLearn dataset and can be obtained via the same process as used previously for StreetLearn. We also provide a reference implementation for both Touchdown tasks: vision and language navigation (VLN) and spatial description resolution (SDR). We compare our model results to those given in Chen et al. (2019) and show that the panoramas we have added to StreetLearn support both Touchdown tasks and can be used effectively for further research and comparison.
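For the SDR task, a system must point to a location in a panorama given a spatial description, and performance is commonly reported as accuracy within a pixel-distance threshold of the annotated target. A minimal sketch of such an evaluation is below; the function name, input format, and the 80-pixel threshold are illustrative assumptions, not the exact protocol of the reference implementation.

```python
import math

def sdr_accuracy(predictions, targets, threshold=80.0):
    """Illustrative SDR-style metric: the fraction of predicted
    (x, y) pixel locations whose Euclidean distance to the gold
    location is within `threshold` pixels.

    `predictions` and `targets` are parallel lists of (x, y) tuples;
    all names and the default threshold are hypothetical.
    """
    hits = 0
    for (px, py), (tx, ty) in zip(predictions, targets):
        if math.hypot(px - tx, py - ty) <= threshold:
            hits += 1
    return hits / len(predictions)

# One prediction lands near its target, the other far away.
score = sdr_accuracy([(10, 10), (500, 500)], [(20, 20), (0, 0)])
# → 0.5
```

A threshold-based accuracy of this shape rewards predictions that fall anywhere in a small neighborhood of the annotated pixel, which is appropriate since annotators click an approximate point rather than a precise one.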
