KES 2011: International Conference on Knowledge-Based and Intelligent Information and Engineering Systems

Collecting Semantic Information for Locations in the Scenario-Based Lexical Knowledge Resource of a Text-to-Scene Conversion System


Abstract

WordsEye is a system for automatically converting a text description of a scene into a 3D image. In converting a text description into a corresponding 3D scene, it is necessary to map the objects and locations specified in the text onto actual 3D objects. Individual objects typically correspond to single 3D models, but locations (e.g., a living room) are typically an ensemble of objects. Prototypical mappings from locations to objects and their relations are called location vignettes, which are not present in existing lexical resources. In this paper we propose a new methodology using Amazon's Mechanical Turk to collect semantic information for location vignettes. Our preliminary results show that this is a promising approach.
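To make the notion of a location vignette concrete, the sketch below shows one hypothetical way such a mapping could be represented in code: a location name paired with an ensemble of 3D objects and the spatial relations among them. The class and field names are assumptions made for illustration only; they do not reflect WordsEye's actual internal representation or the data format collected via Mechanical Turk.

```python
# Illustrative sketch only: a hypothetical representation of a "location vignette",
# i.e. a prototypical mapping from a location to a set of 3D objects and the
# spatial relations that hold between them. All names here are assumptions.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class LocationVignette:
    location: str                       # e.g. "living room"
    objects: List[str] = field(default_factory=list)  # 3D models to place
    # Each relation is a (figure, relation, ground) triple.
    relations: List[Tuple[str, str, str]] = field(default_factory=list)


# A toy vignette of the kind workers might describe for "living room".
living_room = LocationVignette(
    location="living room",
    objects=["sofa", "coffee table", "television", "rug"],
    relations=[
        ("coffee table", "in front of", "sofa"),
        ("television", "facing", "sofa"),
        ("rug", "under", "coffee table"),
    ],
)

if __name__ == "__main__":
    for figure, rel, ground in living_room.relations:
        print(f"{figure} is {rel} the {ground}")
```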
