
Data-driven synthesis of animations of spatially inflected American Sign Language verbs using human data.



Abstract

Techniques for producing realistic and understandable animations of American Sign Language (ASL) have accessibility benefits for signers with lower levels of written-language literacy. Previous research in sign language animation did not address the specific linguistic issue of space use and verb inflection, owing to a lack of sufficiently detailed and linguistically annotated ASL corpora, which are necessary for modern data-driven approaches. In this dissertation, a high-quality ASL motion-capture corpus with ASL-specific linguistic structures is collected, annotated, and evaluated using carefully designed protocols and well-calibrated motion-capture equipment. In addition, ASL animations are modeled, synthesized, and evaluated based on samples of ASL signs collected from native-signer animators or from signers recorded with motion-capture equipment.

Part I of this dissertation focuses on how the ASL corpus is collected, including unscripted ASL passages and ASL inflecting verbs: signs in which the location and orientation of the hands are influenced by the arrangement of locations in 3D space that represent the entities under discussion. Native signers are recorded in a studio with motion-capture equipment: cyber-gloves, a body suit, a head tracker, a hand tracker, and an eye tracker.

Part II describes how ASL animation is synthesized using our corpus of ASL inflecting verbs. Specifically, mathematical models of hand movement are trained on animation data of signs produced by a native signer. This dissertation demonstrates that such mathematical models can be trained and built using movement data collected from humans. Evaluation studies with deaf native-signer participants show that verb animations synthesized from our models achieve subjective-rating and comprehension-question scores similar to those of animations produced by a human animator or of animations driven by a human's motion-capture data.
The modeling techniques in this dissertation are applicable to other types of ASL signs and to other sign languages used internationally. These models' parameterization of sign animations can increase the repertoire of generation systems and can automate the work of humans using sign language scripting systems.
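The data-driven idea described in Part II can be sketched in miniature: learn a mapping from a 3D spatial reference point (where an entity is established in signing space) to the hand's end position for an inflected verb. This is a minimal illustration with synthetic data, not the dissertation's actual models; the linear map, the variable names, and the noise level are all assumptions made for the example.

```python
import numpy as np

# Hypothetical sketch: fit a linear model hand = A @ location + b from
# (entity location -> hand endpoint) pairs, standing in for the
# mocap-trained hand-movement models described in the abstract.
rng = np.random.default_rng(0)

# Synthetic "training data": 50 pairs with a small amount of noise.
entity_locations = rng.uniform(-1.0, 1.0, size=(50, 3))
true_A = np.array([[0.8, 0.0, 0.1],
                   [0.0, 0.9, 0.0],
                   [0.1, 0.0, 0.7]])
true_b = np.array([0.0, 0.2, 0.3])  # offset toward the signer's torso
hand_endpoints = (entity_locations @ true_A.T + true_b
                  + rng.normal(scale=0.01, size=(50, 3)))

# Estimate A and b jointly by least squares on homogeneous coordinates.
X = np.hstack([entity_locations, np.ones((50, 1))])
coef, *_ = np.linalg.lstsq(X, hand_endpoints, rcond=None)
A_hat, b_hat = coef[:3].T, coef[3]

# "Synthesis": predict the hand endpoint for a newly placed entity.
new_location = np.array([0.5, -0.2, 0.4])
predicted = new_location @ A_hat.T + b_hat
```

A real system would predict full hand trajectories and orientations rather than a single endpoint, but the structure is the same: parameters fit to human movement data, then evaluated for a new spatial arrangement.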

Record details

  • Author: Lu, Pengfei.
  • Affiliation: City University of New York.
  • Degree grantor: City University of New York.
  • Subject: Computer Science; Language Linguistics.
  • Degree: Ph.D.
  • Year: 2014
  • Pagination: 240 p.
  • Total pages: 240
  • Format: PDF
  • Language: eng

