
Unification-based Multimodal Parsing


Abstract

In order to realize their full potential, multimodal systems need to support not just input from multiple modes, but also synchronized integration of modes. Johnston et al. (1997) model this integration using a unification operation over typed feature structures. This is an effective solution for a broad class of systems, but limits multimodal utterances to combinations of a single spoken phrase with a single gesture. We show how the unification-based approach can be scaled up to provide a full multimodal grammar formalism. In conjunction with a multidimensional chart parser, this approach supports integration of multiple elements distributed across the spatial, temporal, and acoustic dimensions of multimodal interaction. Integration strategies are stated in a high-level unification-based rule formalism supporting rapid prototyping and iterative development of multimodal systems.
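The unification operation over feature structures that the abstract refers to can be sketched in a few lines. The following is a minimal illustrative example only, not the paper's implementation: feature structures are modeled as nested dicts, atomic values unify when equal, and the names (`unify`, `FAIL`) and the speech/gesture values are invented for illustration.

```python
FAIL = object()  # sentinel returned when unification fails

def unify(a, b):
    """Unify two feature structures (nested dicts).

    Atomic values unify only if they are equal; dicts unify
    feature-by-feature, recursing on shared keys.
    """
    if a == b:
        return a
    if isinstance(a, dict) and isinstance(b, dict):
        result = dict(a)
        for key, b_val in b.items():
            if key in result:
                merged = unify(result[key], b_val)
                if merged is FAIL:
                    return FAIL
                result[key] = merged
            else:
                result[key] = b_val
        return result
    return FAIL

# Hypothetical example in the spirit of multimodal integration:
# a spoken command contributes the action, a pen gesture
# contributes the location, and unification combines them.
speech = {"cat": "command", "action": "move",
          "object": {"type": "unit"}}
gesture = {"cat": "command",
           "object": {"type": "unit", "location": (12, 34)}}

combined = unify(speech, gesture)
```

Here `combined` carries both the spoken `action` and the gestured `location`, while incompatible inputs (e.g. conflicting atomic values for the same feature) yield `FAIL`; this failure-driven filtering is what lets unification rule out ill-formed multimodal combinations.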
