Conference: ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference

DEVELOPMENT OF AN INTEGRATED SIMULATION SYSTEM FOR DESIGN OF SPEECH-CENTRIC MULTIMODAL HUMAN-MACHINE INTERFACES IN AN AUTOMOTIVE COCKPIT ENVIRONMENT



Abstract

In the past two decades, various CAE technologies and tools have been developed for the design, development, and specification of graphical user interfaces (GUIs) for consumer products both inside and outside the automotive industry. The growing trend among automotive manufacturers of deploying speech interfaces, and the resulting growth in speech usage, requires that this work be extended to speech interface modeling, an area where both technologies and methodologies are lacking. This paper presents our recent work aimed at developing a speech interface integrated with an existing GUI modeling system. A multi-contour seat was used as the testbed. Our prototype allows one to adjust the multi-contour seat with a touchscreen GUI, steering-wheel-mounted buttons coupled with an instrument cluster display, or a speech interface. The speech interface modeling began with an initial language model, developed by interviewing both expert and novice users. The interviews yielded a base corpus and the linguistic information necessary for an initial speech grammar model and dialog strategy. After the module was developed, it was integrated into the existing GUI modeling system such that the human voice is treated as a standard input to the system, similar to a press on the touchscreen. The multimodal prototype was used in two customer clinics. In each clinic, we asked subjects to adjust the multi-contour seat using the different modalities: the touchscreen, the steering-wheel-mounted buttons, and the speech interface. We collected both objective and subjective data, including task completion times and customer feedback. Based on the clinic results, we refined both the language model and the dialog strategy. Our work has proven effective for developing a speech-centric, multimodal human-machine interface.
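The integration idea described in the abstract — a recognized utterance is normalized into the same command type as a touchscreen press or a button event — can be sketched as follows. This is a minimal illustration under assumed details, not the paper's implementation: the seat-zone names, the toy speech grammar, and all identifiers are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class Modality(Enum):
    TOUCHSCREEN = auto()
    STEERING_WHEEL_BUTTON = auto()
    SPEECH = auto()

@dataclass(frozen=True)
class SeatCommand:
    """A modality-agnostic seat-adjustment command."""
    contour: str     # seat zone, e.g. "lumbar" (hypothetical name)
    delta: int       # +1 = increase support, -1 = decrease
    source: Modality

def from_touch(zone: str, delta: int) -> SeatCommand:
    """A touchscreen press maps directly to a command."""
    return SeatCommand(zone, delta, Modality.TOUCHSCREEN)

def from_speech(utterance: str) -> Optional[SeatCommand]:
    """Toy grammar: '<more|less> <zone> support' -> command.

    A real system would use a speech grammar model and dialog
    strategy; this stands in for that stage.
    """
    words = utterance.lower().split()
    if len(words) == 3 and words[2] == "support" and words[0] in ("more", "less"):
        delta = 1 if words[0] == "more" else -1
        return SeatCommand(words[1], delta, Modality.SPEECH)
    return None  # out-of-grammar utterance: dialog strategy would re-prompt

def apply(cmd: SeatCommand, state: dict) -> dict:
    """Apply a command to the seat state, regardless of its source."""
    new = dict(state)
    new[cmd.contour] = new.get(cmd.contour, 0) + cmd.delta
    return new
```

Because every modality funnels into `SeatCommand`, the downstream seat-control logic is identical for touch, button, and speech input, which is what makes the speech module pluggable into an existing GUI modeling system.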
