Conference on Empirical Methods in Natural Language Processing

Resolving Referring Expressions in Conversational Dialogs for Natural User Interfaces

Abstract

Unlike traditional over-the-phone spoken dialog systems (SDSs), modern dialog systems tend to have visual rendering on the device screen as an additional modality for communicating the system's response to the user. Visual display of the system's response not only changes how humans behave when interacting with devices, but also opens new research areas in SDSs. Identifying and resolving on-screen items referred to in user utterances is a critical problem for achieving natural and accurate human-machine communication. We pose the problem as a classification task of correctly identifying the intended on-screen item(s) from user utterances. Using syntactic, semantic, and contextual features from the display screen, our model resolves different types of referring expressions with up to 90% accuracy. In our experiments we also show that the proposed model is robust to changes in domain and screen layout.
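The abstract frames on-screen reference resolution as a classification over candidate items using lexical/semantic cues from the utterance and context features from the rendered screen. Below is a minimal, hypothetical sketch of that framing: it scores each (utterance, item) pair with a binary classifier and picks the best-scoring item. The feature set (token overlap, ordinal-position match), the synthetic training pairs, and the logistic-regression model are illustrative assumptions, not the paper's actual features or classifier.

```python
# Minimal sketch (assumptions, not the paper's implementation): frame on-screen
# reference resolution as binary classification over (utterance, item) pairs,
# then pick the highest-scoring item. Features and data are illustrative only.
from dataclasses import dataclass
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

@dataclass
class ScreenItem:
    text: str       # visible label of the on-screen item
    position: int   # position of the item in the screen layout (context feature)

def pair_features(utterance: str, item: ScreenItem, n_items: int) -> dict:
    """Toy lexical and screen-context features for one (utterance, item) pair."""
    utt = set(utterance.lower().split())
    item_tokens = set(item.text.lower().split())
    overlap = len(utt & item_tokens)
    return {
        "token_overlap": overlap,
        "overlap_ratio": overlap / max(len(item_tokens), 1),
        # Does an ordinal word in the utterance match this item's screen position?
        "ordinal_match": float(("first" in utt and item.position == 0)
                               or ("last" in utt and item.position == n_items - 1)),
    }

# Tiny synthetic training set: label 1 if the item is the intended referent.
screen = [ScreenItem("Star Wars", 0), ScreenItem("Star Trek", 1), ScreenItem("Gravity", 2)]
examples = [
    ("play star wars", screen[0], 1), ("play star wars", screen[1], 0),
    ("play star wars", screen[2], 0), ("open the last one", screen[2], 1),
    ("open the last one", screen[0], 0), ("open the last one", screen[1], 0),
    ("show gravity", screen[2], 1), ("show gravity", screen[0], 0),
]

vec = DictVectorizer()
X = vec.fit_transform([pair_features(u, it, len(screen)) for u, it, _ in examples])
y = [label for _, _, label in examples]
clf = LogisticRegression().fit(X, y)

# Resolution: score every on-screen item against a new utterance, keep the best.
utterance = "play the first one"
candidates = [pair_features(utterance, it, len(screen)) for it in screen]
scores = clf.predict_proba(vec.transform(candidates))[:, 1]
best = max(range(len(screen)), key=scores.__getitem__)
print(f"'{utterance}' -> {screen[best].text} (score {scores[best]:.2f})")
```

The point of the sketch is only the pairwise-scoring setup; the paper's actual syntactic, semantic, and screen-context features are richer than the toy overlap and ordinal cues used here.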
