International Conference on Semantic Systems

Improving NLU Training over Linked Data with Placeholder Concepts



Abstract

Conversational systems, also known as dialogue systems, have become increasingly popular. They can perform a variety of tasks, e.g., in B2C areas such as sales and customer service. A significant amount of research has already been conducted on improving the underlying algorithms of the natural language understanding (NLU) component of dialogue systems. This paper presents an approach to generating training datasets for the NLU component from Linked Data resources. We analyze how differently designed training datasets impact the performance of the NLU component; the datasets differ mainly in the values injected into fixed sentence patterns. As a core contribution, we introduce and evaluate the performance of different placeholder concepts. Our results show that a model trained with placeholder concepts is capable of handling dynamic Linked Data without retraining the NLU component. Thus, our approach also contributes to the robustness of the NLU component.
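The core idea in the abstract — injecting values into fixed sentence patterns, either concrete entities drawn from Linked Data or generic placeholder tokens — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the pattern strings, slot name, and placeholder token are all hypothetical.

```python
# Hypothetical sentence patterns with a single "city" slot.
PATTERNS = [
    "show me details about {city}",
    "how many people live in {city}",
]

# Concrete values as they might be retrieved from a Linked Data endpoint.
LINKED_DATA_VALUES = {"city": ["Berlin", "Vienna", "Leipzig"]}

# A generic placeholder token standing in for any entity of that type.
PLACEHOLDER = {"city": "placeholder_city"}


def generate_with_values(patterns, values):
    """One training sentence per (pattern, value) combination.

    The model learns the concrete entity strings, so new entities
    in the Linked Data source would require retraining.
    """
    return [p.format(city=v) for p in patterns for v in values["city"]]


def generate_with_placeholder(patterns, placeholder):
    """One training sentence per pattern.

    The placeholder token is substituted for the real entity at
    inference time, so dynamic Linked Data needs no retraining.
    """
    return [p.format(city=placeholder["city"]) for p in patterns]
```

With the placeholder variant, the training set stays fixed while the set of recognizable entities can grow with the Linked Data source — which is the robustness property the abstract claims.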
