Spoken Language Technology Workshop

Meta Learning to Classify Intent and Slot Labels with Noisy Few Shot Examples



Abstract

Recently, deep learning has come to dominate many machine learning areas, including spoken language understanding (SLU). However, deep learning models are notoriously data-hungry, and heavily optimized models are usually sensitive to the quality of the provided training examples and to the consistency between training and inference conditions. To improve the performance of SLU models on tasks with noisy, low-resource training data, we propose a new SLU benchmarking task: few-shot robust SLU, where SLU comprises two core problems, intent classification (IC) and slot labeling (SL). We establish the task by defining few-shot splits on three public IC/SL datasets, ATIS, SNIPS, and TOP, and adding two types of natural noise (missing or replaced adaptation examples, and modality mismatch) to the splits. We further propose a novel noise-robust few-shot SLU model based on prototypical networks. We show that the model consistently outperforms the conventional fine-tuning baseline and another popular meta-learning method, Model-Agnostic Meta-Learning (MAML), achieving better IC accuracy and SL F1 and yielding smaller performance variation when noise is present.
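For readers unfamiliar with prototypical networks, the sketch below illustrates the basic classification step they rely on: support examples of each intent class are averaged into a class prototype, and queries are assigned to the nearest prototype. This is a minimal, hypothetical illustration in PyTorch with made-up function names, dimensions, and random embeddings; it is not the authors' model, which additionally handles slot labeling and noise robustness.

# Minimal sketch of prototypical-network intent classification.
# All names, dimensions, and data here are illustrative assumptions.
import torch
import torch.nn.functional as F

def prototypes(support_emb, support_labels, num_classes):
    # Average the support embeddings of each intent class into one prototype.
    return torch.stack([
        support_emb[support_labels == c].mean(dim=0) for c in range(num_classes)
    ])  # shape: (num_classes, dim)

def classify(query_emb, protos):
    # Score each query by negative Euclidean distance to every prototype,
    # so the closest prototype gets the highest logit.
    logits = -torch.cdist(query_emb, protos)      # (num_query, num_classes)
    return logits.argmax(dim=-1), F.log_softmax(logits, dim=-1)

# Toy usage: 3 intent classes, 5 support examples per class, 4 queries, 16-dim embeddings.
torch.manual_seed(0)
support_emb = torch.randn(15, 16)
support_labels = torch.arange(3).repeat_interleave(5)
query_emb = torch.randn(4, 16)
protos = prototypes(support_emb, support_labels, 3)
pred, log_probs = classify(query_emb, protos)
print(pred)

In practice the embeddings would come from a shared encoder trained episodically, so that classes unseen during training can be recognized at inference from only a few labeled support examples.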
