International Joint Conference on Neural Networks

Using Self-Attention LSTMs to Enhance Observations in Goal Recognition



Abstract

Goal recognition is the task of identifying the goal an observed agent is pursuing. The quality of its results depends on the quality of the observed information, and in most goal recognition approaches accuracy decreases significantly in settings with missing observations. To mitigate this issue, we develop an LSTM-based learning model that leverages self-attention mechanisms to enhance observed traces by predicting missing observations in goal recognition problems. We experiment on a dataset of goal recognition problems, applying the model to enhance observation traces with missing observations. We evaluate the technique with a state-of-the-art goal recognizer in four different domains, comparing accuracy between the standard and the enhanced observation traces. Experimental evaluation shows that recurrent neural networks with self-attention mechanisms improve the accuracy metrics of state-of-the-art goal recognition techniques by an average of 60%.
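The abstract describes an LSTM encoder with a self-attention layer that fills in missing observations of a trace before it is handed to a goal recognizer. Below is a minimal, hypothetical PyTorch sketch of that idea, not the authors' implementation: it assumes observations are encoded as integer token ids, missing steps are marked with a special MASK id, and the network predicts a distribution over observations at every step; training on the dataset of goal recognition problems is omitted.

```python
# Sketch of a self-attention LSTM for predicting missing observations.
# Assumptions (not from the paper): tokenized observation ids, MASK_ID = 0
# for missing steps, per-step classification over the observation vocabulary.
import torch
import torch.nn as nn

class SelfAttentionLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        # Self-attention over the LSTM outputs lets every time step attend
        # to the rest of the (partially observed) trace.
        self.attn = nn.MultiheadAttention(2 * hidden_dim, num_heads,
                                          batch_first=True)
        self.out = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, obs_ids):
        x = self.embed(obs_ids)        # (batch, seq_len, embed_dim)
        h, _ = self.lstm(x)            # (batch, seq_len, 2 * hidden_dim)
        a, _ = self.attn(h, h, h)      # self-attention: Q = K = V = h
        return self.out(a)             # per-step logits over observations

# Usage sketch: replace masked positions with the model's predictions
# to obtain an "enhanced" observation trace (model shown untrained).
MASK_ID = 0
model = SelfAttentionLSTM(vocab_size=50)
trace = torch.tensor([[7, 3, MASK_ID, 12, MASK_ID, 9]])   # hypothetical trace
logits = model(trace)
missing = trace == MASK_ID
enhanced = trace.clone()
enhanced[missing] = logits.argmax(dim=-1)[missing]
```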
