International Conference on Automated Planning and Scheduling

Learning Interpretable Models Expressed in Linear Temporal Logic


Abstract

We examine the problem of learning models that characterize the high-level behavior of a system based on observation traces. Our aim is to develop models that are human interpretable. To this end, we introduce the problem of learning a Linear Temporal Logic (LTL) formula that parsimoniously captures a given set of positive and negative example traces. Our approach to learning LTL exploits a symbolic state representation, searching through a space of labeled skeleton formulae to construct an alternating automaton that models observed behavior, from which the LTL can be read off. Construction of interpretable behavior models is central to a diversity of applications related to planning and plan recognition. We showcase the relevance and significance of our work in the context of behavior description and discrimination: i) active learning of a human-interpretable behavior model that describes observed examples obtained by interaction with an oracle; ii) passive learning of a classifier that discriminates individual agents, based on the human-interpretable signature way in which they perform particular tasks. Experiments demonstrate the effectiveness of our symbolic model learning approach in providing human-interpretable models and classifiers from reduced example sets.
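The abstract frames LTL learning as a search for a parsimonious formula that is true on every positive example trace and false on every negative one. As a rough, self-contained illustration of that problem statement only (the paper's own method searches a space of labeled skeleton formulae and constructs an alternating automaton, which this sketch does not attempt), the following Python snippet enumerates formulae by size under finite-trace semantics and returns the smallest consistent one. All identifiers here are hypothetical.

from itertools import product

# A trace is a finite tuple of states; each state is a frozenset of the
# atomic propositions that hold at that time step.

def holds(phi, trace, i=0):
    """Evaluate formula phi on trace at position i (finite-trace semantics)."""
    op = phi[0]
    if op == "ap":                       # atomic proposition
        return i < len(trace) and phi[1] in trace[i]
    if op == "not":
        return not holds(phi[1], trace, i)
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "X":                        # next
        return i + 1 < len(trace) and holds(phi[1], trace, i + 1)
    if op == "F":                        # eventually
        return any(holds(phi[1], trace, j) for j in range(i, len(trace)))
    if op == "G":                        # always
        return all(holds(phi[1], trace, j) for j in range(i, len(trace)))
    if op == "U":                        # until
        return any(holds(phi[2], trace, j) and
                   all(holds(phi[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

def formulas_of_size(n, props):
    """Enumerate candidate formulae built from exactly n atoms/connectives."""
    if n == 1:
        for p in props:
            yield ("ap", p)
        return
    for op in ("not", "X", "F", "G"):    # unary connectives
        for sub in formulas_of_size(n - 1, props):
            yield (op, sub)
    for op in ("and", "U"):              # binary connectives
        for k in range(1, n - 1):
            for left, right in product(formulas_of_size(k, props),
                                       formulas_of_size(n - 1 - k, props)):
                yield (op, left, right)

def learn_ltl(positive, negative, props, max_size=6):
    """Smallest formula true on all positive and false on all negative traces."""
    for n in range(1, max_size + 1):
        for phi in formulas_of_size(n, props):
            if (all(holds(phi, t) for t in positive)
                    and not any(holds(phi, t) for t in negative)):
                return phi
    return None

if __name__ == "__main__":
    # Target behavior "eventually b": two positive traces, one negative.
    pos = [(frozenset(), frozenset({"b"})), (frozenset({"a", "b"}),)]
    neg = [(frozenset({"a"}), frozenset())]
    print(learn_ltl(pos, neg, ["a", "b"]))   # -> ('F', ('ap', 'b'))

Once learned, such a formula doubles as the discriminating classifier mentioned in part ii): holds(phi, trace) accepts or rejects a fresh trace, and the formula itself is the human-readable signature of the behavior.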
