Association for the Advancement of Artificial Intelligence Symposium

Towards Interpretable Explanations for Transfer Learning in Sequential Tasks



Abstract

People increasingly rely on machine learning (ML) to make intelligent decisions. However, the ML results are often difficult to interpret and the algorithms do not support interaction to solicit clarification or explanation. In this paper, we highlight an emerging research area of interpretable explanations for transfer learning in sequential tasks, in which an agent must explain how it learns a new task given prior, common knowledge. The goal is to enhance a user's ability to trust and use the system output and to enable iterative feedback for improving the system. We review prior work in probabilistic systems, sequential decision-making, interpretable explanations, transfer learning, and interactive machine learning, and identify an intersection that deserves further research focus. We believe that developing adaptive, transparent learning models will build the foundation for better human-machine systems in applications for elder care, education, and health care.
