A survey of surveys on the use of visualization for interpreting machine learning models



Abstract

Research in machine learning has become very popular in recent years, with many types of models proposed to comprehend and predict patterns and trends in data originating from different domains. As these models get more and more complex, it also becomes harder for users to assess and trust their results, since their internal operations are mostly hidden in black boxes. The interpretation of machine learning models is currently a hot topic in the information visualization community, with results showing that insights from machine learning models can lead to better predictions and improve the trustworthiness of the results. Due to this, multiple (and extensive) survey articles have been published recently trying to summarize the high number of original research papers published on the topic. But there is not always a clear definition of what these surveys cover, how much they overlap, which types of machine learning models they deal with, or what exactly readers will find in each of them. In this article, we present a meta-analysis (i.e. a "survey of surveys") of manually collected survey papers that refer to the visual interpretation of machine learning models, including the papers discussed in the selected surveys. The aim of our article is to serve both as a detailed summary and as a guide through this survey ecosystem by acquiring, cataloging, and presenting fundamental knowledge of the state of the art and research opportunities in the area. Our results confirm the increasing trend of interpreting machine learning with visualizations in recent years, and that visualization can assist in, for example, the online training processes of deep learning models and in enhancing trust in machine learning. However, the question of exactly how this assistance should take place is still considered an open challenge for the visualization community.


