
Synergizing human-machine intelligence: Visualizing, labeling, and mining the electronic health record


Abstract

We live in a world where data surround us in every aspect of our lives. The key challenge for humans and machines is how to make better use of such data. Imagine what would happen if you had intelligent machines that could give you insight into the data: insight that would enable you to better 1) reason about, 2) learn, and 3) understand the underlying phenomena that produced the data. The possibilities of combined human-machine intelligence are endless and will impact our lives in ways we cannot even imagine today. Synergistic human-machine intelligence aims to facilitate the analytical reasoning and inference process of humans by creating machines that maximize a human's ability to 1) reason about, 2) learn, and 3) understand large, complex, and heterogeneous data. Combined human-machine intelligence is a powerful symbiosis of mutual benefit, in which we depend on the computational capabilities of the machine for the tasks we are not good at, and the machine requires human intervention for the tasks it performs poorly on. This relationship provides a compelling alternative to either approach in isolation for solving today's and tomorrow's emerging data challenges.

In this regard, this dissertation proposes a diverse analytical framework that leverages synergistic human-machine intelligence to maximize a human's ability to better 1) reason about, 2) learn, and 3) understand the different biomedical imaging and healthcare data present in the patient's electronic health record (EHR). Correspondingly, we approach the data analysis problem from the 1) visualization, 2) labeling, and 3) mining perspectives and demonstrate the efficacy of our analytics in specific application scenarios and across various data domains.

In the first part of this dissertation we explore the question of how we can build intelligent imaging analytics that are commensurate with human capabilities and constraints, specifically for optimizing data visualization and automated labeling workflows. Our journey starts with heuristic, rule-based analytical models derived from task-specific human knowledge. From this experience, we move on to data-driven analytics, where we adapt and combine the intelligence of the model based on prior information provided by the human and synthetic knowledge learned from partial data observations. Within this realm, we propose a novel Bayesian transductive Markov random field model that requires minimal human intervention and is able to cope with scarce label information in order to learn and infer object shapes in complex spatial, multimodal, spatio-temporal, and longitudinal data. We then study the question of how machines can learn discriminative object representations from dense, human-provided label information by investigating learning and inference mechanisms that make use of deep learning architectures. The developed analytics can aid visualization and labeling tasks, enabling the interpretation and quantification of clinically relevant image information.
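Editorial note: the abstract does not spell out the model's potentials, but the Bayesian transductive Markov random field of part one can be read against the generic pairwise MRF energy below, in which a labeling of image sites is scored by unary data terms and pairwise smoothness terms, and the few annotated sites are held fixed while the unlabeled sites are inferred jointly with them. The potentials ψ and the weight λ here are illustrative placeholders, not the dissertation's formulation.

```latex
E(\mathbf{y} \mid \mathbf{x})
  = \sum_{i \in \mathcal{V}} \psi_i(y_i \mid \mathbf{x})
  + \lambda \sum_{(i,j) \in \mathcal{E}} \psi_{ij}(y_i, y_j),
\qquad
p(\mathbf{y} \mid \mathbf{x}) \propto \exp\{-E(\mathbf{y} \mid \mathbf{x})\},
```

where \(\mathcal{V}\) denotes the image sites (pixels or voxels, possibly across modalities or time points), \(\mathcal{E}\) the neighborhood edges, and the labels at the scarce annotated sites are clamped to their observed values during inference.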
The second part explores the question of how we can build data-driven analytics for exploratory analysis of longitudinal event data that are commensurate with human capabilities and constraints. We propose human-intuitive analytics that enable the representation and discovery of interpretable event patterns, easing knowledge absorption and comprehension of both the employed analytics model and the underlying data. We propose a novel doubly-constrained convolutional sparse-coding framework that learns interpretable, shift-invariant latent temporal event patterns, and we apply the model to mine complex event data in EHRs. By mapping the event space to heterogeneous patient encounters in the EHR, we explore the linkage between healthcare resource utilization (HRU) and disease severity. This linkage may help to better understand how disease-specific comorbidities and their clinical attributes incur different HRU patterns. Such insight helps to characterize the patient's care history, which in turn enables comparison against clinical practice guidelines, the discovery of prevailing practices based on common HRU group patterns, and the identification of outliers that might indicate poor patient management.
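Editorial note: as a rough illustration of the kind of convolutional sparse coding such a framework builds on (not the dissertation's doubly-constrained formulation, whose exact constraints the abstract does not state), the sketch below factorizes a one-dimensional event-count series into a few short temporal pattern atoms convolved with sparse, non-negative activations. The function names, penalty weights, and the simple alternating proximal-gradient updates are assumptions made for illustration only.

```python
import numpy as np

def conv_sparse_code(x, n_atoms=3, atom_len=10, n_iter=200,
                     lam=0.1, lr=0.01, seed=0):
    """Illustrative convolutional sparse coding of a 1-D event series.

    Approximates x (length T) as sum_k conv(z_k, d_k), with an L1 penalty
    and a non-negativity constraint on the activations z_k as a simple
    stand-in for a "doubly-constrained" formulation.
    """
    rng = np.random.default_rng(seed)
    T = len(x)
    D = rng.standard_normal((n_atoms, atom_len))
    D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms
    Z = np.zeros((n_atoms, T - atom_len + 1))       # sparse activations

    def reconstruct(D, Z):
        r = np.zeros(T)
        for k in range(len(D)):
            r += np.convolve(Z[k], D[k])            # full conv, length T
        return r

    for _ in range(n_iter):
        # proximal gradient step on the activations (ISTA-style,
        # soft-threshold then project onto the non-negative orthant)
        resid = x - reconstruct(D, Z)
        for k in range(n_atoms):
            grad = -np.correlate(resid, D[k], mode="valid")
            Z[k] = np.maximum(Z[k] - lr * grad - lr * lam, 0.0)
        # gradient step on the atoms, then renormalize to unit norm
        resid = x - reconstruct(D, Z)
        for k in range(n_atoms):
            grad_d = -np.correlate(resid, Z[k], mode="valid")
            D[k] -= lr * grad_d
            D[k] /= max(np.linalg.norm(D[k]), 1e-8)
    return D, Z

# Toy usage: a series with a recurring burst pattern at three offsets.
x = np.zeros(200)
for s in (20, 80, 150):
    x[s:s + 10] += np.hanning(10)
D, Z = conv_sparse_code(x)
```

Because the atoms are convolved with their activations, a recurring pattern is captured once and reused wherever it occurs, which is what makes the learned temporal event patterns shift-invariant and directly inspectable.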

Record details

  • Author: Lee, Noah
  • Year: 2011
  • Original format: PDF
  • Language: English
