Frontiers in Neuroscience

Making Expert Decisions Easier to Fathom: On the Explainability of Visual Object Recognition Expertise



Abstract

In everyday life, we rely on human experts to make a variety of complex decisions, such as medical diagnoses. These decisions are typically made through some form of weakly guided learning, a form of learning in which decision expertise is gained through labeled examples rather than explicit instructions. Expert decisions can significantly affect people other than the decision-maker (for example, teammates, clients, or patients), but may seem cryptic and mysterious to them. It is therefore desirable for the decision-maker to explain the rationale behind these decisions to others. This, however, can be difficult to do. Often, the expert has a “gut feeling” for what the correct decision is, but may have difficulty giving an objective set of criteria for arriving at it. The explainability of human expert decisions, i.e., the extent to which experts can make their decisions understandable to others, has not been studied systematically. Here, we characterize the explainability of human decision-making, using binary categorical decisions about visual objects as an illustrative example. We trained a group of “expert” subjects to categorize novel, naturalistic 3-D objects called “digital embryos” into one of two hitherto unknown categories, using a weakly guided learning paradigm. We then asked the expert subjects to provide a written explanation for each binary decision they made. These experiments generated several intriguing findings. First, the experts’ explanations modestly improved the categorization performance of naïve users (paired t-tests, p < 0.05). Second, this improvement differed significantly between explanations. In particular, explanations that pointed to a spatially localized region of the object improved users’ performance much more than explanations that referred to global features. Third, neither experts nor naïve subjects were able to reliably predict the degree of improvement for a given explanation.
Finally, significant bias effects were observed: naïve subjects rated an explanation significantly higher when told it came from an expert than when told the same explanation came from another non-expert, suggesting a variant of the Asch conformity effect. Together, our results characterize, for the first time, the various issues, both methodological and conceptual, underlying the explainability of human decisions.
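The paired t-test mentioned above compares each naïve subject's categorization accuracy with and without the expert's explanation, testing whether the mean within-subject difference is zero. A minimal sketch of this analysis, using entirely invented accuracy values (the paper's actual data are not reproduced here):

```python
import math
from statistics import mean, stdev

# Hypothetical data: proportion-correct accuracy for the same ten
# naive subjects, without and with an expert's explanation.
# These numbers are illustrative only, not from the study.
without_expl = [0.55, 0.60, 0.52, 0.58, 0.61, 0.57, 0.54, 0.59, 0.56, 0.60]
with_expl    = [0.62, 0.66, 0.55, 0.63, 0.68, 0.60, 0.59, 0.64, 0.61, 0.67]

# Paired t-test: t = mean(d) / (sd(d) / sqrt(n)), where d is the
# per-subject difference in accuracy.
diffs = [a - b for a, b in zip(with_expl, without_expl)]
n = len(diffs)
t_stat = mean(diffs) / (stdev(diffs) / math.sqrt(n))
print(f"t({n - 1}) = {t_stat:.2f}")
```

With df = n − 1 = 9, the computed t statistic would be compared against the critical value (about 2.26 for a two-tailed test at p < 0.05) to decide whether the explanation-driven improvement is significant.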

