Explaining classifier decisions linguistically for stimulating and improving operators labeling behavior
Information Sciences: An International Journal



Abstract

In decision support and classification systems, operators or experts usually need to provide class labels for a significant number of process samples in order to establish reliable machine learning classifiers. Such labels are often affected by significant uncertainty and inconsistency, owing to the annotators' varying experience and disposition during the labeling process. This typically results in significant, unintended class overlaps. We propose several new concepts for providing enhanced explanations of classifier decisions in linguistic (human-readable) form. These are intended to help operators better understand the decision process and to support them during sample annotation, improving their certainty and consistency in successive labeling cycles. This is expected to lead to better, more consistent data sets (streams) for use in training and updating classifiers. The enhanced explanations are composed of (1) grounded reasons for classification decisions, represented as linguistically readable fuzzy rules; (2) the classifier's level of uncertainty in relation to its decisions, together with possible alternative suggestions; (3) the degree of novelty of the current sample; and (4) the levels of impact of the input features on the current classification response. The last of these is based on a newly developed approach for eliciting instance-based feature importance levels, and is also used to reduce the lengths of the rules to a maximum of 3 to 4 antecedent parts to ensure readability for operators and users. The proposed techniques were embedded within an annotation GUI and applied to a real-world application scenario from the field of visual inspection. The usefulness of the proposed linguistic explanations was evaluated in experiments conducted with six operators.
The results indicate that there is approximately an 80% chance that operator/user labeling behavior improves significantly when enhanced linguistic explanations are provided, whereas this chance drops to 10% when only the classifier responses are shown.
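The abstract does not specify how the instance-based feature impact levels are computed or how they are used to shorten rules; the following is a minimal illustrative sketch only, assuming Gaussian antecedent fuzzy sets combined with a product t-norm. All names (`gauss`, `antecedent_impacts`, `shorten_rule`, the feature names) are hypothetical, and the impact measure here — the change in rule activation when one antecedent is dropped — is one plausible instance-based choice, not the paper's actual method.

```python
import math

def gauss(x, mu, sigma):
    """Gaussian membership degree of value x in a fuzzy set (mu, sigma)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

def antecedent_impacts(sample, rule):
    """Instance-based impact of each antecedent for this specific sample:
    how much the rule's activation (product t-norm) would change if that
    antecedent were dropped. Antecedents the sample barely satisfies
    constrain the decision most, so they get the highest impact."""
    mems = {f: gauss(sample[f], mu, s) for f, (mu, s) in rule.items()}
    full = math.prod(mems.values())  # activation with all antecedents
    impacts = {}
    for f in rule:
        without = full / mems[f] if mems[f] > 0 else 0.0
        impacts[f] = without - full  # activation gained by dropping f
    return impacts

def shorten_rule(sample, rule, max_parts=3):
    """Keep only the most impactful antecedents (at most max_parts),
    yielding a short, human-readable explanation of the decision."""
    imp = antecedent_impacts(sample, rule)
    kept = sorted(rule, key=lambda f: imp[f], reverse=True)[:max_parts]
    return {f: rule[f] for f in kept}

# Hypothetical rule and sample from a visual-inspection setting:
rule = {"width": (1.0, 0.5), "contrast": (0.2, 0.1), "area": (3.0, 1.0)}
sample = {"width": 1.0, "contrast": 0.5, "area": 3.0}
print(shorten_rule(sample, rule, max_parts=1))  # the antecedent driving the decision
```

Here `contrast` dominates because the sample lies far from that antecedent's fuzzy set, so dropping it would change the rule activation the most; the shortened rule surfaces exactly that antecedent to the operator.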
