AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models

Abstract

Neural NLP models are increasingly accurate but are imperfect and opaque: they break in counterintuitive ways and leave end users puzzled at their behavior. Model interpretation methods ameliorate this opacity by providing explanations for specific model predictions. Unfortunately, existing interpretation codebases make it difficult to apply these methods to new models and tasks, which hinders adoption for practitioners and burdens interpretability researchers. We introduce AllenNLP Interpret, a flexible framework for interpreting NLP models. The toolkit provides interpretation primitives (e.g., input gradients) for any AllenNLP model and task, a suite of built-in interpretation methods, and a library of front-end visualization components. We demonstrate the toolkit's flexibility and utility by implementing live demos for five interpretation methods (e.g., saliency maps and adversarial attacks) on a variety of models and tasks (e.g., masked language modeling using BERT and reading comprehension using BiDAF). These demos, alongside our code and tutorials, are available at https://allennlp.org/interpret.
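To make the "interpretation primitives" concrete, below is a minimal sketch of how a saliency map might be produced with the toolkit. It assumes an AllenNLP installation that ships the allennlp.interpret module; the model archive URL and the "sentence" input key are hypothetical placeholders standing in for whichever predictor and task you load.

```python
# Minimal sketch of computing input-gradient saliency with AllenNLP Interpret.
# Assumes an AllenNLP version that includes the allennlp.interpret module;
# the archive URL below is a hypothetical placeholder for any trained model.
from allennlp.predictors.predictor import Predictor
from allennlp.interpret.saliency_interpreters import SimpleGradient

# Load a predictor for some AllenNLP model (here, a text classifier is assumed).
predictor = Predictor.from_path(
    "https://example.org/my-text-classifier.tar.gz",  # hypothetical archive
    predictor_name="text_classifier",
)

# SimpleGradient is one of the built-in saliency interpreters; it scores each
# input token by the gradient of the prediction with respect to its embedding.
interpreter = SimpleGradient(predictor)
saliency = interpreter.saliency_interpret_from_json(
    {"sentence": "a very well-made, funny and entertaining picture"}
)

# The result maps each input field to per-token saliency scores, which the
# front-end visualization components render as a heat map over the tokens.
print(saliency)
```

The same predictor can be handed to the other built-in interpreters and attackers in the same way, which is the flexibility the abstract describes: the primitives operate on any AllenNLP model rather than being tied to one architecture or task.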