JMLR: Workshop and Conference Proceedings
Learning Interpretable Models using Soft Integrity Constraints

Abstract

Integer models are of particular interest for applications where predictive models are expected not only to be accurate but also interpretable to human experts. We introduce a novel penalty term called Facets whose primary goal is to favour integer weights. Our theoretical results illustrate the behaviour of the proposed penalty term: for small enough weights, Facets matches the L1 penalty, and as the weights grow it approaches the L2 regulariser. We provide the proximal operator associated with the proposed penalty term, so that the regularised empirical risk minimiser can be computed efficiently. We also introduce the Strongly Convex Facets and discuss its theoretical properties. Our numerical results show that, while achieving state-of-the-art accuracy, optimisation of a loss function penalised by the proposed Facets term yields a model with a significant number of integer weights.
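The abstract states that the regularised empirical risk minimiser can be computed efficiently once the penalty's proximal operator is available. The paper's actual Facets formula is not reproduced here; the sketch below is only a loose illustration of the general recipe (proximal gradient descent with an integer-favouring penalty), using a hypothetical surrogate penalty λ·dist(w, ℤ) whose proximal operator, for small enough λ, soft-thresholds each weight toward its nearest integer. The function names and the penalty itself are illustrative assumptions, not the authors' method.

```python
import numpy as np

def prox_integer_l1(w, lam):
    """Proximal operator of lam * dist(w, Z) (illustrative surrogate,
    not the paper's Facets penalty): soft-threshold each coordinate
    toward its nearest integer. Exact for small enough lam."""
    k = np.round(w)              # nearest integer per coordinate
    d = w - k                    # signed distance to that integer
    return k + np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)

def proximal_gradient(X, y, lam=0.05, step=None, iters=500):
    """Minimise 0.5 * ||Xw - y||^2 / n + lam * sum_j dist(w_j, Z)
    by proximal gradient descent on a least-squares loss."""
    n, p = X.shape
    if step is None:
        # 1/L for the smooth part, L = ||X||_2^2 / n
        step = n / np.linalg.norm(X, 2) ** 2
    w = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / n   # gradient of the smooth loss
        w = prox_integer_l1(w - step * grad, step * lam)
    return w
```

With this surrogate, weights that drift within λ of an integer snap exactly onto it, which mimics the reported outcome of a significant number of integer weights; the soft-thresholding form is the exact proximal map only when λ is small relative to 1/2, the distance between a half-integer and its nearest integer.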
