Engineering Applications of Artificial Intelligence

Interpretable machine learning approaches to prediction of chronic homelessness



Abstract

We introduce a machine learning approach to predict chronic homelessness from de-identified client shelter records drawn from a commonly used Canadian homelessness management information system. Using a 30-day time step, a time series dataset for 6521 individuals was generated, consisting of static features for client attributes and dynamic features describing shelter service usage over time. Five candidate models were trained to predict whether a client will be in a state of chronic homelessness 6 months in the future. The training method was fine-tuned to achieve a high F1-score, with the desired balance between recall and precision weighted in favour of recall. Mean recall and precision across 10-fold cross-validation were above 0.9 and 0.6, respectively, for three out of the five candidate models. An interpretability method was applied to explain individual predictions and to gain insight into the overall factors contributing to chronic homelessness among the population studied. This study demonstrates that, using an interpretability algorithm, it is possible to achieve state-of-the-art performance while improving stakeholder trust in what are usually "black box" machine learning models.
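
To make the evaluation protocol described in the abstract concrete, the sketch below shows 10-fold cross-validation that reports recall and precision, with the classification threshold tuned on a held-out validation split to favour recall via an F-beta score with beta greater than 1. The model choice (a random forest), the synthetic feature matrix, and the use of beta = 2 are illustrative assumptions; the paper's actual architectures, features, and tuning procedure are not specified in the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import fbeta_score, precision_score, recall_score
from sklearn.model_selection import StratifiedKFold, train_test_split

# Synthetic stand-in for the client feature matrix (static client attributes
# plus dynamic shelter-usage features); chronic homelessness is treated as the
# rare positive class.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)

recalls, precisions = [], []
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    X_tr, y_tr = X[train_idx], y[train_idx]
    X_te, y_te = X[test_idx], y[test_idx]

    # Hold out part of the training fold to choose the decision threshold,
    # so the test fold stays untouched.
    X_fit, X_val, y_fit, y_val = train_test_split(
        X_tr, y_tr, test_size=0.2, stratify=y_tr, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_fit, y_fit)

    # Pick the probability threshold that maximises F2 (an F-beta score with
    # beta = 2, which weights recall more heavily than precision).
    val_probs = clf.predict_proba(X_val)[:, 1]
    thresholds = np.linspace(0.05, 0.95, 19)
    f2_scores = [fbeta_score(y_val, val_probs >= t, beta=2) for t in thresholds]
    best_t = thresholds[int(np.argmax(f2_scores))]

    # Evaluate recall and precision on the test fold at the chosen threshold.
    test_probs = clf.predict_proba(X_te)[:, 1]
    preds = test_probs >= best_t
    recalls.append(recall_score(y_te, preds))
    precisions.append(precision_score(y_te, preds))

print(f"mean recall across folds   : {np.mean(recalls):.3f}")
print(f"mean precision across folds: {np.mean(precisions):.3f}")
```

Per-client explanations of the kind mentioned in the abstract could then be produced by applying a post-hoc attribution library such as SHAP or LIME to the fitted model; the abstract does not name the specific interpretability algorithm used.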

