JMIR Medical Informatics

Depression Risk Prediction for Chinese Microblogs via Deep-Learning Methods: Content Analysis



Abstract

Background Depression is a serious personal and public mental health problem. Self-reporting is the main method used to diagnose depression and to determine its severity. However, discovering patients with depression is difficult because many are reluctant, out of shame, to disclose or discuss their mental health conditions with others. Moreover, self-reporting is time-consuming and usually misses a certain number of cases. Therefore, automatically discovering patients with depression from other sources, such as social media, has been attracting increasing attention. Social media, as one of the most important daily communication systems, connects large numbers of people, including individuals with depression, and provides a channel for discovering them. In this study, we investigated deep-learning methods for depression risk prediction using data from Chinese microblogs, which have the potential to discover more patients with depression and to trace their mental health conditions. Objective The aim of this study was to explore the potential of state-of-the-art deep-learning methods for depression risk prediction from Chinese microblogs. Methods Deep-learning methods with pretrained language representation models, including bidirectional encoder representations from transformers (BERT), a robustly optimized BERT pretraining approach (RoBERTa), and generalized autoregressive pretraining for language understanding (XLNet), were investigated for depression risk prediction and compared with previous methods on a manually annotated benchmark dataset. Depression risk was assessed at four levels from 0 to 3, where 0, 1, 2, and 3 denote no inclination and mild, moderate, and severe depression risk, respectively. The dataset was collected from the Chinese microblog platform Weibo.
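Fine-tuning setups of this kind typically attach a small classification head to the pretrained encoder's pooled sentence representation. The abstract does not give implementation details, so the following is only a minimal pure-Python sketch of such a 4-way softmax head; the toy dimensions and untrained weights stand in for a real encoder output (BERT-base, for instance, would produce 768-dimensional hidden states):

```python
import math

# Hypothetical 4-way classification head over a pooled sentence embedding,
# illustrating how BERT-style fine-tuning maps text to the four risk levels
# (0 = no inclination, 1 = mild, 2 = moderate, 3 = severe).
# All values below are toy numbers, not trained parameters.

NUM_LEVELS = 4  # depression risk levels 0-3

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(pooled, weights, bias):
    """Linear head + softmax over a pooled [CLS]-style embedding."""
    logits = [sum(w * h for w, h in zip(row, pooled)) + b
              for row, b in zip(weights, bias)]
    probs = softmax(logits)
    level = max(range(NUM_LEVELS), key=probs.__getitem__)
    return level, probs

# Toy 4-dimensional embedding standing in for the encoder's pooled output.
pooled = [0.5, -1.2, 0.3, 0.8]
weights = [[0.1, 0.0, 0.2, -0.1],   # row for level 0
           [0.0, 0.3, -0.2, 0.4],   # row for level 1
           [-0.2, 0.1, 0.5, 0.0],   # row for level 2
           [0.4, -0.1, 0.0, 0.6]]   # row for level 3
bias = [0.05, 0.0, -0.05, 0.0]

level, probs = classify(pooled, weights, bias)
print(level, [round(p, 3) for p in probs])
```

In a real system the weights would be learned jointly with the encoder during fine-tuning on the annotated Weibo dataset; only the head's shape (hidden size x 4) is dictated by the four-level labeling scheme.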
We also compared the deep-learning methods with pretrained language representation models in two settings: (1) publicly released pretrained language representation models, and (2) language representation models further pretrained on a large-scale unlabeled dataset collected from Weibo. Precision, recall, and F1 scores were used as performance evaluation measures. Results Among the three deep-learning methods, BERT achieved the best performance, with a microaveraged F1 score of 0.856. RoBERTa achieved the best performance on depression risk at levels 1, 2, and 3, with a macroaveraged F1 score of 0.424, which represents a new benchmark result on the dataset. The models further pretrained on Weibo data showed improvement over the publicly released pretrained models. Conclusions We applied deep-learning methods with pretrained language representation models to automatically predict depression risk using data from Chinese microblogs. The experimental results showed that the deep-learning methods outperformed previous methods and have greater potential to discover patients with depression and to trace their mental health conditions.
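The reported scores mix two averaging schemes: microaveraged F1 over all four levels (which, for single-label multiclass prediction, equals accuracy) and macroaveraged F1 restricted to levels 1 to 3. A minimal sketch of both computations, using toy labels that are illustrative only and not taken from the paper:

```python
def f1_per_class(y_true, y_pred, cls):
    # Per-class F1 from true positives, false positives, false negatives.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def micro_f1(y_true, y_pred):
    # For single-label multiclass prediction, micro-averaged F1
    # reduces to plain accuracy.
    return sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)

def macro_f1(y_true, y_pred, classes):
    # Unweighted mean of per-class F1 over the selected classes,
    # e.g. risk levels 1-3 only, as in the reported 0.424 result.
    return sum(f1_per_class(y_true, y_pred, c) for c in classes) / len(classes)

# Toy gold labels and predictions over the four risk levels 0-3.
y_true = [0, 0, 0, 1, 1, 2, 3, 3]
y_pred = [0, 0, 1, 1, 0, 2, 3, 2]

print(round(micro_f1(y_true, y_pred), 3))                  # over all levels
print(round(macro_f1(y_true, y_pred, [1, 2, 3]), 3))       # levels 1-3 only
```

Restricting the macro average to levels 1 to 3 prevents the large no-inclination class from dominating the score, which is why the macroaveraged figure (0.424) is much lower than the microaveraged one (0.856).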
