
Can the artificial intelligence technique of reinforcement learning use continuously-monitored digital data to optimize treatment for weight loss?


Abstract

Behavioral weight loss (WL) trials show that, on average, participants regain lost weight unless provided long-term, intensive, and thus costly, intervention. Optimization solutions have shown mixed success. The artificial intelligence principle of “reinforcement learning” (RL) offers a new and more sophisticated form of optimization in which the intensity of each individual’s intervention is continuously adjusted depending on patterns of response. In this pilot, we evaluated the feasibility and acceptability of an RL-based WL intervention, and whether optimization would achieve equivalent benefit at a reduced cost compared to a non-optimized intensive intervention. Participants (n = 52) completed a 1-month, group-based in-person behavioral WL intervention and then (in Phase II) were randomly assigned to receive 3 months of twice-weekly remote interventions that were either non-optimized (NO; 10-min phone calls) or optimized (a combination of phone calls, text exchanges, and automated messages selected by an algorithm). The Individually-Optimized (IO) algorithm selected interventions based on the past performance of each intervention for that participant; the Group-Optimized (GO) algorithm selected, for each group member, the interventions that performed best while fitting into a fixed amount of coaching time (e.g., 1 h). Results indicated that the system was feasible to deploy and acceptable to participants and coaches. As hypothesized, we were able to achieve equivalent Phase II weight losses (NO = 4.42%, IO = 4.56%, GO = 4.39%) at roughly one-third the cost (1.73 and 1.77 coaching hours/participant for IO and GO, versus 4.38 for NO), indicating strong promise for an RL system approach to weight loss and maintenance.
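The abstract describes selecting each participant's next intervention from that intervention's past performance, which is the structure of a multi-armed bandit problem. The paper does not specify its algorithm, so the following is only a minimal illustrative sketch of an individually-optimized selector using epsilon-greedy action selection; the intervention names, reward signal, and all parameters are hypothetical.

```python
import random

# Hypothetical intervention types drawn from the abstract's description.
INTERVENTIONS = ["phone_call", "text_exchange", "automated_message"]


class InterventionSelector:
    """Per-participant epsilon-greedy bandit over intervention types.

    This is an illustrative stand-in for the paper's (unspecified)
    RL algorithm: it tracks a running mean "reward" per intervention
    (e.g., subsequent weight change) and usually picks the best one,
    exploring at random with probability epsilon.
    """

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in INTERVENTIONS}
        self.values = {a: 0.0 for a in INTERVENTIONS}  # running mean reward

    def select(self):
        # Explore occasionally; otherwise exploit the best-performing option.
        if random.random() < self.epsilon:
            return random.choice(INTERVENTIONS)
        return max(INTERVENTIONS, key=lambda a: self.values[a])

    def update(self, action, reward):
        # Incremental update of the running mean for the chosen action.
        self.counts[action] += 1
        n = self.counts[action]
        self.values[action] += (reward - self.values[action]) / n
```

A group-optimized (GO) variant would additionally impose a budget constraint, choosing the set of per-member interventions whose combined coaching time fits within a fixed allotment (e.g., 1 h per group).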
