
Validation of Sensor-Based Food Intake Detection by Multicamera Video Observation in an Unconstrained Environment


Abstract

Video observations have been widely used to provide ground truth for wearable food intake monitoring systems in controlled laboratory conditions; however, video observation requires that participants be confined to a defined space. The purpose of this analysis was to test an alternative approach for establishing activity types and food intake bouts in a relatively unconstrained environment. The accuracy of a wearable system for assessing food intake was compared with that of video observation, and the inter-rater reliability of annotation was also evaluated. Forty participants were enrolled. Multiple participants were monitored simultaneously in a four-bedroom apartment using six cameras, for three days each. Participants could leave the apartment overnight and for short periods during the day, during which time monitoring did not take place. A wearable system (Automatic Ingestion Monitor, AIM) was used to detect and monitor participants’ food intake at a resolution of 30 s using a neural network classifier. Two food intake detection models were tested: one trained on data from an earlier study and the other trained on the current study’s data using leave-one-out cross-validation. Three trained human raters annotated the videos for major activities of daily living, including eating, drinking, resting, walking, and talking; they further annotated individual bites and chewing bouts within each food intake bout. For activity annotation, the raters achieved an average (± standard deviation, STD) kappa of 0.74 (±0.02); for food intake annotation, the average Light’s kappa was 0.82 (±0.04). Validity results showed that AIM food intake detection matched human video-annotated food intake with kappas of 0.77 (±0.10) for activity annotation and 0.78 (±0.12) for food intake bout annotation.
Results of a one-way ANOVA suggest no statistically significant differences between the average eating durations estimated from the raters’ annotations and the AIM predictions (p = 0.19). These results suggest that the AIM provides accuracy comparable to video observation and may be used to reliably detect food intake in multi-day observational studies.
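The Light’s kappa reported for multi-rater reliability is simply the mean of Cohen’s kappa taken over all pairs of raters. A minimal pure-Python sketch, using invented per-epoch labels for illustration (1 = food intake, 0 = other activity, at the AIM’s 30 s resolution — these are not the study’s data):

```python
from itertools import combinations

def cohen_kappa(a, b):
    """Cohen's kappa for two raters' label sequences of equal length."""
    n = len(a)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    # Chance agreement: product of each rater's marginal label frequencies
    p_expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical 30 s epoch labels from three raters (1 = intake, 0 = other)
raters = [
    [1, 0, 1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0, 1, 0],
    [1, 0, 1, 1, 0, 1, 1, 0],
]

# Light's kappa: average pairwise Cohen's kappa across all rater pairs
pairwise = [cohen_kappa(a, b) for a, b in combinations(raters, 2)]
lights_kappa = sum(pairwise) / len(pairwise)
print(round(lights_kappa, 3))  # → 0.676
```

In practice, with three raters there are three pairwise kappas; averaging them summarizes agreement in a single value, which is what the 0.82 (±0.04) figure above reports for food intake annotation.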
