Annual International Conference of the IEEE Engineering in Medicine and Biology Society

Real-Time Food Intake Monitoring Using a Wearable Egocentric Camera



Abstract

With technological advancement, wearable egocentric camera systems have been extensively studied as food intake monitoring devices for the assessment of eating behavior. This paper provides a detailed description of the implementation of a CNN-based image classifier on a Cortex-M7 microcontroller. The proposed network classifies images captured by the wearable egocentric camera as food or no-food images in real time. This real-time food image detection can potentially make monitoring devices consume less power and less storage, and make them more privacy-friendly, by saving only the images detected as food images. A derivative of a pre-trained MobileNet is trained to detect food images among the camera-captured images. The proposed network requires 761.99 KB of flash and 501.76 KB of RAM, and is built for an optimal trade-off between accuracy, computational cost, and memory footprint, given the constraints of implementation on a Cortex-M7 microcontroller. The image classifier achieved an average precision of 82%±3% and an average F-score of 74%±2% when tested on 15,343 images (2,127 food and 13,216 no-food) spanning five full days, collected from five participants.
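The precision and F-score figures reported above follow from the standard confusion-matrix definitions. As an illustrative sketch only: the counts below are hypothetical (the abstract reports aggregate averages, not a confusion matrix), chosen so that the resulting precision and F1 land near the paper's reported 82% and 74%:

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for a food / no-food split over a test set resembling
# the paper's (2,127 food, 13,216 no-food images); not the actual results.
tp, fp, fn = 1450, 320, 677   # 1,450 of 2,127 food images correctly detected
p, r, f1 = precision_recall_f1(tp, fp, fn)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

Note that with a roughly 1:6 class imbalance like this dataset's, raw accuracy would be misleading (always predicting "no food" scores about 86%), which is presumably why precision and F-score are the reported metrics.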


