Computers and Electronics in Agriculture

Automatically detecting pig position and posture by 2D camera imaging and deep learning


Abstract

Prior livestock research provides evidence for the importance of accurately detecting pig positions and postures for better understanding animal welfare. Position and posture detection can be accomplished by machine vision systems. However, current machine vision systems require rigid setups of fixed vertical lighting, vertical top-view camera perspectives, or complex camera systems, which hinder their adoption in practice. Moreover, existing detection systems focus on specific pen contexts and may be difficult to apply in other livestock facilities. Our main contribution is twofold. First, we design a deep learning system for position and posture detection that only requires standard 2D camera imaging with no adaptations to the application setting. This deep learning system applies the state-of-the-art Faster R-CNN object detection pipeline and the state-of-the-art Neural Architecture Search (NAS) base network for feature extraction. Second, we provide a labelled open-access dataset with 7277 human-made annotations from 21 standard 2D cameras, covering 31 different one-hour-long video recordings and 18 different pens, to train and test the approach under realistic conditions. On unseen pens under similar experimental conditions, with a sufficient number of similar training images from pig fattening, the deep learning system detects pig position with an Average Precision (AP) of 87.4%, and pig position and posture with a mean Average Precision (mAP) of 80.2%. Given different and more difficult experimental conditions of pig rearing, with no or few similar images in the training set, an AP of over 67.7% was achieved for position detection. However, detecting position and posture together achieved a mAP of only 44.8% to 58.8%. Furthermore, we demonstrate exemplary applications that can aid pen design by visualizing where pigs are lying and how their lying behavior changes throughout the day. Finally, we contribute open data that can be used for further studies, replication, and pig position detection applications.
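The AP and mAP figures reported above summarize how well predicted bounding boxes match the human-made annotations. As a minimal sketch of how such a score is computed, the following is a generic rank-based Average Precision for one class; the box format `(x1, y1, x2, y2)` and the IoU threshold of 0.5 are common conventions assumed here, not details taken from the paper's evaluation code.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def average_precision(preds, gts, iou_thr=0.5):
    """AP for one class: preds is a list of (score, box), gts a list of boxes.

    Predictions are ranked by confidence; each is a true positive if it
    overlaps an unmatched ground-truth box with IoU >= iou_thr. AP sums
    precision over the recall increments at each true positive.
    """
    preds = sorted(preds, key=lambda p: -p[0])
    matched = set()
    hits = []
    for _, box in preds:
        best, best_j = 0.0, -1
        for j, g in enumerate(gts):
            if j in matched:
                continue
            o = iou(box, g)
            if o > best:
                best, best_j = o, j
        if best >= iou_thr:
            matched.add(best_j)
            hits.append(1)
        else:
            hits.append(0)
    ap, tp, prev_recall = 0.0, 0, 0.0
    for rank, hit in enumerate(hits, start=1):
        tp += hit
        if hit:
            recall = tp / len(gts)
            ap += (recall - prev_recall) * (tp / rank)  # precision at this rank
            prev_recall = recall
    return ap
```

The mAP reported for combined position-and-posture detection would then be the mean of per-class APs (e.g. one class per posture). Note that published benchmarks differ in interpolation details (PASCAL VOC 11-point vs. all-point, COCO's IoU sweep), so this sketch illustrates the idea rather than reproducing the paper's exact numbers.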
