
Image segmentation of lactating sows based on fully convolutional networks

         

Abstract

In a pigsty scene, variations in illumination, uneven color on the sow's body surface and low contrast with the surroundings, and adhesion between the sow and her piglets all make target segmentation difficult. This paper proposes an image segmentation algorithm for lactating sows based on fully convolutional networks (FCN). With VGG16 as the base network, an FCN for sow segmentation was designed using a skip architecture that fuses deep abstract features with shallow detail features and upsamples the fused feature map by a factor of 8. Using the Caffe deep learning framework, the segmentation FCN was trained on 3811 images of lactating sows from 7 pens housing piglets of different ages. Segmentation results on a test set of 523 lactating-sow images from another 21 pens show that the algorithm effectively copes with illumination changes, uneven sow coloring, and occlusion and adhesion by piglets, and segments the complete lactating-sow region, with an average segmentation accuracy of 99.28%, a mean regional coincidence degree of 95.16% and an average speed of 0.22 s per image. In comparative experiments against SDS (simultaneous detection and segmentation), a deep-convolutional-network method, and the traditional graph-based and level-set segmentation methods, the mean regional coincidence degree of the proposed method was 9.99, 31.96 and 26.44 percentage points higher than these three methods, respectively, showing good generalization and robustness. The method achieves accurate and fast segmentation of lactating sows in a pigsty scene and provides a technical reference for pig image segmentation.

The behaviors of a lactating sow reflect welfare and health that affect piglet survival and growth during lactation. Computer vision has been widely used to perceive the behavior of animals for precision husbandry, which is useful to increase productivity and reduce the disease rate. Effective and accurate segmentation of an individual lactating sow is a vital step in recording and analyzing lactating sow behavior automatically. However, under real pigsty conditions, it is a challenge to segment the lactating sow from the background due to occlusion, uneven color on the sow body surface, variations of sow size and pose, varying illumination and complex floor status. In this paper, we proposed an algorithm for lactating sow image segmentation based on fully convolutional networks (FCN). To design an FCN for accurate segmentation, VGG16 was chosen as the basic network, its fully connected layers were converted to convolutional layers, and the FCN-8s skip structure was designed by combining semantic information from a deep, coarse layer with appearance information from a shallow, fine layer. We called this network FCN-8s-VGG16.

The steps of our work were as follows. First, top-view images were taken from 28 pens of pigs under real pigsty conditions and a total of 4334 images were obtained, of which 3811 training images were selected from 7 pens and 523 test images were selected from the other 21 pens; all images in the training and test sets were manually labeled. Second, adaptive histogram equalization was used to improve contrast in the training images. The pre-processed training set was then fed into FCN-8s-VGG16 to develop an optimal FCN model by fine-tuning the network parameters using the Caffe deep learning framework on an NVIDIA GTX 980 GPU (graphics processing unit). After that, the test set was put into the trained model to obtain the segmentation results. Then, to fill holes within objects and remove small objects, post-processing was performed using a disk structuring element of mathematical morphology and calculating the areas of connected regions. Finally, we compared our FCN-8s-VGG16 network architecture with different network architectures, including a different skip architecture (FCN-16s based) and 2 different basic networks (CaffeNet based and AlexNet based). In addition, comparisons with other methods were also conducted, including the previous state-of-the-art simultaneous detection and segmentation (SDS), Graph-based and Level-set algorithms.

The results on the test set showed that the algorithm achieved a complete segmentation of the lactating sow by minimizing the effects of uneven color, light variations, occlusions, adhesion between sow and piglets and complex floor status, with an average segmentation accuracy of 99.3% and a mean regional coincidence degree of 95.2% at an average speed of 0.22 seconds per image. However, it is hard to completely segment the sow's head when the head points down toward the floor, is close to the wall or adheres to piglets. The comparison with different network architectures showed that the mean regional coincidence degree of our proposed architecture was higher than that of the others, and on the GPU the segmentation speeds of FCN-8s-VGG16, FCN-16s based, CaffeNet based and AlexNet based were 0.22, 0.21, 0.09 and 0.09 seconds per image, respectively, giving good real-time performance. The comparison with other methods showed that our FCN-8s-VGG16 model outperformed the others, exceeding the mean regional coincidence degrees of SDS, Graph-based and Level-set by 9.99, 31.96 and 26.44 percentage points, respectively. All of the experimental results suggest that the proposed method has good generalization and robustness, and provides an effective approach to accurate and fast segmentation of lactating sow images under pigsty conditions.
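The FCN-8s skip structure described above scores the pool3, pool4 and final (converted fully connected) feature maps with 1×1 convolutions, sums the deeper scores into the shallower ones after upsampling, and finally upsamples the fused score map by 8× back to the input resolution. The sketch below shows only that fusion pattern; it is written in PyTorch rather than the authors' Caffe setup, uses bilinear interpolation in place of learned deconvolution layers, and the layer split points and class count are assumptions, not the paper's exact configuration.

```python
# Minimal sketch of an FCN-8s skip head on a VGG16-style backbone.
# Assumes PyTorch/torchvision (the paper used Caffe); not the authors' exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class FCN8sVGG16(nn.Module):
    def __init__(self, num_classes=2):  # background vs. lactating sow
        super().__init__()
        feats = vgg16().features
        self.to_pool3 = feats[:17]    # conv1_1 ... pool3  (1/8 resolution, 256 ch)
        self.to_pool4 = feats[17:24]  # conv4_1 ... pool4  (1/16 resolution, 512 ch)
        self.to_pool5 = feats[24:]    # conv5_1 ... pool5  (1/32 resolution, 512 ch)
        # VGG16 fully connected layers converted to convolutions (fc6, fc7).
        self.fc_conv = nn.Sequential(
            nn.Conv2d(512, 4096, kernel_size=7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(4096, 4096, kernel_size=1), nn.ReLU(inplace=True),
        )
        # 1x1 "score" layers producing per-class maps at each scale.
        self.score_fc = nn.Conv2d(4096, num_classes, 1)
        self.score_pool4 = nn.Conv2d(512, num_classes, 1)
        self.score_pool3 = nn.Conv2d(256, num_classes, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        p3 = self.to_pool3(x)
        p4 = self.to_pool4(p3)
        p5 = self.to_pool5(p4)
        s = self.score_fc(self.fc_conv(p5))                                 # 1/32 scale
        s = F.interpolate(s, size=p4.shape[2:], mode="bilinear",
                          align_corners=False) + self.score_pool4(p4)       # fuse at 1/16
        s = F.interpolate(s, size=p3.shape[2:], mode="bilinear",
                          align_corners=False) + self.score_pool3(p3)       # fuse at 1/8
        # Final 8x upsampling back to the input resolution.
        return F.interpolate(s, size=(h, w), mode="bilinear", align_corners=False)

if __name__ == "__main__":
    x = torch.randn(1, 3, 320, 320)      # dummy top-view image batch
    print(FCN8sVGG16()(x).shape)         # torch.Size([1, 2, 320, 320])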
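The post-processing is described only at a high level: a disk structuring element from mathematical morphology, hole filling within the segmented region, and removal of small connected regions by area. A minimal sketch of that kind of clean-up, assuming scikit-image/SciPy and illustrative values for the disk radius and area threshold (neither is given in the abstract):

```python
# Sketch of the morphological post-processing mentioned in the abstract:
# close gaps with a disk structuring element, fill holes inside the sow
# region, and drop small connected components. The disk radius and the
# minimum area are illustrative values, not taken from the paper.
import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.morphology import binary_closing, disk, remove_small_objects

def postprocess(mask: np.ndarray, radius: int = 5, min_area: int = 500) -> np.ndarray:
    """mask: boolean array, True where the FCN predicts 'sow'."""
    cleaned = binary_closing(mask, disk(radius))                  # smooth boundary, bridge gaps
    cleaned = binary_fill_holes(cleaned)                          # fill holes inside the region
    cleaned = remove_small_objects(cleaned, min_size=min_area)    # drop spurious small blobs
    return cleaned
```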
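The two figures of merit reported above are pixel-level segmentation accuracy and the regional coincidence degree; the latter is commonly computed as the intersection over union between the predicted and manually labeled sow regions, which is what the small sketch below assumes (the function names are mine, not the paper's):

```python
# Pixel accuracy and regional coincidence degree (assumed here to be
# intersection over union) between a predicted mask and a ground-truth mask.
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    return float((pred == gt).mean())

def regional_coincidence(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 1.0
```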

Bibliographic information

  • Source: 《农业工程学报》 (Transactions of the Chinese Society of Agricultural Engineering), 2017, No. 23, pp. 219-225 (7 pages)
  • Author affiliations

    华南农业大学电子工程学院 (College of Electronic Engineering, South China Agricultural University), Guangzhou 510642

    广东省现代养猪数据化工程技术研究中心, Guangzhou 510642

    广东省智慧果园科技创新中心, Guangzhou 510642

    广东省农情信息监测工程技术研究中心, Guangzhou 510642

    华南农业大学工程学院 (College of Engineering, South China Agricultural University), Guangzhou 510642

  • Format: PDF
  • Language: Chinese
  • CLC classification: Information processing
  • Keywords: image segmentation; algorithms; experiments; fully convolutional networks; lactating sows
