Computers and Electronics in Agriculture

Data synthesis methods for semantic segmentation in agriculture: A Capsicum annuum dataset


Abstract

This paper provides synthesis methods for large-scale semantic image segmentation datasets of agricultural scenes, with the objective of bridging the gap between state-of-the-art computer vision performance and that of computer vision in the agricultural robotics domain. We propose a novel methodology to generate renders of random plant meshes based on empirical measurements, including the automated generation of per-pixel class and depth labels for multiple plant parts. A running example is given of Capsicum annuum (sweet or bell pepper) in a high-tech greenhouse. A synthetic dataset of 10,500 images was rendered in Blender, using scenes with 42 procedurally generated plant models with randomised plant parameters. These parameters were based on 21 plant properties measured empirically at 115 positions on 15 plant stems. Fruit models were obtained by 3D scanning, and plant part textures were gathered photographically. As a reference dataset for modelling and for evaluating segmentation performance, 750 empirical images of 50 plants were collected in a greenhouse from multiple angles and distances, using the image acquisition hardware of a sweet-pepper harvest robot prototype. We hypothesised high similarity between the synthetic and empirical images, which we showed by analysing and comparing both sets qualitatively and quantitatively. The sets and models are publicly released with the intention of allowing performance comparisons between agricultural computer vision methods, obtaining feedback for modelling improvements, and further validating the usability of synthetic bootstrapping and empirical fine-tuning. Finally, we provide a brief perspective on our hypothesis that related synthetic dataset bootstrapping and empirical fine-tuning can be used for improved learning.
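The abstract describes randomising plant parameters within empirically measured bounds to drive procedural model generation. As a rough illustration of that idea only (the property names and statistics below are hypothetical placeholders, not the paper's actual 21 measured properties), one might draw each property from a distribution fitted to field measurements and clamp it to the observed range:

```python
import random

def sample_plant_parameters(stats, rng=None):
    """Sample parameters for one randomised plant model.

    `stats` maps each measured property to (mean, std, min, max)
    fitted from empirical measurements. Each value is drawn from a
    normal distribution and clamped to the observed range, so every
    generated plant stays within empirically plausible bounds.
    """
    rng = rng or random.Random()
    params = {}
    for name, (mean, std, lo, hi) in stats.items():
        value = rng.gauss(mean, std)
        params[name] = min(max(value, lo), hi)
    return params

# Hypothetical measured statistics: (mean, std, min, max)
measured = {
    "internode_length_cm": (6.5, 1.2, 3.0, 10.0),
    "leaf_inclination_deg": (35.0, 10.0, 5.0, 80.0),
    "stem_diameter_mm": (9.0, 1.5, 5.0, 14.0),
}

# One parameter set per procedurally generated plant model,
# seeded for reproducibility (the paper used 42 plant models).
plants = [sample_plant_parameters(measured, random.Random(i)) for i in range(42)]
```

In the paper's pipeline such parameter sets would drive procedural mesh construction and rendering in Blender; this sketch covers only the sampling step.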
