ISPRS Journal of Photogrammetry and Remote Sensing

Exploring Google Street View with deep learning for crop type mapping


Abstract

Ground reference data are an essential prerequisite for supervised crop mapping. The lack of a low-cost and efficient ground referencing method results in pervasively limited reference data and hinders crop classification. In this study, we apply a convolutional neural network (CNN) model to explore the efficacy of automatic ground truthing via Google Street View (GSV) images in two distinct farming regions: Illinois and the Central Valley in California. We demonstrate the feasibility and reliability of our new ground referencing technique by performing pixel-based crop mapping at the state level using the cloud-based Google Earth Engine platform. The mapping results are evaluated using the United States Department of Agriculture (USDA) Cropland Data Layer (CDL) products. From ~130,000 GSV images, the CNN model identified ~9,400 target crop images. These images are well classified into crop types, including alfalfa, almond, corn, cotton, grape, rice, soybean, and pistachio. The overall GSV image classification accuracy is 92% for the Central Valley and 97% for Illinois. Subsequently, we shifted the image geographical coordinates 2-3 times in a certain direction to produce 31,829 crop reference points: 17,358 in Illinois and 14,471 in the Central Valley. Evaluation of the mapping results against CDL products revealed satisfactory coherence. The GSV-derived mapping results capture the general pattern of crop type distributions for 2011-2019. The overall agreement between CDL products and our mapping results is indicated by R² values of 0.44-0.99 for the Central Valley and 0.81-0.98 for Illinois. To show the applicability of the proposed method in other countries, we further mapped rice paddy (2014-2018) in South Korea, which yielded fairly good results (R² = 0.91). These results indicate that GSV images used with a deep learning model offer an efficient and cost-effective alternative method for ground referencing in many regions of the world.
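
Below is a minimal, self-contained Python sketch (not the authors' released code) of two steps outlined in the abstract: shifting a classified GSV image's geographic coordinates a short distance toward the photographed field to generate 2-3 crop reference points, and comparing mapped per-class areas against CDL areas with R². The function names (shift_point, reference_points, r_squared), the 30/60/90 m offsets, and all coordinates and area values are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of (1) offsetting GSV image coordinates into the adjacent field
# to create crop reference points, and (2) computing R^2 against CDL areas.
# All numbers below are hypothetical placeholders.

import math


def shift_point(lat, lon, bearing_deg, distance_m):
    """Move a WGS84 point distance_m metres along bearing_deg (spherical approximation)."""
    R = 6371000.0  # mean Earth radius in metres
    b = math.radians(bearing_deg)
    lat1, lon1 = math.radians(lat), math.radians(lon)
    lat2 = math.asin(math.sin(lat1) * math.cos(distance_m / R)
                     + math.cos(lat1) * math.sin(distance_m / R) * math.cos(b))
    lon2 = lon1 + math.atan2(math.sin(b) * math.sin(distance_m / R) * math.cos(lat1),
                             math.cos(distance_m / R) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)


def reference_points(lat, lon, bearing_deg, offsets_m=(30, 60, 90)):
    """Generate 2-3 reference points per classified GSV image by stepping toward the field."""
    return [shift_point(lat, lon, bearing_deg, d) for d in offsets_m]


def r_squared(observed, predicted):
    """Coefficient of determination between CDL-derived and GSV-derived per-class areas."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot


if __name__ == "__main__":
    # Hypothetical GSV image location in Illinois, camera facing a corn field due east.
    for p in reference_points(40.1234, -88.5678, bearing_deg=90):
        print("reference point: %.6f, %.6f" % p)

    # Hypothetical per-class areas (km^2): CDL vs. GSV-derived map.
    cdl_area = [120.0, 85.0, 40.0, 10.0]
    gsv_area = [115.0, 90.0, 38.0, 12.0]
    print("R^2 = %.3f" % r_squared(cdl_area, gsv_area))
```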
