
Convolutional Recurrent Neural Networks for Observation-Centered Plant Identification



Abstract

Traditional image-centered methods of plant identification can be confounded by varying viewpoints, uneven illumination, and growth cycles. To tolerate these significant intra-class variances, convolutional recurrent neural networks (C-RNNs) are proposed for observation-centered plant identification that mimics human behavior. The C-RNN model has two components: a convolutional neural network (CNN) backbone serves as the feature extractor for images, and recurrent neural network (RNN) units synthesize the multi-view features from each image of the observation into the final prediction. Extensive experiments are conducted to find the best combination of CNN and RNN. All models are trained end-to-end, by truncated backpropagation through time, on 1 to 3 plant images of the same observation. The experiments demonstrate that the combination of MobileNet and the Gated Recurrent Unit (GRU) offers the best trade-off between classification accuracy and computational overhead on the Flavia dataset. On the holdout test set, the mean 10-fold accuracy with 1, 2, and 3 input leaves reaches 99.53%, 100.00%, and 100.00%, respectively. On the BJFU100 dataset, the C-RNN model achieves a classification rate of 99.65% with two-stage end-to-end training. The observation-centered method based on C-RNNs shows potential to further improve plant identification accuracy.
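The pipeline the abstract describes — a CNN backbone that produces one feature vector per image, followed by a GRU that fuses the 1 to 3 views of an observation into a single prediction — can be sketched in toy NumPy code. This is a minimal illustration, not the paper's implementation: the MobileNet backbone is stood in for by random feature vectors, and all dimensions, weights, and class counts below are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; the paper uses MobileNet feature maps and far more species.
FEAT, HID, N_CLASSES = 8, 16, 5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def init_gru(feat, hid):
    """Random GRU-cell parameters (illustrative; the paper trains them by
    truncated backpropagation through time)."""
    g = lambda *shape: rng.normal(scale=0.1, size=shape)
    p = {name: g(hid, feat) for name in ("Wz", "Wr", "Wh")}
    p.update({name: g(hid, hid) for name in ("Uz", "Ur", "Uh")})
    p.update({name: np.zeros(hid) for name in ("bz", "br", "bh")})
    return p

def gru_step(x, h, p):
    """One GRU update: gate the running hidden state with the next view's features."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h + p["bz"])          # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h + p["br"])          # reset gate
    h_cand = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h) + p["bh"])
    return (1.0 - z) * h + z * h_cand

def classify_observation(view_features, gru_params, W_out, b_out):
    """Fuse 1..n per-view CNN features with a GRU, then predict from the
    final hidden state via a softmax output layer."""
    h = np.zeros(HID)
    for x in view_features:        # one recurrent step per image of the observation
        h = gru_step(x, h, gru_params)
    logits = W_out @ h + b_out
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()     # softmax over species

# Stand-ins for MobileNet outputs of 3 images of the same plant observation.
views = [rng.normal(size=FEAT) for _ in range(3)]
params = init_gru(FEAT, HID)
W_out = rng.normal(scale=0.1, size=(N_CLASSES, HID))
b_out = np.zeros(N_CLASSES)

probs = classify_observation(views, params, W_out, b_out)
print(probs.shape, probs.sum())
```

The sketch mirrors the observation-centered idea: the same classifier accepts 1, 2, or 3 views, and each additional view refines the hidden state before the final softmax. In the paper, the GRU sits on top of a trained MobileNet and the whole stack is trained end-to-end.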

Bibliographic Record

  • Source
    《Journal of electrical and computer engineering》 | 2018, No. 1 | pp. 9373210.1-9373210.7 | 7 pages
  • Author Affiliations

    School of Information Science and Technology, Beijing Forestry University, Beijing 100083, China;

    School of Information Science and Technology, Beijing Forestry University, Beijing 100083, China;

    School of Information Science and Technology, Beijing Forestry University, Beijing 100083, China;

    School of Information Science and Technology, Beijing Forestry University, Beijing 100083, China;

    School of Information Science and Technology, Beijing Forestry University, Beijing 100083, China;

  • Indexing Information
  • Original Format: PDF
  • Language: eng
  • CLC Classification
  • Keywords

  • Date Added: 2022-08-18 03:54:51

