Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology

A hierarchical coarse-to-fine perception for small-target categorization of butterflies under complex backgrounds

Abstract

Small-target categorization of butterflies suffers from a large search space of candidate target locations, subtle discriminations, camouflaged appearances, and complex backgrounds. Precise localization and domain-specific discriminative feature extraction are therefore crucial. In this work, a novel hierarchical coarse-to-fine convolutional neural network (C-t-FCNN) is proposed. It consists of CoarseNet and FineNet, which incorporate object-level and part-level representations into the framework. Specifically, coarse-grained features containing the orientation description are generated by CoarseNet, while fine-grained discriminations with semantic distinctiveness are captured by FineNet. Next, correspondences are established to mark target regions, background regions, and mismatched regions, based on the quantification of scale-invariant feature transform (SIFT) descriptors. Then, the features are subsampled via spatial pyramid pooling (SPP) for size uniformity and integration. Finally, irrelevant background and mismatched regions are eliminated by a support vector machine (SVM) with a radial basis function (RBF) kernel, leaving only the target-specific patches for finer-scale extraction. Hence, the computation spent on identifying irrelevant areas can be saved and reallocated to feature extraction and the final decision, which simultaneously reduces time complexity. A total of 119,016 augmented butterfly images spanning 47 categories are used for model training, while 13,734 images are used for effectiveness verification. The C-t-FCNN delivers impressive performance: it achieves a validation accuracy of 92.08% and a testing accuracy of 91.6%, outperforming state-of-the-art methods.
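The abstract gives no implementation details, but the SPP step it describes can be illustrated with a minimal sketch: feature maps of varying spatial size are pooled over a fixed pyramid of grids so that every region yields a vector of the same length. The pyramid levels (1, 2, 4) and the use of max pooling below are assumptions for illustration, not the paper's reported configuration.

```python
# Minimal sketch of spatial pyramid pooling (SPP): variable-size feature maps
# are pooled over a fixed set of grid levels, giving a fixed-length vector per
# region regardless of its spatial extent. Levels and pooling op are assumed.
import torch
import torch.nn.functional as F

def spatial_pyramid_pool(feature_map: torch.Tensor, levels=(1, 2, 4)) -> torch.Tensor:
    """Pool a (C, H, W) feature map into a fixed-length 1-D vector.

    For each pyramid level n, the map is divided into an n x n grid and
    max-pooled per cell, giving C * sum(n*n for n in levels) values in total.
    """
    pooled = []
    for n in levels:
        # adaptive_max_pool2d handles arbitrary H, W by choosing cell sizes itself
        cells = F.adaptive_max_pool2d(feature_map.unsqueeze(0), output_size=(n, n))
        pooled.append(cells.reshape(-1))
    return torch.cat(pooled)

# Two regions of different spatial size map to vectors of identical length,
# which is what allows their features to be integrated downstream.
small_region = torch.randn(256, 7, 9)
large_region = torch.randn(256, 23, 31)
assert spatial_pyramid_pool(small_region).shape == spatial_pyramid_pool(large_region).shape
```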
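Similarly, the final rejection step, where an RBF-kernel SVM discards background and mismatched regions so that only target-specific patches proceed to finer-scale extraction, might look roughly as follows. The labels, hyperparameters, and feature dimensionality are hypothetical placeholders rather than values reported by the paper.

```python
# Minimal sketch of the background/mismatch rejection step: an SVM with an RBF
# kernel classifies each candidate region's pooled feature vector, and only the
# regions predicted as target-specific are kept for finer-scale extraction.
# Labels (0 = background/mismatched, 1 = target) and C/gamma are assumptions.
import numpy as np
from sklearn.svm import SVC

def train_region_filter(features: np.ndarray, labels: np.ndarray) -> SVC:
    """Fit an RBF-kernel SVM on pooled region features (shape: n_regions x dim)."""
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    clf.fit(features, labels)
    return clf

def keep_target_regions(clf: SVC, region_features: np.ndarray, regions: list) -> list:
    """Return only the regions the SVM labels as target-specific (label 1)."""
    predictions = clf.predict(region_features)
    return [region for region, pred in zip(regions, predictions) if pred == 1]

# Usage with synthetic data: 200 training regions with 5376-dim SPP vectors
# (256 channels x 21 pyramid cells, matching the sketch above).
rng = np.random.default_rng(0)
train_x = rng.normal(size=(200, 5376))
train_y = rng.integers(0, 2, size=200)
clf = train_region_filter(train_x, train_y)
candidates = list(range(40))                 # placeholder region identifiers
cand_x = rng.normal(size=(40, 5376))
target_patches = keep_target_regions(clf, cand_x, candidates)
```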
