Conference on Radar Sensor Technology

Application of multidomain data fusion, machine learning and feature learning paradigms towards enhanced image-based SAR class vehicle recognition



Abstract

Deep convolutional neural networks (CNNs) provide the sensing and detection community with a discriminative, machine learning based approach for classifying images of objects. However, one of the largest limitations of deep CNN image classifiers is the need for extensive training data covering the varied appearances of each object class. While current methods such as GAN-based data augmentation, noise perturbation, and image rotation or translation allow CNNs to better associate convolved features with those of a learned image class, many fail to provide new ground-truth context for each object class. To expand the association of new convolved feature examples with image classes in CNN training datasets, we propose a feature learning and training data enhancement paradigm based on a multi-sensor domain data augmentation algorithm. This algorithm uses a mutual information, merit-based feature selection subroutine to iteratively select the SAR object features that correlate most strongly with each sensor domain's class image objects. It then re-augments these features into the opposite sensor domain's feature set via a highest mutual information, cross-sensor-domain image concatenation function. The augmented set is then used to retrain the CNN to recognize new cross-domain class object features to which each respective sensor domain's network was not previously exposed. Our experimental results, using T60-class vs. T70-class SAR object images from the MSTAR and MGTD dataset repositories, demonstrated an increase in classification accuracy from 88% (MSTAR) and 61% (MGTD) to 93.75% after training on the augmented, cross-domain fused dataset.
