IEEE Geoscience and Remote Sensing Letters

Deep Neural Network Initialization Methods for Micro-Doppler Classification With Low Training Sample Support


Abstract

Deep neural networks (DNNs) require large-scale labeled data sets to prevent overfitting while having good generalization. In radar applications, however, acquiring a measured data set of the order of thousands is challenging due to constraints on manpower, cost, and other resources. In this letter, the efficacy of two neural network initialization techniques, unsupervised pretraining and transfer learning, for dealing with training DNNs on small data sets is compared. Unsupervised pretraining is implemented through the design of a convolutional autoencoder (CAE), while transfer learning from two popular convolutional neural network architectures (VGGNet and GoogleNet) is used to augment measured RF data for training. A 12-class problem for discrimination of micro-Doppler signatures for indoor human activities is utilized to analyze activation maps, bottleneck features, class model, and classification accuracy with respect to training sample size. Results show that on meager data sets, transfer learning outperforms unsupervised pretraining and random initialization by 10% and 25%, respectively, but that when the sample size exceeds 650, unsupervised pretraining surpasses transfer learning and random initialization by 5% and 10%, respectively. Visualization of activation layers and learned models reveals how the CAE succeeds in representing the micro-Doppler signature.
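The unsupervised-pretraining route described in the abstract can be sketched as follows. This is a minimal illustrative convolutional autoencoder in PyTorch, not the architecture from the letter: the layer sizes, the 64x64 spectrogram shape, and the training hyperparameters are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Illustrative CAE for unsupervised pretraining on micro-Doppler
    spectrograms; layer sizes are assumptions, not the letter's design."""
    def __init__(self):
        super().__init__()
        # Encoder: compress a 1x64x64 spectrogram into bottleneck features.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # -> 16x32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 32x16x16
        )
        # Decoder mirrors the encoder to reconstruct the input.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),       # -> 16x32x32
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1), nn.Sigmoid(),    # -> 1x64x64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Pretrain on unlabeled spectrograms by minimizing reconstruction error;
# afterwards the encoder weights would initialize the classifier's conv layers.
model = ConvAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(8, 1, 64, 64)   # stand-in batch of spectrograms
recon = model(x)
loss = nn.functional.mse_loss(recon, x)
loss.backward()
opt.step()
```

After pretraining, the decoder is discarded and a classification head is attached to the encoder, so the network starts from weights that already represent micro-Doppler structure rather than from random initialization.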
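The transfer-learning alternative compared in the abstract follows the standard freeze-and-retrain pattern. Below is a hedged sketch of that pattern: the tiny stand-in backbone is a placeholder for a network such as VGGNet or GoogLeNet loaded with ImageNet weights (e.g., via torchvision); only the new 12-class head is trained on the small radar data set.

```python
import torch
import torch.nn as nn

# Stand-in "pretrained" backbone; in practice this would be VGGNet or
# GoogLeNet with ImageNet weights. Sizes here are illustrative assumptions.
backbone = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

# Freeze the transferred convolutional layers ...
for p in backbone.parameters():
    p.requires_grad = False

# ... and train only a new head for the 12 activity classes.
head = nn.Linear(8, 12)
model = nn.Sequential(backbone, head)
opt = torch.optim.Adam(head.parameters(), lr=1e-3)

x = torch.rand(4, 3, 64, 64)   # spectrograms replicated to 3 channels
logits = model(x)              # one score per activity class
loss = nn.functional.cross_entropy(logits, torch.randint(0, 12, (4,)))
loss.backward()                # gradients flow only into the new head
opt.step()
```

Because the backbone parameters have `requires_grad=False`, backpropagation updates only the head, which is what makes the approach viable when only a few hundred labeled radar samples are available.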
