Image Processing, IET

Efficient inception V2 based deep convolutional neural network for real-time hand action recognition


Abstract

The most effective and accurate deep convolutional neural network architectures for real-time hand gesture recognition, based on the faster region-based convolutional neural network (Faster R-CNN) Inception V2 model and the single shot detector (SSD) Inception V2 model, are proposed. The proposed models are tested on standard data sets (NUS hand posture data set-II, Senz-3D) and a custom-developed data set (MITI hand data set, MITI-HD). The performance metrics are analysed for intersection over union (IoU) thresholds between 0.5 and 0.95. An IoU threshold of 0.5 yielded higher precision than the other thresholds considered (0.5:0.95 and 0.75). The Faster R-CNN Inception V2 model achieved higher precision (0.990 for AP(all) at IoU = 0.5) than the SSD Inception V2 model (0.984 for AP(all)) on MITI-HD 160. The computation time of the Faster R-CNN Inception V2 model is higher than that of the SSD Inception V2 model, but it also produced fewer mispredictions. Increasing the number of samples (MITI-HD 300) improved AP(all) to 0.991. The improvements in large (APlarge) and medium (APmedium) detections are not significant compared with small (APsmall) detections. It is concluded that the Faster R-CNN Inception V2 model is highly suitable for real-time hand gesture recognition systems in unconstrained environments.
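
The reported metrics follow COCO-style average precision at IoU thresholds of 0.5, 0.75, and the 0.5:0.95 sweep. The sketch below illustrates how a predicted hand box is scored against ground truth at a given IoU threshold; it uses a simplified greedy match rather than the full confidence-ranked COCO protocol, and the box coordinates and helper names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_at_iou(pred_boxes, gt_boxes, threshold=0.5):
    """Fraction of predicted boxes that match an unmatched ground-truth box
    with IoU >= threshold (simplified greedy one-to-one matching)."""
    matched_gt = set()
    true_positives = 0
    for p in pred_boxes:
        best_j, best_iou = -1, 0.0
        for j, g in enumerate(gt_boxes):
            if j in matched_gt:
                continue
            v = iou(p, g)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_iou >= threshold:
            matched_gt.add(best_j)
            true_positives += 1
    return true_positives / len(pred_boxes) if pred_boxes else 0.0

# Example: one predicted hand box against one ground-truth box (illustrative values).
gt = [(50, 40, 200, 220)]
pred = [(55, 45, 210, 225)]
print(precision_at_iou(pred, gt, threshold=0.5))          # counted as correct at IoU = 0.5
print(np.mean([precision_at_iou(pred, gt, threshold=t)    # stricter 0.5:0.95 sweep
               for t in np.arange(0.5, 1.0, 0.05)]))
```

Averaging over the 0.5:0.95 sweep penalises loose localisation, which is consistent with the abstract's observation that precision at IoU = 0.5 alone is higher than at the stricter threshold settings.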
