IEEE International Conference on Advanced Video and Signal Based Surveillance

Adaptive Control of Camera Modality with Deep Neural Network-Based Feedback for Efficient Object Tracking



Abstract

Round-the-clock surveillance requires robust object detection and tracking independent of lighting conditions. Fusing information from a visual-infrared object detection network pair at the feature level or the decision level shows promising accuracy. However, such a fused object detection network is not suitable for edge devices with limited processing power and memory. In this paper, we propose a technique that controls the spatial modality using feedback from the object detection network and creates a mixed-modality image by eliminating the redundancy between the visual and infrared information. The mixed-modality image enables object tracking with a single deep neural network, as opposed to decision-level fusion, which requires two separate networks for the visual and infrared images. The proposed approach achieves at least 8% better object tracking accuracy than decision-level fusion while operating at twice the frame rate and consuming 50% less energy.
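The abstract does not give the paper's exact fusion rule, but the idea of composing a single mixed-modality image from registered visual and infrared frames can be sketched minimally. The following is an illustrative sketch only, assuming a simple per-pixel luminance threshold (`luma_threshold` is a hypothetical parameter, not from the paper): visual pixels are kept where the scene is well lit, and infrared pixels are substituted in under-exposed regions, so a single detection network sees one combined frame.

```python
import numpy as np

def mix_modalities(visual, infrared, luma_threshold=40):
    """Compose one mixed-modality frame from aligned visual and infrared
    images: keep visual pixels where the scene is well lit, substitute
    infrared pixels elsewhere.

    visual   : HxWx3 uint8 RGB frame
    infrared : HxW   uint8 thermal frame, registered to `visual`
    """
    # Per-pixel luminance of the visual frame (ITU-R BT.601 weights).
    luma = (0.299 * visual[..., 0]
            + 0.587 * visual[..., 1]
            + 0.114 * visual[..., 2])
    dark = luma < luma_threshold          # boolean mask of dark regions
    mixed = visual.copy()
    # Broadcast the single infrared channel across RGB in dark regions.
    mixed[dark] = infrared[dark, None]
    return mixed
```

The actual system additionally drives the modality choice with feedback from the detection network rather than a fixed threshold; this sketch only shows the redundancy-eliminating composition step.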

