ACM/IEEE International Conference on Distributed Smart Cameras

LINEAR DYNAMIC DATA FUSION TECHNIQUES FOR FACE ORIENTATION ESTIMATION IN SMART CAMERA NETWORKS



Abstract

Face orientation estimation problems arise in camera-network applications such as human-computer interface (HCI) and person recognition and tracking. In this paper, we propose and compare two collaborative face orientation estimation techniques for smart camera networks, based on the fusion of coarse local estimates in a joint estimation model at the network level. The techniques employ low-complexity methods for in-node face orientation and angular motion estimation to accommodate the computational limitations of smart camera nodes. The local estimates are hence assumed to be coarse and prone to error. In the joint refined estimation phase, the problem is modeled as a discrete-time linear dynamical system, and Linear Quadratic Regulation (LQR) and Kalman Filtering (KF) methods are applied. In the LQR-based analysis, the spatiotemporal consistency between cameras is measured by a cost function composed as a weighted quadratic sum of spatial inconsistency, input energy, and in-node estimation error. Minimizing this cost function through LQR provides a robust closed-loop feedback system that successfully estimates the face orientation, the angular motion, and the relative angular differences to the face between cameras. In the KF-based analysis, the confidence level of each local estimate is used as a weight in the measurement update. This model can be further extended to missing-data cases in which not all local estimates are collected in the network, hence offering flexibility in communication scheduling between nodes. The proposed technique does not require camera locations to be known a priori, and is therefore applicable to vision networks deployed casually without localization.
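The confidence-weighted KF update can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it uses a two-state constant-velocity model [orientation, angular rate], maps each camera's confidence c in (0, 1] to a measurement noise of 1/c (higher confidence, smaller noise), and, for simplicity, treats the relative angular offsets between cameras as already known, whereas the paper estimates them jointly. Cameras whose reports are missing are simply absent from the measurement list, matching the missing-data case mentioned in the abstract.

```python
import numpy as np

# State: [orientation theta, angular velocity omega]; constant-velocity model.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
Q = np.diag([0.01, 0.01])               # process noise (assumed values)
H = np.array([[1.0, 0.0]])              # each camera observes orientation only


def kf_fuse(x, P, measurements):
    """One predict step followed by sequential confidence-weighted updates.

    measurements: list of (z, offset, c) per reporting camera, where z is the
    coarse local orientation estimate, offset the camera's (assumed known)
    angular offset to a common frame, and c its confidence in (0, 1].
    """
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Sequential measurement updates, one per reporting camera
    for z, offset, c in measurements:
        R = np.array([[1.0 / c]])            # low confidence -> large noise
        y = np.array([z - offset]) - H @ x   # innovation in the common frame
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)       # confidence-weighted gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x, P


# Example: three cameras at known angular offsets report noisy readings of
# the same face orientation (about 35 degrees in the common frame).
x0, P0 = np.array([0.0, 0.0]), np.eye(2)
x1, P1 = kf_fuse(x0, P0, [(35.0, 0.0, 0.9),
                          (125.0, 90.0, 0.5),
                          (215.0, 180.0, 0.8)])
```

Because the updates are applied sequentially, a camera that fails to report in a given round is skipped at no extra cost, which is what gives the scheme its scheduling flexibility.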
