European Conference on Computer Vision

Visual Data Fusion for Objects Localization by Active Vision



Abstract

Visual sensors provide only uncertain and partial knowledge of a scene. In this article, we present a scene knowledge representation that makes the integration and fusion of new, uncertain, and partial sensor measurements possible. It is based on a mixture of stochastic and set-membership models. We consider that, for a large class of applications, an approximate representation is sufficient to build a preliminary map of the scene. Our approximation mainly relies on ellipsoidal calculus, by means of a normal assumption for stochastic laws and ellipsoidal outer or inner bounding for uniform laws. These approximations allow us to build an efficient estimation process that integrates visual data online. Based on this estimation scheme, optimal exploratory motions of the camera can be automatically determined. Real-time experimental results validating our approach are finally given.
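Under the normal assumption mentioned in the abstract, each uncertain estimate of a scene point can be summarized by a mean and a covariance whose level sets are confidence ellipsoids, and fusing two such estimates reduces to the standard information-form (inverse-covariance) update. The following minimal sketch illustrates that idea for a 2-D point; it is our own illustration of Gaussian ellipsoidal fusion, not the paper's full stochastic/set-membership scheme, and all function and variable names are hypothetical.

```python
import numpy as np

def fuse_gaussians(mu1, P1, mu2, P2):
    """Fuse two Gaussian estimates (mean, covariance) of the same point.

    In information form, the information matrices (inverse covariances)
    and information vectors simply add, so the fused confidence
    ellipsoid is never larger than either input ellipsoid.
    """
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)        # fused covariance (tighter ellipsoid)
    mu = P @ (I1 @ mu1 + I2 @ mu2)    # fused mean
    return mu, P

# Two measurements of the same scene point, each precise along a
# different axis (as might come from two distinct camera viewpoints).
mu1, P1 = np.array([1.0, 0.0]), np.diag([0.01, 1.0])
mu2, P2 = np.array([0.0, 1.0]), np.diag([1.0, 0.01])
mu, P = fuse_gaussians(mu1, P1, mu2, P2)
# The fused covariance is small in both directions, reflecting how
# complementary viewpoints shrink the localization ellipsoid.
```

Exploratory camera motions of the kind the abstract describes can then be chosen to make successive measurement ellipsoids as complementary as possible, since that is what shrinks the fused ellipsoid fastest.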
