IEEE Transactions on Automatic Control

Tracking a Markov Target in a Discrete Environment With Multiple Sensors


Abstract

In this paper, we consider using multiple noisy binary sensors to track a target that moves as a Markov chain in a finite discrete environment, with symmetric probabilities of false alarm and missed detection. We study two policies. First, we show that the greedy policy, whereby m sensors are placed at the m most likely target locations, is one-step optimal in that it maximizes the expected maximum a posteriori (MAP) estimate. Second, we show that a policy in which the m sensors are placed at the second through (m + 1)st most likely target locations achieves equal or slightly worse expected MAP performance, but leads to significantly decreased variance in the MAP estimate. The result is proven for m = 1, and Monte Carlo simulations give evidence for m > 1. Both policies are closed-loop, index-based active sensing strategies that are computationally trivial to implement. Our approach focuses on one-step optimality because of the apparent intractability of computing an optimal policy via dynamic programming in belief space. However, Monte Carlo simulations suggest that both policies perform well over arbitrary horizons.
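To make the setting concrete, the following is a minimal Python sketch of the model the abstract describes: a target moving as a Markov chain on a small discrete environment, m noisy binary sensors with symmetric false-alarm and missed-detection probability, a standard Bayes filter for the belief, and the two index-based placement policies (greedy top-m versus second through (m+1)st most likely cells). The environment (a ring of N = 20 cells), the noise level, and all other parameters are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of the tracking setup described in the abstract (illustrative
# parameters only; not the authors' code). A target moves as a random walk on a
# ring of N cells, m noisy binary sensors report detections with symmetric
# false-alarm / missed-detection probability eps, and the belief is updated by
# a standard Bayes filter. Two index-based placements are compared: "greedy"
# (top-m belief cells) and "second-best" (2nd through (m+1)st cells).
import numpy as np

rng = np.random.default_rng(0)

N, m, eps, T = 20, 3, 0.2, 200           # cells, sensors, noise level, horizon (assumed)

# Row-stochastic transition matrix for a lazy random walk on a ring (assumed dynamics).
P = np.zeros((N, N))
for i in range(N):
    P[i, i] = 0.5
    P[i, (i - 1) % N] = 0.25
    P[i, (i + 1) % N] = 0.25

def placement(belief, policy):
    """Return the m cells to observe under the given index-based policy."""
    order = np.argsort(belief)[::-1]      # cells sorted by descending belief
    return order[:m] if policy == "greedy" else order[1:m + 1]

def step(belief, target, policy):
    # Predict: propagate the belief and the true target one step.
    belief = belief @ P
    target = int(rng.choice(N, p=P[target]))
    # Observe: each sensor reports a noisy binary detection of its own cell.
    for c in placement(belief, policy):
        hit = target == c
        z = hit != (rng.random() < eps)   # flip the true reading with probability eps
        # Likelihood of z for "target in c" versus "target elsewhere".
        like = np.full(N, eps if z else 1 - eps)
        like[c] = 1 - eps if z else eps
        belief = belief * like
        belief /= belief.sum()
    return belief, target

for policy in ("greedy", "second-best"):
    belief, target = np.full(N, 1.0 / N), int(rng.integers(N))
    map_vals = []
    for _ in range(T):
        belief, target = step(belief, target, policy)
        map_vals.append(belief.max())     # MAP value of the current posterior
    print(f"{policy:12s} mean MAP value = {np.mean(map_vals):.3f}, "
          f"variance = {np.var(map_vals):.4f}")
```

Running the loop prints the mean and variance of the per-step MAP value under each policy, which is the kind of comparison the abstract attributes to its Monte Carlo simulations; it is a sketch of the setup, not a reproduction of the paper's results.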
