
Sensor fusion for interactive real-scale modeling and simulation systems



Abstract

This paper proposes an accurate sensor fusion scheme for navigation inside a real-scale 3D model by combining audio and video signals. The audio signals of a microphone array are combined by the Minimum Variance Distortionless Response (MVDR) algorithm and processed in real time by a Hidden Markov Model (HMM), so that the word-to-action module of the speech processing system can generate translation commands. The output of an optical head tracker (four IR cameras) is then analyzed by a non-linear/non-Gaussian Bayesian algorithm to estimate the orientation of the user's head. This orientation is used to redirect the user toward a new direction by applying a quaternion rotation. The outputs of the two sensors (video and audio) are combined under the sensor fusion scheme to enable continuous traveling inside the model; the highest precision for the traveling task is achieved under this scheme. Practical experiments show promising results for the implementation.
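
The abstract does not give implementation details, so the following is only a minimal sketch of the quaternion redirection step it mentions: a speech-derived translation command, expressed in the user's local frame, is rotated into the world frame by the head-orientation quaternion reported by the optical tracker. The Hamilton quaternion convention, the choice of a local "forward" axis, and all function names are assumptions for illustration, not taken from the paper.

import numpy as np

def quat_multiply(q, r):
    # Hamilton product of quaternions given as (w, x, y, z).
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def rotate_by_quaternion(v, q):
    # Rotate 3-vector v by unit quaternion q: v' = q * (0, v) * conj(q).
    q = q / np.linalg.norm(q)                       # guard against tracker drift
    p = np.concatenate(([0.0], v))                  # embed v as a pure quaternion
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_multiply(quat_multiply(q, p), q_conj)[1:]

# Hypothetical example: the word-to-action module issues "move forward"
# (taken here as the local +z axis) while the head tracker reports a
# 90-degree yaw about the vertical (+y) axis.
forward_local = np.array([0.0, 0.0, 1.0])
yaw_90 = np.array([np.cos(np.pi / 4), 0.0, np.sin(np.pi / 4), 0.0])
travel_direction = rotate_by_quaternion(forward_local, yaw_90)
print(travel_direction)   # approximately [1. 0. 0.]

In this toy case the local forward direction (0, 0, 1) is redirected to (1, 0, 0) by the yaw reported for the user's head, which is the kind of reorientation the paper applies before the fused audio and video commands drive continuous travel through the model.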

