International Journal of Computer Vision
Multi-sensory and multi-modal fusion for sentient computing

Abstract

This paper presents an approach to multi-sensory and multi-modal fusion in which computer vision information obtained from calibrated cameras is integrated with a large-scale sentient computing system known as "SPIRIT". The SPIRIT system employs an ultrasonic location infrastructure to track people and devices in an office building and model their state. Vision techniques include background and object appearance modelling, face detection, segmentation, and tracking modules. Integration is achieved at the system level through the metaphor of shared perceptions, in the sense that the different modalities are guided by and provide updates to a shared world model. This model incorporates aspects of both the static (e.g. positions of office walls and doors) and the dynamic (e.g. location and appearance of devices and people) environment. Fusion and inference are performed by Bayesian networks that model the probabilistic dependencies and reliabilities of different sources of information over time. It is shown that the fusion process significantly enhances the capabilities and robustness of both sensory modalities, thus enabling the system to maintain a richer and more accurate world model.
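The core idea of combining an ultrasonic location estimate with a camera-based one, weighted by their reliabilities, can be illustrated with a minimal sketch. This is not the paper's SPIRIT implementation or its Bayesian networks; it shows only the simplest building block, precision-weighted fusion of two independent Gaussian estimates, and the sensor readings and variances below are invented for illustration.

```python
def fuse_gaussian(mu_a, var_a, mu_b, var_b):
    # Precision-weighted product of two independent Gaussian estimates:
    # the Bayes-optimal combination under a linear-Gaussian model.
    prec_a, prec_b = 1.0 / var_a, 1.0 / var_b
    var = 1.0 / (prec_a + prec_b)
    mu = var * (prec_a * mu_a + prec_b * mu_b)
    return mu, var

# Hypothetical 1-D position estimates (metres) for one tracked person:
mu_us, var_us = 3.10, 0.09    # ultrasonic badge: coarser
mu_cam, var_cam = 2.90, 0.04  # camera tracker: tighter when a track is held

mu, var = fuse_gaussian(mu_us, var_us, mu_cam, var_cam)
# The fused estimate lies between the two readings, pulled toward the
# more reliable sensor, and its variance is smaller than either input's.
```

In the paper's setting this kind of update runs over time and over many variables at once, with the Bayesian networks also modelling how each source's reliability varies (e.g. a camera track being lost), which a single static fusion step does not capture.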
