
Learning Spatial Event Models from Multiple-Camera Perspectives


Abstract

Intelligent environments promise to drastically change our everyday lives by connecting computation to the ordinary, human-level events happening in the real world. This paper describes a new model for tracking people in an intelligent room through a multi-camera vision system that learns to combine event predictions from multiple video streams. The system is intended to locate and track people in the room, determine their postures, and obtain images of their faces and upper bodies suitable for use during teleconferencing. This paper describes the design and architecture of the vision system and its use in Hal, our most recently constructed intelligent room.
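The abstract notes that the system "learns to combine event predictions from multiple video streams." The paper's actual combination method is not given here; the following is only a minimal illustrative sketch of one plausible scheme, fusing per-camera location predictions with learned per-camera confidence weights. All names (CameraPrediction, fuse_predictions) are hypothetical.

```python
# Illustrative sketch only (not the paper's method): fuse per-camera
# location predictions into one room-level estimate using learned weights.
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraPrediction:
    position: np.ndarray   # predicted (x, y) room coordinates from one camera
    confidence: float      # learned reliability weight for this camera/view

def fuse_predictions(preds: list[CameraPrediction]) -> np.ndarray:
    """Combine per-camera predictions as a confidence-weighted average."""
    weights = np.array([p.confidence for p in preds])
    weights = weights / weights.sum()                  # normalize weights
    positions = np.stack([p.position for p in preds])  # shape (n_cams, 2)
    return weights @ positions                         # weighted mean position

# Example: three cameras disagree slightly; the fused estimate
# leans toward the more reliable views.
cams = [
    CameraPrediction(np.array([2.0, 3.1]), confidence=0.9),
    CameraPrediction(np.array([2.4, 2.8]), confidence=0.6),
    CameraPrediction(np.array([1.8, 3.3]), confidence=0.3),
]
print(fuse_predictions(cams))
```

In practice such weights could be learned per camera and per region of the room, since a given view is more reliable for some locations and postures than others; again, this is an assumption for illustration, not a claim about the system described in the paper.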
