
Interactive full-body motion capture using infrared sensor network.



Abstract

Traditional motion capture (mocap) has been well studied in visual science for a long time, and more techniques are introduced each year to improve the quality of mocap data. However, until a few years ago the field was mostly concerned with capturing precise animation to be used, after post-processing, in different applications such as studying biomechanics or rigging models in movies. These data sets are normally captured in complex laboratory environments with sophisticated equipment, making motion capture a field that is mostly exclusive to professional animators. In addition, obtrusive sensors must be attached to actors and calibrated within the capturing system, resulting in limited and unnatural motion. In recent years, the rise of computer vision and interactive entertainment has opened the gate for a different type of motion capture, one that requires neither markers nor mechanical sensors. Furthermore, a wide array of low-cost devices with primitive and limited functions has been released that are easy to use for less mission-critical applications. Besides the traditional problems of markerless systems, such as data synchronization and occlusion, these devices also have other limitations such as low resolution, excessive signal noise, and a narrow tracking range. In this thesis I describe a new technique that processes data from multiple infrared sensors to enhance the flexibility and accuracy of markerless mocap. The method analyzes each individual sensor's data and decomposes and rebuilds it into a uniform skeleton shared across all sensors. We then assign criteria that define the confidence level of the signal captured by each sensor. Each sensor operates in its own process and communicates through MPI. After each sensor provides its data to the main process, we synchronize the data from all sensors into the same coordinate space. Finally, we rebuild the final skeleton representation by selecting a combination of the most confident information from all sensors. Our method emphasizes minimal computational overhead for better real-time performance while maintaining good scalability. The specific contributions of this thesis are as follows. First, the technique offers more accurate and precise mocap by ensuring that every tracked joint is properly covered by at least one sensor at all times. Second, it alleviates intrinsic shortcomings of the devices, such as noise and occlusion. Third, it provides flexibility beyond the geometric range limitation of a single sensor, allowing the actor greater freedom of movement. Finally, it does not require lengthy calibration or pre-processing procedures, making the setup straightforward and easy to deploy in many application cases.
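The fusion pipeline described above (per-sensor processes, MPI communication, transformation into a shared coordinate space, and confidence-based joint selection) can be illustrated with a minimal sketch. This is not the thesis implementation; it assumes mpi4py and NumPy, and the helpers read_sensor_skeleton and to_common_frame, the joint count, and the confidence scale are hypothetical placeholders.

# Minimal sketch of multi-sensor skeleton fusion over MPI (assumptions noted above).
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()      # rank 0 acts as the main process; every other rank owns one sensor

JOINT_COUNT = 20            # assumed size of the uniform skeleton shared by all sensors

def read_sensor_skeleton(sensor_rank):
    """Hypothetical: capture one frame from this sensor.

    Returns (positions, confidences):
      positions   -- (JOINT_COUNT, 3) joint coordinates in the sensor's local frame
      confidences -- (JOINT_COUNT,) per-joint tracking confidence in [0, 1]
    """
    return np.zeros((JOINT_COUNT, 3)), np.zeros(JOINT_COUNT)

def to_common_frame(positions, sensor_rank):
    """Hypothetical: rigid transform from this sensor's frame into the shared space."""
    return positions

if rank == 0:
    # Main process: collect one frame from every sensor process.
    frames = [f for f in comm.gather(None, root=0) if f is not None]
    fused = np.zeros((JOINT_COUNT, 3))
    for j in range(JOINT_COUNT):
        # For each joint, keep the reading reported with the highest confidence.
        pos, conf = max(frames, key=lambda f: f[1][j])
        fused[j] = pos[j]
    # 'fused' is the rebuilt skeleton for this frame.
else:
    pos, conf = read_sensor_skeleton(rank)
    comm.gather((to_common_frame(pos, rank), conf), root=0)

Launched, for example, with mpirun -n 4 python fusion_sketch.py, this would run three sensor processes plus one fusion process; a real system would repeat the gather-and-fuse step per frame and use each sensor's actual transform into the shared coordinate space.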

Bibliographic details

  • Author

    Duong, Son Trong.

  • Author affiliation

    University of Colorado at Denver.

  • Degree-granting institution: University of Colorado at Denver.
  • Subject: Computer Science.
  • Degree: M.S.
  • Year: 2012
  • Pages: 62 p.
  • Total pages: 62
  • Format: PDF
  • Language: eng
  • Chinese Library Classification: Petroleum and natural gas industry
  • Keywords
