Multi-modal Event Streams for Virtual Reality

Abstract

Applications in the fields of virtual and augmented reality as well as image-guided medical applications make use of a wide variety of hardware devices. Existing frameworks for interconnecting low-level devices and high-level application programs do not exploit the full potential for processing events coming from arbitrary sources and are not easily generalizable. In this paper, we will introduce a new multi-modal event processing methodology using dynamically-typed event attributes for event passing between multiple devices and systems. The existing OpenTracker framework was modified to incorporate a highly flexible and extensible event model, which can store data that is dynamically created and arbitrarily typed at runtime. The main factors impacting the library's throughput were determined and the performance was shown to be sufficient for most typical applications. Several sample applications were developed to take advantage of the new dynamic event model provided by the library, thereby demonstrating its flexibility and expressive power.
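The abstract does not give implementation details; the following is only a minimal C++17 sketch of what an event with dynamically-typed, runtime-created attributes could look like. The Event class, its setAttribute/getAttribute methods, and the attribute names below are illustrative assumptions and are not the actual OpenTracker API.

    // Sketch (assumption, not the OpenTracker event API): an event whose
    // attributes are created at runtime and may hold arbitrary copyable types.
    #include <any>
    #include <iostream>
    #include <map>
    #include <stdexcept>
    #include <string>
    #include <vector>

    class Event {
    public:
        // Create or overwrite an attribute of any copyable type.
        template <typename T>
        void setAttribute(const std::string& name, const T& value) {
            attributes_[name] = value;
        }

        // Retrieve an attribute; throws if it is missing or the type differs.
        template <typename T>
        const T& getAttribute(const std::string& name) const {
            auto it = attributes_.find(name);
            if (it == attributes_.end())
                throw std::out_of_range("no attribute named " + name);
            return std::any_cast<const T&>(it->second);
        }

        bool hasAttribute(const std::string& name) const {
            return attributes_.count(name) != 0;
        }

    private:
        std::map<std::string, std::any> attributes_;
    };

    int main() {
        // One event can carry tracking data plus, e.g., a button state added by
        // a different device, without a fixed compile-time event layout.
        Event e;
        e.setAttribute("position", std::vector<float>{0.1f, 1.5f, -0.3f});
        e.setAttribute("button", 1);
        e.setAttribute("confidence", 0.87);

        if (e.hasAttribute("button"))
            std::cout << "button = " << e.getAttribute<int>("button") << "\n";
        std::cout << "confidence = " << e.getAttribute<double>("confidence") << "\n";
        return 0;
    }

The map-of-std::any design is one plausible reading of "dynamically created and arbitrarily typed at runtime"; the paper itself may use a different attribute container or type registry.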