IEEE International Symposium on Mixed and Augmented Reality

Unified Visual Perception Model for context-aware wearable AR


Abstract

We propose the Unified Visual Perception Model (UVPM), which imitates the human visual perception process, to provide the stable object recognition required for augmented reality (AR) in the field. The proposed model is designed on theoretical foundations from cognitive informatics, brain research, and psychological science. It consists of a Working Memory (WM), in charge of low-level processing (in a bottom-up manner), and a Long-Term Memory (LTM) and Short-Term Memory (STM), which are in charge of high-level processing (in a top-down manner). WM and LTM/STM are mutually complementary, which increases recognition accuracy. By implementing an initial prototype of each box of the model, we confirmed that the proposed model works for stable object recognition. The proposed model can support context-aware AR with an optical see-through HMD.
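The abstract describes an architecture rather than a concrete algorithm, but the interaction it names (bottom-up WM evidence fused with top-down STM/LTM context) can be sketched. The Python sketch below is purely illustrative: the class names WorkingMemory, ShortTermMemory, and LongTermMemory, the keyword-based feature representation, and the multiplicative fusion rule are all assumptions for exposition, not the paper's implementation.

# Minimal, illustrative sketch of the UVPM control flow described in the
# abstract. Representations and the fusion rule are assumed, not from the paper.
from collections import Counter


class WorkingMemory:
    """Bottom-up, low-level processing: scores candidate labels against
    observed features. Keyword overlap is an assumed stand-in for real
    visual feature matching."""

    def recognize(self, observed_features, templates):
        scores = {}
        for label, template in templates.items():
            overlap = len(set(observed_features) & set(template))
            scores[label] = overlap / max(len(template), 1)
        return scores


class ShortTermMemory:
    """Recently recognized labels; biases recognition toward the current
    context (top-down)."""

    def __init__(self, capacity=5):
        self.recent = []
        self.capacity = capacity

    def remember(self, label):
        self.recent = (self.recent + [label])[-self.capacity:]

    def prior(self, label):
        # Assumed rule: labels seen recently are weighted up.
        return 1.0 + Counter(self.recent)[label]


class LongTermMemory:
    """Stored object knowledge: label -> feature template."""

    def __init__(self, templates):
        self.templates = templates


def uvpm_recognize(frame_features, wm, stm, ltm):
    """Fuse bottom-up WM evidence with top-down STM/LTM priors.
    Multiplicative fusion is an assumption standing in for the paper's
    'mutually complementary' WM and LTM/STM interaction."""
    bottom_up = wm.recognize(frame_features, ltm.templates)
    fused = {lbl: s * stm.prior(lbl) for lbl, s in bottom_up.items()}
    best = max(fused, key=fused.get)
    stm.remember(best)  # feed the decision back as future context
    return best, fused


if __name__ == "__main__":
    ltm = LongTermMemory({
        "mug": ["handle", "cylinder", "ceramic"],
        "monitor": ["screen", "bezel", "stand"],
    })
    wm, stm = WorkingMemory(), ShortTermMemory()
    label, scores = uvpm_recognize(["handle", "ceramic"], wm, stm, ltm)
    print(label, scores)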
