AAAI Workshop on Contexts and Ontologies: Theory, Practice and Applications

An Application-Oriented Context Pre-fetch Method for Improving Inference Performance in Ontology-based Context Management



Abstract

Ontology-based context models are widely used in ubiquitous computing environments. In addition to benefits such as context sharing, context reuse, and the acquisition of conceptual context through inference, an ontology-based context model lets context-aware applications use conceptual contexts that cannot be acquired directly from sensors. However, inference introduces a processing delay that is a major obstacle for context-aware applications, and the delay grows as the amount of context managed by the context management system increases. In this paper, we propose a method that reduces the size of the context database in order to speed up inference. We extend the query-tree method to determine statically which contexts are required to answer specific queries from applications. By introducing context types into the query tree, the proposed scheme selects the relevant contexts more precisely, so inference is performed faster without losing the benefits of the ontology.
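The abstract only outlines the approach, but its core idea, statically collecting the context types an application's queries can touch and letting the reasoner see only the matching subset of the context database, can be illustrated with a minimal sketch. Every name below (ContextTriple, QueryNode, collect_types, prefetch_relevant) is a hypothetical illustration under that reading, not the paper's actual implementation:

```python
# Minimal sketch of the pre-fetch idea: statically walk an
# application's query tree, collect the context types it can touch,
# and hand the reasoner only the matching subset of the context DB.
# All names here are hypothetical illustrations, not the paper's code.

from dataclasses import dataclass
from typing import List, Set, Tuple

@dataclass(frozen=True)
class ContextTriple:
    subject: str
    predicate: str
    obj: str
    context_type: str          # e.g. "Location", "Device", "Activity"

@dataclass(frozen=True)
class QueryNode:
    context_type: str          # type annotation added to the query tree
    children: Tuple["QueryNode", ...] = ()

def collect_types(node: QueryNode) -> Set[str]:
    """Static analysis step: gather every context type reachable
    from the application's query tree."""
    types = {node.context_type}
    for child in node.children:
        types |= collect_types(child)
    return types

def prefetch_relevant(store: List[ContextTriple],
                      required: Set[str]) -> List[ContextTriple]:
    """Pre-fetch step: keep only triples whose type can contribute
    to an answer, shrinking the DB the reasoner must load."""
    return [t for t in store if t.context_type in required]

if __name__ == "__main__":
    store = [
        ContextTriple("alice", "locatedIn", "room101", "Location"),
        ContextTriple("room101", "partOf", "building7", "Location"),
        ContextTriple("alice", "uses", "projector1", "Device"),
        ContextTriple("bob", "doing", "meeting", "Activity"),
    ]
    # A query like "which building is alice in?" only touches Location.
    query_tree = QueryNode("Location", (QueryNode("Location"),))
    required = collect_types(query_tree)
    subset = prefetch_relevant(store, required)
    print(subset)   # the reasoner sees 2 triples instead of 4
```

Because the type analysis runs once, before runtime queries arrive, the per-query inference cost in this sketch depends on the filtered subset rather than on the full context store, which matches the speed-up the abstract claims.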
