International Semantic Web Conference

A Multi-reasoner, Justification-Based Approach to Reasoner Correctness



Abstract

OWL 2 DL is a complex logic whose reasoning problems have high worst-case complexity. Modern reasoners nevertheless perform very well on most naturally occurring ontologies of varying sizes and complexity. This performance is achieved through a suite of complex optimisations (with complex interactions) and elaborate engineering. While the formal basis of the core reasoner procedures is well understood, many optimisations are less so, and most of the engineering details (and their possible effect on reasoner correctness) are unreviewed by anyone but the reasoner developer. Thus, it is unclear how much confidence should be placed in the correctness of implemented reasoners. To date, there is no principled, unit-test-like correctness suite for simple language features and, even if there were, it is unclear that passing such a suite would say much about correctness on naturally occurring ontologies. This problem is not merely theoretical: divergence in behaviour (and thus known bugginess of implementations) has been observed in the OWL Reasoner Evaluation (ORE) contests, to the point where a simple majority-voting procedure has been put in place to resolve disagreements. In this paper, we present a new technique for finding and resolving reasoner disagreements. We use justifications to cross-check disagreements; some cases are resolved automatically, while others need to be manually verified. We evaluate the technique on a corpus of naturally occurring ontologies and a set of popular reasoners. We successfully identify several correctness bugs across different reasoners, identify causes for most of them, and generate appropriate bug reports and patches to the ontologies to work around the bugs.
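The majority-voting fallback that the abstract attributes to the ORE contests can be sketched in a few lines of Python. This is an illustrative sketch only: the reasoner names, the verdict data, and the `majority_vote` helper are hypothetical and not taken from the paper or from ORE infrastructure.

```python
from collections import Counter

def majority_vote(verdicts):
    """Resolve a reasoner disagreement on a single entailment test
    by simple majority, returning the winning verdict and the names
    of the dissenting reasoners (those flagged for investigation)."""
    counts = Counter(verdicts.values())
    winner, _ = counts.most_common(1)[0]
    dissenters = [name for name, v in verdicts.items() if v != winner]
    return winner, dissenters

# Hypothetical verdicts from three reasoners on one entailment query.
verdicts = {
    "ReasonerA": True,
    "ReasonerB": True,
    "ReasonerC": False,
}

result, dissenters = majority_vote(verdicts)
# result is True; dissenters == ["ReasonerC"]
```

The paper's contribution goes beyond this baseline: rather than trusting the majority outright, justifications (minimal entailing axiom subsets) are extracted and cross-checked against the dissenting reasoners, so that some disagreements can be resolved with evidence rather than by vote.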

