Meeting on Inconsistency Tolerance; 2003; Dagstuhl (DE)

Approaches to Measuring Inconsistent Information



Abstract

Measures of the quantity of information have been studied extensively for more than fifty years. The seminal work on information theory is due to Shannon. This work, based on probability theory, can be used in a logical setting when the worlds are taken as the possible events. It is also the basis of Lozinskii's work for defining the quantity of information of a formula (or knowledgebase) in propositional logic. But this definition is not suitable when the knowledgebase is inconsistent: in that case it has no classical model, so there is no "event" to count. This is a shortcoming, since in practical applications (e.g. databases) it often happens that the knowledgebase is not consistent. And it is certainly not true that all inconsistent knowledgebases contain the same (null) amount of information, as classical information theory would suggest. As the paraconsistent logic community has explored for several years, two inconsistent knowledgebases can lead to very different conclusions, showing that they do not convey the same information. There has been some recent interest in this issue, with several interesting proposals, though a general approach to information theory for (possibly inconsistent) logical knowledgebases is still missing. Another related measure is the measure of contradiction. In classical logic it is usual to use a binary measure of contradiction: a knowledgebase is either consistent or inconsistent. This dichotomy is natural when the only deductive tool is classical inference, since inconsistent knowledgebases are then of no use. But a number of logics have now been developed to draw non-trivial conclusions from an inconsistent knowledgebase, so this dichotomy is no longer sufficient to describe the amount of contradiction in a knowledgebase; more fine-grained measures are needed, and some interesting proposals have been made for this. The main aim of this paper is to review the measures of information and contradiction and to study some potential practical applications. This has significant potential for developing intelligent systems that can tolerate inconsistencies when reasoning with real-world knowledge.
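The central technical point of the abstract, that model counting breaks down on an inconsistent knowledgebase, can be illustrated with a minimal sketch. The Python snippet below (the clausal encoding and helper names are illustrative assumptions, not taken from the paper) computes the model-counting quantity of information I(K) = n - log2|Mod(K)| that the abstract attributes to Lozinskii, over a fixed set of n propositional atoms, and shows that an inconsistent knowledgebase leaves the measure undefined because |Mod(K)| = 0.

```python
from itertools import product
from math import log2

# A knowledgebase is a list of clauses; a clause is a set of literals,
# where a literal is a pair (atom, polarity): ("p", True) means p,
# ("p", False) means not p.

def models(knowledgebase, atoms):
    """Enumerate the classical models of the knowledgebase over the given atoms."""
    for values in product([False, True], repeat=len(atoms)):
        world = dict(zip(atoms, values))
        # A world is a model if every clause contains at least one satisfied literal.
        if all(any(world[a] == pol for a, pol in clause) for clause in knowledgebase):
            yield world

def quantity_of_information(knowledgebase, atoms):
    """Model-counting information measure: n - log2(|Mod(K)|) for a consistent K
    over n atoms. Returns None when K has no classical model, i.e. the measure
    is undefined for an inconsistent knowledgebase."""
    count = sum(1 for _ in models(knowledgebase, atoms))
    if count == 0:
        return None  # inconsistent: no "event" left to count
    return len(atoms) - log2(count)

atoms = ["p", "q"]
consistent = [{("p", True)}, {("q", True)}]      # p and q: one model out of four
inconsistent = [{("p", True)}, {("p", False)}]   # p and not p: no model

print(quantity_of_information(consistent, atoms))    # 2.0 bits
print(quantity_of_information(inconsistent, atoms))  # None
```

Enumerating all 2^n interpretations is exponential, so a realistic implementation would delegate the counting to a #SAT solver; the brute-force version above merely keeps the example self-contained and makes the inconsistent case explicit.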
