
A Logic For Inductive Probabilistic Reasoning


Abstract

Inductive probabilistic reasoning is understood as the application of inference patterns that use statistical background information to assign (subjective) probabilities to single events. The simplest such inference pattern is direct inference: from “70% of As are Bs” and “a is an A”, infer that a is a B with probability 0.7. Direct inference is generalized by Jeffrey’s rule and the principle of cross-entropy minimization. Adequately formalizing inductive probabilistic reasoning is an interesting topic for artificial intelligence, since an autonomous system acting in a complex environment may have to base its actions on a probabilistic model of its environment, and the probabilities needed to form this model can often be obtained by combining statistical background information with particular observations, i.e., by inductive probabilistic reasoning. In this paper a formal framework for inductive probabilistic reasoning is developed: syntactically it consists of an extension of the language of first-order predicate logic that allows one to express statements about both statistical and subjective probabilities. Semantics for this representation language are developed that give rise to two distinct entailment relations: a relation ⊨ that models strict, probabilistically valid inferences, and a second relation that models inductive probabilistic inferences. The inductive entailment relation is obtained by implementing cross-entropy minimization in a preferred-model semantics. A main objective of our approach is to ensure that complete proof systems exist for both entailment relations. This is achieved by allowing probability distributions in our semantic models that use non-standard probability values. A number of results are presented showing that in several important respects the resulting logic behaves just like a logic based on real-valued probabilities alone.
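The two inference patterns named in the abstract can be illustrated concretely. The following is a minimal sketch, not the paper's formalism (the distribution and all names are illustrative): Jeffrey's rule applied to a small discrete distribution, with direct inference recovered as the special case of learning “a is an A” with certainty. The Jeffrey update is also the cross-entropy (KL) minimizer among all distributions satisfying the revised constraints on the partition.

```python
def jeffrey_update(prior, partition, q):
    """Jeffrey's rule: reweight each partition cell E_i to the revised
    probability q_i, preserving conditional probabilities within cells.

    prior:     dict mapping worlds to probabilities
    partition: list of sets of worlds (the evidence partition {E_i})
    q:         revised probabilities, one per cell, summing to 1
    """
    post = {}
    for cell, qi in zip(partition, q):
        p_cell = sum(prior[w] for w in cell)  # prior P(E_i)
        for w in cell:
            post[w] = qi * prior[w] / p_cell  # P'(w) = q_i * P(w | E_i)
    return post

# Worlds encode truth values of A and B: "AB" = A and B, "Ab" = A and not-B, ...
# Statistical background: P(B | A) = 0.35 / 0.5 = 0.7 ("70% of As are Bs").
prior = {"AB": 0.35, "Ab": 0.15, "aB": 0.20, "ab": 0.30}
partition = [{"AB", "Ab"}, {"aB", "ab"}]  # the partition {A, not-A}

# Uncertain evidence: observation raises P(A) from 0.5 to 0.8.
posterior = jeffrey_update(prior, partition, [0.8, 0.2])

# Direct inference is the limit case q = (1, 0): learning "a is an A"
# with certainty yields P'(B) = P(B | A) = 0.7.
certain = jeffrey_update(prior, partition, [1.0, 0.0])
```

Within each cell the conditionals are untouched, so here P'(B | A) remains 0.7 after either update; this invariance is exactly what cross-entropy minimization under partition constraints enforces.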

Record Details

  • Source
    Synthese | 2005, Issue 2 | pp. 181-248 (68 pages)
  • Author
    Manfred Jaeger
  • Affiliation
    Department of Computer Science, Aalborg University
  • Format: PDF
  • Language: English

