JMLR: Workshop and Conference Proceedings

On Learning Causal Structures from Non-Experimental Data without Any Faithfulness Assumption


Abstract

Consider the problem of learning, from non-experimental data, the causal (Markov equivalence) structure of the true, unknown causal Bayesian network (CBN) on a given, fixed set of (categorical) variables. This learning problem is known to be very hard, so much so that there is no learning algorithm that converges to the truth for all possible CBNs (on the given set of variables). So the convergence property has to be sacrificed for some CBNs—but for which? In response, the standard practice has been to design and employ learning algorithms that secure the convergence property for at least all the CBNs that satisfy the famous faithfulness condition, which implies sacrificing the convergence property for some CBNs that violate the faithfulness condition (Spirtes, Glymour, and Scheines, 2000). This standard design practice can be justified by assuming—that is, accepting on faith—that the true, unknown CBN satisfies the faithfulness condition. But the real question is this: Is it possible to explain, without assuming the faithfulness condition or any of its weaker variants, why it is mandatory rather than optional to follow the standard design practice? This paper aims to answer the above question in the affirmative. We first define an array of modes of convergence to the truth as desiderata that might or might not be achieved by a causal learning algorithm. Those modes of convergence concern (i) how pervasive the domain of convergence is on the space of all possible CBNs and (ii) how uniformly the convergence happens. Then we prove a result to the following effect: for any learning algorithm that tackles the causal learning problem in question, if it achieves the best achievable mode of convergence (considered in this paper), then it must follow the standard design practice of converging to the truth for at least all CBNs that satisfy the faithfulness condition—it is a requirement, not an option.
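The faithfulness condition mentioned above requires that every conditional independence in the observed distribution be entailed by the structure of the true CBN; a violation occurs when causal effects along different paths cancel, producing an independence the graph does not imply. The abstract concerns categorical variables, but the cancellation phenomenon is easiest to see in a linear-Gaussian sketch (an illustrative assumption, not an example from the paper): X directly causes Z and also causes Z through Y, with coefficients tuned so the two effects cancel.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Structural equations for the graph X -> Y -> Z with an extra edge X -> Z.
a, b = 0.8, 0.5
c = -a * b  # chosen so the path X->Y->Z exactly cancels the direct edge X->Z

X = rng.normal(size=n)
Y = a * X + rng.normal(size=n)
Z = b * Y + c * X + rng.normal(size=n)

# X and Y are strongly dependent, as the graph implies...
print(np.corrcoef(X, Y)[0, 1])
# ...yet X and Z are (marginally) uncorrelated despite X being a direct
# cause of Z: an "unfaithful" independence, since cov(X, Z) = a*b + c = 0.
print(np.corrcoef(X, Z)[0, 1])  # ≈ 0.0
```

A constraint-based learner that trusts the vanishing correlation would wrongly delete the edge X → Z, which is exactly the kind of CBN for which standard algorithms sacrifice convergence.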
