Learning Sparse Causal Models is not NP-hard

Conference on Uncertainty in Artificial Intelligence

Abstract

This paper shows that causal model discovery is not an NP-hard problem, in the sense that for sparse graphs bounded by node degree k the sound and complete causal model can be obtained in worst-case order N^(2(k+2)) independence tests, even when latent variables and selection bias may be present. We present a modification of the well-known FCI algorithm that implements the method for an independence oracle, and suggest improvements for sample/real-world data versions. It does not contradict any known hardness results, and does not solve an NP-hard problem: it just proves that sparse causal discovery is perhaps more complicated, but not as hard as learning minimal Bayesian networks.
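To give a rough sense of why a degree bound keeps the number of independence tests polynomial, here is a minimal Python sketch of a PC-style skeleton search run against an independence oracle, with conditioning sets capped at the assumed degree bound k. This is illustrative only and is not the paper's modification of FCI; the names skeleton_search and indep_oracle, and the toy chain oracle at the end, are invented for the example.

from itertools import combinations

def skeleton_search(nodes, indep_oracle, k):
    # Undirected skeleton search with conditioning sets capped at size k.
    # indep_oracle(x, y, cond) returns True iff x is independent of y given cond.
    edges = {frozenset((x, y)) for x, y in combinations(nodes, 2)}
    sepsets = {}
    for depth in range(k + 1):              # conditioning set sizes 0, 1, ..., k
        for edge in list(edges):
            x, y = sorted(edge)
            adj_x = {z for e in edges if x in e for z in e} - {x, y}
            if len(adj_x) < depth:
                continue
            for cond in combinations(sorted(adj_x), depth):
                if indep_oracle(x, y, set(cond)):   # one independence test
                    edges.discard(edge)
                    sepsets[edge] = set(cond)
                    break
    return edges, sepsets

# Toy usage: chain A - B - C, whose only independence is A _||_ C given B.
oracle = lambda x, y, cond: {x, y} == {"A", "C"} and "B" in cond
skeleton, seps = skeleton_search(["A", "B", "C"], oracle, k=2)
print(sorted(sorted(e) for e in skeleton))   # [['A', 'B'], ['B', 'C']]
print(seps)                                  # separating set for A - C is {'B'}

Because every conditioning set is drawn from a node's current neighbours and capped at k, the number of tests grows only polynomially in the number of variables for bounded-degree graphs, which is the intuition behind the N^(2(k+2)) worst-case bound quoted in the abstract.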
