AAAI Conference on Artificial Intelligence Workshops

Toward Caching Symmetrical Subtheories for Weighted Model Counting

Abstract

Model counting and weighted model counting are key problems in artificial intelligence. Marginal inference can be reduced to model counting in many statistical-relational systems, such as Markov Logic. One common approach used by model counters is splitting a theory into disjoint subtheories, performing model counting on the subtheories, and then caching the result. If an identical subtheory is encountered again in the search, the cached result is used, greatly reducing runtime. In this work we introduce a way to cache symmetric subtheories compactly, which could potentially decrease required cache size, increase cache hits, and decrease runtime of solving.
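
The caching scheme the abstract refers to is often called component caching. Below is a minimal sketch, not the authors' system: an unweighted #SAT counter, assuming CNF clauses given as lists of signed integers, that splits a theory into variable-disjoint subtheories, counts each one, and caches results under a canonical key so an identical subtheory encountered again in the search is answered from the cache. The paper's contribution, caching symmetric rather than only identical subtheories, is not implemented here; all function names are illustrative.

# Minimal component-caching model counter (unweighted #SAT).
# Clauses are lists of signed integers, e.g. [1, -2] means (x1 OR NOT x2).

def var_set(clauses):
    return {abs(lit) for clause in clauses for lit in clause}

def components(clauses):
    # Partition clauses into variable-disjoint subtheories via union-find.
    parent = {}
    def find(x):
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for clause in clauses:
        first = abs(clause[0])
        for lit in clause[1:]:
            parent[find(abs(lit))] = find(first)
    groups = {}
    for clause in clauses:
        groups.setdefault(find(abs(clause[0])), []).append(clause)
    return list(groups.values())

def assign(clauses, lit):
    # Set lit to true: drop satisfied clauses, shrink the rest.
    # Returns None if an empty (unsatisfiable) clause is produced.
    out = []
    for clause in clauses:
        if lit in clause:
            continue
        reduced = [l for l in clause if l != -lit]
        if not reduced:
            return None
        out.append(reduced)
    return out

cache = {}

def count(clauses):
    # Number of satisfying assignments over the variables occurring in clauses.
    if clauses is None:
        return 0
    if not clauses:
        return 1
    key = tuple(sorted(tuple(sorted(c)) for c in clauses))  # canonical form
    if key in cache:                 # identical subtheory seen before: cache hit
        return cache[key]
    comps = components(clauses)
    if len(comps) > 1:               # disjoint subtheories: counts multiply
        result = 1
        for comp in comps:
            result *= count(comp)
    else:
        vars_here = var_set(clauses)
        v = min(vars_here)           # naive branching heuristic
        result = 0
        for lit in (v, -v):
            reduced = assign(clauses, lit)
            if reduced is not None:
                # Variables that vanished from the formula are unconstrained.
                freed = len(vars_here) - len(var_set(reduced)) - 1
                result += count(reduced) * 2 ** freed
    cache[key] = result
    return result

For example, count([[1, 2], [3, -4]]) splits into the disjoint subtheories (x1 ∨ x2) and (x3 ∨ ¬x4), each with 3 models, and returns 9; if either subtheory reappears later in the search it is answered from the cache. A weighted counter would follow the same structure but multiply in literal weights at each branch and replace the 2**freed factor with a product of (w(v) + w(¬v)) over the freed variables.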