Asian Conference on Computer Vision

Fast and Differentiable Message Passing on Pairwise Markov Random Fields



Abstract

Despite the availability of many Markov Random Field (MRF) optimization algorithms, their widespread usage is currently limited due to imperfect MRF modelling arising from hand-crafted model parameters and the selection of inferior inference algorithms. In addition to differentiability, the two main aspects that enable learning these model parameters are the forward and backward propagation time of the MRF optimization algorithm and its inference capabilities. In this work, we introduce two fast and differentiable message passing algorithms, namely, Iterative Semi-Global Matching Revised (ISGMR) and Parallel Tree-Reweighted Message Passing (TRWP), which are greatly sped up on a GPU by exploiting massive parallelism. Specifically, ISGMR is an iterative and revised version of the standard SGM for general pairwise MRFs with improved optimization effectiveness, and TRWP is a highly parallel version of Sequential TRW (TRWS) for faster optimization. Our experiments on the standard stereo and denoising benchmarks demonstrated that ISGMR and TRWP achieve much lower energies than SGM and Mean-Field (MF), and that TRWP is two orders of magnitude faster than TRWS without losing effectiveness in optimization. We further demonstrated the effectiveness of our algorithms on end-to-end learning for semantic segmentation. Notably, our CUDA implementations are at least 7 and 700 times faster than PyTorch GPU implementations for forward and backward propagation respectively, enabling efficient end-to-end learning with message passing.
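To make the setting concrete, the following is a minimal, illustrative min-sum message-passing update on a chain (scanline) pairwise MRF, written in NumPy. It is only a sketch of the kind of update that SGM-style and TRW-style solvers iterate over many directions or trees; it is not the paper's ISGMR or TRWP algorithm nor its CUDA implementation, and the names unary, pairwise, and min_sum_messages_1d are hypothetical.

import numpy as np

def min_sum_messages_1d(unary, pairwise):
    """Left-to-right min-sum message passing on a chain MRF.

    unary:    (N, L) array of unary costs for N nodes and L labels.
    pairwise: (L, L) array of pairwise costs between neighbouring labels.
    Returns the (N, L) messages, where messages[i] is sent from node i-1 to node i.
    """
    N, L = unary.shape
    messages = np.zeros((N, L))
    for i in range(1, N):
        # m_i(l) = min_{l'} [ u_{i-1}(l') + m_{i-1}(l') + V(l', l) ]
        cand = unary[i - 1][:, None] + messages[i - 1][:, None] + pairwise
        messages[i] = cand.min(axis=0)
    return messages

# Tiny example: a 4-pixel scanline with 3 labels and a truncated-linear pairwise cost.
unary = np.random.rand(4, 3)
labels = np.arange(3)
pairwise = np.minimum(np.abs(labels[:, None] - labels[None, :]), 1.5)
msgs = min_sum_messages_1d(unary, pairwise)
beliefs = unary + msgs          # a full SGM-style pass would also aggregate right-to-left messages
labelling = beliefs.argmin(axis=1)
print(labelling)

In this sketch, each node's message depends only on its left neighbour, so the nodes along a scanline must be processed sequentially; the massive GPU parallelism exploited by ISGMR and TRWP instead comes from running many such independent scanlines or trees at once.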
