Journal: IEEE Transactions on Circuits and Systems for Video Technology

Real-Time Stereo Matching on CUDA Using an Iterative Refinement Method for Adaptive Support-Weight Correspondences



Abstract

High-quality real-time stereo matching has the potential to enable various computer vision applications including semi-automated robotic surgery, teleimmersion, and 3-D video surveillance. A novel real-time stereo matching method is presented that uses a two-pass approximation of adaptive support-weight aggregation, and a low-complexity iterative disparity refinement technique. Through an evaluation of computationally efficient approaches to adaptive support-weight cost aggregation, it is shown that the two-pass method produces an accurate approximation of the support weights while greatly reducing the complexity of aggregation. The refinement technique, constructed using a probabilistic framework, incorporates an additive term into matching cost minimization and facilitates iterative processing to improve the accuracy of the disparity map. This method has been implemented on massively parallel high-performance graphics hardware using the Compute Unified Device Architecture computing engine. Results show that the proposed method is the most accurate among all of the real-time stereo matching methods listed on the Middlebury stereo benchmark.
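To illustrate the idea behind the two-pass approximation of adaptive support-weight aggregation, here is a minimal NumPy sketch (not the authors' CUDA implementation). It assumes grayscale images, bilateral-style weights combining color similarity and spatial proximity, and hypothetical parameter names `gamma_c` and `gamma_s`; the full 2-D weighted aggregation is approximated by a horizontal pass over the raw matching costs followed by a vertical pass over the horizontally aggregated costs.

```python
import numpy as np

def two_pass_aggregate(cost, image, radius=2, gamma_c=10.0, gamma_s=10.5):
    """Separable approximation of adaptive support-weight cost aggregation.

    `cost` and `image` are HxW float arrays (grayscale simplification);
    `cost` holds per-pixel matching costs for one disparity hypothesis.
    A horizontal weighted-average pass is followed by a vertical pass,
    reducing per-pixel work from O(radius^2) to O(radius).
    """
    h, w = image.shape

    def pass_1d(src, axis):
        out = np.zeros_like(src)
        for y in range(h):
            for x in range(w):
                num = den = 0.0
                for d in range(-radius, radius + 1):
                    if axis == 0:   # vertical neighbor (clamped at borders)
                        yy, xx = min(max(y + d, 0), h - 1), x
                    else:           # horizontal neighbor (clamped at borders)
                        yy, xx = y, min(max(x + d, 0), w - 1)
                    # Bilateral-style support weight: intensity similarity
                    # and spatial proximity each decay exponentially.
                    wt = np.exp(-(abs(image[yy, xx] - image[y, x]) / gamma_c
                                  + abs(d) / gamma_s))
                    num += wt * src[yy, xx]
                    den += wt
                out[y, x] = num / den
        return out

    # Horizontal pass first, then vertical pass on its result.
    return pass_1d(pass_1d(cost, axis=1), axis=0)
```

Because each 1-D pass is independent per row (or column), this structure maps naturally onto CUDA thread blocks, which is what makes real-time frame rates feasible for the full method described above.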
