
A Stereo Matching Method Jointly Driven by "Objective Measurement" and "Deep Learning"


Abstract

A stereo matching method jointly driven by "objective measurement" and "deep learning" is proposed, in which the "measurement" and "learning" features complement each other to improve the accuracy of the stereo matching disparity map. The objectively computed features based on the sum of absolute intensity differences (SAD) and the sum of absolute gradient differences (GRAD) are fused with data-driven deep learning features through weighting to construct the matching cost model. A guided filter is then used to aggregate the matching costs, and an initial disparity map is obtained with the winner-take-all (WTA) algorithm. Finally, left-right consistency checking and a weighted median filter are applied to optimize the disparity map and remove mismatched points, yielding the final disparity map. Experiments on the Middlebury stereo matching evaluation platform show that the proposed algorithm effectively reduces the mean absolute error and root-mean-square error of the disparity map.
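The pipeline described above (weighted SAD/GRAD cost fusion, guided-filter aggregation, WTA disparity selection, and left-right consistency checking) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the fusion weight alpha, the truncation thresholds tau_sad and tau_grad, the guided-filter radius and eps, and the synthetic test data are all illustrative assumptions; the learned-feature cost term is only stubbed out in a comment, and the weighted-median post-filtering step is omitted.

import numpy as np


def gradient_x(img):
    """Horizontal intensity gradient (forward difference)."""
    g = np.zeros_like(img)
    g[:, :-1] = img[:, 1:] - img[:, :-1]
    return g


def cost_volume(left, right, max_disp, alpha=0.5, tau_sad=0.08, tau_grad=0.02):
    """Per-pixel matching cost: weighted fusion of truncated SAD and truncated
    gradient-difference (GRAD) terms. A learned (data-driven) per-pixel cost
    would be blended in here with a third weight; it is stubbed out."""
    h, w = left.shape
    gl, gr = gradient_x(left), gradient_x(right)
    cost = np.empty((max_disp, h, w), dtype=np.float32)
    for d in range(max_disp):
        xr = np.clip(np.arange(w) - d, 0, w - 1)   # sample the right image at x - d
        sad = np.minimum(np.abs(left - right[:, xr]), tau_sad)
        grad = np.minimum(np.abs(gl - gr[:, xr]), tau_grad)
        cost[d] = alpha * sad + (1.0 - alpha) * grad
    return cost


def box_mean(img, radius):
    """Box mean filter via an integral image (O(1) per pixel)."""
    k = 2 * radius + 1
    pad = np.pad(img, radius, mode='edge')
    ii = np.pad(np.cumsum(np.cumsum(pad, axis=0), axis=1), ((1, 0), (1, 0)))
    return (ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]) / (k * k)


def guided_filter(guide, src, radius=8, eps=1e-3):
    """Edge-preserving guided filter (He et al.) applied to one cost slice."""
    m_i, m_p = box_mean(guide, radius), box_mean(src, radius)
    cov = box_mean(guide * src, radius) - m_i * m_p
    var = box_mean(guide * guide, radius) - m_i * m_i
    a = cov / (var + eps)
    b = m_p - a * m_i
    return box_mean(a, radius) * guide + box_mean(b, radius)


def aggregate(cost, guide, radius=8):
    """Aggregate every disparity slice of the cost volume with the guided filter."""
    return np.stack([guided_filter(guide, c, radius) for c in cost])


def wta(cost):
    """Winner-take-all: pick the disparity with the minimum aggregated cost."""
    return np.argmin(cost, axis=0)


def lr_check(disp_left, disp_right, tol=1):
    """Left-right consistency check: mark inconsistent pixels as invalid (-1)."""
    h, w = disp_left.shape
    xs = np.tile(np.arange(w), (h, 1))
    dr = disp_right[np.arange(h)[:, None], np.clip(xs - disp_left, 0, w - 1)]
    out = disp_left.copy()
    out[np.abs(disp_left - dr) > tol] = -1
    return out


# Usage sketch on synthetic data (real inputs are rectified grayscale pairs in [0, 1]).
rng = np.random.default_rng(0)
left = rng.random((60, 90), dtype=np.float32)
right = np.roll(left, -4, axis=1)                      # synthetic 4-pixel shift
disp_l = wta(aggregate(cost_volume(left, right, 16), left))
# Right-reference disparity via horizontal flipping, for the consistency check.
disp_r = wta(aggregate(cost_volume(right[:, ::-1], left[:, ::-1], 16), right[:, ::-1]))[:, ::-1]
disparity = lr_check(disp_l, disp_r)                   # -1 marks occluded or mismatched pixels

In a full implementation, the learned per-pixel cost (for example, from a patch-similarity network) would be added into cost_volume with a third fusion weight, and the pixels flagged as invalid by lr_check would be filled and smoothed with a weighted median filter, as the abstract describes.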
