European Conference on Computer Vision

Deep Decoupling of Defocus and Motion Blur for Dynamic Segmentation



Abstract

We address the challenging problem of segmenting dynamic objects given a single space-variantly blurred image of a 3D scene captured using a hand-held camera. The blur induced at a particular pixel on a moving object is due to the combined effects of camera motion, the object's own independent motion during exposure, its relative depth in the scene, and defocusing due to lens settings. We develop a deep convolutional neural network (CNN) to predict the probabilistic distribution of the composite kernel which is the convolution of motion blur and defocus kernels at each pixel. Based on the defocus component, we segment the image into different depth layers. We then judiciously exploit the motion component present in the composite kernels to automatically segment dynamic objects at each depth layer. Jointly handling defocus and motion blur enables us to resolve depth-motion ambiguity which has been a major limitation of the existing segmentation algorithms. Experimental evaluations on synthetic and real data reveal that our method significantly outperforms contemporary techniques.
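The abstract describes the composite blur at each pixel as the convolution of a motion blur kernel with a defocus kernel. The following minimal sketch (not the authors' CNN-based method) illustrates that composition under illustrative assumptions of a horizontal linear motion kernel and a disk-shaped defocus kernel; all function names and parameters here are hypothetical.

# Minimal sketch: composite blur kernel as the convolution of a motion blur
# kernel and a defocus kernel, as described in the abstract. Kernel shapes
# (horizontal linear motion, disk defocus) are illustrative assumptions.
import numpy as np
from scipy.signal import convolve2d


def linear_motion_kernel(length, size):
    # Horizontal box kernel approximating linear motion blur of `length` pixels.
    k = np.zeros((size, size))
    c = size // 2
    half = length // 2
    k[c, c - half:c - half + length] = 1.0
    return k / k.sum()


def disk_defocus_kernel(radius, size):
    # Disk (pillbox) kernel approximating defocus blur with blur-circle `radius`.
    y, x = np.mgrid[:size, :size] - size // 2
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()


# Composite kernel at a pixel: motion blur convolved with defocus blur.
motion = linear_motion_kernel(length=7, size=15)
defocus = disk_defocus_kernel(radius=3, size=15)
composite = convolve2d(motion, defocus, mode="full")
composite /= composite.sum()  # renormalize against floating-point drift
print(composite.shape)        # (29, 29); a valid blur kernel summing to 1

In the paper's setting, a CNN predicts a probabilistic distribution over such composite kernels per pixel; the defocus component then drives depth layering while the motion component drives dynamic-object segmentation within each layer.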
