FlowNet3D++: Geometric Losses For Deep Scene Flow Estimation



Abstract

We present FlowNet3D++, a deep scene flow estimation network. Inspired by classical methods, FlowNet3D++ incorporates geometric constraints, in the form of point-to-plane distance and angular alignment between individual vectors in the flow field, into FlowNet3D [21]. We demonstrate that adding these geometric loss terms improves the accuracy of the previous state-of-the-art FlowNet3D from 57.85% to 63.43%. To further demonstrate the effectiveness of our geometric constraints, we propose a benchmark for flow estimation on the task of dynamic 3D reconstruction, providing a more holistic and practical measure of performance than the individual metrics previously used to evaluate scene flow. This is made possible by a novel pipeline that integrates point-based scene flow predictions into a global dense volume. FlowNet3D++ achieves up to a 15.0% reduction in reconstruction error over FlowNet3D, and up to a 35.2% improvement over KillingFusion [32] alone. We will release our scene flow estimation code later.
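
The two geometric loss terms named in the abstract admit a compact formulation. Below is a minimal PyTorch sketch of plausible forms of these losses, assuming pre-matched point pairs and unit-length target normals; the function names, tensor shapes, reductions, and unit loss weights are illustrative assumptions, not the paper's exact definitions.

```python
import torch
import torch.nn.functional as F

def point_to_plane_loss(warped_pts, target_pts, target_normals):
    """Hypothetical sketch: distance from each warped source point to the
    tangent plane of its matched target point. All inputs are (N, 3)
    tensors; target_normals are assumed to be unit length."""
    signed_dist = ((warped_pts - target_pts) * target_normals).sum(dim=1)
    return signed_dist.abs().mean()

def angular_alignment_loss(pred_flow, gt_flow, eps=1e-8):
    """Hypothetical sketch: penalize the angle between predicted and
    ground-truth flow vectors via their cosine similarity."""
    cos = F.cosine_similarity(pred_flow, gt_flow, dim=1, eps=eps)
    return (1.0 - cos).mean()

# Illustrative usage: geometric terms added to a plain L2 flow loss
# (the relative weighting here is an assumption, not the paper's).
src = torch.randn(1024, 3)
gt_flow = torch.randn(1024, 3)
pred_flow = gt_flow + 0.1 * torch.randn(1024, 3)
normals = F.normalize(torch.randn(1024, 3), dim=1)
loss = (F.mse_loss(pred_flow, gt_flow)
        + point_to_plane_loss(src + pred_flow, src + gt_flow, normals)
        + angular_alignment_loss(pred_flow, gt_flow))
```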
