Source: IEEE Transactions on Intelligent Transportation Systems

Fast Depth Prediction and Obstacle Avoidance on a Monocular Drone Using Probabilistic Convolutional Neural Network



Abstract

Recent studies employ advanced deep convolutional neural networks (CNNs) for monocular depth perception, which can hardly run efficiently on small drones that rely on low- or middle-grade GPUs (e.g., TX2 and 1050Ti) for computation. In addition, methods that can effectively and efficiently produce probabilistic depth predictions with a measure of model confidence have not been well studied. The lack of such a method could yield erroneous, sometimes fatal, decisions in drone applications (e.g., selecting a waypoint in a region with a large depth yet a low estimation confidence). This paper presents a real-time onboard approach for monocular depth prediction and obstacle avoidance with a lightweight probabilistic CNN (pCNN), which is well suited to a lightweight, energy-efficient drone. For each video frame, our pCNN efficiently predicts its depth map and the corresponding confidence. The accuracy of our lightweight pCNN is greatly boosted by integrating sparse depth estimates from a visual odometry into the network to guide dense depth and confidence inference. The estimated depth map is transformed into Ego Dynamic Space (EDS) by embedding both the dynamic motion constraints of the drone and the confidence values into the spatial depth map. Traversable waypoints are automatically computed in EDS, based on which appropriate control inputs for the drone are produced. Extensive experimental results on public datasets demonstrate that our depth prediction method runs at 12 Hz and 45 Hz on TX2 and 1050Ti GPUs respectively, which is 1.8× to 5.6× faster than the state-of-the-art methods while achieving better depth estimation accuracy. We also conducted obstacle avoidance experiments in both simulated and real environments to demonstrate the superiority of our method over the baseline methods.
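To make the two per-pixel outputs concrete, below is a minimal PyTorch sketch (not the authors' actual pCNN architecture) of a depth network with a shared encoder-decoder and two heads, one predicting depth and one predicting confidence; the extra input channel stands in for the sparse visual-odometry depth guidance mentioned in the abstract. All layer sizes and the fusion scheme are illustrative assumptions.

```python
# Illustrative sketch only: a two-head depth/confidence network, NOT the paper's pCNN.
import torch
import torch.nn as nn

class TwoHeadDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: RGB frame (3 ch) + sparse VO depth (1 ch) = 4 channels (assumed fusion).
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.depth_head = nn.Conv2d(16, 1, 3, padding=1)   # per-pixel metric depth
        self.conf_head = nn.Conv2d(16, 1, 3, padding=1)    # per-pixel model confidence

    def forward(self, rgb, sparse_depth):
        x = torch.cat([rgb, sparse_depth], dim=1)
        feat = self.decoder(self.encoder(x))
        depth = torch.relu(self.depth_head(feat))          # non-negative depth
        conf = torch.sigmoid(self.conf_head(feat))         # confidence in [0, 1]
        return depth, conf

# Example usage on a single 256x320 frame.
net = TwoHeadDepthNet()
rgb = torch.rand(1, 3, 256, 320)
sparse = torch.zeros(1, 1, 256, 320)   # mostly empty; filled at VO feature points
depth, conf = net(rgb, sparse)
print(depth.shape, conf.shape)         # both torch.Size([1, 1, 256, 320])
```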
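The Ego Dynamic Space step can be illustrated with the classical braking-distance form from the EDS literature, where the usable depth shrinks by the distance needed to stop at the current speed. The abstract does not state exactly how the confidence values are combined with the motion constraints, so the confidence-dependent safety margin in this sketch is an assumption.

```python
# Hedged sketch of an EDS-style depth transform; the confidence margin is an assumption.
import numpy as np

def to_ego_dynamic_space(depth, conf, speed, a_max=2.0, max_margin=1.0):
    """depth, conf: HxW arrays; speed in m/s; a_max: max deceleration in m/s^2."""
    braking_dist = speed ** 2 / (2.0 * a_max)      # distance needed to come to a stop
    safety_margin = max_margin * (1.0 - conf)      # shrink depth where confidence is low
    eds = depth - braking_dist - safety_margin
    return np.maximum(eds, 0.0)                    # clamp: 0 means not traversable

# Example: a 2x3 depth patch with mixed confidence at 3 m/s.
depth = np.array([[5.0, 4.0, 1.5],
                  [6.0, 3.0, 2.0]])
conf = np.array([[0.9, 0.4, 0.9],
                 [0.8, 0.9, 0.3]])
print(to_ego_dynamic_space(depth, conf, speed=3.0))
```

A waypoint selector could then simply favor pixels with the largest remaining EDS depth, which matches the abstract's point that a large raw depth with low confidence should not win.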


