International Conference on Control, Decision and Information Technologies

Automatic Drone Navigation in Realistic 3D Landscapes using Deep Reinforcement Learning


Abstract

We present a study in which a drone navigates through diverse 3D obstacles by finding a 3D path and reaching a goal using deep reinforcement learning (RL) in a realistic 3D landscape. The drone has two inputs: an RGB image that provides a first-person view of the landscape, and a depth map that gives it 3D information about the environment. Deep reinforcement learning is used extensively to train the drone for automatic navigation. For the same task, a human pilot navigates through the obstacles with a radio controller (RC) using a hardware-in-the-loop setup. Racing performance between the human pilots and several deep RL algorithms, namely Deep Q-Network (DQN), Double DQN, Dueling DQN, and Double Dueling DQN (DD-DQN), is evaluated. The results suggest that DD-DQN outperforms the other algorithms and that, in the races between humans and algorithms, DD-DQN beats a novice pilot, while an expert- or intermediate-level pilot outperforms all of the algorithms. The present study demonstrates that the time and resources needed to train a drone can be saved by using a realistic yet controllable platform.
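The abstract compares value-based deep RL variants trained on fused RGB and depth observations. The sketch below is a minimal, hypothetical illustration of such a setup, not the authors' implementation: a dueling Q-network over a stacked RGB + depth input, together with the Double-DQN target that DD-DQN combines with the dueling heads. The input resolution (84x84), layer sizes, and the size of the discrete action set are assumptions.

```python
# Minimal sketch, assuming 84x84 inputs and a small discrete action set.
import torch
import torch.nn as nn


class DuelingDQN(nn.Module):
    def __init__(self, n_actions: int = 5):
        super().__init__()
        # Shared convolutional encoder over RGB (3 ch) + depth (1 ch) stacked to 4 channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat = self._feature_dim()
        # Dueling heads: state value V(s) and per-action advantages A(s, a).
        self.value = nn.Sequential(nn.Linear(feat, 512), nn.ReLU(), nn.Linear(512, 1))
        self.advantage = nn.Sequential(nn.Linear(feat, 512), nn.ReLU(), nn.Linear(512, n_actions))

    def _feature_dim(self) -> int:
        with torch.no_grad():
            return self.encoder(torch.zeros(1, 4, 84, 84)).shape[1]

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb, depth], dim=1)  # (B, 4, 84, 84)
        h = self.encoder(x)
        v, a = self.value(h), self.advantage(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)


def double_dqn_target(online, target, next_rgb, next_depth, reward, done, gamma=0.99):
    """Double-DQN bootstrap target: the online network selects the next action,
    the target network evaluates it. `done` is a float tensor of 0/1 flags."""
    with torch.no_grad():
        best = online(next_rgb, next_depth).argmax(dim=1, keepdim=True)
        q_next = target(next_rgb, next_depth).gather(1, best).squeeze(1)
        return reward + gamma * (1.0 - done) * q_next
```

Using both this target and the dueling heads yields the DD-DQN variant named in the abstract; dropping either one recovers Dueling DQN or Double DQN, respectively.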
