Double Deep Q-Learning and Faster R-CNN-Based Autonomous Vehicle Navigation and Obstacle Avoidance in Dynamic Environment

Abstract

Autonomous vehicle navigation in an unknown dynamic environment is crucial for both supervised- and Reinforcement Learning-based autonomous maneuvering. The cooperative fusion of these two learning approaches has the potential to be an effective mechanism for tackling indefinite environmental dynamics. Most state-of-the-art autonomous vehicle navigation systems are trained on a specific mapped model with familiar environmental dynamics. This research, in contrast, focuses on the cooperative fusion of supervised and Reinforcement Learning techniques for autonomous navigation of land vehicles in a dynamic and unknown environment. Faster R-CNN, a supervised learning approach, identifies the surrounding environmental obstacles so that the autonomous vehicle can maneuver unimpeded, while the training policies of Double Deep Q-Learning, a Reinforcement Learning approach, enable the autonomous agent to learn effective navigation decisions from the dynamic environment. The proposed model is primarily tested in a gaming environment similar to the real world. It exhibits overall efficiency and effectiveness in the maneuvering of autonomous land vehicles.
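The abstract names Double Deep Q-Learning as the decision-making component but gives no implementation details. Below is a minimal PyTorch sketch of the standard Double DQN update, under assumed settings: the state dimension, action set, network architecture, and hyperparameters are illustrative placeholders, and the idea of encoding Faster R-CNN detections into the state vector is an assumption about how the fusion could work, not the authors' published design.

```python
# Minimal Double Deep Q-Learning update sketch (illustrative, not the paper's code).
import torch
import torch.nn as nn

STATE_DIM = 16   # assumed: e.g. ego state plus features from Faster R-CNN obstacle detections
N_ACTIONS = 5    # assumed: e.g. steer left/right, accelerate, brake, keep course
GAMMA = 0.99     # assumed discount factor

class QNet(nn.Module):
    """Small MLP Q-network; the architecture is a placeholder."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS))

    def forward(self, s):
        return self.net(s)

online, target = QNet(), QNet()
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=1e-3)

def double_dqn_loss(s, a, r, s2, done):
    """Double DQN target: the online net *selects* the next action,
    the target net *evaluates* it, reducing Q-value overestimation."""
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)        # Q(s, a) for taken actions
    with torch.no_grad():
        a2 = online(s2).argmax(dim=1, keepdim=True)           # action choice from online net
        q2 = target(s2).gather(1, a2).squeeze(1)              # its value from target net
        y = r + GAMMA * (1.0 - done) * q2                     # bootstrapped target
    return nn.functional.mse_loss(q, y)

# One gradient step on a dummy transition batch (random tensors, for shape only).
B = 32
s, s2 = torch.randn(B, STATE_DIM), torch.randn(B, STATE_DIM)
a = torch.randint(0, N_ACTIONS, (B,))
r, done = torch.randn(B), torch.zeros(B)
loss = double_dqn_loss(s, a, r, s2, done)
opt.zero_grad(); loss.backward(); opt.step()
```

In this sketch the decoupling of action selection from action evaluation is what distinguishes Double DQN from vanilla DQN; the target network would be synced to the online network periodically during training.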