
Vision Based Shipboard Recovery of Unmanned Rotorcraft



Abstract

Landing an unmanned aerial vehicle (UAV) autonomously and safely on a ship's flight deck is a challenging task for robotics researchers. The difficulties include large deck motion, disturbances caused by gusts and turbulence, and reduced visibility under adverse weather conditions such as rain, fog and sun glare. Existing techniques for landing-area localization and pose determination during approach and landing in these conditions tend to rely on ship-deck infrastructure-based sensing units or artificial landing markers. In contrast, this thesis develops a more robust means of segmenting the landing marker, one that tolerates occlusion and uses the same international landing marker employed for manned helicopter operations to perform target recognition and pose estimation, so that no additional infrastructure is required on the ship.

The three major tasks addressed in this thesis are: locating a viable landing area on the ship; accurately determining the relative pose between the UAV and the ship deck; and implementing an algorithm that decides when to land safely based on calm-period prediction. A self-contained, on-board real-time system with vision sensors is developed, which uses the edge information of the international landing marker to perform line-segment detection, feature-point mapping and clustering. A cascade filtering scheme composed of a series of coarse-to-fine criteria is adopted to facilitate target recognition. The vision system is also designed to cope with visual occlusion and contamination, so that the landing marker can be reconstructed by processing the information of a partially missing marker. For the second task, the full six-degree-of-freedom (6DoF) relative pose is estimated by monocular vision from the extracted keypoint information together with prior knowledge of the landing marker.
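The abstract does not detail how the coarse-to-fine cascade is implemented; as an illustration only, here is a minimal sketch of the idea: candidate marker regions pass through a series of increasingly selective tests, cheap ones first, and only survivors of every stage are accepted. All stage criteria and candidate fields below are hypothetical, not taken from the thesis.

```python
def cascade_filter(candidates, stages):
    """Apply each stage in order (cheapest first); keep only survivors."""
    for stage in stages:
        candidates = [c for c in candidates if stage(c)]
        if not candidates:
            break  # early exit once nothing survives
    return candidates

# Illustrative coarse-to-fine criteria on toy candidate regions.
stages = [
    lambda c: c["area"] > 100,          # coarse: reject tiny blobs
    lambda c: 0.5 < c["aspect"] < 2.0,  # medium: roughly square bounding box
    lambda c: c["edges"] >= 6,          # fine: enough line segments for a marker
]

candidates = [
    {"area": 150, "aspect": 1.1, "edges": 8},  # plausible marker
    {"area": 50,  "aspect": 1.0, "edges": 8},  # too small
    {"area": 200, "aspect": 3.0, "edges": 4},  # wrong shape
]

survivors = cascade_filter(candidates, stages)
```

The design choice behind such a cascade is that cheap geometric tests eliminate most false candidates before the expensive fine-grained checks run, which matters for an on-board real-time system.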
To increase system redundancy, a secondary 3D range sensor is introduced to extend the operational range, especially when the landing marker is outside the monocular camera's field of view. An optic-flow (OF) based motion estimation method is also used to help stabilize the UAV. A mission planner is proposed that lets the system switch between tasks during landing according to the availability of measurements. Multiple state estimators fuse measurements from different sources to obtain better estimates, so that the UAV can continue its current task even when some sensor outputs fail or degrade. An on-board controller with an associated measurement-based switching scheme is designed to close the control loop. For the third task, the proposed system is used to capture the relative pose between the UAV and an imitated moving ship deck, whose motion is simulated with a 3DoF moving platform driven by different generated ship-motion data sets. As a proof of concept, a classic time-series predictor forecasts the ship motion over a short horizon, with a proposed classifier determining the opportunities for safe autonomous landing.

To validate the system, the vision pipeline is evaluated on both pre-captured and real-time imagery in the presence of challenges such as occlusion and illumination variation. The precision of relative pose estimation is quantitatively analyzed. The integrated system is then examined and demonstrated in real flight tests, whose measurements are compared against a VICON motion-capture system benchmark. To seize the most favorable landing conditions, a study on short-term ship-motion prediction is carried out based on the simulated 3DoF ship-deck motion measured by vision.
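The abstract names a "classic time-series predictor" without specifying it; the following is a hedged sketch of one common choice, a least-squares autoregressive (AR) forecast applied to a simulated heave signal, with a simple threshold test standing in for the calm-period classifier. The signal parameters, AR order, horizon, and the 0.2 m threshold are all illustrative assumptions, not values from the thesis.

```python
import numpy as np

def fit_ar(x, order):
    """Least-squares AR coefficients: x[t] ~ sum_k coef[k] * x[t-1-k]."""
    rows = [x[t - order:t][::-1] for t in range(order, len(x))]
    coef, *_ = np.linalg.lstsq(np.array(rows), x[order:], rcond=None)
    return coef

def predict(x, coef, horizon):
    """Roll the AR recursion forward to forecast `horizon` future samples."""
    hist = list(x[-len(coef):][::-1])  # most recent sample first
    out = []
    for _ in range(horizon):
        nxt = float(np.dot(coef, hist))
        out.append(nxt)
        hist = [nxt] + hist[:-1]
    return np.array(out)

# Toy "ship heave": a slow 0.5 m sinusoid sampled at 10 Hz for 20 s.
t = np.arange(0, 20, 0.1)
heave = 0.5 * np.sin(0.6 * t)

coef = fit_ar(heave, order=10)
forecast = predict(heave, coef, horizon=20)  # 2 s ahead

# Stand-in calm-period classifier: predicted |heave| stays under 0.2 m.
safe = bool(np.all(np.abs(forecast) < 0.2))
```

On this toy signal the forecast continues the sinusoid, whose amplitude exceeds the threshold within the horizon, so the classifier declines the landing window; with real deck motion the prediction would be re-run each cycle to catch a quiescent period as it approaches.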
