German Radar Symposium GRS 2000, Oct 11-12, 2000, Berlin, Germany

Reliable Feature Extraction from MMW Radar Images for Navigation during Approach and Landing



Abstract

Approach and landing maneuvers are among the most critical tasks in aviation. Especially under adverse weather conditions, when the runway cannot be seen, pilots need additional information to improve their situational awareness. Therefore, a new and prospering field of aircraft guidance research, Enhanced Vision Systems (EVS), has been established. Generally, EV systems consist of two main parts: sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view; its accuracy and reliability depend on precise databases as well as on accurate navigation data, e.g. provided by differential GPS (DGPS) or by instrument landing systems (ILS). The additional use of forward-looking imaging sensors offers the possibility to detect unexpected obstacles, to monitor the integrity of databases and navigation data, and to extract navigation information, e.g. the relative position of runway and aircraft, directly from the sensor data. The latter is very important if no ILS or DGPS is available, especially under adverse weather conditions. To date, the most promising EV sensor, owing to its low weather dependency compared with other imaging sensors, appears to be the Hi Vision sensor from DASA Ulm, a 35 GHz MMW radar with a frame rate of about 16 Hz. For navigation purposes during approach and landing, the position of the runway relative to the aircraft is one of the most important pieces of information for pilots. Consequently, robust and reliable extraction of runway structures from radar images is a basic requirement for deriving this navigation information from radar data. In this contribution we present the entire sensor data processing chain: the sensor calibration, the low-level processing routines, and the integration/fusion of the extracted information with additional knowledge. The performance of our approach is demonstrated with real data acquired during extensive flight tests at several airports in Northern Germany.
