Home > Foreign Conference Papers > SAE AeroTech Congress & Exhibition > Multi-Sensor Data Fusion Techniques for RPAS Detect, Track and Avoid

Multi-Sensor Data Fusion Techniques for RPAS Detect, Track and Avoid

Abstract

Accurate and robust tracking of objects is of growing interest in the computer vision scientific community. The ability of a multi-sensor system to detect and track objects, and to accurately predict their future trajectory, is critical in mission- and safety-critical applications. Remotely Piloted Aircraft Systems (RPAS) are currently not equipped to routinely access all classes of airspace, since certified Detect-and-Avoid (DAA) systems are yet to be developed. Such capabilities can be achieved by incorporating both cooperative and non-cooperative DAA functions, as well as by providing enhanced communications, navigation and surveillance (CNS) services. DAA is highly dependent on the performance of CNS systems for Detection, Tracking and Avoidance (DTA) tasks and maneuvers. In order to perform effective detection of objects, a number of high-performance, reliable and accurate avionics sensors and systems are adopted, including non-cooperative sensors (visual and thermal cameras, laser radar (LIDAR) and acoustic sensors) and cooperative systems (Automatic Dependent Surveillance-Broadcast (ADS-B) and Traffic Collision Avoidance System (TCAS)). In this paper, the candidate sensors and system information sources are fully exploited in a Multi-Sensor Data Fusion (MSDF) architecture. An Unscented Kalman Filter (UKF) and a more advanced Particle Filter (PF) are adopted to estimate the state vector of the objects for maneuvering and non-maneuvering DTA tasks. Furthermore, an artificial neural network is conceptualised and adopted to exploit statistical learning methods, combining the information obtained from the UKF and PF. After describing the MSDF architecture, the key mathematical models for data fusion are presented. Conceptual studies are carried out on visual and thermal image fusion architectures.
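The abstract names a Particle Filter (alongside a UKF) as one of the state estimators for DTA tracking. As a loose illustration of that component only, and not the authors' implementation, below is a minimal constant-velocity particle-filter tracker in Python; the particle count, noise levels and motion model are all assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 2000          # number of particles (assumed for this sketch)
dt = 1.0          # time step (s)
q = 0.1           # process noise standard deviation
r = 5.0           # position measurement noise standard deviation (m)

# State: [x, y, vx, vy]; constant-velocity motion model
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)

def predict(particles):
    """Propagate every particle through the motion model plus process noise."""
    noise = rng.normal(0.0, q, size=particles.shape)
    return particles @ F.T + noise

def update(particles, weights, z):
    """Reweight particles by the Gaussian likelihood of position measurement z."""
    d2 = np.sum((particles[:, :2] - z) ** 2, axis=1)
    weights *= np.exp(-0.5 * d2 / r**2)
    weights += 1e-300                  # guard against numerical underflow
    weights /= weights.sum()
    return weights

def resample(particles, weights):
    """Multinomial resampling when the effective sample size drops below N/2."""
    if 1.0 / np.sum(weights**2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)
    return particles, weights

# Simulate a target and track it from noisy position measurements
true_state = np.array([0.0, 0.0, 2.0, 1.0])
particles = rng.normal(true_state, [10, 10, 1, 1], size=(N, 4))
weights = np.full(N, 1.0 / N)

for _ in range(30):
    true_state = F @ true_state
    z = true_state[:2] + rng.normal(0.0, r, size=2)
    particles = predict(particles)
    weights = update(particles, weights, z)
    particles, weights = resample(particles, weights)

estimate = weights @ particles         # weighted-mean state estimate
print("true position:", true_state[:2])
print("estimated position:", estimate[:2])
```

In a full MSDF architecture of the kind the abstract describes, each measurement `z` would come from a fused track (e.g. ADS-B, visual or thermal detections) rather than a single simulated sensor, and the PF output would be combined with the UKF estimate.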