Nuclear Instruments & Methods in Physics Research

Data fusion for a vision-aided radiological detection system: Calibration algorithm performance


Abstract

In order to improve the ability to detect, locate, track, and identify nuclear/radiological threats, the University of Florida nuclear detection community has teamed up with the 3D vision community to develop a low-cost data-fusion system. The key is an algorithm that fuses the data from multiple radiological and 3D vision sensors into one system. The system under development at the University of Florida is being assessed with various types of radiological detectors and widely available vision sensors. A series of experiments was devised using two EJ-309 liquid organic scintillation detectors (one primary and one secondary), a Microsoft Kinect for Windows v2 sensor, and a Velodyne HDL-32E High Definition LiDAR sensor, a highly sensitive vision sensor primarily used to generate data for self-driving cars. Each experiment consisted of 27 static measurements of a source at positions arranged in a cube, with three different distances in each dimension. The source used was Cf-252. The calibration algorithm developed is used to calibrate the relative 3D location of the two different types of sensors without the need to measure it by hand, thus preventing operator manipulation and human error. The algorithm can also account for facility-dependent deviation from the ideal data-fusion correlation. Using the vision sensor to determine a sensor's location would also limit the possible locations, and it does not allow for room dependence (facility-dependent deviation) when generating a detector pseudo-location for later data analysis. Using manually measured source-location data, our algorithm predicted the offset detector location to within an average calibration difference of 20 cm from its actual location. The calibration difference is the Euclidean distance from the algorithm-predicted detector location to the measured detector location. The Kinect vision-sensor data produced an average calibration difference of 35 cm, and the HDL-32E produced an average calibration difference of 22 cm. Using NaI and He-3 detectors in place of the EJ-309, the calibration difference was 52 cm for NaI and 75 cm for He-3. The algorithm is not detector dependent; however, from these results it was determined that detector-dependent adjustments are required.
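Two quantitative definitions in the abstract are compact enough to sketch in code: the 27-point measurement layout (three distances per axis, arranged in a cube) and the calibration-difference metric (Euclidean distance between the predicted and measured detector locations). A minimal Python sketch follows; the grid spacings and the example coordinates are illustrative assumptions, not values from the paper:

```python
import numpy as np

def calibration_difference(predicted_xyz, measured_xyz):
    """Calibration difference as defined in the abstract: the Euclidean
    distance from the algorithm-predicted detector location to the
    hand-measured detector location."""
    return np.linalg.norm(np.asarray(predicted_xyz) - np.asarray(measured_xyz))

# Hypothetical 3 x 3 x 3 grid of the 27 static source positions;
# the spacings below are illustrative, not taken from the paper.
spacings_m = [0.5, 1.0, 1.5]
source_positions = np.array([[x, y, z] for x in spacings_m
                                       for y in spacings_m
                                       for z in spacings_m])
assert len(source_positions) == 27

# Example: a predicted detector location 20 cm off along one axis,
# matching the ~20 cm average reported for the EJ-309 experiments.
measured = np.array([2.00, 1.00, 0.50])   # metres, measured by hand
predicted = np.array([2.20, 1.00, 0.50])  # metres, algorithm output
print(f"calibration difference: "
      f"{calibration_difference(predicted, measured):.2f} m")  # -> 0.20 m
```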