
Multi-sensor fusion model for constructing internal representation using autoencoder neural networks



Abstract

In this paper, we propose a multi-sensor fusion model for 3D object recognition that uses an autoencoder neural network to fuse multiple sensory inputs into an integrated internal object representation. The model was evaluated using camera images taken from many viewpoints on a hemisphere around the target. Three images were generated from each camera image by clustering hue and saturation values. After the autoencoder learned the target's images from many viewpoints, continuous internal representations corresponding to the viewpoints were formed in the network's compression layer. We found that this internal representation generalizes to viewpoints that were not in the target's training set. The average squared reconstruction error of the autoencoder is about three times higher when the presented object is unknown than when the object has been taught as a target but is viewed from an untrained viewpoint. The experimental results demonstrate the effectiveness of the proposed model for 3D object recognition.
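The recognition scheme the abstract describes (learn many views of a target, form a compressed internal representation, then use reconstruction error to reject unknown objects) can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's setup: it uses a linear autoencoder trained by gradient descent on synthetic "view" feature vectors lying on a viewpoint manifold, rather than camera images with hue/saturation clustering; all names (`views`, `sq_err`, dimensions, learning rate) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for "views from a hemisphere" (assumption):
# each view of the known object is a point on a 1-D viewpoint manifold,
# embedded linearly in an 8-D "image feature" space.
d, k = 8, 2                      # input dim, compression-layer (bottleneck) dim
A = rng.normal(size=(d, k))      # fixed embedding of the viewpoint manifold

def views(thetas):
    """Feature vectors for the known object seen from angles `thetas`."""
    u = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)  # (N, 2)
    return u @ A.T                                          # (N, d)

train = views(np.linspace(0, 2 * np.pi, 200, endpoint=False))
held_out = views(rng.uniform(0, 2 * np.pi, 50))   # untrained viewpoints
unknown = rng.normal(size=(50, d))                # an object never taught

# Linear autoencoder trained by full-batch gradient descent.
W1 = 0.1 * rng.normal(size=(k, d))   # encoder: input -> compression layer
W2 = 0.1 * rng.normal(size=(d, k))   # decoder: compression layer -> input
lr = 0.005
for _ in range(10_000):
    H = train @ W1.T                 # compression-layer activations
    R = H @ W2.T                     # reconstruction
    dR = 2.0 * (R - train) / len(train)
    W2 -= lr * dR.T @ H
    W1 -= lr * (dR @ W2).T @ train

def sq_err(X):
    """Mean squared reconstruction error per element."""
    R = (X @ W1.T) @ W2.T
    return float(np.mean((R - X) ** 2))

err_train = sq_err(train)
err_new_view = sq_err(held_out)      # taught object, untrained viewpoint
err_unknown = sq_err(unknown)        # object outside the training set
print(err_train, err_new_view, err_unknown)
```

Untrained viewpoints of the taught object reconstruct almost as well as the training views, because they lie on the learned manifold, while an unknown object's reconstruction error is markedly higher; thresholding that error is the recognition criterion the abstract's three-to-one error ratio suggests.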
