IEEE International Conference on Neural Networks

Multi-sensor fusion model for constructing internal representation using autoencoder neural networks



Abstract

In this paper, we propose a multi-sensor fusion model for 3D object recognition that uses an autoencoder neural network to fuse multiple sensory data into an integrated internal object representation. The model was evaluated using camera images taken from many viewpoints on a hemisphere around the target. Three images were generated from each camera image by clustering hue and saturation values. After the autoencoder neural network learned the target's images from many viewpoints, continuous internal representations corresponding to the viewpoints were constructed in the compression layer of the network. We found that the internal representation generalizes to viewpoints that were not in the target's training set. The average squared error of the autoencoder neural network is about three times higher when the presented object is unknown than when the object has already been learned as the target but is seen from a viewpoint outside the training set. The experimental results demonstrate the effectiveness of our proposed model for 3D object recognition.
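The sketch below (written in PyTorch, not the authors' implementation) illustrates the idea the abstract describes: an autoencoder whose bottleneck ("compression") layer holds an internal representation of an object seen from many viewpoints, with reconstruction error used to separate the learned target from unknown objects. The layer sizes, the two-unit bottleneck, the 256-dimensional toy inputs, the training loop, and any error threshold are illustrative assumptions, not details taken from the paper.

# Minimal sketch of viewpoint learning with an autoencoder and
# reconstruction-error-based recognition (illustrative assumptions only).
import torch
import torch.nn as nn

class ViewpointAutoencoder(nn.Module):
    def __init__(self, input_dim: int, bottleneck_dim: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 64), nn.Sigmoid(),
            nn.Linear(64, bottleneck_dim), nn.Sigmoid(),  # compression layer
        )
        self.decoder = nn.Sequential(
            nn.Linear(bottleneck_dim, 64), nn.Sigmoid(),
            nn.Linear(64, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)            # internal representation of the viewpoint
        return self.decoder(z), z

def reconstruction_error(model, x):
    # Mean squared reconstruction error per input; the abstract reports this
    # quantity as roughly three times larger for unknown objects than for the
    # learned target seen from an untrained viewpoint.
    with torch.no_grad():
        x_hat, _ = model(x)
        return torch.mean((x_hat - x) ** 2, dim=1)

if __name__ == "__main__":
    # Toy training loop on random "viewpoint images" standing in for the
    # hue/saturation-clustered camera images described in the abstract.
    torch.manual_seed(0)
    views = torch.rand(100, 256)              # 100 viewpoints, 256-dim inputs
    model = ViewpointAutoencoder(input_dim=256)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(200):
        x_hat, _ = model(views)
        loss = torch.mean((x_hat - views) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # A low error suggests the input comes from the learned target; a markedly
    # higher error suggests an unknown object.
    print(reconstruction_error(model, views[:5]))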
