Multimodal fusion for sensor data using stacked autoencoders

Abstract

As sensor networks collect different kinds of data to support better-informed decisions, we need multimodal fusion for basic analytical tasks such as event prediction, error reduction, and data compression. Inspired by unsupervised feature-discovery methods in deep learning, we propose an approach based on the stacked autoencoder, a multi-layer feed-forward neural network. After extracting key features from multimodal sensor data, the algorithm computes compact representations that can also be used directly in analytical tasks. Using simulated and real-world environmental data, we evaluate the performance of our approach for data fusion and regression. We demonstrate improvements over settings where only single-modality data are available and where multimodal data are fused without learning inter-modality correlations. For regression, we attain more than a 45% improvement in root-mean-square error (RMSE) over linear approaches, and up to a 10% improvement over shallow neural network methods.
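To make the idea concrete, the sketch below shows one plausible reading of the approach, not the authors' implementation: each sensor modality passes through its own encoder layer, a shared layer fuses the two into a compact code that captures inter-modality correlations, and mirrored decoders reconstruct both inputs so the network can be trained unsupervised on reconstruction error. The framework (PyTorch), layer sizes, and modality dimensions are illustrative assumptions; the learned code would then feed a downstream regressor or other analytical task.

import torch
import torch.nn as nn

class MultimodalStackedAutoencoder(nn.Module):
    # Hypothetical dimensions: modality A has 16 features, modality B has 8.
    def __init__(self, dim_a=16, dim_b=8, hidden=32, code=10):
        super().__init__()
        # Per-modality encoders (first stacked layer).
        self.enc_a = nn.Sequential(nn.Linear(dim_a, hidden), nn.ReLU())
        self.enc_b = nn.Sequential(nn.Linear(dim_b, hidden), nn.ReLU())
        # Shared layer fusing both modalities into a compact code (second layer).
        self.enc_shared = nn.Sequential(nn.Linear(2 * hidden, code), nn.ReLU())
        # Decoders mirror the encoders to reconstruct each modality.
        self.dec_shared = nn.Sequential(nn.Linear(code, 2 * hidden), nn.ReLU())
        self.dec_a = nn.Linear(hidden, dim_a)
        self.dec_b = nn.Linear(hidden, dim_b)

    def encode(self, x_a, x_b):
        h = torch.cat([self.enc_a(x_a), self.enc_b(x_b)], dim=1)
        return self.enc_shared(h)

    def forward(self, x_a, x_b):
        code = self.encode(x_a, x_b)
        h_a, h_b = self.dec_shared(code).chunk(2, dim=1)
        return self.dec_a(h_a), self.dec_b(h_b), code

# Unsupervised training on reconstruction loss over both modalities; the
# resulting code is the fused representation used for downstream regression.
model = MultimodalStackedAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_a, x_b = torch.randn(64, 16), torch.randn(64, 8)   # stand-in sensor batches
for _ in range(100):
    rec_a, rec_b, _ = model(x_a, x_b)
    loss = nn.functional.mse_loss(rec_a, x_a) + nn.functional.mse_loss(rec_b, x_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

Under these assumptions, fusion at the shared layer is what distinguishes the model from simply concatenating raw modalities, since the joint code is forced to reconstruct both inputs and so must capture cross-modal structure.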
