European conference on computer vision

Extending Long Short-Term Memory for Multi-View Structured Learning


Abstract

Long Short-Term Memory (LSTM) networks have been successfully applied to a number of sequence learning problems but they lack the design flexibility to model multiple view interactions, limiting their ability to exploit multi-view relationships. In this paper, we propose a Multi-View LSTM (MV-LSTM), which explicitly models the view-specific and cross-view interactions over time or structured outputs. We evaluate the MV-LSTM model on four publicly available datasets spanning two very different structured learning problems: multimodal behaviour recognition and image captioning. The experimental results show competitive performance on all four datasets when compared with state-of-the-art models.
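The core idea — per-view memories with both view-specific and cross-view interactions at each step — can be illustrated with a minimal sketch. This is an assumed simplification for illustration only, not the paper's exact MV-LSTM formulation: here each view's gates read that view's input together with all views' previous hidden states (its own, plus the cross-view ones).

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


class MultiViewLSTMCell:
    """Minimal two-or-more-view LSTM cell sketch (illustrative only).

    Each view keeps its own cell memory. A view's gate pre-activations
    combine its own input (view-specific term) with every view's previous
    hidden state (cross-view term). Hypothetical simplification, not the
    authors' exact equations.
    """

    def __init__(self, input_sizes, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        self.n_views = len(input_sizes)
        self.hidden_size = hidden_size
        # Combined hidden context: this view's state plus all others'.
        total_h = hidden_size * self.n_views
        self.params = []
        for d in input_sizes:
            # One weight matrix and bias per gate:
            # input (i), forget (f), output (o), candidate (g).
            W = {g: rng.normal(0, 0.1, (hidden_size, d + total_h))
                 for g in ("i", "f", "o", "g")}
            b = {g: np.zeros(hidden_size) for g in ("i", "f", "o", "g")}
            self.params.append((W, b))

    def step(self, xs, hs, cs):
        """One time step. xs, hs, cs are lists with one entry per view."""
        new_hs, new_cs = [], []
        for v in range(self.n_views):
            W, b = self.params[v]
            # View-specific input concatenated with all views' previous
            # hidden states (cross-view interaction enters here).
            z = np.concatenate([xs[v]] + hs)
            i = sigmoid(W["i"] @ z + b["i"])
            f = sigmoid(W["f"] @ z + b["f"])
            o = sigmoid(W["o"] @ z + b["o"])
            g = np.tanh(W["g"] @ z + b["g"])
            c = f * cs[v] + i * g          # per-view memory update
            new_cs.append(c)
            new_hs.append(o * np.tanh(c))  # per-view hidden state
        return new_hs, new_cs
```

For example, a two-view setup (say, a 13-dimensional audio view and a 20-dimensional visual view) would call `step` once per time step, passing the per-view inputs and carrying the per-view hidden and cell states forward.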

