
Shuffle and Learn: Unsupervised Learning Using Temporal Order Verification


Abstract

In this paper, we present an approach for learning a visual representation from the raw spatiotemporal signals in videos. Our representation is learned without supervision from semantic labels. We formulate our method as an unsupervised sequential verification task, i.e., we determine whether a sequence of frames from a video is in the correct temporal order. With this simple task and no semantic labels, we learn a powerful visual representation using a Convolutional Neural Network (CNN). The representation contains complementary information to that learned from supervised image datasets like ImageNet. Qualitative results show that our method captures information that is temporally varying, such as human pose. When used as pre-training for action recognition, our method gives significant gains over learning without external data on benchmark datasets like UCF101 and HMDB51. To demonstrate its sensitivity to human pose, we show results for pose estimation on the FLIC and MPII datasets that are competitive with, or better than, approaches using significantly more supervision. Our method can be combined with supervised representations to provide an additional boost in accuracy.
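To make the verification task concrete, here is a minimal sketch (an assumed PyTorch implementation, not the authors' released code) of a triplet order-verification network: three frames are encoded by a shared-weight CNN, their features are concatenated, and a binary head predicts whether the frames appear in the correct temporal order. The small encoder below is a stand-in for the AlexNet-style network described in the paper.

```python
import torch
import torch.nn as nn

class OrderVerificationNet(nn.Module):
    """Triplet order verification: three frames pass through one shared
    encoder, and a binary classifier decides in-order vs. shuffled."""

    def __init__(self, feat_dim=128):
        super().__init__()
        # Shared frame encoder (small stand-in for the paper's AlexNet-style network).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Binary classifier over the concatenated triplet features.
        self.classifier = nn.Linear(3 * feat_dim, 2)

    def forward(self, f1, f2, f3):
        # Each frame is encoded by the same network (shared weights).
        z = torch.cat([self.encoder(f1), self.encoder(f2), self.encoder(f3)], dim=1)
        return self.classifier(z)  # logits: [shuffled, in-order]

# Hypothetical usage: frame triplets sampled from videos, label 1 if in temporal order.
model = OrderVerificationNet()
frames = [torch.randn(4, 3, 112, 112) for _ in range(3)]
labels = torch.tensor([1, 0, 1, 0])
loss = nn.CrossEntropyLoss()(model(*frames), labels)
```

After this pretext training, the encoder weights can be reused to initialize a supervised model (e.g., for action recognition), which is how the learned representation is evaluated in the paper.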
