Home > Foreign Journals > Neurocomputing > Understanding human activities in videos: A joint action and interaction learning approach

Understanding human activities in videos: A joint action and interaction learning approach



Abstract

In video surveillance with multiple people, human interactions and their action categories exhibit strong correlations, and identifying the interaction configuration is of significant importance to the success of the action recognition task. Interactions are typically estimated using heuristics or treated as latent variables. However, the former usually introduces incorrect interaction configurations, while the latter requires solving challenging optimization problems. Here we address these problems systematically by proposing a novel structured learning framework that enables the joint prediction of actions and interactions. To this end, both features learned via deep nets and human interaction context are leveraged to encode the correlations among actions and pairwise interactions in a structured model, and all model parameters are trained via a large-margin framework. To solve the associated inference problem, we present two optimization algorithms, one based on alternating search and the other on belief propagation. Experiments on both synthetic and real datasets demonstrate the strength of the proposed approach. (C) 2018 Elsevier B.V. All rights reserved.
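The alternating-search inference the abstract mentions can be illustrated with a minimal sketch. This is not the authors' implementation: the score tables (`unary` for per-person action scores, `pair` for action–interaction compatibilities) are hypothetical stand-ins for the learned deep features and interaction context, and the coordinate-wise updates are a generic greedy alternation between fixing actions to pick interaction labels and fixing interactions to re-pick actions.

```python
# Illustrative sketch (assumed interfaces, not the paper's code): alternating
# search for jointly labeling per-person actions and pairwise interactions.
#   unary[i][a]            : score of person i performing action a
#   pair[(i, j)][(a, b)][r]: compatibility of actions (a, b) with interaction r

def alternating_search(unary, pair, actions, relations, iters=10):
    n = len(unary)
    # Initialize each person's action independently from unary scores.
    act = [max(actions, key=lambda a: unary[i][a]) for i in range(n)]
    rel = {}
    for _ in range(iters):
        # Step 1: with actions fixed, pick the best interaction per pair.
        for (i, j), table in pair.items():
            rel[(i, j)] = max(relations,
                              key=lambda r: table[(act[i], act[j])][r])
        # Step 2: with interactions fixed, greedily update each action.
        changed = False
        for i in range(n):
            def score(a):
                s = unary[i][a]
                for (p, q), r in rel.items():
                    if p == i:
                        s += pair[(p, q)][(a, act[q])][r]
                    elif q == i:
                        s += pair[(p, q)][(act[p], a)][r]
                return s
            best = max(actions, key=score)
            if best != act[i]:
                act[i] = best
                changed = True
        if not changed:  # converged: actions stable under current interactions
            break
    return act, rel
```

On a toy instance, a person whose unary score slightly favors one action can be flipped by a strong pairwise interaction term, which is exactly the coupling between actions and interactions the framework exploits; the belief-propagation variant would replace the greedy updates with message passing over the same pairwise model.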


