The Visual Computer

Weakly supervised deep network for spatiotemporal localization and detection of human actions in wild conditions



Abstract

Human action localization in a long, untrimmed video requires determining both where and when an action takes place within a given video segment. The main hurdle is the spatiotemporal randomness with which actions occur: the location (the particular set of frames containing action instances) and the duration of any given action in real-life video sequences are generally not fixed. In addition, uncontrolled conditions such as occlusions, viewpoint changes, and motion at the boundaries of action sequences demand a fast deep network that can easily be trained on unlabeled samples of complex video sequences. Motivated by these facts, we propose a weakly supervised deep network model for human action localization. The model is trained on unlabeled action samples from the UCF50 action benchmark. The five-channel data obtained by concatenating RGB frames (three channels) with optical flow vectors (two channels) are fed to the proposed convolutional neural network, and an LSTM network is used to yield the region in which the action occurs. The performance of the model is evaluated on the UCF-Sports dataset. The observations and comparative results show that our model can localize actions from annotation-free data samples captured in uncontrolled conditions.
