International Conference on Electrical and Electronics Engineering

Anatomical Planes-Based Representation for Recognizing Two-Person Interactions from Partially Observed Video Sequences: A Feasibility Study


Abstract

This paper presents a new approach for recognizing two-person interactions from partially observed video sequences. The proposed approach employs the 3D joint positions captured by a Microsoft Kinect sensor to construct a view-invariant, anatomical planes-based descriptor, called the two-person motion-pose geometric descriptor (TP-MPGD), that quantifies the activities performed by two interacting persons at each video frame. Using the TP-MPGDs extracted from the frames of the input videos, we construct a two-phase classification framework to recognize the class of the interaction performed by the two persons. The performance of the proposed approach has been evaluated on a publicly available interaction dataset that comprises 3D joint position data recorded with the Kinect sensor for 21 pairs of subjects performing eight interactions. Moreover, we have developed five evaluation scenarios: one based on fully observed video sequences and four based on partially observed video sequences. The classification accuracies obtained for each of the five evaluation scenarios demonstrate the feasibility of the proposed approach for recognizing two-person interactions from both fully and partially observed video sequences.
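The abstract does not define the TP-MPGD in detail, so the sketch below only illustrates the general idea of an anatomical planes-based, view-invariant per-frame descriptor for two interacting skeletons. The joint names (hip_center, spine, shoulder_left, shoulder_right), the plane construction, and the helper functions are assumptions made for illustration; they are not the authors' implementation or notation.

```python
import numpy as np

def unit(v):
    """Normalize a vector, guarding against zero length."""
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def anatomical_planes(joints):
    """Estimate three orthogonal anatomical plane normals (roughly the
    coronal, sagittal, and transverse directions) from a subject's torso
    joints. `joints` maps hypothetical joint names to 3D positions.
    Returns (plane origin, 3x3 array of unit normals)."""
    origin = joints["hip_center"]
    up = unit(joints["spine"] - origin)                      # longitudinal axis
    lateral = unit(joints["shoulder_left"] - joints["shoulder_right"])
    forward = unit(np.cross(lateral, up))                    # coronal-plane normal
    lateral = unit(np.cross(up, forward))                    # re-orthogonalize
    return origin, np.stack([forward, lateral, up])

def frame_descriptor(joints_a, joints_b):
    """Describe one frame of a two-person interaction by the signed
    distances of each person's joints to the other person's anatomical
    planes. Measuring distances in a body-centered frame makes the
    features invariant to camera rotation and translation, which is one
    way to obtain view invariance."""
    def one_way(ref, other):
        origin, normals = anatomical_planes(ref)
        feats = []
        for p in other.values():
            feats.extend(normals @ (p - origin))   # 3 signed distances per joint
        return feats
    return np.array(one_way(joints_a, joints_b) + one_way(joints_b, joints_a))
```

In the paper, per-frame descriptors of this kind are fed to a two-phase classification framework over fully or partially observed sequences; that stage, and the exact plane and joint definitions of the TP-MPGD, are not reproduced in this sketch.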
