
Controlling Video Stimuli in Sign Language and Gesture Research: The OpenPoseR Package for Analyzing OpenPose Motion-Tracking Data in R



Abstract

Researchers in the fields of sign language and gesture studies frequently present their participants with video stimuli showing actors performing linguistic signs or co-speech gestures. Up to now, such video stimuli have been mostly controlled only for some of the technical aspects of the video material (e.g., duration of clips, encoding, framerate, etc.), leaving open the possibility that systematic differences in video stimulus materials may be concealed in the actual motion properties of the actor’s movements. Computer vision methods such as OpenPose enable the fitting of body-pose models to the consecutive frames of a video clip and thereby make it possible to recover the movements performed by the actor in a particular video clip without the use of a point-based or markerless motion-tracking system during recording. The OpenPoseR package provides a straightforward and reproducible way of working with these body-pose model data extracted from video clips using OpenPose, allowing researchers in the fields of sign language and gesture studies to quantify the amount of motion (velocity and acceleration) pertaining only to the movements performed by the actor in a video clip. These quantitative measures can be used for controlling differences in the movements of an actor in stimulus video clips or, for example, between different conditions of an experiment. In addition, the package also provides a set of functions for generating plots for data visualization, as well as an easy-to-use way of automatically extracting metadata (e.g., duration, framerate, etc.) from large sets of video files.
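The core computation described above — recovering the amount of motion from body-pose model data — can be illustrated with a short sketch. This is not the OpenPoseR package's actual API (OpenPoseR is an R package); it is a minimal Python illustration, using synthetic keypoint coordinates, of how velocity and acceleration can be derived from frame-by-frame pose estimates such as those produced by OpenPose:

```python
import numpy as np

def motion_measures(keypoints, fps=30.0):
    """Frame-to-frame velocity and acceleration of tracked body keypoints.

    keypoints: array of shape (n_frames, n_points, 2) holding x/y pixel
    coordinates of each tracked body keypoint in each video frame.
    Returns (velocity, acceleration), each averaged over all keypoints.
    """
    # Euclidean displacement of every keypoint between consecutive frames
    disp = np.linalg.norm(np.diff(keypoints, axis=0), axis=2)
    # Mean displacement per keypoint, scaled to pixels per second
    velocity = disp.mean(axis=1) * fps
    # Change in velocity between consecutive frame pairs, per second
    acceleration = np.diff(velocity) * fps
    return velocity, acceleration

# Synthetic example: one keypoint whose x-coordinate moves 1, 1, then 2
# pixels across four frames (hypothetical data, for illustration only)
kp = np.zeros((4, 1, 2))
kp[:, 0, 0] = [0.0, 1.0, 2.0, 4.0]
vel, acc = motion_measures(kp, fps=30.0)
```

Aggregate statistics over such per-frame velocity and acceleration series (e.g., their means or peaks per clip) are the kind of quantitative measure that can then be compared across stimulus clips or experimental conditions.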
