Venue: International Conference on Human-Computer Interaction; International Conference on Human Interface and the Management of Information

Investigation of Sign Language Motion Classification by Feature Extraction Using Keypoints Position of OpenPose


Abstract

So far, on the premise of using a monocular optical camera, sign language motion classification has been performed using a wristband and colored gloves with a different dye on each finger. In that approach, the movement of sign language is detected by extracting the gloves' color regions from the image. However, the approach burdens the signer with wearing colored gloves, and its color-extraction accuracy varies with changes in ambient light, making stable classification accuracy difficult to ensure. We therefore used OpenPose, which can detect the movements of both hands without colored gloves, to classify sign language motions. Feature elements were extracted from the keypoint positions obtained from OpenPose. We then proposed three feature-element methods for classifying each motion and compared their classification accuracy. In method 1, the feature elements are taken directly from the keypoint positions of the neck, shoulders, elbows, and wrists. In method 2, they are the relative distances of the target keypoints from the neck. In method 3, the feature vector has 30 elements: the 24 elements obtained in method 1 plus the 6 elements obtained in method 2. In the classification experiment, cross-validation was performed on features extracted from the sign language motion videos of five signers, and the accuracy of each method was measured. Method 1, ordered from the signer with the highest average classification accuracy: B (68.05%), A (62.56%), C (62.19%), D (61.49%), E (56.75%), average 62.21%. Method 2: B (75.31%), A (75.09%), D (73.28%), E (69.97%), C (69.81%), average 72.69%. Method 3: B (70.72%), A (69.65%), C (66.13%), D (64.27%), E (62.72%), average 66.30%.
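The three feature schemes described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the function names, the exact upper-body keypoint set, and the use of Euclidean distance for the "relative distance" of method 2 are assumptions, and the abstract's stated element counts (24/6/30) imply additional elements per frame that the abstract does not enumerate, so this sketch yields 14/6/20 values instead.

```python
import numpy as np

# Assumed upper-body subset of OpenPose keypoints, each a 2-D (x, y) array.
# The six non-neck keypoints match the "6 elements" of method 2.
NECK = "neck"
ARM_KEYPOINTS = ["r_shoulder", "r_elbow", "r_wrist",
                 "l_shoulder", "l_elbow", "l_wrist"]


def method1_features(kp):
    """Method 1 (sketch): raw keypoint positions of neck, shoulders,
    elbows, and wrists, concatenated into one vector (14 values here;
    the paper reports 24 per motion, so it likely includes elements
    the abstract does not enumerate)."""
    return np.concatenate([kp[n] for n in [NECK] + ARM_KEYPOINTS])


def method2_features(kp):
    """Method 2 (sketch): distance of each target keypoint from the
    neck, one value per arm keypoint (6 values)."""
    neck = kp[NECK]
    return np.array([np.linalg.norm(kp[t] - neck) for t in ARM_KEYPOINTS])


def method3_features(kp):
    """Method 3: concatenation of the method 1 and method 2 vectors."""
    return np.concatenate([method1_features(kp), method2_features(kp)])
```

In this sketch, method 2's distances are invariant to translation of the whole body in the frame, which may explain why it achieved the highest average accuracy in the experiment, while method 1's raw positions are sensitive to where the signer stands.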
