
A new tool for gestural action recognition to support decisions in emotional framework

Abstract

Introduction and objective: the purpose of this work is to design and implement an innovative tool that recognizes 16 different human gestural actions and uses them to predict 7 different emotional states. The proposed solution is based on the RGB and depth information of 2D/3D images acquired from a commercial RGB-D sensor, the Kinect. Materials: the dataset is a collection of human actions performed by different actors. Each actor performs each action three times per video; 20 actors perform 16 different actions, both seated and upright, for a total of 40 videos per actor. Methods: human gestural actions are recognized by extracting features, such as angles and distances related to the joints of the human skeleton, from the RGB and depth images. Emotions are selected according to the state of the art. Experimental results: despite the presence of very similar actions, the overall accuracy reached is approximately 80%. Conclusions and future works: the proposed approach appears to be background- and speed-independent, and it will be integrated in the future into a multimodal emotion recognition system based on facial expressions and speech analysis as well.
