AAAI Conference on Artificial Intelligence

STEP: Spatial Temporal Graph Convolutional Networks for Emotion Perception from Gaits



Abstract

We present a novel classifier network called STEP, to classify perceived human emotion from gaits, based on a Spatial Temporal Graph Convolutional Network (ST-GCN) architecture. Given an RGB video of an individual walking, our formulation implicitly exploits the gait features to classify the perceived emotion of the human into one of four emotions: happy, sad, angry, or neutral. We train STEP on annotated real-world gait videos, augmented with annotated synthetic gaits generated using a novel generative network called STEP-Gen, built on an ST-GCN based Conditional Variational Autoencoder (CVAE). We incorporate a novel push-pull regularization loss in the CVAE formulation of STEP-Gen to generate realistic gaits and improve the classification accuracy of STEP. We also release a novel dataset (E-Gait), which consists of 4,227 human gaits annotated with perceived emotions, along with thousands of synthetic gaits. In practice, STEP learns the affective features and achieves a classification accuracy of 88% on E-Gait, which is 14-30% more accurate than prior methods.
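The abstract's core building block is the spatial-temporal graph convolution: each layer aggregates features over neighboring skeleton joints (spatial graph convolution) and then convolves along the frame axis (temporal convolution). The following is a minimal numpy sketch of one such layer; the joint count, skeleton edges, kernel size, and weight shapes are illustrative assumptions, not the paper's actual STEP configuration.

```python
import numpy as np

def st_gcn_layer(x, adj, w_spatial, w_temporal):
    """One ST-GCN-style layer (illustrative sketch, not STEP's exact layer).

    x:          (T, V, C)  features for T frames, V joints, C channels
    adj:        (V, V)     skeleton adjacency with self-loops
    w_spatial:  (C, C_out) spatial projection weights
    w_temporal: (K,)       1-D temporal kernel shared across joints/channels
    """
    # Symmetrically normalize the adjacency: D^{-1/2} A D^{-1/2}
    d = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    a_norm = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # Spatial graph convolution: aggregate neighbor joints, then project channels
    h = np.einsum("vu,tuc->tvc", a_norm, x) @ w_spatial  # (T, V, C_out)

    # Temporal convolution: same-length 1-D convolution along the frame axis
    T = h.shape[0]
    K = len(w_temporal)
    pad = K // 2
    h_pad = np.pad(h, ((pad, pad), (0, 0), (0, 0)))
    out = np.zeros_like(h)
    for k in range(K):
        out += w_temporal[k] * h_pad[k:k + T]
    return np.maximum(out, 0.0)  # ReLU nonlinearity

# Toy example: 8 frames, 5 joints, 3 input channels, 4 output channels
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 5, 3))
adj = np.eye(5)
for i, j in [(0, 1), (1, 2), (1, 3), (3, 4)]:  # a small toy skeleton
    adj[i, j] = adj[j, i] = 1.0
out = st_gcn_layer(x, adj,
                   rng.standard_normal((3, 4)),
                   np.array([0.25, 0.5, 0.25]))
print(out.shape)  # (8, 5, 4)
```

Stacking such layers and pooling over frames and joints before a softmax head yields a graph-based gait classifier of the kind the abstract describes; the paper's actual architecture, loss, and hyperparameters differ.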
