Image and Vision Computing

MLRMV: Multi-layer representation for multi-view action recognition



Abstract

Daily action recognition has gained much interest in computer vision. However, viewpoint changes lead to sizable intra-class variation within the same action. To address this problem, we propose a novel multi-view daily action recognition approach based on a multi-layer representation. Using motion atoms and motion phrases, we construct mid-level feature representations of multi-view daily actions. A multi-view unsupervised discriminative clustering method is proposed for constructing motion atoms, and the classification accuracy of motion atoms is improved by jointly learning the atom dictionaries and the classifier. Moreover, we present discontinuous temporal-scale motion phrases and a grading mechanism for motion phrases to strengthen their representative ability and improve the final recognition accuracy. Finally, experimental results on the WVU, NTU RGB+D, and N-UCLA datasets show that the proposed method achieves state-of-the-art performance compared with classic methods such as IDT, MoFAP, and JLMF. (c) 2021 Elsevier B.V. All rights reserved.
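The abstract only outlines the pipeline, so the sketch below is a rough, hedged illustration of the general atom-then-phrase idea rather than the paper's actual formulation: k-means and a linear SVM stand in for the jointly learned discriminative atom dictionary and classifier, and concatenated per-segment histograms stand in for discontinuous temporal-scale motion phrases. All function names and parameters here are illustrative assumptions.

```python
# Illustrative sketch only: k-means + linear SVM approximate the paper's
# jointly learned discriminative motion-atom dictionary and classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def learn_motion_atoms(descriptors, n_atoms=32, seed=0):
    """Cluster pooled local motion descriptors from all views into 'motion atoms'.
    descriptors: (N, D) array of local features (e.g. dense-trajectory descriptors)."""
    return KMeans(n_clusters=n_atoms, random_state=seed, n_init=10).fit(descriptors)

def atom_histogram(atoms, video_descriptors):
    """Bag-of-atoms histogram for one video (or one temporal segment)."""
    labels = atoms.predict(video_descriptors)
    hist = np.bincount(labels, minlength=atoms.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def motion_phrase(atoms, video_descriptors, n_segments=3):
    """Crude 'motion phrase': concatenate atom histograms of temporal segments,
    keeping coarse ordering information across segments."""
    segments = np.array_split(video_descriptors, n_segments)
    return np.concatenate([atom_histogram(atoms, s) for s in segments])

# Toy usage with random data standing in for extracted multi-view features.
rng = np.random.default_rng(0)
train_videos = [rng.normal(size=(200, 96)) for _ in range(20)]  # per-video local descriptors
train_labels = rng.integers(0, 4, size=20)                      # 4 hypothetical action classes

atoms = learn_motion_atoms(np.vstack(train_videos))
X_train = np.stack([motion_phrase(atoms, v) for v in train_videos])
clf = LinearSVC(C=1.0).fit(X_train, train_labels)

test_video = rng.normal(size=(180, 96))
print("predicted action id:", clf.predict(motion_phrase(atoms, test_video)[None])[0])
```

In the paper, the atom dictionary and the classifier are learned jointly and the phrases span discontinuous temporal scales with a grading mechanism; this sketch only conveys the layered atom-to-phrase structure.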
