To retrieve human motion databases accurately and efficiently, we represent 3D human motion capture data in a text-like form and propose a content-based motion retrieval method built on the vector space model. First, key frames are extracted separately for the upper and lower body, and affinity propagation clustering is applied to the resulting key-pose set to obtain the most representative human poses, which we call the motion vocabulary. Each frame of a motion clip is then replaced by its closest pose in the motion vocabulary, turning the clip into a motion document, and similarities among motions are measured with a bigram vector space model. The whole pipeline runs without human intervention and automatically indexes pre-segmented motion clips. Experimental results show that, compared with existing methods, the proposed approach achieves higher retrieval precision and recall.