Published in: International Conference on Computer Vision Theory and Applications

Anatomical Landmark Tracking by One-shot Learned Priors for Augmented Active Appearance Models



Abstract

For animal bipedal locomotion analysis, an immense amount of recorded image data has to be evaluated by biological experts. During this time-consuming evaluation, individual anatomical landmarks have to be annotated in every image. In this paper we reduce this effort by automating the annotation with a minimal level of user interaction. Recent approaches based on Active Appearance Models are improved by priors derived from anatomical knowledge and by an online tracking method that requires only a single labeled frame. However, the limited search space of the online tracker can lead to template drift in the case of severe self-occlusions. In contrast, we propose a one-shot learned tracking-by-detection prior which overcomes the shortcomings of template drift without increasing the amount of training data. We evaluate our approach on a variety of real-world X-ray locomotion datasets and show that our method outperforms recent state-of-the-art concepts for the task at hand.
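To illustrate the tracking-by-detection idea the abstract describes — learning a landmark detector from a single labeled frame and then searching the whole image rather than a limited local window — here is a minimal, hypothetical sketch using normalized cross-correlation template matching. This is not the authors' implementation; the function and variable names are invented for illustration.

```python
import numpy as np

def one_shot_track(template, frame):
    """Locate a landmark in `frame` by exhaustive normalized
    cross-correlation against a template cut from a single labeled
    reference frame (one-shot prior, whole-frame search)."""
    th, tw = template.shape
    fh, fw = frame.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best_score, best_pos = -np.inf, (0, 0)
    # Slide the template over every position in the frame; unlike a
    # local online tracker, no search-window restriction is applied,
    # so the detector can recover after severe self-occlusions.
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            patch = frame[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * t_norm
            score = (p * t).sum() / denom if denom > 0 else -1.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# Toy usage: "label" one landmark patch in a synthetic reference frame,
# then redetect it by whole-frame search.
rng = np.random.default_rng(0)
frame = rng.random((40, 40))
template = frame[25:30, 10:15].copy()  # single labeled 5x5 patch
pos, score = one_shot_track(template, frame)
```

In the paper's setting the one-shot prior is learned rather than a raw pixel template, but the structural point is the same: a detector trained from one annotated frame scores every candidate location, so the estimate cannot drift away with an occluded local window.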


