IEEE Transactions on Pattern Analysis and Machine Intelligence

Three-Dimensional Model-Based Object Recognition and Segmentation in Cluttered Scenes

Abstract

Viewpoint independent recognition of free-form objects and their segmentation in the presence of clutter and occlusions is a challenging task. We present a novel 3D model-based algorithm which performs this task automatically and efficiently. A 3D model of an object is automatically constructed offline from its multiple unordered range images (views). These views are converted into multidimensional table representations (which we refer to as tensors). Correspondences are automatically established between these views by simultaneously matching the tensors of a view with those of the remaining views using a hash table-based voting scheme. This results in a graph of relative transformations used to register the views before they are integrated into a seamless 3D model. These models and their tensor representations constitute the model library. During online recognition, a tensor from the scene is simultaneously matched with those in the library by casting votes. Similarity measures are calculated for the model tensors which receive the most votes. The model with the highest similarity is transformed to the scene and, if it aligns accurately with an object in the scene, that object is declared as recognized and is segmented. This process is repeated until the scene is completely segmented. Experiments were performed on real and synthetic data comprised of 55 models and 610 scenes and an overall recognition rate of 95 percent was achieved. Comparison with the spin images revealed that our algorithm is superior in terms of recognition rate and efficiency.
