Journal: IEEE Transactions on Human-Machine Systems

An Ego-Vision System for Hand Grasp Analysis



Abstract

This paper presents an egocentric vision (ego-vision) system for hand grasp analysis in unstructured environments. Our goal is to automatically recognize hand grasp types and to discover the visual structures of hand grasps using a wearable camera. In the proposed system, free hand–object interactions are recorded from a first-person viewing perspective. State-of-the-art computer vision techniques are used to detect hands and extract hand-based features. A new feature representation that incorporates hand tracking information is also proposed. Then, grasp classifiers are trained to discriminate among different grasp types from a predefined grasp taxonomy. Based on the trained grasp classifiers, visual structures of hand grasps are learned using an iterative grasp clustering method. In experiments, grasp recognition performance in both laboratory and real-world scenarios is evaluated. The best classification accuracy our system achieves is 92% and 59%, respectively. System generality to different tasks and users is also verified by the experiments. Analysis in a real-world scenario shows that it is possible to automatically learn intuitive visual grasp structures that are consistent with expert-designed grasp taxonomies.
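As a rough illustration of the pipeline described in the abstract (hand-based features, a grasp-type classifier trained on a predefined taxonomy, and clustering over classifier outputs), the sketch below uses scikit-learn with randomly generated placeholder features and labels. The feature design, the SVC classifier, and the KMeans clustering step are assumptions for illustration only; they do not reproduce the authors' hand detection, tracking-based feature representation, or iterative grasp clustering method.

```python
# Minimal illustrative sketch of a grasp-recognition pipeline: placeholder
# hand-based feature vectors -> multi-class grasp classifier -> clustering of
# classifier outputs to group similar grasp types. All data here is synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder feature vectors, one row per egocentric video frame
# (standing in for appearance features of the detected hand region).
n_frames, n_dims, n_grasp_types = 600, 64, 6
X = rng.normal(size=(n_frames, n_dims))
y = rng.integers(0, n_grasp_types, size=n_frames)  # grasp-type labels from a taxonomy

# Stage 1: train a multi-class grasp classifier on labeled frames.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
print("grasp recognition accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Stage 2: cluster per-frame grasp-type score vectors to group visually
# similar grasp types (a rough stand-in for iterative grasp clustering).
proba = clf.predict_proba(X_te)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(proba)
print("frames per discovered grasp cluster:", np.bincount(clusters))
```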


