Intelligent Service Robotics

HANDS: a multimodal dataset for modeling toward human grasp intent inference in prosthetic hands


Abstract

Upper limb and hand functionality is critical to many activities of daily living, and the amputation of a hand can lead to significant functionality loss for individuals. From this perspective, advanced prosthetic hands of the future are anticipated to benefit from improved shared control between the robotic hand and its human user, but more importantly from an improved capability to infer human intent from multimodal sensor data, giving the robotic hand perception of the operational context. Such multimodal sensor data may include various environment sensors, including vision, as well as human physiology and behavior sensors, including electromyography (EMG) and inertial measurement units (IMUs). A fusion methodology for environmental state and human intent estimation can combine these sources of evidence to support prosthetic hand motion planning and control. In this paper, we present a dataset of this type, gathered in anticipation of cameras being built into prosthetic hands, where computer vision methods will need to assess this hand-view visual evidence in order to estimate human intent. Specifically, paired images from the human eye-view and the hand-view of various objects placed at different orientations were captured at the initial state of grasping trials, followed by paired video, EMG, and IMU data from the human's arm during a grasp, lift, put-down, and retract trial structure. For each trial, based on eye-view images of the scene showing the hand and the object on a table, multiple human annotators were asked to sort, in decreasing order of preference, five grasp types appropriate for the object in its given configuration relative to the hand. The potential utility of the paired eye-view and hand-view images was illustrated by training a convolutional neural network to process hand-view images in order to predict the eye-view grasp labels assigned by humans.
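As a rough illustration of the kind of classifier the abstract describes, the sketch below defines a five-way CNN that maps a hand-view image to one of the five grasp types (i.e., the top-ranked eye-view label from the human annotators). This is a minimal sketch, not the authors' implementation: the ResNet-18 backbone, the 224x224 input size, and all class and variable names are assumptions made for illustration only.

```python
# Minimal illustrative sketch (assumed architecture, not the paper's model):
# a CNN that maps a hand-view RGB image to one of five grasp types, the
# top-ranked label assigned by human annotators from the eye-view images.
import torch
import torch.nn as nn
from torchvision import models

NUM_GRASP_TYPES = 5  # the dataset ranks five grasp types per trial


class HandViewGraspClassifier(nn.Module):
    def __init__(self, num_classes: int = NUM_GRASP_TYPES):
        super().__init__()
        # ResNet-18 backbone chosen for illustration; pretrained weights
        # could be supplied via the `weights` argument in recent torchvision.
        self.backbone = models.resnet18(weights=None)
        # Replace the final layer with a 5-way grasp-type head.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, hand_view_images: torch.Tensor) -> torch.Tensor:
        # hand_view_images: (batch, 3, 224, 224) normalized RGB tensors
        return self.backbone(hand_view_images)


if __name__ == "__main__":
    model = HandViewGraspClassifier()
    criterion = nn.CrossEntropyLoss()
    dummy_images = torch.randn(4, 3, 224, 224)               # stand-in hand-view batch
    dummy_labels = torch.randint(0, NUM_GRASP_TYPES, (4,))   # top-ranked grasp labels
    logits = model(dummy_images)
    loss = criterion(logits, dummy_labels)
    print(logits.shape, loss.item())
```

In this setup, training on hand-view images with eye-view-derived labels is what lets the network stand in for the human's grasp preference when only the in-hand camera is available.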
