2011 IEEE/RSJ International Conference on Intelligent Robots and Systems

Learning tactile characterizations of object- and pose-specific grasps



Abstract

Our aim is to predict the stability of a grasp from the perceptions available to a robot before attempting to lift up and transport an object. The percepts we consider consist of the tactile imprints and the object-gripper configuration read before and until the robot's manipulator is fully closed around an object. Our robot is equipped with multiple tactile sensing arrays and it is able to track the pose of an object during the application of a grasp. We present a kernel-logistic-regression model of pose- and touch-conditional grasp success probability which we train on grasp data collected by letting the robot experience the effect on tactile and visual signals of grasps suggested by a teacher, and letting the robot verify which grasps can be used to rigidly control the object. We consider models defined on several subspaces of our input data - e.g., using tactile perceptions or pose information only. Our experiment demonstrates that joint tactile and pose-based perceptions carry valuable grasp-related information, as models trained on both hand poses and tactile parameters perform better than the models trained exclusively on one perceptual input.
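As an illustration of the model class named in the abstract, the sketch below shows a kernel logistic regression that maps pre-lift percepts (flattened tactile-array readings concatenated with an object-gripper pose) to a grasp-success probability. The RBF kernel, the scikit-learn-based fit, the feature layout, and all function and variable names are assumptions made for this sketch; the paper does not prescribe this particular implementation.

```python
# Minimal sketch, assuming an RBF kernel and a dual (representer-theorem)
# parameterization: f(x) = sum_i alpha_i k(x, x_i) + b, fitted by running a
# linear logistic regression on the columns of the training kernel matrix.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import rbf_kernel


def fit_kernel_logistic(X_train, y_train, gamma=1.0, C=1.0):
    """Fit kernel logistic regression on grasp percepts (hypothetical helper)."""
    K = rbf_kernel(X_train, X_train, gamma=gamma)
    clf = LogisticRegression(C=C, max_iter=1000)
    clf.fit(K, y_train)
    return clf, X_train, gamma


def predict_success_probability(model, X_query):
    """Probability that a grasp rigidly controls the object (label 1)."""
    clf, X_train, gamma = model
    K_q = rbf_kernel(X_query, X_train, gamma=gamma)
    return clf.predict_proba(K_q)[:, 1]


# Hypothetical data: each row concatenates tactile-array cell readings with a
# 6-D object-gripper pose; labels mark grasps verified as stable after lifting.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 70))          # e.g. 64 tactile cells + 6 pose values
y = (rng.random(200) > 0.5).astype(int)

model = fit_kernel_logistic(X[:150], y[:150], gamma=0.1)
print(predict_success_probability(model, X[150:])[:5])
```

The comparison reported in the abstract can be reproduced in this setup by restricting the input columns to the tactile block only, to the pose block only, or keeping both, and comparing the resulting classifiers on held-out grasps.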
