IEEE Robotics and Automation Letters

Modeling Grasp Motor Imagery Through Deep Conditional Generative Models



Abstract

Grasping is a complex process involving knowledge of the object, the surroundings, and of oneself. While humans are able to integrate and process all of the sensory information required for performing this task, equipping machines with this capability is an extremely challenging endeavor. In this paper, we investigate how deep learning techniques can allow us to translate high-level concepts such as motor imagery to the problem of robotic grasp synthesis. We explore a paradigm based on generative models for learning integrated object-action representations and demonstrate its capacity for capturing and generating multimodal multifinger grasp configurations on a simulated grasping dataset.
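The paradigm the abstract describes, a deep conditional generative model that generates grasp configurations given object information, can be illustrated with a minimal sketch. The sketch below, in NumPy, shows the encode / reparameterize / decode structure of a conditional variational autoencoder with randomly initialized weights standing in for trained ones; all dimensions, layer sizes, and function names are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: object features act as the condition,
# the output is a multifinger grasp configuration vector.
COND_DIM, GRASP_DIM, LATENT_DIM, HIDDEN = 16, 9, 4, 32

def linear(in_dim, out_dim):
    """Random-init weight/bias pair standing in for a trained layer."""
    return rng.normal(0.0, 0.1, (in_dim, out_dim)), np.zeros(out_dim)

# Encoder q(z | grasp, condition) -> Gaussian parameters.
W_enc, b_enc = linear(GRASP_DIM + COND_DIM, HIDDEN)
W_mu, b_mu = linear(HIDDEN, LATENT_DIM)
W_logvar, b_logvar = linear(HIDDEN, LATENT_DIM)

# Decoder p(grasp | z, condition).
W_dec, b_dec = linear(LATENT_DIM + COND_DIM, HIDDEN)
W_out, b_out = linear(HIDDEN, GRASP_DIM)

def encode(grasp, cond):
    h = np.tanh(np.concatenate([grasp, cond]) @ W_enc + b_enc)
    return h @ W_mu + b_mu, h @ W_logvar + b_logvar

def reparameterize(mu, logvar):
    # z = mu + sigma * eps; in a real framework this keeps the
    # sample differentiable with respect to the encoder outputs.
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)

def decode(z, cond):
    h = np.tanh(np.concatenate([z, cond]) @ W_dec + b_dec)
    return h @ W_out + b_out

# At synthesis time the encoder is bypassed: sample z from the
# standard-normal prior and condition the decoder on object features.
object_features = rng.standard_normal(COND_DIM)
z = rng.standard_normal(LATENT_DIM)
grasp = decode(z, object_features)
print(grasp.shape)
```

Sampling several `z` values for the same `object_features` yields a distribution of candidate grasps for one object, which is the multimodal behavior the abstract refers to.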
