Journal of Robotics and Mechatronics

Abstraction Multimodal Low-Dimensional Representation from High-Dimensional Posture Information and Visual Images

Abstract

Imitative learning is an effective way for a robot to acquire novel movements from a person demonstrating many kinds of movement. Many problems must be solved, however, before a robot can achieve imitative learning. One of them is how to convert visual information about the demonstrator's motion into kinematic posture information for the learner. This is referred to as the correspondence problem, and it is the focus of this study. To solve it, we focus on forming a low-dimensional representation that integrates sensory information from two different modalities. We propose a computational method that constructs this low-dimensional representation from posture information and visual images using Kernel Canonical Correlation Analysis (KCCA). With this method, a robot becomes able to estimate posture information from visual images in a bottom-up way. Through several experiments, we show how effective the proposed method is at estimating kinematic information.
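The abstract names KCCA as the tool for building the shared low-dimensional space but gives no implementation details. The Python sketch below illustrates the general idea under assumptions of our own, not the authors' implementation: Gaussian kernels, a standard regularized dual formulation of KCCA, synthetic toy data in place of real images and joint angles, and nearest-neighbour retrieval as a stand-in for the paper's bottom-up posture estimation.

```python
# Minimal sketch (not the authors' implementation): regularized kernel CCA over
# two views, with Gaussian kernels and nearest-neighbour retrieval of posture.
# All kernel, regularization, and data choices below are assumptions.
import numpy as np


def rbf_kernel(A, B, gamma=1.0):
    """Gaussian kernel matrix between the rows of A and the rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)


def kcca(Kx, Ky, n_components=2, reg=1e-3):
    """Regularized KCCA in dual form.

    Solves (Kx + reg*I)^-1 Ky (Ky + reg*I)^-1 Kx a = rho^2 a and recovers the
    paired coefficients b for the second view. Kernel centering is omitted for
    brevity; a full implementation would center both Gram matrices.
    """
    n = Kx.shape[0]
    I = np.eye(n)
    M = np.linalg.solve(Kx + reg * I, Ky) @ np.linalg.solve(Ky + reg * I, Kx)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)[:n_components]
    alpha = vecs[:, order].real                        # dual weights, visual view
    beta = np.linalg.solve(Ky + reg * I, Kx @ alpha)   # dual weights, posture view
    return alpha, beta


# Toy data: a shared 3-D latent "movement" observed as 50-D visual features (X)
# and 12-D joint angles (Y). Dimensions are illustrative only.
rng = np.random.default_rng(0)
W_x = rng.normal(size=(3, 50))
W_y = rng.normal(size=(3, 12))
latent = rng.normal(size=(200, 3))
X = np.tanh(latent @ W_x)
Y = latent @ W_y

Kx = rbf_kernel(X, X, gamma=0.05)
Ky = rbf_kernel(Y, Y, gamma=0.5)
alpha, beta = kcca(Kx, Ky, n_components=3)

# Bottom-up posture estimation for a new image: project it into the shared
# space with the visual-view weights, then return the posture of the nearest
# training sample in that space.
x_new = np.tanh(rng.normal(size=(1, 3)) @ W_x)
z_new = rbf_kernel(x_new, X, gamma=0.05) @ alpha
Z_train = Kx @ alpha
nearest = int(np.argmin(np.linalg.norm(Z_train - z_new, axis=1)))
estimated_posture = Y[nearest]
print("estimated joint angles:", np.round(estimated_posture, 2))
```

In the paper's setting, X would presumably hold visual features and Y joint-angle vectors recorded during demonstrations; the shared projections then let the robot map a new image to a plausible posture, which is the correspondence the abstract describes.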
