
Multimodal Representation of Hand Grasping based on Deep Belief Nets

Abstract

In the human brain, different sensory information is thought to be processed in different areas and integrated in the parietal area. Fig. 1 shows a model of the neural mechanism for grasping proposed by Oztop et al. (Oztop et al., 2006). As shown in this figure, information about the hand and the object is processed separately, and the features important for grasping are extracted in a hierarchical network. In this paper, we aim to construct a hierarchical model for grasping similar to this brain model. The hierarchical model is also thought to be plausible as a developmental model, because an infant acquires its grasping skills gradually during development (Case-Smith and Pehoski, 1992). For this purpose, we adopt the deep belief network (DBN) proposed by Hinton to represent the multimodal information involved in grasping: the information of each modality is self-organized to extract the statistical structure of the given data, and the different modalities are easily integrated in the hierarchical architecture (Hinton, 2007).
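The integration scheme described in the abstract can be illustrated with a small sketch: each modality is first self-organized by its own restricted Boltzmann machine (RBM), and the resulting hidden representations are then combined in a joint top-level RBM. The following Python/NumPy code is a minimal, hypothetical sketch of this idea; the layer sizes, the synthetic "hand" and "object" data, and the single-step contrastive divergence (CD-1) training are illustrative assumptions, not the authors' exact model.

```python
# Minimal sketch (not the authors' exact model): two modality-specific RBMs
# whose hidden activations are concatenated and fed into a joint top-level RBM,
# mirroring self-organization of each modality before their integration.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary restricted Boltzmann machine trained with CD-1."""
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden probabilities and a sampled hidden state
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one Gibbs step back to the visible layer and up again
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # CD-1 parameter updates
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

    def train(self, data, epochs=20, batch=16):
        for _ in range(epochs):
            idx = rng.permutation(len(data))
            for s in range(0, len(data), batch):
                self.cd1_step(data[idx[s:s + batch]])

# Synthetic binary "hand posture" and "object shape" data (illustrative only).
hand = (rng.random((200, 30)) < 0.3).astype(float)
obj = (rng.random((200, 20)) < 0.5).astype(float)

# Self-organize each modality separately.
rbm_hand = RBM(30, 16)
rbm_obj = RBM(20, 16)
rbm_hand.train(hand)
rbm_obj.train(obj)

# Integrate the two modalities in a joint top-level RBM.
joint_input = np.hstack([rbm_hand.hidden_probs(hand), rbm_obj.hidden_probs(obj)])
rbm_joint = RBM(32, 24)
rbm_joint.train(joint_input)

print("joint representation shape:", rbm_joint.hidden_probs(joint_input).shape)
```

In the actual model the visible units would encode hand posture and object features rather than random bits; the point of the sketch is the two-stage structure of modality-specific self-organization followed by joint integration in the higher layer.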
