IEEE Transactions on Automation Science and Engineering

Cross-Modal Material Perception for Novel Objects: A Deep Adversarial Learning Method



Abstract

To perform fine manipulation tasks in the real world more actively, intelligent robots should be able to understand and communicate the physical attributes of an object's material during interaction. Touch and vision are two important sensing modalities in robotic perception systems. In this article, we propose a cross-modal material perception framework for recognizing novel objects. Concretely, it first adopts an object-agnostic method to associate information from the tactile and visual modalities. It then recognizes a novel object by using its tactile signal to retrieve perceptually similar surface material images through the learned cross-modal correlation. This problem is challenging because data from the visual and tactile modalities are highly heterogeneous and only weakly paired. Moreover, the framework should not only capture cross-modal pairwise relevance but also remain discriminative and generalize to unseen objects. To this end, we propose a weakly paired cross-modal adversarial learning (WCMAL) model for visual-tactile cross-modal retrieval, which combines the advantages of deep learning and adversarial learning. In particular, the model fully accounts for the weak pairing between the two modalities. Finally, we conduct verification experiments on a publicly available data set; the results demonstrate the effectiveness of the proposed method.

Note to Practitioners: Because cross-modal perception can improve the active operation of automation systems, it is invaluable for industrial intelligence, particularly in applications where a single sensing modality cannot be used or is unsuitable. In this article, we provide a framework for cross-modal material perception in object recognition based on the idea of cross-modal retrieval. Concretely, we use the tactile data of an unknown object to retrieve perceptually similar surface images, which are then used to evaluate its material properties. Unlike previous works that use tactile information as a complement or alternative to visual information for recognizing specific objects, our framework can estimate and infer the material properties of both seen and unseen objects, which enhances the intelligence of manipulation systems and improves the quality of interaction. In future work, more modality information will be incorporated to further enhance cross-modal material perception.
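The abstract gives no implementation details, so the following is a minimal PyTorch sketch of the kind of weakly paired visual-tactile adversarial retrieval model it describes: two modality-specific encoders map tactile signals and surface images into a shared embedding space, a modality discriminator is trained adversarially so the two embedding distributions become indistinguishable, and a material-level ranking loss handles the weak (category-level rather than instance-level) pairing. All module names, dimensions, losses, and weights are illustrative assumptions, not the authors' WCMAL architecture.

```python
# Sketch of a weakly paired visual-tactile adversarial retrieval model.
# Everything here (layer sizes, loss weights, the one-sided adversarial
# objective) is an assumption for illustration, not the paper's design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps one modality (pre-extracted visual or tactile features)
    into a shared embedding space."""
    def __init__(self, in_dim, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

class ModalityDiscriminator(nn.Module):
    """Tries to tell visual embeddings from tactile ones; the encoders
    are trained to fool it, aligning the two distributions."""
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, z):
        return self.net(z)  # logit: 1 = visual, 0 = tactile

def retrieval_loss(z_tac, z_vis, tac_labels, vis_labels, margin=0.2):
    """Weak (material-level) supervision: a tactile query should rank
    same-material images above other-material images. Assumes each batch
    contains at least one visual sample per tactile material class."""
    sim = z_tac @ z_vis.t()                              # cosine similarities
    same = tac_labels.unsqueeze(1) == vis_labels.unsqueeze(0)
    pos = sim.masked_fill(~same, -1.0).max(dim=1).values  # best positive
    neg = sim.masked_fill(same, -1.0).max(dim=1).values   # hardest negative
    return F.relu(margin + neg - pos).mean()

def train_step(enc_t, enc_v, disc, opt_enc, opt_disc,
               tac, vis, tac_labels, vis_labels):
    """One adversarial step: the discriminator separates modalities; the
    encoders fool it while preserving material-level relevance."""
    z_t, z_v = enc_t(tac), enc_v(vis)

    # Discriminator update (encoders detached).
    d_loss = (F.binary_cross_entropy_with_logits(
                  disc(z_v.detach()), torch.ones(z_v.size(0), 1))
              + F.binary_cross_entropy_with_logits(
                  disc(z_t.detach()), torch.zeros(z_t.size(0), 1)))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # Encoder update: make tactile embeddings look "visual" + keep ranking.
    adv = F.binary_cross_entropy_with_logits(
        disc(z_t), torch.ones(z_t.size(0), 1))
    loss = retrieval_loss(z_t, z_v, tac_labels, vis_labels) + 0.1 * adv
    opt_enc.zero_grad(); loss.backward(); opt_enc.step()

def retrieve(enc_t, enc_v, tac_query, image_gallery, k=5):
    """Embed a novel object's tactile signal and rank surface images by
    cosine similarity, mirroring the retrieval-based recognition above."""
    with torch.no_grad():
        q = enc_t(tac_query)              # (1, emb_dim)
        g = enc_v(image_gallery)          # (N, emb_dim)
        return (q @ g.t()).squeeze(0).topk(k).indices  # top-k image indices
```

At test time only `retrieve` is needed: the material of an unseen object is inferred from the retrieved images' known surface materials, which is what lets the framework generalize beyond the objects seen during training.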
