
Sensory integration model inspired by the superior colliculus for multimodal stimuli localization



Abstract

Sensory information processing is an important feature of robotic agents that must interact with humans or the environment. For example, numerous attempts have been made to develop robots capable of interactive communication. In most cases, each sensory stream is processed individually, and an output action is performed on that basis. In many robotic applications, visual and audio sensors are used to emulate human-like communication. The Superior Colliculus, located in the midbrain region of the nervous system, performs a similar integration of audio and visual stimuli in both humans and animals. In recent years, numerous researchers have attempted to integrate sensory information using biological inspiration. A common focus lies in generating a single output state (i.e. a multimodal output) that can localize the source of the audio and visual stimuli. This research addresses that problem and attempts to find an effective solution by investigating the various computational and biological mechanisms involved in generating a multimodal output. A primary goal is to develop a biologically inspired computational architecture using artificial neural networks. The advantage of this approach is that it mimics the behaviour of the Superior Colliculus, which has the potential to enable more effective human-like communication with robotic agents. The thesis describes the design and development of the architecture, which is constructed from artificial neural networks using radial basis functions. The primary inspiration for the architecture came from emulating the function of the top and deep layers of the Superior Colliculus, which provide visual and audio stimulus localization mechanisms, respectively.
The integration experiments have successfully demonstrated solutions to the key issues, including low-level multimodal stimulus localization, dimensionality reduction of the audio and visual input spaces without affecting stimulus strength, and stimulus localization exhibiting the enhancement and depression phenomena. Comparisons have been made between computational and neural-network-based methods, and between unimodal and multimodal integrated outputs, in order to determine the effectiveness of the approach.
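The enhancement and depression phenomena mentioned above can be illustrated with a toy population-coding example. The following sketch is not the thesis's actual architecture: it assumes hypothetical 1-D azimuth maps of Gaussian radial-basis units for each modality and a simple additive-plus-multiplicative combination rule, chosen only to show how spatially coincident stimuli boost the combined response while disparate ones do not.

```python
import numpy as np

def rbf_map(stimulus_pos, centers, sigma):
    """Population response of Gaussian RBF units tuned to azimuth positions."""
    return np.exp(-((centers - stimulus_pos) ** 2) / (2 * sigma ** 2))

# Hypothetical azimuth map: units every 5 degrees from -90 to +90.
centers = np.linspace(-90, 90, 37)

# Unimodal responses: visual sharply tuned, audio more broadly tuned.
visual = rbf_map(10.0, centers, sigma=5.0)   # visual stimulus at 10 deg
audio  = rbf_map(12.0, centers, sigma=15.0)  # audio stimulus at 12 deg

# Toy integration rule: the multiplicative term rewards spatial
# coincidence (enhancement); for widely separated stimuli it vanishes,
# leaving a weaker, flatter combined map (relative depression).
multimodal = visual + audio + visual * audio

# Localize the multimodal source at the peak of the combined map.
estimate = centers[np.argmax(multimodal)]
```

With near-coincident stimuli, the multimodal peak exceeds either unimodal peak, mirroring the multisensory enhancement effect the abstract describes; the peak location then serves as the single localized output state.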

Bibliographic details

  • Author

    Ravulakollu Kiran Kumar;

  • Affiliation
  • Year: 2012
  • Total pages
  • Original format: PDF
  • Language: English
  • Chinese Library Classification
