Source: The Journal of Neuroscience (via NIH literature)

Dynamic Sound Localization during Rapid Eye-Head Gaze Shifts


Abstract

Human sound localization relies on implicit head-centered acoustic cues. However, to create a stable and accurate representation of sounds despite intervening head movements, the acoustic input should be continuously combined with feedback signals about changes in head orientation. Alternatively, the auditory target coordinates could be updated in advance by using either the preprogrammed gaze-motor command or the sensory target coordinates to which the intervening gaze shift is made ("predictive remapping"). So far, previous experiments could not dissociate these alternatives. Here, we study whether the auditory system compensates for ongoing saccadic eye and head movements in two dimensions that occur during target presentation. In this case, the system has to deal with dynamic changes of the acoustic cues as well as with rapid changes in relative eye and head orientation that cannot be preprogrammed by the audiomotor system. We performed visual-auditory double-step experiments in two dimensions in which a brief sound burst was presented while subjects made a saccadic eye-head gaze shift toward a previously flashed visual target. Our results show that localization responses under these dynamic conditions remain accurate. Multiple linear regression analysis revealed that the intervening eye and head movements are fully accounted for. Moreover, elevation response components were more accurate for longer-duration sounds (50 msec) than for extremely brief sounds (3 msec), for all localization conditions. Taken together, these results cannot be explained by a predictive remapping scheme. Rather, we conclude that the human auditory system adequately processes dynamically varying acoustic cues that result from self-initiated rapid head movements to construct a stable representation of the target in world coordinates. This signal is subsequently used to program accurate eye and head localization responses.
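The compensation logic behind the regression analysis can be illustrated with a minimal sketch. Under full compensation, the remaining motor error of the localization response equals the initial eye-centered target location minus the gaze displacement that intervened during the sound, so regressing the response on both predictors should yield a gain near +1 on the target term and near -1 on the intervening gaze shift. The data below are synthetic and the variable names hypothetical; this is an illustration of the regression scheme, not the authors' dataset or analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical quantities, in degrees (one spatial dimension for clarity):
# T  = sound location relative to the initial gaze direction
# dG = saccadic eye-head gaze shift occurring during the sound burst
T = rng.uniform(-40, 40, n)
dG = rng.uniform(-30, 30, n)

# Simulated localization responses under full compensation:
# remaining motor error dR = T - dG, plus Gaussian response noise.
dR = T - dG + rng.normal(0.0, 2.0, n)

# Multiple linear regression: dR = a*T + b*dG + c
X = np.column_stack([T, dG, np.ones(n)])
(a, b, c), *_ = np.linalg.lstsq(X, dR, rcond=None)

# Full compensation predicts a ~ +1 and b ~ -1; a gaze-shift gain b near 0
# would instead indicate that the intervening movement was ignored.
print(f"target gain a = {a:.2f}, gaze-shift gain b = {b:.2f}")
```

A gaze-shift gain statistically indistinguishable from -1, as in this simulated full-compensation case, is the signature the abstract refers to when stating that the intervening eye and head movements are "fully accounted for".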
