
The Contribution of Visual and Somatosensory Input to Target Localization During the Performance of a Precision Grasping and Placement Task


Abstract

Objective: Binocular vision provides the most accurate and precise depth information; however, many people have impairments in binocular visual function. It is currently unknown whether depth information from another modality can improve depth perception during action planning and execution. Therefore, the goal of this thesis was to assess whether somatosensory input improves target localization during the performance of a precision placement task. It was hypothesized that somatosensory input regarding target location would improve task performance.

Methods: Thirty visually normal participants performed a bead-threading task with their right hand during binocular and monocular viewing. Upper limb kinematics and eye movements were recorded using the Optotrak and EyeLink 2 while participants picked up the beads and placed them on a vertical needle. In study 1, somatosensory and visual feedback provided input about needle location (i.e., participants could see their left hand holding the needle). In study 2, only somatosensory feedback was provided (i.e., the view of the left hand holding the needle was blocked, and practice trials were standardized). The main outcome variables examined were placement time, peak acceleration, and the mean position and variability of the limb along the trajectory. A repeated-measures analysis of variance with two factors, Viewing Condition (binocular/left-eye monocular/right-eye monocular) and Modality (vision/somatosensory), was used to test the hypothesis.

Results: Results from study 1 were in accordance with our hypothesis, showing a significant interaction between viewing condition and modality for placement time (p = 0.0222). Specifically, when somatosensory feedback was provided, placement time was >150 ms shorter in both monocular viewing conditions compared with the vision-only condition. In contrast, somatosensory feedback did not significantly affect placement time during binocular viewing.
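The 3 (Viewing Condition) × 2 (Modality) repeated-measures design described above can be sketched in code. The following is a minimal illustration using `statsmodels`' `AnovaRM`; the simulated placement times, effect sizes, and all variable names are illustrative assumptions, not the thesis data.

```python
# Illustrative 3x2 repeated-measures ANOVA on synthetic placement-time data.
# The simulated effects (monocular cost, somatosensory benefit) are assumptions
# chosen only to mimic the pattern of results described in the abstract.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
viewing_levels = ["binocular", "mono_left", "mono_right"]
modality_levels = ["vision", "vision+somatosensory"]

rows = []
for subj in range(30):                       # 30 participants, fully within-subjects
    base = rng.normal(1200, 100)             # baseline placement time (ms)
    for v in viewing_levels:
        for m in modality_levels:
            t = base
            if v != "binocular":
                t += 200                     # monocular viewing slows placement
                if m == "vision+somatosensory":
                    t -= 150                 # somatosensory cue helps only monocularly
            rows.append({"subject": subj, "viewing": v, "modality": m,
                         "placement_time": t + rng.normal(0, 50)})

df = pd.DataFrame(rows)
res = AnovaRM(df, depvar="placement_time", subject="subject",
              within=["viewing", "modality"]).fit()
print(res.anova_table)   # F and p for both main effects and the interaction
```

With data shaped this way, the viewing × modality interaction row of the ANOVA table is the direct test of whether the somatosensory benefit depends on viewing condition, mirroring the hypothesis test reported above.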
There was no evidence that motor planning improved when somatosensory input about end-target location was provided. When somatosensory feedback was provided, the limb trajectory deviated toward the needle location along the azimuth at various kinematic markers during movement execution. Results from study 2 showed a main effect of modality for placement time (p = 0.0288); however, the interaction between modality and viewing condition was not significant. The results also showed that somatosensory input was associated with faster movement times and higher peak accelerations. As in study 1, the limb trajectory deviated toward the needle location at various kinematic markers during movement execution when somatosensory feedback was provided.

Conclusions: This study demonstrated that information from another modality can improve the planning and execution of reaching movements under certain conditions. The role of somatosensory input may be less effective when practice is not administered. It is important to note that, despite the improved performance when somatosensory input was provided, performance did not reach the level found during binocular viewing. These findings provide new knowledge about multisensory integration during the performance of a high-precision manual task, and this information can be useful when designing new training regimens for people with abnormal binocular vision.

Bibliographic information

  • Author: Tugac, Naime
  • Year: 2017
  • Format: PDF
  • Language: en