
Learning Joint Actions in Human-Human Interactions.



Abstract

Understanding human-human interactions during the performance of joint motor tasks is critical for developing rehabilitation robots that could aid therapists in providing effective treatments for motor problems. However, the strategies (cooperative or competitive) that humans adopt when interacting with other individuals remain poorly understood. Previous studies have investigated the cues (auditory, visual, and haptic) that support these interactions, but how these unconscious interactions occur even without such cues remains unexplained. To address this issue, this study employed a paradigm that tested the parallel efforts of pairs of individuals (dyads) completing a jointly performed virtual reaching task without any auditory or visual information exchange. Motion was tracked with an NDI OptoTrak 3D motion tracking system that captured each subject's movement kinematics, from which the level of synchronization between the two subjects in space and time could be measured. For the spatial analyses, the movement amplitudes and direction errors at peak velocities and at endpoints were analyzed. Significant differences in movement amplitude were found for subjects in 4 out of 6 dyads, as expected given the lack of feedback between the subjects. Interestingly, subjects also planned their movements in different directions to counteract the visuomotor rotation applied in the test blocks, which suggests that the subjects in each dyad adopted different strategies. In addition, the level of de-adaptation was measured in the control blocks, in which no visuomotor rotation was applied. To further validate the results obtained through the spatial analyses, a temporal analysis was performed in which the movement times of the two subjects were compared. With these results, numerous interaction scenarios possible in human joint actions without feedback were analyzed.
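The spatial measure described above (direction error at peak velocity, relative to the straight line from the start position to the target) can be sketched as follows. This is a minimal illustration, not the thesis's actual analysis code: the function name, sampling interval, and 2D cursor-position representation are assumptions.

```python
import numpy as np

def direction_error_at_peak_velocity(traj, target, dt=0.01):
    """Signed angle (degrees) between the movement heading at peak speed
    and the straight line from the start position to the target.

    traj   : (N, 2) array of x/y positions sampled every `dt` seconds
    target : (2,) array, target position
    """
    vel = np.gradient(traj, dt, axis=0)       # finite-difference velocity
    speed = np.linalg.norm(vel, axis=1)
    i_peak = int(np.argmax(speed))            # sample index of peak speed
    move_dir = vel[i_peak] / speed[i_peak]    # unit heading at peak speed
    ideal = (target - traj[0]) / np.linalg.norm(target - traj[0])
    # signed 2D angle from the ideal direction to the actual heading
    cross = ideal[0] * move_dir[1] - ideal[1] * move_dir[0]
    dot = float(ideal @ move_dir)
    return float(np.degrees(np.arctan2(cross, dot)))
```

For a straight reach whose heading is rotated by a 30-degree visuomotor rotation, this measure recovers approximately 30 degrees, which is the kind of per-subject direction error the dyad comparison relies on.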

Bibliographic Record

  • Author: Agrawal, Ankit.
  • Author affiliation: Arizona State University.
  • Degree-granting institution: Arizona State University.
  • Subjects: Biomedical engineering; Behavioral psychology; Behavioral sciences.
  • Degree: M.S.
  • Year: 2016
  • Pages: 73 p.
  • Format: PDF
  • Language: English

