Studying Eye Gaze of Children with Autism Spectrum Disorders in Interaction with a Social Robot.

Abstract

Children with Autism Spectrum Disorders (ASDs) experience deficits in verbal and nonverbal communication skills, including motor control, emotional facial expressions, and eye gaze attention. In this thesis, we study the feasibility and effectiveness of using a social robot, called NAO, to model and improve the social responses and behaviors of children with autism. In our investigation, we designed and developed two protocols to fulfill this mission. Since eye contact and gaze responses are important non-verbal cues in human social communication, and since the majority of individuals with ASD have difficulty regulating their gaze responses, this thesis focuses mostly on this area.

In Protocol 1, which analyzes eye gaze duration and gaze shifting frequency, we designed two social games (NAO Spy and Find the Suspect) and recruited 21 subjects (14 children with ASD and seven Typically Developing (TD) children), aged 7-17 years, to interact with NAO. All sessions were recorded with cameras, and the videos were used for analysis. In particular, we manually annotated the children's eye gaze direction (gaze averted `0' or gaze at robot `1') in every frame of the videos within two social contexts (child speaking and child listening). Gaze fixation and gaze shifting frequency were analyzed, and both patterns changed significantly: more than half of the participants increased their eye contact duration and decreased their gaze shifting during both games. The results confirm that the TD group shows more gaze fixation while listening (71%) than while speaking (37%), whereas there is no significant difference between the average gaze fixations of the ASD group across the two contexts.

Besides these statistical measures (gaze fixation and shifting), we statistically modeled the gaze responses of both groups (TD and ASD) using Markov models, namely the Hidden Markov Model (HMM) and the Variable-order Markov Model (VMM). Markov-based modeling allows us to analyze the sequences of gaze directions of the ASD and TD groups in two social conversational contexts (Child Speaking and Child Listening). Our experimental results show that for the `Child Speaking' segments, the HMM can distinguish the gaze patterns of the TD and ASD groups with 79% accuracy. In addition, to evaluate the effect of gaze history on the gaze responses, the VMM technique was employed to model sequential data of different lengths. The VMM results demonstrate that, in general, a first-order system (VMM with order D=1) can reliably represent the differences between the gaze patterns of the TD and ASD groups. Furthermore, the experimental results confirm that the VMM is more reliable and accurate for modeling the gaze responses of the "Child Listening" sessions than the "Child Speaking" ones.

Protocol 2 contains five sub-sessions that target the intervention of different social skills: verbal communication, joint attention, eye gaze attention, and facial expression recognition/imitation. The objective of this protocol is to provide intervention sessions based on the needs of children diagnosed with ASD. Therefore, when the study began, each participant attended three baseline sessions to evaluate his/her existing social skills and behavioral responses. In this protocol, the behavioral responses of every child are recorded in each intervention session, and feedback is focused on improving the social skills that the child lacks. For example, if a child has difficulty recognizing facial expressions, we give feedback on what each facial expression looks like and ask the child to recognize them correctly, while giving no feedback on other social skills. Our experimental results show that customizing the human-robot interaction in this way can improve the social skills of children with ASD.
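The following is a minimal sketch, not code from the thesis; the function names, the Laplace-smoothing choice, and the toy sequences are assumptions introduced for illustration. It shows how the per-frame annotations described above (gaze averted `0', gaze at robot `1') yield the two Protocol 1 measures (gaze-fixation percentage and gaze-shift frequency), and how a first-order Markov model (the VMM with D=1 mentioned above) could compare a new sequence against group-level transition statistics.

```python
from typing import List
import numpy as np

def gaze_statistics(frames: List[int], fps: float = 30.0):
    """Fixation percentage and gaze-shift frequency from per-frame labels
    (0 = gaze averted, 1 = gaze at robot), as annotated in Protocol 1."""
    g = np.asarray(frames)
    fixation_pct = 100.0 * g.mean()                 # share of frames looking at NAO
    shifts = np.count_nonzero(np.diff(g))           # 0->1 and 1->0 transitions
    shifts_per_min = shifts / (len(g) / fps) * 60.0 # shift frequency per minute
    return fixation_pct, shifts_per_min

def transition_matrix(sequences: List[List[int]], alpha: float = 1.0) -> np.ndarray:
    """First-order Markov model (VMM with D=1): a 2x2 transition-probability
    matrix estimated with Laplace smoothing from one group's gaze sequences."""
    counts = np.full((2, 2), alpha)
    for seq in sequences:
        for prev, cur in zip(seq[:-1], seq[1:]):
            counts[prev, cur] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(seq: List[int], T: np.ndarray) -> float:
    """Log-likelihood of a gaze sequence under a transition matrix."""
    return float(sum(np.log(T[p, c]) for p, c in zip(seq[:-1], seq[1:])))

# Hypothetical toy sequences standing in for the annotated session videos.
td_train  = [[1, 1, 1, 0, 1, 1, 1, 1], [1, 1, 0, 1, 1, 1, 1, 1]]
asd_train = [[0, 1, 0, 0, 1, 0, 0, 1], [0, 0, 1, 0, 0, 0, 1, 0]]
T_td, T_asd = transition_matrix(td_train), transition_matrix(asd_train)

# Classify a new sequence by which group's model gives it higher likelihood.
new_seq = [1, 1, 1, 0, 1, 1, 1, 0]
group = "TD" if log_likelihood(new_seq, T_td) > log_likelihood(new_seq, T_asd) else "ASD"
print(gaze_statistics(new_seq), group)
```

The thesis itself fits HMMs and higher-order VMMs; the likelihood comparison above only mirrors the D=1 case that the abstract reports as sufficient to represent the group differences.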

Bibliographic record

  • Author: Feng, Huanghao
  • Author affiliation: University of Denver
  • Degree-granting institution: University of Denver
  • Subjects: Robotics; Clinical psychology; Social psychology; Experimental psychology; Behavioral psychology
  • Degree: M.S.
  • Year: 2014
  • Pages: 111 p.
  • Total pages: 111
  • Original format: PDF
  • Language: eng
  • Date added: 2022-08-17 11:54:01
