International Journal of Social Robotics

Human–Robot Facial Expression Reciprocal Interaction Platform: Case Studies on Children with Autism


Abstract

Reciprocal interaction and facial expression are among the most interesting topics in the fields of social and cognitive robotics. On the other hand, children with autism show a particular interest in robots, and facial expression recognition can improve these children’s social interaction abilities in real life. In this research, a robotic platform has been developed for reciprocal interaction, consisting of two main phases, namely the Non-structured and Structured interaction modes. In the Non-structured interaction mode, a vision system recognizes the facial expressions of the user through a fuzzy clustering method. The interaction decision-making unit is combined with a fuzzy finite state machine to improve the quality of human–robot interaction by utilizing the results obtained from the facial expression analysis. In the Structured interaction mode, a set of imitation scenarios with eight different posed facial behaviors was designed for the robot. As a pilot study, the effect and acceptability of our platform have been investigated on autistic children between 3 and 7 years old, and a preliminary acceptance rate of approximately 78% was observed under our experimental conditions. The scenarios start with simple facial expressions and become more complicated as they continue. The same vision system and fuzzy clustering method of the Non-structured interaction mode are used for automatic evaluation of a participant’s gestures. Lastly, the automatic assessment of imitation quality was compared with the manual video coding results. Pearson’s r on these equivalent grades was computed as r = 0.89, which shows sufficient agreement between the automatic and manual scores.
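The fuzzy clustering step described above can be illustrated with a minimal fuzzy c-means sketch in Python/NumPy. The feature vectors, number of expression clusters, and fuzzifier value below are illustrative assumptions, not the platform's actual vision pipeline.

import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Soft-cluster the rows of X; returns (cluster centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    n_samples = X.shape[0]
    # Random initial memberships, each row normalized to sum to 1.
    U = rng.random((n_samples, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # Update cluster centers as membership-weighted means of the samples.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance from every sample to every center.
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-10)  # avoid division by zero
        # Standard fuzzy c-means membership update.
        inv = dist ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.linalg.norm(U_new - U) < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Toy usage: 2-D "facial feature" vectors grouped into 3 hypothetical expression clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.3, size=(20, 2)) for loc in ([0, 0], [3, 0], [0, 3])])
centers, U = fuzzy_c_means(X, n_clusters=3)
labels = U.argmax(axis=1)  # hard labels derived from the soft memberships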
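The agreement check between automatic and manually video-coded imitation scores can be reproduced in form with a short Pearson's r computation; the score arrays below are hypothetical placeholders, not the study's data.

import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd)))

automatic_scores = [0.9, 0.7, 0.4, 0.8, 0.6, 0.5]  # hypothetical automatic grades
manual_scores    = [1.0, 0.7, 0.5, 0.9, 0.5, 0.4]  # hypothetical video-coded grades
print(f"Pearson's r = {pearson_r(automatic_scores, manual_scores):.2f}")

np.corrcoef(automatic_scores, manual_scores)[0, 1] gives the same value and can be used directly instead of the helper function.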
