Humanoid Robots as Interviewers for Automated Credibility Assessment

Abstract

Humans are poor at detecting deception even under the best conditions, so the case for a decision support system that can serve as a baseline for data-driven decision making is clear: such a system is not subject to the often subconscious biases that impair human judgment. One system that assists officers in border security contexts such as U.S. Customs and Border Protection (CBP) is the AVATAR, an Embodied Conversational Agent (ECA) implemented as a self-service kiosk. Our research uses the AVATAR as a baseline, and we plan to augment the automated credibility assessment task it performs by using a humanoid robot, taking advantage of humanoid robots' capacity for realistic dialogue and nonverbal gesturing. We also capture data from sensors such as microphones, cameras, and an eye tracker to support model building and testing for the task of deception detection. We plan an experiment comparing the results of an interview conducted by the AVATAR with those of an interview conducted by a humanoid robot. To our knowledge, such a comparative analysis has not been done before, and we are eager to conduct this social experiment. This paper presents the design and implementation plan for the experiment and highlights the considerations involved in designing such a social study. The results should help us understand how people perceive interactions with a robot agent in contrast to a more traditional on-screen ECA: for example, does the physical presence of a robot encourage greater perceptions of likability, expertise, or dominance? This research will also address which interaction model (ECA or robot) elicits the most diagnostic cues for detecting deception. The study may further prove useful to researchers and organizations that want to deploy robots in expanding social roles and need to understand the societal and personal implications of doing so.

