International Conference on HCI in Business, Government, and Organizations

Humanoid Robots as Interviewers for Automated Credibility Assessment



Abstract

Humans are poor at detecting deception, even under the best of conditions. The need for a decision support system that can serve as a baseline for data-driven decision making is therefore clear. Such a system is not subject to the often subconscious biases that can impair human judgment. One such system, designed to assist border security personnel (CBP), is the AVATAR. The AVATAR, an Embodied Conversational Agent (ECA), is implemented as a self-service kiosk. Our research uses the AVATAR as a baseline: we plan to augment the automated credibility assessment task that the AVATAR performs by using a humanoid robot, taking advantage of humanoid robots' capacity for realistic dialogue and nonverbal gesturing. We also capture data from sensors such as microphones, cameras, and an eye tracker, which will support model building and testing for the deception detection task. We plan to carry out an experiment comparing the results of an interview conducted with the AVATAR against those of an interview conducted with a humanoid robot. Such a comparative analysis has never been done before, and we are eager to conduct this social experiment. This paper presents the design and implementation plan for the experiment and highlights the considerations involved in designing such a social experiment. The study will help us understand how people perceive interactions with a robot agent in contrast to the more traditional on-screen ECA. For example, does the physical presence of a robot encourage greater perceptions of likability, expertise, or dominance? Moreover, this research will address the question of which interaction model (ECA or robot) elicits the most diagnostic cues for detecting deception. The study may also prove useful to researchers and organizations that want to deploy robots in expanding social roles and need to understand the societal and personal implications of doing so.
