Journal: Cyberpsychology, Behavior, and Social Networking

Do We Take a Robot's Needs into Account? The Effect of Humanization on Prosocial Considerations Toward Other Human Beings and Robots



Abstract

Robots are becoming an integral part of society, yet the extent to which we are prosocial toward these nonliving objects is unclear. While previous research shows that we tend to take care of robots in high-risk, high-consequence situations, this has not been investigated in more day-to-day, low-consequence situations. Thus, we utilized an experimental paradigm (the Social Mindfulness “SoMi” paradigm) that involved a trade-off between participants' own interests and their willingness to take their task partner's needs into account. In two experiments, we investigated whether participants would take the needs of a robotic task partner into account to the same extent as when the task partner was a human (Study I), and whether this was modulated by participants' anthropomorphic attributions to said robot (Study II). In Study I, participants were presented with a social decision-making task, which they performed once by themselves (solo context) and once with a task partner (either a human or a robot). Subsequently, in Study II, participants performed the same task, but this time with both a human and a robotic task partner. The task partners were introduced via neutral or anthropomorphic priming stories. Results indicate that humanizing a task partner indeed increases our tendency to take someone else's needs into account in a social decision-making task. However, this effect was only found for a human task partner, not for a robot. Thus, while anthropomorphizing a robot may lead us to save it when it is about to perish, it does not make us more socially considerate of it in day-to-day situations.
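For readers unfamiliar with the SoMi paradigm, the sketch below illustrates the trade-off structure the abstract describes. It is a minimal illustration, not the authors' materials: the item sets, the uniqueness rule, and the scoring function are assumptions based on the standard description of the paradigm, in which taking the one unique item in a set removes the partner's options (socially unmindful), while taking a duplicate preserves the partner's choice (socially mindful).

```python
# Minimal sketch of the Social Mindfulness (SoMi) trade-off structure.
# Assumption: each trial offers several items, exactly one of which is
# unique; picking the unique item leaves the partner no real choice
# (unmindful), while picking a duplicate keeps their options open (mindful).

from collections import Counter

def is_socially_mindful(trial: list[str], choice: str) -> bool:
    """A choice is mindful if it is NOT the unique item in the trial."""
    counts = Counter(trial)
    return counts[choice] > 1

def somi_score(trials: list[list[str]], choices: list[str]) -> float:
    """Proportion of socially mindful choices across all trials."""
    mindful = sum(
        is_socially_mindful(trial, choice)
        for trial, choice in zip(trials, choices)
    )
    return mindful / len(trials)

# Hypothetical example: three trials, each containing one unique item.
trials = [
    ["green apple", "green apple", "green apple", "blue apple"],
    ["red pen", "red pen", "black pen"],
    ["plain mug", "plain mug", "striped mug"],
]
choices = ["green apple", "black pen", "plain mug"]  # one unmindful pick

print(somi_score(trials, choices))  # 0.666... (2 of 3 choices mindful)
```

Scoring choices this way yields a per-participant mindfulness proportion, which is the kind of dependent measure that can then be compared across partner conditions (human vs. robot) and priming conditions (neutral vs. anthropomorphic).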
