Reading partners' actions correctly is essential for successful coordination, but interpretation does not always reflect reality. Attribution biases, such as the self-serving and correspondence biases, lead people to misinterpret their partners' actions and misassign blame after a surprise, or unexpected event. These biases further influence people's trust in their partners, including machine partners (Muir, 1987; Madhavan & Wiegmann, 2004). Advances in robotics have allowed robots to partner with people at work and to be treated socially (Young, Hawkins, Sharlin & Igarashi, 2009). However, these advances may interfere with a person's appropriate calibration of trust in robots (Parasuraman & Miller, 2004). A better understanding of attribution biases in the wake of an unexpected event may shed light on how trust develops in a robot partner. This study was built on a human coordination example to serve as a reference for future human-robot interaction research. We posit that attribution biases lead people to blame their partner after experiencing a negative performance outcome, thus lowering their trust in that partner.