Overtrusting Robots: Setting a Research Agenda to Mitigate Overtrust in Automation
Journal article, Peer reviewed
Original version: Paladyn, Journal of Behavioral Robotics. 2021, 12(1), 423-436. DOI: 10.1515/pjbr-2021-0029
There is increasing attention given to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflection on others’ trustworthiness, appraising whether there is enough evidence to conclude that these agents deserve to be trusted. Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems. Conceptual clarity and a shared framework for approaching overtrust are missing. In this contribution, we offer an overview of pressing topics in the context of overtrust and robots and autonomous systems. Our review mobilizes insights from in-depth conversations at a multidisciplinary workshop on the subject of trust in human-robot interaction, held at a leading robotics conference in 2020. A broad range of participants brought in their expertise, allowing the formulation of a forward-looking research agenda on overtrust and automation biases in robotics and autonomous systems. Key points include the need for multidisciplinary understandings situated in an ecosystem perspective, the consideration of adjacent concepts such as deception and anthropomorphization, a connection to ongoing legal discussions through the topic of liability, and a socially embedded understanding of overtrust in matters of education and literacy. The article integrates diverse literature and provides grounds for a common understanding of overtrust in the context of human-robot interaction.