Overtrusting robots: Setting a research agenda to mitigate overtrust in automation
Sætra, Henrik Skaug; Aroyo, Alexander M.; Bruyne, Jan de; Dheu, Orian; Fosch-Villaronga, Eduard; Gudkov, Aleksei; Hoch, Holly; Lutz, Christoph; Solberg, Mads; Tamò-Larrieux, Aurelia
Journal article, Peer reviewed
Published version
Date
2021
Original version
Paladyn, Journal of Behavioral Robotics, 2021, Volume 12, Issue 1. DOI: 10.1515/pjbr-2021-0029
Abstract
There is increasing attention given to the concept of trustworthiness for artificial intelligence and robotics. However, trust is highly context-dependent, varies among cultures, and requires reflection on others’ trustworthiness, appraising whether there is enough evidence to conclude that these agents deserve to be trusted. Moreover, little research exists on what happens when too much trust is placed in robots and autonomous systems. Conceptual clarity and a shared framework for approaching overtrust are missing. In this contribution, we offer an overview of pressing topics in the context of overtrust and robots and autonomous systems. Our review mobilizes insights solicited through in-depth conversations at a multidisciplinary workshop on the subject of trust in human–robot interaction (HRI), held at a leading robotics conference in 2020. A broad range of participants brought in their expertise, allowing the formulation of a forward-looking research agenda on overtrust and automation biases in robotics and autonomous systems. Key points include the need for multidisciplinary understandings situated in an ecosystem perspective, the consideration of adjacent concepts such as deception and anthropomorphization, a connection to ongoing legal discussions through the topic of liability, and a socially embedded understanding of overtrust in matters of education and literacy. The article integrates diverse literature and provides a common ground for understanding overtrust in the context of HRI.