Regulating for Trust: Can Law Establish Trust in Artificial Intelligence?
Peer reviewed journal article (published version)
Date: 2023
DOI: 10.1111/rego.12568

Abstract
The current political and regulatory discourse frequently references the term "trustworthy artificial intelligence (AI)". In Europe, attempts to ensure trustworthy AI began with the High-Level Expert Group's Ethics Guidelines for Trustworthy AI and have since merged into the regulatory discourse on the EU AI Act. Around the globe, policymakers are actively pursuing initiatives, as the US Executive Order on Safe, Secure, and Trustworthy AI and the Bletchley Declaration on AI showcase, based on the premise that the right regulatory strategy can shape trust in AI. To analyze the validity of this premise, we propose to consider the broader literature on trust in automation. On this basis, we construct a framework to analyze 16 factors that impact trust in AI and in automation more broadly. We analyze the interplay between these factors and disentangle them to determine the impact regulation can have on each. The article thus provides policymakers and legal scholars with a foundation for gauging different regulatory strategies, notably by differentiating between strategies where regulation is likely also to influence trust in AI (e.g., regulating the types of tasks that AI may fulfill) and those where its influence on trust is more limited (e.g., measures that increase awareness of complacency and automation biases). Our analysis underscores the critical role of nuanced regulation in shaping the human-automation relationship and offers policymakers a targeted approach for debating how to streamline regulatory efforts for future AI governance.