
Regulating for Trust: Can Law Establish Trust in Artificial Intelligence?

Tamò-Larrieux, Aurelia; Guitton, Clement; Mayer, Simon; Lutz, Christoph
Peer reviewed, Journal article
Published version
View/Open
Tamò‐Larrieux, Guitton, Mayer, Lutz 2023 (Trust AI, RG).pdf (694.0Kb)
URI
https://hdl.handle.net/11250/3105964
Date
2023
Collections
  • Publikasjoner fra CRIStin - BI [1196]
Original version
10.1111/rego.12568
Abstract
The current political and regulatory discourse frequently references the term "trustworthy artificial intelligence (AI)". In Europe, attempts to ensure trustworthy AI began with the High-Level Expert Group Ethics Guidelines for Trustworthy AI and have now merged into the regulatory discourse on the EU AI Act. Around the globe, policymakers are actively pursuing initiatives, as the US Executive Order on Safe, Secure, and Trustworthy AI and the Bletchley Declaration on AI showcase, based on the premise that the right regulatory strategy can shape trust in AI. To analyze the validity of this premise, we propose to consider the broader literature on trust in automation. On this basis, we constructed a framework to analyze 16 factors that impact trust in AI and automation more broadly. We analyze the interplay between these factors and disentangle them to determine the impact regulation can have on each. The article thus provides policymakers and legal scholars with a foundation to gauge different regulatory strategies, notably by differentiating between those strategies where regulation is more likely to also influence trust in AI (e.g., regulating the types of tasks that AI may fulfill) and those where its influence on trust is more limited (e.g., measures that increase awareness of complacency and automation biases). Our analysis underscores the critical role of nuanced regulation in shaping the human-automation relationship and offers a targeted approach for policymakers to debate how to streamline regulatory efforts for future AI governance.
 
 
Journal
Regulation & Governance

Contact Us | Send Feedback

Privacy policy
DSpace software copyright © 2002-2019 DuraSpace

Service from Unit
