Show simple item record

dc.contributor.author  Felzmann, Heike
dc.contributor.author  Fosch Villaronga, Eduard
dc.contributor.author  Lutz, Christoph
dc.contributor.author  Tamò-Larrieux, Aurelia
dc.date.accessioned  2019-08-07T13:36:51Z
dc.date.available  2019-08-07T13:36:51Z
dc.date.created  2019-06-11T10:13:05Z
dc.date.issued  2019
dc.identifier.issn  2053-9517
dc.identifier.uri  http://hdl.handle.net/11250/2607488
dc.description.abstract  Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect to this requirement by focusing on the significance of contextual and performative factors in the implementation of transparency. We show that human–computer interaction and human–robot interaction literature do not provide clear results with respect to the benefits of transparency for users of artificial intelligence technologies due to the impact of a wide range of contextual factors, including performative aspects. We conclude by integrating the information- and explanation-based approach to transparency with the critical contextual approach, proposing that transparency as required by the General Data Protection Regulation in itself may be insufficient to achieve the positive goals associated with transparency. Instead, we propose to understand transparency relationally, where information provision is conceptualized as communication between technology providers and users, and where assessments of trustworthiness based on contextual factors mediate the value of transparency communications. This relational concept of transparency points to future research directions for the study of transparency in artificial intelligence systems and should be taken into account in policymaking.
dc.language.iso  eng
dc.publisher  Sage
dc.rights  Attribution 4.0 International
dc.rights.uri  http://creativecommons.org/licenses/by/4.0/deed.no
dc.title  Transparency You Can Trust: Transparency Requirements for Artificial Intelligence between Legal Norms and Contextual Concerns
dc.type  Journal article
dc.type  Peer reviewed
dc.description.version  publishedVersion
dc.source.volume  forthcoming
dc.source.journal  Big Data and Society
dc.identifier.doi  10.1177/2053951719860542
dc.identifier.cristin  1703858
dc.relation.project  Norges forskningsråd: 247725
dc.relation.project  Norges forskningsråd: 275347
cristin.unitcode  158,9,0,0
cristin.unitname  Institutt for kommunikasjon og kultur
cristin.ispublished  false
cristin.fulltext  original
cristin.qualitycode  1


File(s) in this item


This item appears in the following collection(s)


Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International