dc.contributor.author	De Souza, Carlos Eduardo Caldas
dc.date.accessioned	2021-11-01T10:02:13Z
dc.date.available	2021-11-01T10:02:13Z
dc.date.issued	2021
dc.identifier.uri	https://hdl.handle.net/11250/2826778
dc.description	Master thesis (MSc) in Strategic Marketing Management - Handelshøyskolen BI, 2021	en_US
dc.description.abstract	Boosted by the COVID-19 pandemic, the use of AI technology to collect personal information regarding the population's health has been gaining traction globally. Public authorities worldwide pinned their hopes on disease contact-tracing apps to quickly identify and notify people who had come into contact with infected individuals. However, population engagement did not work as expected: whereas some countries registered moderate adoption, in others uptake was very low. Inspired by this discrepancy between nations, this thesis investigates the effect of the fear that an algorithm may be inaccurate, reproduce bias, and harm people (fear of algorithmic bias) on the willingness to disclose information via an AI system. Drawing on the literatures on risk perception and folk perceptions of algorithms, this study hypothesises that individuals' understanding and knowledge of AI technology, together with the nature of the organisation holding the data, in interaction with socioeconomic conditions, affect their disclosure intention. Data from an online experiment in Qualtrics with 800 adults living in high-income countries (400) and low-income countries (400) were analysed using between-subjects ANOVA, linear regressions, moderation and mediation analyses with PROCESS, and GLM analysis. This study found strong evidence that willingness to disclose information decreases as fear of algorithmic bias increases. It also found statistical evidence of the mediating role of privacy concerns and the moderating role of trust in government. The results suggest that one way for policymakers to increase the acceptance of AI is to improve governance over the data fed into large databases, mitigating individuals' fear that the algorithms are not functioning properly.	en_US
dc.language.iso	eng	en_US
dc.publisher	Handelshøyskolen BI	en_US
dc.subject	markedsføringsledelse	en_US
dc.subject	marketing management	en_US
dc.subject	strategisk	en_US
dc.subject	strategic	en_US
dc.title	What if AI is not that fair? Understanding the impact of fear of algorithmic bias and AI literacy on information disclosure	en_US
dc.type	Master thesis	en_US