dc.contributor.author: Brandsås, Lizeth Larsson
dc.contributor.author: Vaktel, Charlotte
dc.date.accessioned: 2022-12-20T12:16:38Z
dc.date.available: 2022-12-20T12:16:38Z
dc.date.issued: 2022
dc.identifier.uri: https://hdl.handle.net/11250/3038816
dc.description: Master thesis (MSc) in Master of Science in Business, Marketing - Handelshøyskolen BI, 2022 [en_US]
dc.description.abstract: Due to the growing problem of Online Child Sexual Exploitation and Abuse (OCSEA) in recent years, finding ways to prevent it has become increasingly important. It has never been easier for perpetrators to contact children and hide their identity. The problem has grown to such an extent that governments around the world are considering new laws that would breach privacy in order to prevent the spread of abusive material on digital platforms. Firms and marketing will have to play a critical role in making this change, but the change is fraught with serious tensions and challenges on all sides: for firms, governments, and law enforcement. Marketing has a pivotal role in helping to explain and remedy some of those challenges. This thesis investigates how people would react to being surveilled on social media platforms with the intention of preventing OCSEA. If consumers' privacy is to be severely breached, who should do the message screening: an artificially intelligent robot or a human?

This growing problem has contributed to the European Commission proposing a derogation that allows tech companies such as social media platforms to derogate from the EU privacy framework in order to detect, report, and remove abuse material and prevent OCSEA. The United Nations Convention on the Rights of the Child states that children under 18 have the right to protection and the right to be heard (FN-sambandet, 2022); preventing crimes against children is therefore important. Such abuse can harm the child not only physically but also mentally, and several physical ailments are a consequence of it. Fortunately, technologies such as artificial intelligence (AI) can help detect, report, and prevent this type of abuse. Yet even in a technologically developing world, there is still considerable scepticism towards AI. In general, research evidence indicates that humans are preferred over AI in decision making that involves a moral issue.

We also investigate ways in which new regulations for preventing online abuse could be achieved. We want to understand whether people are more accepting of an AI robot than of a human surveilling their online activity and messages, and in which scenarios they allow it. Their preferred choice is measured by their trust in AI, fear of being misinterpreted by AI, and fear of being discriminated against. In addition, we investigated factors such as people's anxiety levels and privacy concerns. To check whether other variables affected people's choice of who conducts the surveillance, we looked at several moderators such as "Age" and "Gender", and we also examined "The purpose of surveillance" to see which purposes would be accepted. An extensive survey was conducted to map people's preferred choices. In conclusion, our study found that most people prefer an AI robot over a human to surveil their online activity and messages. It also shows that people are more accepting of surveillance if it is for the betterment of society rather than for commercial and advertising purposes. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Handelshøyskolen BI [en_US]
dc.subject: markedsføring marketing [en_US]
dc.title: How Can Artificial Intelligence Help Prevent Online Sexual Abuse? [en_US]
dc.type: Master thesis [en_US]

