Automating Hate: Exploring Toxic Reddit Norms with Google Perspective

Chevrier, Nicholas, 16 March 2022
The Canadian Online Harms Legislation (COHL) proposal identifies proactive automated moderation as a solution for classifying and removing online content, such as hate, that violates platform norms. Emerging automated moderation algorithms include Google Perspective, a machine learning model that scores hateful features in text as "toxicity." This study identifies that hateful community content norms are currently emerging on volunteer-moderated platforms such as Reddit. To operationalize these concepts, a theoretical framework is constructed from Gorwa's (2019) platform governance models and Massanari's (2017) overview of toxic technoculture communities. While previous research on community toxicity is discussed, there is a gap in research analyzing how the post, comment, and image meme contributions of Reddit moderators shape hateful community content norms. An analysis of the Reddit community r/Metacanada is therefore constructed, comparing the toxicity of moderator and ordinary user contributions using Google Perspective. The results of a Mann-Whitney U test indicate that r/Metacanada moderators and users contribute content at similar toxicity levels. Supplementing these tests, the first research question (RQ1) structures a qualitative analysis of false negatives that may emerge in the automated classification of multi-modal image content. Identifying that hate in online memes is structured through layered signifier and signified elements, a critical discussion interprets the potential marginalizing effects of the COHL's automated moderation through Noble's (2018) theory of technological redlining. This thesis thus situates itself within the contemporary context of online content regulation, drawing on existing conceptualizations and methodological approaches to offer a critical discussion of regulating hate content with automated algorithms.
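For readers unfamiliar with the tooling the abstract describes, a minimal sketch of the two quantitative steps, scoring text with the Google Perspective API and comparing moderator and user score distributions with a Mann-Whitney U test, might look as follows. The API key, the sample comment lists, and the variable names are illustrative assumptions, not the thesis's actual data or pipeline.

```python
# Minimal sketch (not the thesis's actual pipeline): score text contributions with the
# Google Perspective API, then compare moderator vs. user toxicity distributions with a
# Mann-Whitney U test. API_KEY and the comment lists below are placeholder assumptions.
from googleapiclient import discovery
from scipy.stats import mannwhitneyu

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # hypothetical placeholder

# Build the Perspective (Comment Analyzer) client.
client = discovery.build(
    "commentanalyzer",
    "v1alpha1",
    developerKey=API_KEY,
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

def toxicity(text: str) -> float:
    """Return Perspective's summary TOXICITY score (0.0 to 1.0) for a piece of text."""
    request = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = client.comments().analyze(body=request).execute()
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Illustrative inputs: in the study these would be r/Metacanada contributions,
# split by whether the author is a subreddit moderator.
moderator_texts = ["example moderator comment", "another moderator post"]
user_texts = ["example user comment", "another user post"]

moderator_scores = [toxicity(t) for t in moderator_texts]
user_scores = [toxicity(t) for t in user_texts]

# Two-sided Mann-Whitney U test: do the two toxicity distributions differ?
stat, p_value = mannwhitneyu(moderator_scores, user_scores, alternative="two-sided")
print(f"U = {stat:.2f}, p = {p_value:.4f}")
```

In practice the Perspective API is rate-limited, so a study at this scale would presumably cache scores rather than re-query them, and the Mann-Whitney U test is a reasonable choice here because toxicity scores are bounded and unlikely to be normally distributed.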
