
Youth, social media, and online safety: a holistic approach towards detecting and mitigating risks in online conversations

Social media platforms have become a popular and inexpensive way for people to communicate with millions of others. However, this growth has also amplified the associated risks, such as cyberbullying, trolling, misinformation, and privacy abuse. Previous research in this field has mainly focused on isolated aspects of online risk detection, which limits the effectiveness of the resulting systems. To address these issues, this dissertation presents a more holistic approach to detecting and mitigating harmful and abusive behavior online.

To gain an initial understanding of the problem, we first present a mixed-method study of messages and media files that youth share in private conversations, to characterize the risky communication they experience. We use these findings to determine which features can automatically detect unsafe private conversations and whether social media platforms can implement such a system given the recent move toward end-to-end encryption. We then present an ensemble machine learning classifier that detects risks in private messages, and discuss how platforms can incorporate child safety by design.
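As a rough illustration of what an ensemble classifier over message text might look like, the sketch below combines three standard scikit-learn text classifiers under a majority vote. The toy messages, labels, and choice of estimators are illustrative assumptions, not the dissertation's actual features or model composition.

```python
# A minimal sketch of an ensemble risk classifier, assuming labeled message
# text is available. Data and estimator choices here are illustrative only.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy training data: message text paired with a binary "risky" label.
messages = [
    "hey, want to grab lunch tomorrow?",
    "send me that photo or I'll share your secrets",
    "great job on the project!",
    "you're worthless and everyone hates you",
]
labels = [0, 1, 0, 1]  # 0 = safe, 1 = risky

# Majority-vote ensemble over three common text classifiers.
ensemble = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    VotingClassifier(
        estimators=[
            ("lr", LogisticRegression(max_iter=1000)),
            ("rf", RandomForestClassifier(n_estimators=100)),
            ("svc", LinearSVC()),
        ],
        voting="hard",  # LinearSVC lacks predict_proba, so use hard voting
    ),
)
ensemble.fit(messages, labels)
print(ensemble.predict(["meet me after school, don't tell anyone"]))
```

An ensemble of this shape lets heterogeneous models compensate for one another's blind spots, which matters when risky conversations take many forms; any production system would of course train on far larger, carefully curated data.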

In the second part of this thesis, we explore ways to stay ahead of hate and toxicity as online behavior changes. Toxic language evolves over time, with aggressors inventing new insults and abusive terms that frequently target vulnerable communities, including women and minorities. We develop automated systems that, given an initial lexicon of toxic speech, learn new and emerging toxic words by observing conversations on social networks.
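One way such a system could work, sketched below under assumed details, is to train word embeddings on platform conversations and expand a seed lexicon with each seed word's nearest neighbors, since newly coined slurs tend to appear in the same contexts as known ones. The corpus, seed words, and use of gensim's Word2Vec are illustrative assumptions, not the dissertation's actual method.

```python
# A minimal sketch of lexicon expansion via word embeddings, assuming access
# to a corpus of tokenized platform conversations (toy data shown here).
from gensim.models import Word2Vec

conversations = [
    ["you", "are", "such", "a", "loser", "and", "a", "creep"],
    ["what", "a", "loser", "honestly", "a", "total", "clown"],
    ["have", "a", "great", "day", "friend"],
    ["that", "creep", "keeps", "harassing", "people"],
]

# Train embeddings so words used in similar contexts land close together.
model = Word2Vec(conversations, vector_size=50, window=3, min_count=1, seed=1)

# Expand a small seed lexicon with each seed word's nearest neighbors.
seed_lexicon = {"loser", "creep"}
expanded = set(seed_lexicon)
for word in seed_lexicon:
    for neighbor, score in model.wv.most_similar(word, topn=3):
        expanded.add(neighbor)
print(expanded)
```

On a realistic corpus, candidate neighbors would be filtered (for example, by a similarity threshold or human review) before being added to the lexicon, since contextual similarity alone also surfaces benign words.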

Lastly, we examine the cross-platform implications of deploying online risk detection systems. Most research focuses only on malicious activity that occurs on a single platform, which does not give a full picture of the problem. Users are not bound to a single platform and can migrate to other online services; for example, anecdotal evidence shows that once hateful users are banned from Twitter, they often move to Gab, an alternative social network whose open lack of moderation is marketed as protecting "free speech". Consequently, we argue that moderation efforts should extend beyond safeguarding users on individual platforms and account for the potential adverse consequences of banning users from prominent platforms.

Identifier: oai:union.ndltd.org:bu.edu/oai:open.bu.edu:2144/48853
Date: 23 May 2024
Creators: Ali, Shiza
Contributors: Stringhini, Gianluca
Source Sets: Boston University
Language: en_US
Detected Language: English
Type: Thesis/Dissertation
