
Cyberbullying Detection on social platforms using Large Language Models

Social media platforms utilise moderation to remove unwanted content such as cyberbullying: aggressive acts towards an individual or group that occur over any type of digital technology, e.g. social platforms. However, moderating platforms manually is nearly impossible, and the demand for automatic moderation is rising. Research on technical solutions for cyberbullying detection on social platforms is scarce and mostly focused on Machine Learning models that detect cyberbullying without any connection to platform moderation. This study aims to enhance research on cyberbullying detection models by using a GPT-3 Large Language model and to narrow the gap to platform moderation. The model is tweaked and tested on popular cyberbullying datasets and compared to previous Machine Learning and Large Language models using common performance metrics. Furthermore, the latency of the model is measured to test whether it can serve as an auto-moderation tool for detecting cyberbullying on social platforms. The results show that the model is on par with previous models and that fine-tuning is the preferred way to adapt a Large Language model for cyberbullying detection. Further, the results show that Large Language models have higher latency than Machine Learning models, but that throughput can be improved by using multiple threads, making them usable as a platform moderation tool for detecting cyberbullying.
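The thesis code is not part of this record, but the pipeline the abstract describes — querying a fine-tuned GPT-3 classifier per message, timing each request, and spreading requests over multiple threads — can be sketched. The following Python sketch assumes OpenAI's Python client, a hypothetical fine-tuned model id, and a one-token yes/no label format; none of these details come from the thesis itself.

```python
import time
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical fine-tuned model id; the thesis does not publish its model name.
MODEL = "ft:babbage-002:example::cyberbully-detector"

def classify(text: str) -> tuple[str, float]:
    """Label one message and return (label, request latency in seconds)."""
    start = time.perf_counter()
    response = client.completions.create(
        model=MODEL,
        prompt=f"{text}\n\n###\n\n",  # prompt/completion separator convention
        max_tokens=1,                  # single-token label, e.g. "yes" / "no"
        temperature=0,                 # deterministic classification
    )
    latency = time.perf_counter() - start
    label = response.choices[0].text.strip()
    return label, latency

messages = [
    "Nobody likes you, just leave the group chat.",
    "Great goal last night, congrats!",
]

# Multiple worker threads overlap the network wait of concurrent requests,
# raising throughput even though per-request latency stays roughly the same.
with ThreadPoolExecutor(max_workers=4) as pool:
    for label, latency in pool.map(classify, messages):
        print(f"label={label!r}  latency={latency:.3f}s")
```

Threads help here because each classification is network-bound: while one request waits on the API, others can be in flight. That design choice is consistent with the abstract's finding that multiple threads mitigate the latency disadvantage Large Language models have compared with locally hosted Machine Learning models.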

Identifier oai:union.ndltd.org:UPSALLA1/oai:DiVA.org:miun-48990
Date January 2023
Creators Ottosson, Dan
Publisher Mittuniversitetet, Institutionen för kommunikation, kvalitetsteknik och informationssystem (2023-)
Source Sets DiVA Archive at Upsalla University
Language English
Detected Language English
Type Student thesis, info:eu-repo/semantics/bachelorThesis, text
Format application/pdf
Rights info:eu-repo/semantics/openAccess