Many, if not all, people have online social accounts (OSAs) on an online community (OC) such as Facebook (Meta), Twitter (X), Instagram (Meta), Mastodon, or Nostr. OCs enable quick and easy interaction with friends, family, and wider communities to share information. There is also a dark side to OCs: users with malicious intent join OC platforms to pursue criminal activities such as spreading fake news/information, cyberbullying, propaganda, phishing, theft, and unjust enrichment. These criminal activities are especially concerning when they harm minors. Detection and mitigation are needed to protect OCs and stop these criminals from harming others. Many solutions exist; however, they typically focus on a single category of malicious intent rather than offering an all-encompassing solution. To answer this challenge, we propose the first steps of a framework for analyzing and identifying malicious intent in OCs, which we refer to as the malicious intent detection framework (MIDF). MIDF is an extensible proof-of-concept that uses machine learning techniques to enable detection and mitigation. The framework will first be used to detect malicious users using solely relationships, and can then be leveraged to build a suite of malicious intent vector detection models, covering phishing, propaganda, scams, cyberbullying, racism, spam, and bots, for open-source online social networks such as Mastodon and Nostr.
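As a minimal sketch of the relationship-only detection idea described above (not the MIDF implementation itself), the example below builds a toy follow graph and feeds simple structural features to an off-the-shelf classifier. The account names, the features (follower count, following count, reciprocity), the labels, and the choice of networkx and scikit-learn are all illustrative assumptions, not drawn from the thesis.

```python
# Hypothetical sketch: classify accounts as malicious or benign using only
# who-follows-whom relationships. All data and feature choices are invented.
import networkx as nx
from sklearn.tree import DecisionTreeClassifier

# Hypothetical follow relationships: (follower, followed)
edges = [("alice", "bob"), ("bob", "alice"), ("carol", "alice"),
         ("spam1", "alice"), ("spam1", "bob"), ("spam1", "carol")]
G = nx.DiGraph(edges)

def relationship_features(g, node):
    """Structural features derived solely from follow relationships."""
    followers = g.in_degree(node)
    following = g.out_degree(node)
    # Reciprocity: fraction of followed accounts that follow back.
    mutual = sum(1 for n in g.successors(node) if g.has_edge(n, node))
    reciprocity = mutual / following if following else 0.0
    return [followers, following, reciprocity]

# Hypothetical labels: 1 = malicious, 0 = benign.
labels = {"alice": 0, "bob": 0, "carol": 0, "spam1": 1}
X = [relationship_features(G, name) for name in labels]
y = list(labels.values())

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([relationship_features(G, "spam1")]))  # flags the toy "spam1" account
```

In a full framework along the lines of MIDF, such relationship features would presumably be extracted from real Mastodon or Nostr follow graphs and the classifier extended per malicious intent vector; this sketch only illustrates the general shape of a relationship-based detector.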
Identifier | oai:union.ndltd.org:unt.edu/info:ark/67531/metadc2332602
Date | 05 1900 |
Creators | Fausak, Andrew Raymond |
Contributors | Tunc, Cihan, Rattani, Ajita, Morozov, Kirill |
Publisher | University of North Texas |
Source Sets | University of North Texas |
Language | English |
Detected Language | English |
Type | Thesis or Dissertation |
Format | Text |
Rights | Public, Fausak, Andrew Raymond, Copyright, Copyright is held by the author, unless otherwise noted. All rights reserved.