1

[pt] O LIVRE MERCADO DA (DES)INFORMAÇÃO E A MODERAÇÃO DE CONTEÚDO ONLINE / [en] THE (DIS)INFORMATION MARKETPLACE AND ONLINE CONTENT MODERATION

PRISCILLA REGINA DA SILVA 11 March 2022 (has links)
[en] The popularization of the Internet in the late 1990s inaugurated a new dynamic of communication and information consumption. Individuals who once passively consumed edited news also became producers of information, which in turn became available instantly and abundantly. This work observes the regulatory dynamics of freedom of expression and access to information in these new spaces, which are initially private but take on the characteristics of a public arena of dialogue and deliberation based on a non-State model. The main object of this analysis is the so-called phenomenon of disinformation, which, although it did not begin with or because of the Internet, finds there a favorable environment for rapid dissemination and viral spread. Through a study of the relevant legislation and a qualitative methodology, the interdisciplinary doctrine on the subject was analyzed. The study grouped the Brazilian laws and bills that bear on the regulation of the subject, found a lack of legal resources for addressing the problem, and noted possibilities for dialogue among legislative, social, technological, and market spheres. It concluded that regulated partnerships take effect in everyday practice and that this also requires a participatory posture and the involvement of different social actors.
2

Making Sense of Digital Content Moderation from the Margins

Fernandes, Margaret Burke 10 June 2022 (has links)
This dissertation, Making Sense of Digital Content Moderation from the Margins, examines how content creators who are marginalized by race, sexuality, gender, ethnicity, and disability understand their experiences of content moderation on the social media platform TikTok. Using critical interface and narrative-based inquiry methods with six marginalized content creators on TikTok, I argue that marginalized creators navigate the opaque content moderation infrastructure of TikTok by drawing on their embodied experiences. The key research questions ask how these content creators interpret TikTok's platform policies and processes through their interactions on the app, how these interpretations influence content creation on TikTok, and how creators feel about moderation in the absence of platform transparency about how content is moderated. To answer these questions, I conducted narrative-driven interviews with six TikTok creators and analyzed these stories alongside online testimonials in eight Change.org petitions. My analysis revealed that the lack of transparency around TikTok's algorithmic curation and moderation contributes to content creators feeling alienated, exploited, frustrated, and unwelcome on the platform, and influences creators to adapt their content to avoid moderation, oftentimes by censoring themselves and aspects of their marginalized identities. Over time, the accumulation of content moderation micro-interactions diminishes the ability of marginalized content creators to trust content moderation processes. My analysis also shows how TikTok's user experience design and opaque content moderation practices contribute to an affective platform environment in which creators are compelled to speak out within and across creator networks about such gaps in experience and platform policy. I conclude with a discussion of how my findings about content moderation and transparency contribute to conversations in writing-related scholarship, especially as they pertain to writing assessment, technical communication, and algorithmic research methodologies. / Doctor of Philosophy / In recent years, marginalized content creators on TikTok have sounded the alarm about the way the platform's content moderation and algorithmic recommendation disadvantage marginalized creators. This dissertation, Making Sense of Digital Content Moderation from the Margins, examines how content creators who are marginalized by race, sexuality, gender, ethnicity, and disability understand their experiences of content moderation on the social media platform TikTok. The key research questions ask how these content creators interpret TikTok's platform policies and processes through their interactions on the app, how these interpretations influence content creation on TikTok, and how creators feel about moderation in the absence of platform transparency about how content is moderated. To answer these questions, I conducted narrative-driven interviews with six TikTok creators and analyzed these stories alongside online testimonials. My analysis revealed that the lack of transparency around TikTok's algorithmic curation and moderation contributes to content creators feeling alienated, exploited, and unwelcome on the platform, and influences creators to adapt their content to avoid moderation, oftentimes by censoring themselves and aspects of their marginalized identities. Moreover, I found that TikTok isolates user experiences of biased content moderation, which compels creators to speak out within and across creator networks about discriminatory experiences of platform policy.
3

A Multimodal Framework for Automated Content Moderation of Children's Videos

Ahmed, Syed Hammad 01 January 2024 (has links) (PDF)
Online video platforms receive hundreds of hours of uploads every minute, making manual moderation of inappropriate content impossible. The most vulnerable consumers of malicious video content are children aged 1-5, whose attention is easily captured by bursts of color and sound. Prominent video hosting platforms like YouTube have taken measures to mitigate malicious content, but these videos often go undetected by current automated content moderation tools, which focus on removing explicit or copyrighted content. Scammers attempting to monetize their content may craft malicious children's videos that are superficially similar to educational videos but include scary and disgusting characters, violent motions, loud music, and disturbing noises. Robust classification of malicious videos requires audio representations in addition to video features. However, recent content moderation approaches rarely employ multimodal architectures that explicitly consider non-speech audio cues. Additionally, there is a dearth of comprehensive datasets for content moderation tasks that include such audio-visual feature annotations. This dissertation addresses these challenges and makes several contributions to the problem of content moderation for children's videos. The first contribution is identifying a set of malicious features that are harmful to preschool children but remain unaddressed, and publishing Malicious or Benign (MOB), a labeled dataset of cartoon video clips that include these features. We provide a user-friendly web-based video annotation tool that can easily be customized and used for video classification tasks with any number of ground-truth classes. The second contribution is adapting state-of-the-art Vision-Language models to apply content moderation techniques on the MOB benchmark. We perform prompt engineering and an in-depth analysis of how context-specific language prompts affect the content moderation performance of different CLIP (Contrastive Language-Image Pre-training) variants. The dissertation introduces new benchmark natural-language prompt templates for cartoon videos that can be used with Vision-Language models. Finally, we introduce a multimodal framework that adds the audio modality for more robust content moderation of children's cartoon videos, and we extend our dataset to include audio labels. We present ablations demonstrating the enhanced performance gained by adding audio. The audio modality and prompt learning are incorporated while keeping the backbone modules of each modality frozen. Experiments were conducted on a multimodal version of the MOB dataset in both supervised and few-shot settings.
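As an illustration of the prompt-based approach described above, the sketch below performs zero-shot classification of a single cartoon frame with an off-the-shelf CLIP model via Hugging Face transformers. The prompt wordings and the benign/malicious label pair are hypothetical stand-ins, not the dissertation's benchmark templates.

```python
# Zero-shot frame classification with CLIP (Hugging Face transformers).
# A minimal sketch: the prompt templates and labels are illustrative
# assumptions, not the MOB benchmark's actual prompts.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Context-specific language prompts for cartoon content (hypothetical wording).
prompts = [
    "a frame from a safe, educational cartoon for preschool children",
    "a frame from a disturbing cartoon with scary or violent characters",
]

frame = Image.open("frame.png")  # one frame sampled from a video clip
inputs = processor(text=prompts, images=frame, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(["benign", "malicious"], probs[0].tolist())))
```

In practice, frame-level scores would be aggregated over a clip, and the dissertation's multimodal framework additionally fuses audio features, which this image-only sketch omits.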
4

Countering Terrorist Content Online: Removal = Success? : A Critical Discourse Analysis of the EU Regulation 2021/784

McCarthy Hartman, Nina January 2024 (has links)
This thesis critically interrogates the underlying assumptions that legitimise the hard regulation of online platforms regarding terrorist content, turning to the case of the EU Regulation 2021/784. Utilising qualitative critical discourse analysis, the study analyses how the EU's strategy against terrorist content online is discursively legitimised, through the lens of Theo van Leeuwen's framework for discursive legitimisation strategies and with a focus on moral and rational justifications. The study's empirical contribution demonstrates how the EU's strategy is legitimised primarily through public security, fundamental rights, digital economy and efficiency discourses. It contributes theoretically by highlighting how counter-terrorism measures for online spaces function through rationalisation and moralisation strategies that legitimise policies as reasonable and morally justifiable, when in fact they rest upon a series of contested assumptions and narratives about the threat from terrorist content. Furthermore, the study argues that the regulation contributes to the institutionalisation of online platforms' role in countering terrorist content online and reproduces unequal power relations between large and small hosting service companies, public authorities, and individuals.
5

Understanding Social Media Users' Perceptions of Trigger and Content Warnings

Gupta, Muskan 18 October 2023 (has links)
The prevalence of distressing content on social media raises concerns about users' mental well-being, prompting the use of trigger warnings (TW) and content warnings (CW). However, varying practices across platforms indicate a lack of clarity among users regarding these warnings. To gain insight into how users experience and use these warnings, we conducted interviews with 15 regular social media users. Our findings show that users generally have a positive view of warnings, but there are differences in how they understand and use them. Challenges related to using TW/CW on social media emerged that make engaging with such content a complex decision. These challenges include determining which topics require warnings, navigating logistical complexities related to usage norms, and considering the impact of warnings on social media engagement. We also found that external factors, such as how the warning and content are presented, and internal factors, such as the viewer's mindset, tolerance, and level of interest, play a significant role in the user's decision-making process when interacting with content that has TW/CW. Participants emphasized the need for better education on warnings and triggers in social media and offered suggestions for improving warning systems. They also recommended post-trigger support measures. The implications and future directions include promoting author accountability, introducing nudges and interventions, and improving post-trigger support to create a more trauma-informed social media environment. / Master of Science / In today's world of social media, you often come across distressing content that can affect your mental well-being. To address this concern, platforms and content authors use trigger warnings (TW) and content warnings (CW) to alert users about potentially upsetting content. However, different platforms have different ways of using these warnings, which can be confusing for users. To better understand how people experience and use these warnings, we conducted interviews with 15 regular social media users. What we found is that, in general, users have a positive view of these warnings, but there are variations in how they understand and use them. Using TW/CW on social media can be challenging because it involves deciding which topics should have warnings, dealing with the different rules on each platform, and thinking about how warnings affect people's engagement with content. We also discovered that various factors influence how people decide whether to engage with warned content. These factors include how the warning and content are presented and the person's own mindset, tolerance for certain topics, and level of interest. Our study participants highlighted the need for better education about warnings and triggers on social media. They also had suggestions for improving how these warnings are used and recommended providing support to users after they encounter distressing content. Looking ahead, our findings suggest the importance of holding content creators accountable, introducing helpful tools and strategies, and providing better support to make social media a more empathetic and supportive place for all users.
6

"Inte i våra kanaler" : Journalistisk innehållsmoderering av kommentarsfält på sociala medier / "Not on our channels" : Journalistic content moderation of comment sections on social media

Lund Hanefjord, Malva January 2024 (has links)
The purpose of this study is to investigate the role and practices of journalism in moderating comment sections on social media. The study addresses the following questions: How do journalists determine which comments to delete and which to keep in the comment section? Why does journalism engage in moderation? What problems and solutions exist? And how do approaches to regulating comment sections differ between private news media and public service? The study is based on six qualitative interviews with journalists from both private news media and public service. We thematically analyzed the empirical material using analytical tools from discourse psychology, dividing it into three prominent interpretive repertoires: the journalist's democratic dilemma, the journalist's role as a content moderator, and the journalist as a protector. The analysis is supported by theory and previous research on the journalist's role in society, the changing role of journalism, the journalist's role as a content moderator, and journalism and participation. The results of the study show that all participating journalists had an editorial policy to rely on. Although the journalists reflected on the democratic factor and the public's right to freedom of speech, they felt they could moderate the comment sections as long as it was supported by their policy. All interviewees believe that moderating the comment sections is necessary to strike a balance between letting freedom of speech and democracy flow and preventing the comment sections from being overwhelmed by hate and threats, something they found necessary to maintain their legitimacy. They also wanted to protect their news subjects, so that the public would dare to participate in the news without fear of facing hateful comments. Furthermore, it emerged that the journalists we interviewed who work for private news media were more relaxed about moderating the comment sections and removing comments, while the interviewees who worked for public service were more cautious and wanted full support from the policy before removing comments.
7

Conspiracy theories and freedom of speech in the online sphere : An analysis of QAnon’s ban from Facebook and Twitter

Meyer, Stella January 2021 (has links)
At the crossroads of law, conspiracy theory research and philosophy, this thesis investigates the permanent ban of QAnon from Facebook and Twitter, determining whether this deplatforming constitutes a violation of free speech. A content analysis of free speech legislation in Germany and the US first makes it evident that the matter needs to be approached from an ethical perspective rather than a legal one. To this end, I test an ethical framework suggested by Cíbik and Hardoš (2020). Based on the concept of ethical unreasonableness, the framework is used to determine whether QAnon is harmful and whether its ban was justified. The case study consists of an in-depth analysis of QAnon's evolution, distribution and core narratives in Germany and the US, followed by an examination of Facebook and Twitter's justifications for deplatforming all QAnon assets. The ethical framework is then applied to selected QAnon narratives based on their prevalence from February 2020 to February 2021. It becomes clear that the framework at hand needs adjustment and is unsuitable for everyday content moderation, but it could still be used by social media companies for training purposes to improve decision making. The question of whether deplatforming QAnon violated free speech has no single answer; it depends on the point of view taken. Ultimately, the role and responsibilities of big social media companies in today's societies need to be redefined before any content moderation measures can be adequately examined.
8

[en] CONTENT MODERATION IN SOCIAL MEDIA: FOR A REGULATION THAT PROMOTES THE FREEDOM OF EXPRESSION / [pt] MODERAÇÃO DE CONTEÚDO EM REDES SOCIAIS: POR UMA REGULAÇÃO QUE PROMOVA A LIBERDADE DE EXPRESSÃO

JULIANA LIBMAN 12 June 2023 (has links)
[en] The present study analyzes the content moderation activity carried out by major social networks such as Facebook, Instagram and Twitter. It first examines how social networks reached the status they hold today as major influencers of public discourse, and how content moderation has developed into the activity it is now: one necessary to ensure users' right to freedom of expression, safeguarding them from toxic digital spaces marked by the proliferation of illicit and false content. Next, the challenges involved in content moderation are presented. Finally, based on the concepts and problems presented, the study analyzes the best way to regulate content moderation in accordance with the Brazilian Internet Legal Framework (Marco Civil da Internet, Law No. 12,965/2014).
9

Examination of Social Media Algorithms’ Ability to Know User Preferences

Barrera Corrales, Daniel 02 May 2023 (has links)
No description available.
10

Digital Platform Dynamics: Governance, Market Design and AI Integration

Ilango Guru Muniasamy (19149178) 17 July 2024 (has links)
<p dir="ltr">In my dissertation, I examine the dynamics of digital platforms, starting with the governance practices of established platforms, then exploring innovative design approaches, and finally the integration of advanced AI technologies in platforms. I structure this exploration into three essays: in the first essay, I discuss moderation processes in online communities; in the second, I propose a novel design for a blockchain-based green bond exchange; and in the third, I examine how AI-based decision-making platforms can be enhanced through synthetic data generation.</p><p dir="ltr">In my first essay, I investigate the role of moderation in online communities, focusing on its effect on users' participation in community moderation. Using data from a prominent online forum, I analyze changes in users' moderation actions (upvoting and downvoting of others' content) after they experience a temporary account suspension. While I find no significant change in their upvoting behavior, my results suggest that users downvote more after their suspension. Combined with findings on lower quality and conformity with the community while downvoting, the results suggest an initial increase in hostile moderation after suspension, although these effects dissipate over time. The short-term hostility post-suspension has the potential to negatively affect platform harmony, thus revealing the complexities of disciplinary actions and their unintended consequences.</p><p dir="ltr">In the second essay, I shift from established platforms to innovations in platform design, presenting a novel hybrid green bond exchange that integrates blockchain technology with thermodynamic principles to address market volatility and regulatory uncertainty. The green bond market, despite its high growth, faces issues like greenwashing, liquidity constraints, and limited retail investor participation. To tackle these challenges, I propose an exchange framework that uses blockchain for green bond tokenization, enhancing transparency and accessibility. By conceptualizing the exchange as a thermodynamic system, I ensure economic value is conserved and redistributed, promoting stability and efficiency. I include key mechanisms in the design to conserve value in the exchange and deter speculative trading. Through simulations, I demonstrate significant improvements in market stability, liquidity, and efficiency, highlighting the effectiveness of this interdisciplinary approach and offering a robust framework for future financial system development.</p><p dir="ltr">In the third essay, I explore the integration of advanced AI technologies, focusing on how large language models (LLMs) like GPT can be adapted for specialized fields such as education policy and decision-making. To address the need for high-quality, domain-specific training data, I develop a methodology that combines agent-based simulation (ABS) with synthetic data generation and GPT fine-tuning. This enhanced model provides accurate, contextually relevant, and interpretable insights for educational policy scenarios. My approach addresses challenges such as data scarcity, privacy concerns, and the need for diverse, representative data. Experiments show significant improvements in model performance and robustness, offering policymakers a powerful tool for exploring complex scenarios and making data-driven decisions. 
This research advances the literature on synthetic data in AI and agent-based modeling in education, demonstrating the adaptability of large language models to specialized domains.</p>
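As a rough illustration of the before/after comparison described in the first essay, the sketch below contrasts users' downvoting rates before and after a temporary suspension. The toy data, variable names, and the paired t-test are illustrative assumptions; the dissertation's actual data and econometric specification are not reproduced here.

```python
# Before/after comparison of downvoting around a suspension, in the spirit
# of the first essay. The simulated rates and the paired t-test are
# illustrative assumptions, not the dissertation's actual specification.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n_users = 200

# Hypothetical per-user mean daily downvote counts over 30 days before and
# 30 days after a temporary suspension (post rates drawn slightly higher).
rates = pd.DataFrame({
    "user": range(n_users),
    "pre": rng.poisson(2.0, n_users) / 30,
    "post": rng.poisson(2.4, n_users) / 30,
})

# Paired test: do users downvote more after returning from suspension?
t, p = stats.ttest_rel(rates["post"], rates["pre"])
print(f"mean pre={rates['pre'].mean():.3f}, post={rates['post'].mean():.3f}, "
      f"t={t:.2f}, p={p:.3g}")
```

A study like this one would additionally control for time trends and content quality rather than rely on a raw paired comparison; the sketch only conveys the shape of the analysis.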
