1

The (Dis)information Marketplace and Online Content Moderation

Priscilla Regina da Silva, 11 March 2022
The popularization of the Internet in the late 1990s inaugurated a new dynamic of communication and information consumption. Individuals who had passively consumed edited news also became producers of information, which in turn came to be available instantly and abundantly. This work observes the regulatory dynamics of freedom of expression and access to information in these new spaces, which are privately owned but take on the character of a public arena of dialogue and deliberation grounded in a non-state model. The main object of the analysis is the so-called phenomenon of disinformation, which, although it did not begin with or because of the Internet, finds there a favorable environment for rapid dissemination and viral spread. Through a study of the relevant legislation and a qualitative methodology, the interdisciplinary doctrine on the subject was analyzed. The study grouped the Brazilian statutes and bills that bear on the regulation of the issue, found a lack of legal instruments to address the problem, and noted possibilities for dialogue among the legislative, social, technological, and market spheres. It concluded that regulated partnerships take effect in everyday practice and that this also requires a participatory stance and the involvement of different social actors.
2

Making Sense of Digital Content Moderation from the Margins

Fernandes, Margaret Burke, 10 June 2022
This dissertation, Making Sense of Digital Content Moderation from the Margins, examines how content creators who are marginalized by race, sexuality, gender, ethnicity, and disability understand their experiences of content moderation on the social media platform TikTok. Using critical interface and narrative-based inquiry methods with six marginalized content creators on TikTok, I argue that marginalized creators navigate the platform's opaque content moderation infrastructure by drawing on their embodied experiences. The key research questions ask how these creators interpret TikTok's platform policies and processes through their interactions on the app, how these interpretations influence their content creation, and how creators feel about moderation in the absence of platform transparency about how content is moderated. To answer these questions, I conducted narrative-driven interviews with six TikTok creators and analyzed their stories alongside online testimonials in eight Change.org petitions. My analysis revealed that the lack of transparency around TikTok's algorithmic curation and moderation contributes to content creators feeling alienated, exploited, frustrated, and unwelcome on the platform, and influences them to adapt their content to avoid moderation, often by censoring themselves and aspects of their marginalized identities. Over time, the accumulation of content moderation micro-interactions diminishes the ability of marginalized content creators to trust content moderation processes. My analysis also shows how TikTok's user experience design and opaque content moderation practices contribute to an affective platform environment in which creators are compelled to speak out and across creator networks about such gaps in experience and platform policy. I conclude with a discussion of how my findings about content moderation and transparency contribute to conversations in writing-related scholarship, especially as they pertain to writing assessment, technical communication, and algorithmic research methodologies. / Doctor of Philosophy / In recent years, marginalized content creators on TikTok have sounded the alarm about the way the platform's content moderation and algorithmic recommendation disadvantage them. Drawing on narrative-driven interviews with six marginalized TikTok creators and on online testimonials, this dissertation shows that the lack of transparency around TikTok's algorithmic curation and moderation leaves creators feeling alienated, exploited, and unwelcome, and pushes them to adapt their content to avoid moderation, often by censoring themselves and aspects of their marginalized identities. Moreover, I found that TikTok isolates user experiences of biased content moderation, which compels creators to speak out across creator networks about discriminatory experiences of platform policy.
3

Countering Terrorist Content Online: Removal = Success? A Critical Discourse Analysis of EU Regulation 2021/784

McCarthy Hartman, Nina, January 2024
This thesis critically interrogates the underlying assumptions that legitimise the hard regulation of online platforms with respect to terrorist content, turning to the case of EU Regulation 2021/784. Utilising qualitative critical discourse analysis, the study examines how the EU's strategy against terrorist content online is discursively legitimised, through the lens of Theo van Leeuwen's framework of discursive legitimisation strategies, focusing on moral and rational justifications. The study's empirical contribution demonstrates that the EU's strategy is legitimised primarily through discourses of public security, fundamental rights, the digital economy, and efficiency. It contributes theoretically by highlighting how counter-terrorism measures in online spaces operate through rationalisation and moralisation strategies that legitimise policies as reasonable and morally justifiable, when in fact they rest on a series of contested assumptions and narratives about the threat posed by terrorist content. Furthermore, the study argues that the regulation contributes to the institutionalisation of online platforms' role in countering terrorist content and reproduces unequal power relations between large and small hosting service companies, public authorities, and individuals.
4

Understanding Social Media Users' Perceptions of Trigger and Content Warnings

Gupta, Muskan, 18 October 2023
The prevalence of distressing content on social media raises concerns about users' mental well-being, prompting the use of trigger warnings (TW) and content warnings (CW). However, varying practices across platforms indicate a lack of clarity among users regarding these warnings. To gain insight into how users experience and use these warnings, we conducted interviews with 15 regular social media users. Our findings show that users generally have a positive view of warnings but differ in how they understand and use them. Challenges related to using TW/CW on social media emerged, making engagement with warned content a complex decision. These challenges include determining which topics require warnings, navigating logistical complexities related to usage norms, and considering the impact of warnings on social media engagement. We also found that external factors, such as how the warning and content are presented, and internal factors, such as the viewer's mindset, tolerance, and level of interest, play a significant role in users' decision-making when interacting with content that carries a TW/CW. Participants emphasized the need for better education on warnings and triggers in social media, offered suggestions for improving warning systems, and recommended post-trigger support measures. The implications and future directions include promoting author accountability, introducing nudges and interventions, and improving post-trigger support to create a more trauma-informed social media environment. / Master of Science / In today's world of social media, you often come across distressing content that can affect your mental well-being. To address this concern, platforms and content authors use trigger warnings (TW) and content warnings (CW) to alert users to potentially upsetting content. However, different platforms use these warnings in different ways, which can be confusing for users. To better understand how people experience and use these warnings, we conducted interviews with 15 regular social media users. We found that, in general, users have a positive view of these warnings but vary in how they understand and use them. Using TW/CW on social media can be challenging because it involves deciding which topics should have warnings, dealing with the different norms on each platform, and thinking about how warnings affect people's engagement with content. We also discovered that various factors influence how people decide whether to engage with warned content, including how the warning and content are presented and the person's own mindset, tolerance for certain topics, and level of interest. Our study participants highlighted the need for better education about warnings and triggers on social media, suggested improvements to how these warnings are used, and recommended providing support to users after they encounter distressing content. Looking ahead, our findings point to the importance of holding content creators accountable, introducing helpful tools and strategies, and providing better support to make social media a more empathetic and supportive place for all users.
5

Conspiracy theories and freedom of speech in the online sphere: An analysis of QAnon's ban from Facebook and Twitter

Meyer, Stella, January 2021
At the crossroads of law, conspiracy theory research, and philosophy, this thesis investigates the permanent ban of QAnon from Facebook and Twitter, determining whether its deplatforming constitutes a violation of free speech. A content analysis of free speech legislation in Germany and the US first makes evident that the matter must be approached from an ethical rather than a legal perspective. To this end, I test an ethical framework suggested by Cíbik and Hardoš (2020). Based on the concept of ethical unreasonableness, the framework is used to determine whether QAnon is harmful and whether its ban was justified. The case study consists of an in-depth analysis of QAnon's evolution, distribution, and core narratives in Germany and the US, followed by an examination of Facebook's and Twitter's justifications for deplatforming all QAnon assets. The ethical framework is then applied to selected QAnon narratives based on their prevalence between February 2020 and February 2021. The analysis shows that the framework needs adjustment and is unsuitable for everyday content moderation, but that social media companies could still use it for training purposes to improve decision making. The question of whether deplatforming QAnon violated free speech is not easily answered: depending on the point of view, it either is or is not a violation of freedom of speech. Ultimately, the role and responsibilities of big social media companies in today's societies need to be redefined before any content moderation measures can be adequately examined.
6

Content Moderation in Social Media: For a Regulation That Promotes Freedom of Expression

Juliana Libman, 12 June 2023
This study presents an analysis of the content moderation activity carried out by the major social networks, such as Facebook, Instagram, and Twitter. It first examines how social networks attained their current status as major influencers of public discourse and how content moderation developed into its present form: an activity necessary to secure users' right to freedom of expression, users who must be safeguarded from toxic digital spaces marked by the proliferation of illicit and false content. It then presents the challenges involved in content moderation. Finally, based on the concepts and problems presented, it analyzes the best way to regulate content moderation in accordance with the Brazilian Internet Civil Framework (Marco Civil da Internet, Law No. 12,965/2014).
7

Examination of Social Media Algorithms’ Ability to Know User Preferences

Barrera Corrales, Daniel, 2 May 2023
No description available.
