1

Understanding Social Media Users' Perceptions of Trigger and Content Warnings

Gupta, Muskan — 18 October 2023
The prevalence of distressing content on social media raises concerns about users' mental well-being, prompting the use of trigger warnings (TW) and content warnings (CW). However, varying practices across platforms indicate a lack of clarity among users regarding these warnings. To gain insight into how users experience and use these warnings, we conducted interviews with 15 regular social media users. Our findings show that users generally have a positive view of warnings, but they differ in how they understand and use them. Several challenges related to using TW/CW on social media emerged, making engagement with such content a complex decision. These challenges include determining which topics require warnings, navigating logistical complexities related to usage norms, and considering the impact of warnings on social media engagement. We also found that external factors, such as how the warning and content are presented, and internal factors, such as the viewer's mindset, tolerance, and level of interest, play a significant role in the user's decision-making process when interacting with content that carries a TW/CW. Participants emphasized the need for better education on warnings and triggers in social media and offered suggestions for improving warning systems, including post-trigger support measures. The implications and future directions include promoting author accountability, introducing nudges and interventions, and improving post-trigger support to create a more trauma-informed social media environment.

Master of Science

In today's world of social media, you often come across distressing content that can affect your mental well-being. To address this concern, platforms and content authors use trigger warnings (TW) and content warnings (CW) to alert users about potentially upsetting content. However, different platforms have different ways of using these warnings, which can be confusing for users. To better understand how people like you experience and use these warnings, we conducted interviews with 15 regular social media users. What we found is that, in general, users have a positive view of these warnings, but there are variations in how they understand and use them. Using TW/CW on social media can be challenging because it involves deciding which topics should have warnings, dealing with the different rules on each platform, and thinking about how warnings affect people's engagement with content. We also discovered that various factors influence how people decide whether to engage with warned content. These factors include how the warning and content are presented and the person's own mindset, tolerance for certain topics, and level of interest. Our study participants highlighted the need for better education about warnings and triggers on social media. They also had suggestions for improving how these warnings are used and recommended providing support to users after they encounter distressing content. Looking ahead, our findings suggest the importance of holding content creators accountable, introducing helpful tools and strategies, and providing better support to make social media a more empathetic and supportive place for all users.
2

Investigating the Effects of Nudges for Facilitating the Use of Trigger Warnings and Content Warnings

Altland, Emily Caroline — 27 June 2024
Social media can trigger past traumatic memories in viewers when posters share sensitive content. Strict content moderation and blocking/reporting features do not work when triggers are nuanced and the posts may not violate site guidelines. Viewer-side interventions exist to help filter and hide certain content, but these put all the responsibility on the viewer and typically act as 'aftermath interventions'. Trigger and content warnings offer a unique solution, giving viewers the agency to scroll past content they may want to avoid. However, posters lack education and awareness about how to add a warning and which topics may require one. We conducted this study to determine whether poster-side interventions, such as a nudge algorithm that prompts posters to add warnings to sensitive posts, would increase social media users' knowledge and understanding of how and when to add trigger and content warnings. To investigate the effectiveness of a nudge algorithm, we designed the TWIST (Trigger Warning Includer for Sensitive Topics) app. The TWIST app scans tweet content to determine whether a TW/CW is needed and, if so, nudges the social media poster to add one, with an example of what it may look like. We then conducted a four-part mixed-methods study with 88 participants. Our key findings are: (1) nudging social media users to add TW/CW educates them on triggering topics and raises their awareness when posting in the future; (2) social media users can learn how to add a trigger/content warning through using a nudge app; (3) we gained an understanding of how a nudge algorithm like TWIST can change people's behavior and perceptions; and (4) we provide empirical evidence of the effectiveness of such interventions, even in short-term use.

Master of Science

Social media can trigger past traumatic memories in viewers when posters share sensitive content. Strict content moderation and blocking/reporting features do not work when triggers are nuanced and the posts may not violate site guidelines. Viewer-side interventions exist to help filter and hide certain content, but these put all the responsibility on the viewer and typically act as 'aftermath interventions'. Trigger and content warnings offer a unique solution, giving viewers the agency to scroll past content they may want to avoid. However, posters lack education and awareness about how to add a warning and which topics may require one. We conducted this study to determine whether poster-side interventions, such as a nudge algorithm that prompts posters to add warnings to sensitive posts, would increase social media users' knowledge and understanding of how and when to add trigger and content warnings. To investigate the effectiveness of a nudge algorithm, we designed the TWIST (Trigger Warning Includer for Sensitive Topics) app, then conducted a four-part mixed-methods study with 88 participants. Our findings show that nudging social media users to add TW/CW educates them on triggering topics and raises their awareness when posting in the future. They also show that social media users can learn how to add a trigger/content warning through using a nudge app.
3

Sensitive Content Detection in Video with Deep Learning

Freitas, Pedro Vinicius Almeida de — 09 June 2022
Massive amounts of video are uploaded to video-hosting platforms every minute. This volume of data presents a challenge in controlling the type of content uploaded to these services, as the platforms are responsible for any sensitive media uploaded by their users. There has been an abundance of research on methods for automatic detection of sensitive content. In this dissertation, we define sensitive content as sex, extreme physical violence, gore, or any scenes potentially disturbing to the viewer. We present a sensitive-video dataset for binary video classification (whether or not a video contains sensitive content), containing 127 thousand annotated videos, each with its extracted audio and visual embeddings. We also trained and evaluated four baseline models for the task of sensitive-content detection in video. The best-performing model achieved a 99 percent weighted F2-score on our test subset and 88.83 percent on the Pornography-2k dataset.
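The F2-score reported above weighs recall more heavily than precision, which suits sensitive-content detection: missing a sensitive video is costlier than a false alarm. A minimal sketch of the F-beta computation from confusion counts (the thesis's "weighted" variant averages per-class scores, which is omitted here):

```python
def fbeta_score(tp: int, fp: int, fn: int, beta: float = 2.0) -> float:
    """F-beta from confusion counts; beta=2 weighs recall twice as
    heavily as precision (the F2-score)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With 90 true positives, 10 false positives, and 5 missed sensitive
# videos: precision = 0.9, recall ≈ 0.947, F2 = 0.9375.
```

Setting beta below 1 would instead favor precision, appropriate when over-flagging benign uploads is the bigger concern.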
