11

Understanding Persuasive Communication: The Role of Heuristic Thinking, Context, and Debiasing in the Evaluation of Online Political Information

Nerino, Valentina 19 July 2023 (has links)
The technological revolution brought about by the Internet has affected numerous aspects of our everyday life, including the way we conceive of and carry out information and communication practices. The Internet has drastically reduced the cost of accessing and sharing data, increasing both the amount of information freely available for consultation and the speed at which it can be accessed and exchanged. Social Networking Platforms (SNPs) – such as Facebook, Twitter, and YouTube – are of particular interest here, having become fully-fledged information environments in their own right over the last decade, with an increasing number of individuals identifying such platforms as their main source of news, including political news. Given this informational role, SNPs have attracted increasing attention regarding the proliferation of propaganda – definable as information that, though not necessarily fabricated, is specifically designed to rally public support and disparage opposing views rather than serve an informative purpose. Even though compelling evidence has revealed the widespread tendency of political actors to employ automated tools to propagate this kind of content on SNPs, scholars have also demonstrated that social media users themselves play a crucial role in this proliferation. Understanding how individuals interact with and are affected by these information environments is therefore crucial to untangling the functioning of this political communication phenomenon. In particular, identifying the mechanisms underlying the evaluation of political propaganda matters not only for assessing the persuasive power this communication practice exerts, but also for developing countermeasures able to mitigate its impact on social media users’ deliberations.
Therefore, the main aim of this doctoral research is to unravel the functioning of this multidimensional phenomenon by adopting a cognitive-sociological approach that draws on the Dual Process Model of Cognition to theoretically conceptualize and empirically assess the effectiveness of this communication practice. By addressing the specific cognitive mechanisms that regulate information processing, the goal is to assess whether reasoning style – and, in particular, heuristic thinking – affects judgments concerning the validity and shareability of political propaganda, thus enhancing its circulation. Given the goals of this project and the characteristics of the phenomenon under investigation, a mixed-methods approach combining computational social science techniques and experimental design was adopted to explore the role played by reasoning in this kind of persuasion process. The first set of methods was employed to collect and analyze social media data on the 2019 European Parliament elections and to assess the extent to which automation (i.e., political bots) and heuristic-based persuasion strategies were employed by political actors. Results indicate the presence of both among the political communication techniques deployed for this electoral event. Building on these findings, the second set of methods was employed to design and implement a Discrete Choice Experiment (DCE). By means of this experimental technique – designed to elicit individual preferences in contexts where revealed-preference data are unavailable – it was possible to assess how individuals interpret and respond to online propaganda messages and which factors affect this evaluation process. Two main factors were explored: the message features of online propaganda and the cognitive context in which information processing takes place.
Specifically, this experimental assessment concerned six different informational cues (i.e., source, endorsement, popularity, emotional salience, stereotyping, and moral valence) and two different cognitive contexts (i.e., a “cognitive scarcity” and a “debiasing” one, in which heuristic thinking and analytic reasoning were prompted respectively). Findings highlighted that both message features and the cognitive context in which evaluation is performed affect the likelihood of considering political messages valid and shareable on SNPs. Moreover, they also indicate that individual characteristics ascribable to supra-individual, cultural factors (e.g., perception of diversity) moderate such an effect. Overall, this research project and its outputs contribute to the existing literature on online propaganda by exploring the mechanisms underlying the persuasion processes triggered by this political communication practice. By adopting a research approach that puts recipients at the center of the investigation without neglecting the social context they are part of, this work proposes a suitable way to investigate the functioning of online propaganda and, thus, assess its actual effectiveness.
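The cue-based evaluation a DCE elicits can be sketched as a choice model: respondents repeatedly pick between message profiles, and a logit fit on the attribute differences recovers the weight of each cue. The sketch below is a minimal illustration only – the cue names, weights, and simulated data are assumptions, not the thesis's actual design or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical part-worths for three binary message cues (say, trusted
# source, emotional salience, popularity signal) -- illustrative values,
# not the thesis's estimates.
true_beta = np.array([1.2, 0.8, -0.4])

n_tasks = 5000
# Each choice task pits two message profiles (A vs. B) against each other.
a = rng.integers(0, 2, size=(n_tasks, 3))
b = rng.integers(0, 2, size=(n_tasks, 3))
diff = a - b  # in a logit model, only the attribute differences matter

# Simulate choices: the probability of picking A follows a binary logit.
p_choose_a = 1.0 / (1.0 + np.exp(-diff @ true_beta))
chose_a = rng.random(n_tasks) < p_choose_a

# Fitting a logistic regression (no intercept) on the differences
# recovers the cue weights from the observed choices.
model = LogisticRegression(fit_intercept=False).fit(diff, chose_a)
print(np.round(model.coef_[0], 2))
```

With enough choice tasks, the fitted coefficients approach the simulated part-worths, which is what lets a DCE quantify how much each informational cue sways validity and shareability judgments.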
12

Separating Tweets from Croaks: Detecting Automated Twitter Accounts with Supervised Learning and Synthetically Constructed Training Data

Teljstedt, Erik Christopher January 2016 (has links)
In this thesis, we study the problem of detecting automated Twitter accounts related to the Ukraine conflict using supervised learning. A striking problem with the collected data set is that it initially lacked a ground truth. Traditionally, supervised learning approaches rely on manual annotation of training sets, but this is tedious work and becomes expensive for large and constantly changing collections. We present a novel approach that uses a rule-based classifier to synthetically generate large amounts of labeled Twitter accounts for automation detection. It significantly reduces the effort and resources needed and speeds up the process of adapting classifiers to changes in the Twitter domain. The classifiers were evaluated on a manually annotated test set of 1,000 Twitter accounts. The results show that the rule-based classifier by itself achieves a precision of 94.6% and a recall of 52.9%. Furthermore, the results show that classifiers based on supervised learning can learn from the synthetically generated labels. At best, these machine-learning-based classifiers achieved a slightly lower precision of 94.1% compared to the rule-based classifier, but at a significantly better recall of 93.9%.
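The core idea – letting a rule-based labeler stand in for manual annotation – can be sketched as below. The account features and labeling rule are hypothetical stand-ins, not the thesis's actual rules or results; the point is only that a supervised model trained on synthetic labels reproduces the labeling behaviour on unseen accounts.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical per-account features: tweets per day, follower/friend
# ratio, and the share of posts made via automation-friendly API clients.
X = rng.random((4000, 3)) * np.array([200.0, 5.0, 1.0])

# A stand-in rule-based labeler (not the thesis's actual rules): very
# chatty accounts posting mostly through the API are labeled as bots.
synthetic_labels = (X[:, 0] > 120) & (X[:, 2] > 0.7)

X_tr, X_te, y_tr, y_te = train_test_split(X, synthetic_labels, random_state=0)

# A supervised learner trained on the synthetic labels alone should
# reproduce the labeling behaviour on accounts it has never seen.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
f1 = f1_score(y_te, clf.predict(X_te))
print(round(f1, 2))
```

In the thesis setting the learner goes further than merely reproducing the rule: trained on many rule-labeled accounts, it generalizes to borderline accounts the rule misses, which is how recall improved from 52.9% to 93.9% at comparable precision.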
13

Discovering and Mitigating Social Data Bias

January 2017 (has links)
abstract: Exabytes of data are created online every day, and nowhere is this deluge more apparent than on social media. Naturally, finding ways to leverage this unprecedented source of human information is an active area of research. Social media platforms have become laboratories for conducting experiments about people at scales thought unimaginable only a few years ago. Researchers and practitioners use social media to extract actionable patterns, such as where aid should be distributed in a crisis. However, the validity of these patterns relies on having a representative dataset. As this dissertation shows, the data collected from social media is seldom representative of the activity of the site itself, and less so of human activity. This means that the results of many studies are limited by the quality of the data they collect. The finding that social media data is biased motivates the main challenge addressed by this thesis. I introduce three sets of methodologies to correct for bias. First, I design methods to deal with data collection bias. I offer a methodology that can find bias within a social media dataset by comparing the collected data with other sources to detect bias in a stream, and I introduce a crawling strategy that minimizes the amount of bias in the resulting dataset. Second, I introduce a methodology to identify bots and shills within a social media dataset. This directly addresses the concern that the users of a social media site are not representative. Applying these methodologies allows the population under study on a social media site to better match that of the real world. Finally, the dissertation discusses perceptual biases, explains how they affect analysis, and introduces computational approaches to mitigate them.
The results of the dissertation allow for the discovery and removal of different levels of bias within a social media dataset. This has important implications for social media mining, namely that the behavioral patterns and insights extracted from social media will be more representative of the populations under study. / Doctoral Dissertation, Computer Science, 2017
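Comparing a collected stream against an external reference source, as the first methodology describes, can be sketched as a goodness-of-fit test: if the category shares in the crawled sample diverge significantly from the reference distribution, the stream is flagged as biased. The categories and numbers below are invented for illustration, not taken from the dissertation.

```python
import numpy as np
from scipy.stats import chisquare

# Reference shares for four categories (e.g. languages) on the full
# platform -- illustrative numbers, not from the dissertation.
reference = np.array([0.55, 0.25, 0.15, 0.05])

# Category counts observed in a crawled sample of 10,000 posts.
observed = np.array([6500, 1800, 1200, 500])

# A chi-square goodness-of-fit test compares the sample to the reference.
expected = reference * observed.sum()
stat, p = chisquare(observed, f_exp=expected)
print(p < 0.01)  # → True: the sampled stream is flagged as biased
```

A crawler could run such a check periodically and rebalance its collection strategy whenever the divergence from the reference grows significant.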
14

Clustering Web Users by Mouse Movement to Detect Bots and Botnet Attacks

Morgan, Justin L 01 March 2021 (has links) (PDF)
The need for website administrators to efficiently and accurately detect the presence of web bots has proven to be a challenging problem. As the sophistication of modern web bots increases – specifically, their ability to closely mimic the behavior of humans – web bot detection schemes quickly become obsolete as they fail to maintain effectiveness. Though machine learning-based detection schemes have been a successful approach in recent implementations, web bots are able to apply similar machine learning tactics to mimic human users, thus bypassing such detection schemes. This work seeks to address the issue of machine learning-based bots bypassing machine learning-based detection schemes by introducing a novel unsupervised learning approach that clusters users based on behavioral biometrics. The idea is that, by differentiating users based on their behavior – for example, how they use the mouse or type on the keyboard – information can be provided for website administrators to make more informed decisions on declaring whether a user is a human or a bot. This is analogous to how modern websites require users to log in before browsing, which likewise equips administrators to make such decisions. An added benefit of this approach is that it is a human observational proof (HOP), meaning that it does not inconvenience the user (user friction) with human interactive proofs (HIPs) such as CAPTCHAs or login requirements.
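The clustering idea can be sketched as follows: summarize each mouse trajectory by behavioral features (here, just speed mean and spread) and let an unsupervised algorithm separate the behaviour types. The simulated trajectories, chosen features, and use of k-means are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

def mouse_features(path):
    """Summarise a (t, x, y) trajectory by its speed mean and spread."""
    steps = np.linalg.norm(np.diff(path[:, 1:], axis=0), axis=1)
    speeds = steps / np.diff(path[:, 0])
    return np.array([speeds.mean(), speeds.std()])

def human_path(n=50):
    # Humans: irregular timing and noisy pointer movement.
    t = np.cumsum(rng.uniform(0.01, 0.05, n))
    xy = np.cumsum(rng.normal(0.0, 4.0, (n, 2)), axis=0)
    return np.column_stack([t, xy])

def bot_path(n=50):
    # A naive script: perfectly regular timing and evenly spaced steps.
    t = 0.02 * np.arange(1, n + 1)
    xy = np.column_stack([np.linspace(0, 300, n), np.linspace(0, 200, n)])
    return np.column_stack([t, xy])

X = np.array([mouse_features(human_path()) for _ in range(40)]
             + [mouse_features(bot_path()) for _ in range(40)])

# With two behaviour types present, k-means should recover two groups.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(set(labels[:40]), set(labels[40:]))
```

Because the clusters carry no labels themselves, an administrator would still inspect each cluster's behavior before declaring it human or bot, which matches the abstract's framing of the method as decision support rather than a hard classifier.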
15

Understanding the behaviour and influence of automated social agents

Gilani, Syed Zafar ul Hussan January 2018 (has links)
Online social networks (OSNs) have seen a remarkable rise in the presence of automated social agents, or social bots – a new kind of computational contagion, surreptitious and clever. What facilitates the creation of social agents is the massive human user base and business-supportive operating model of social networks. These automated agents are injected by agencies, brands, individuals, and corporations to serve their work and purposes: news and emergency communication, marketing, social activism, political campaigning, and even spam and the spreading of malicious content. Their influence was recently substantiated by coordinated social hacking and computational political propaganda. The thesis of my dissertation argues that automated agents exercise a profound impact on OSNs that translates into an array of influences on our society and systems. However latent or veiled, these agents can be successfully detected through measurement, feature extraction, and finely tuned supervised learning models. The various types of automated agents can be further unravelled through unsupervised machine learning and natural language processing, to formally inform the populace of their existence and impact.
16

Scraping bot detection using machine learning

Dezfoli, Hamta, Newman, Joseph January 2022 (has links)
Illegitimate acquisition and use of data is a problematic issue faced by many organizations operating web servers on the internet today. Despite frameworks of rules intended to prevent "scraping bots" from carrying out this action, such bots have developed advanced methods to continue taking data. Following research into what the problem is and how it can be handled, this report identifies and evaluates how machine learning can be used to detect bots. Since developing and testing a machine learning solution proved difficult, an alternative solution was also developed, aiming to polarize (separate) bot and human traffic through behavioral analysis. This particular solution to optimize traffic session classification is presented and discussed, as well as other key findings which can help in detecting and preventing these unwanted visitors.
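Polarizing traffic through behavioral analysis, as the report describes, can be sketched as scoring each session on a few behavioral signals and looking for the gap that separates the two populations. The session data, feature choices, weights, and threshold below are all invented for illustration and are not the report's actual model.

```python
# Hypothetical per-session summaries: requests per minute, fraction of
# requests with an empty referrer, and the share of distinct pages hit.
sessions = [
    ("human-1", 3.0, 0.10, 0.9),
    ("human-2", 5.5, 0.20, 0.8),
    ("bot-1", 240.0, 0.95, 1.0),
    ("bot-2", 180.0, 0.90, 1.0),
    ("human-3", 2.0, 0.05, 0.7),
]

def bot_score(rpm, empty_referrer, page_coverage):
    """Heuristic behavioural score: high request rates, missing
    referrers, and exhaustive page coverage all suggest a scraper."""
    return rpm / 60.0 + empty_referrer + page_coverage

scores = {name: bot_score(*features) for name, *features in sessions}

# Scores polarise into two bands; the gap between them suggests a
# threshold (2.5 here is chosen by eye for these illustrative numbers).
flagged = sorted(name for name, s in scores.items() if s > 2.5)
print(flagged)  # → ['bot-1', 'bot-2']
```

The appeal of this framing is that the threshold falls out of the data: when the two populations behave differently enough, their score distributions separate, and sessions in the upper band can be rate-limited or challenged.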
