1 |
Attributed Multi-Relational Attention Network for Fact-checking URL Recommendation You, Di 11 July 2019 (has links)
To combat fake news, researchers have mostly focused on detecting fake news, while journalists have built and maintained fact-checking sites (e.g., Snopes.com and Politifact.com). However, the dissemination of fake news has been greatly amplified by social media sites, and these fact-checking sites have not been fully utilized. To overcome these problems and complement existing methods against fake news, in this thesis we propose a deep-learning-based fact-checking URL recommender system to mitigate the impact of fake news on social media sites such as Twitter and Facebook. In particular, our proposed framework consists of a multi-relational attentive module and a heterogeneous graph attention network that learn the complex semantic relationships between user-URL, user-user, and URL-URL pairs. Extensive experiments on a real-world dataset show that our framework outperforms seven state-of-the-art recommendation models, achieving an improvement of at least 3 to 5.3%.
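The abstract names a multi-relational attention mechanism over user-URL pairs but does not spell it out. The toy sketch below (embedding sizes, the bilinear scoring function, and all variable names are illustrative assumptions, not the thesis's actual model) shows the general idea: score every URL for a user, normalize the scores into attention weights, and aggregate URL embeddings by those weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the thesis's actual dimensions).
d = 8                      # embedding dimension
n_users, n_urls = 4, 5     # toy graph

U = rng.normal(size=(n_users, d))   # user embeddings
V = rng.normal(size=(n_urls, d))    # fact-checking URL embeddings
W = rng.normal(size=(d, d))         # relation-specific transform (learnable)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(user_idx):
    """Score every URL for one user with a bilinear form, then
    aggregate URL embeddings by the resulting attention weights."""
    scores = U[user_idx] @ W @ V.T   # one score per user-URL pair
    alpha = softmax(scores)          # attention distribution over URLs
    return alpha, alpha @ V          # weights and attended context vector

alpha, context = attend(0)
```

In a trained model, `W` would be learned per relation type (user-URL, user-user, URL-URL) and the context vector fed into the recommendation scoring layer.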
|
2 |
The Rise and Impact of Fact-Checking in U.S. Campaigns January 2015 (has links)
abstract: Do fact-checks influence individuals' attitudes and evaluations of political candidates and campaign messages? This dissertation examines the influence of fact-checks on citizens' evaluations of political candidates. Using an original content analysis, I determine who conducts fact-checks of candidates for political office, who is being fact-checked, and how fact-checkers rate political candidates' level of truthfulness. Additionally, I employ three experiments to evaluate the impact of fact-check source and message cues on voters' evaluations of candidates for political office. / Dissertation/Thesis / Doctoral Dissertation Political Science 2015
|
4 |
Checkpoint : A case study of a verification project during the 2019 Indian election Svensson, Linus January 2019 (has links)
This thesis examines the Checkpoint research project, a verification initiative introduced to address misinformation in private messaging applications during the 2019 Indian general election. Over two months, throughout the seven phases of the election, a team of analysts verified election-related misinformation spread on the closed messaging network WhatsApp. Building on new automated technology, the project introduced a WhatsApp tipline that allowed users of the application to submit content to a team of analysts who verified user-generated content in an unprecedented way. The thesis presents a detailed ethnographic account of the implementation of the verification project. Ethnographic fieldwork has been combined with a series of semi-structured interviews in which analysts underline the challenges they faced throughout the project. Among these challenges, the study found that India's legal framework limited the scope of the project, so that the organisers had to shift from an editorial approach to a research-based one. Another problem concerned the methodology of verification: analysts perceived online verification tools as a limiting factor when verifying content, as they felt a need for more traditional journalistic verification methods. Technology was also a limiting factor. The tipline was quickly flooded with verification requests, the majority of which were unverifiable, and the team had to sort the queries manually. Existing technology such as image match checking could be implemented further to deal more efficiently with repeated queries in future projects.
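The "image match check" the analysts mention is not specified further in the abstract; one common way to group repeated tipline submissions is perceptual hashing. The minimal sketch below (an average-hash on NumPy arrays standing in for decoded grayscale images; the distance thresholds are illustrative assumptions) shows how near-duplicate images could be detected so that repeated queries are answered once.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Block-average a grayscale image down to hash_size x hash_size,
    then threshold each block against the mean to get a 64-bit hash."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = img[:bh * hash_size, :bw * hash_size].reshape(
        hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(a, b):
    """Number of differing hash bits; small means likely duplicates."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
base = rng.integers(0, 256, size=(64, 64)).astype(float)
near_dup = base + rng.normal(0, 5, size=base.shape)   # noisy re-share
unrelated = rng.integers(0, 256, size=(64, 64)).astype(float)

d_dup = hamming(average_hash(base), average_hash(near_dup))
d_other = hamming(average_hash(base), average_hash(unrelated))
```

A production system would decode real images and tune the distance cutoff, but the principle is the same: hash each incoming submission and match it against previously verified content before routing it to an analyst.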
|
5 |
Le Web politique : l'espace médiatique des candidats de la présidentielle 2012 / Political Web : the candidates' media space during the French 2012 presidential election Goldberger-Bagalino, Laura 21 December 2017 (has links)
À l'instar de la psychotechnique des industries culturelles, savoir manier l’émotion pour toujours mieux capter l’attention d’un électeur dans la sphère de l'infotainment reste l’un des premiers objectifs d'une stratégie de campagne. Cependant, le Web politique a fait émerger de nouvelles formes de communication dans les méthodes de persuasion électorale : l’impact du capital médiatique d’un candidat se joue en ligne prolongeant celui du terrain. Sur Internet, les politiques, efficacement secondés par des experts de l’image chargés de façonner leur identité, leur réputation et leur influence, ont découvert de nouveaux territoires pour amplifier leur message, se promotionner 24 heures sur 24 ou mobiliser en temps réel des militants et récolter des fonds. Contournant les médias traditionnels pour ne pas être déstabilisé par les éditorialistes, un responsable politique s’exposera plus généreusement sur les réseaux sociaux pour devenir « rédacteur en chef » de sa propre campagne. En immersion de 2011 à 2017 sur le site de microblogging Twitter qui n’utilise que 140 caractères par message, j'essaie au travers de ma thèse de déterminer la place et la fonction de ce « média social », ainsi que son poids dans la construction de l’espace médiatique des prétendants à l’Élysée, notamment lors de la présidentielle française de 2012. Quels sont les enjeux des candidats sur le Web et que nous révèlent-ils sur les mises en scène du pouvoir ? Comment les professionnels des cellules de communication politique ont-ils pris possession de ces nouveaux outils qui accélèrent la temporalité des logiques institutionnelles et qui offrent aux militants un espace de réactivité aussi performant que celui d’un journaliste pour corriger les imprécisions des promesses de campagne ? 
Quelques éléments de réponse se trouvent dans les mots-clés : défiance, fact-checking, storytelling, Big Data, modélisations prédictives, irrévérence, infotainment, riposte-party et social-TV et dans les chiffres qui leur sont associés. / Following the psycho-technics of the cultural industries, knowing how to handle emotion so as to better capture a voter's attention in the infotainment sphere remains one of the primary objectives of a campaign strategy. However, the political Web has brought new forms of communication to methods of electoral persuasion: the impact of a candidate's media capital now plays out online, extending that of the ground campaign. On the internet, politicians, efficiently assisted by image experts charged with shaping their identity, reputation and influence, have discovered new territories to amplify their message, promote themselves 24 hours a day, mobilise supporters in real time and raise funds. Circumventing traditional media rather than risking being destabilised by editorialists, a politician will expose himself or herself more generously on social networks to become the "editor-in-chief" of his or her own campaign. Immersed from 2011 to 2017 in the microblogging site Twitter, which then allowed only 140 characters per message, I try in this thesis to determine the place and function of this "social media", as well as its weight in the construction of the media space of the contenders for the Élysée, notably during the 2012 French presidential election. What is at stake for the candidates present on the Web, and what do they reveal to us about the staging of power? How have the professionals of political communication units taken possession of these new tools, which accelerate the temporality of institutional logics and offer supporters a space of reactivity as effective as a journalist's for correcting the imprecisions of campaign promises? Some answers to these questions can be found in these keywords: distrust, fact-checking, storytelling, Big Data, predictive modelling, irreverence, infotainment, riposte-party and social-TV, as well as in the figures associated with them in this thesis.
|
6 |
Content-based automatic fact checking Orthlieb, Teo 12 1900 (has links)
La diffusion des Fake News sur les réseaux sociaux est devenue un problème central ces dernières années. Notamment, Hoaxy rapporte que les efforts de fact-checking prennent généralement 10 à 20 heures pour répondre à une fake news, et qu'il y a un ordre de grandeur de plus de fake news que de fact-checking. Le fact-checking automatique pourrait aider en accélérant le travail humain et en surveillant les tendances dans les fake news. Dans un effort contre la désinformation, nous résumons le domaine du fact-checking automatique basé sur le contenu en trois approches : les modèles sans connaissance externe, les modèles avec un graphe de connaissances et les modèles avec une base de connaissances. Afin de rendre le fact-checking automatique plus accessible, nous présentons pour chaque approche une architecture efficace avec l'empreinte mémoire comme préoccupation, et nous discutons aussi de la manière dont chaque approche peut être appliquée pour tirer le meilleur parti de ses caractéristiques. Nous nous appuyons notamment sur TinyBERT, une version distillée du modèle de langue BERT, combinée avec un partage fort des poids dans deux approches pour réduire l'usage mémoire tout en préservant la précision. / The spreading of fake news on social media has become a central concern in recent years. Notably, Hoaxy reports that fact-checking efforts generally take 10 to 20 hours to respond to a fake news story, and that there is an order of magnitude more fake news than fact-checks. Automatic fact checking could help by accelerating human work and by monitoring trends in fake news. In the effort against disinformation, we summarize content-based automatic fact checking into three approaches: models with no external knowledge, models with a Knowledge Graph, and models with a Knowledge Base. To make automatic fact checking more accessible, we present for each approach an effective architecture with memory footprint in mind, and discuss how each can be applied to make the best use of its characteristics. We notably rely on TinyBERT, a distilled version of the BERT language model, combined with hard parameter sharing in two approaches to lower memory usage while preserving accuracy.
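The hard parameter sharing mentioned above can be illustrated independently of BERT. In this minimal NumPy sketch (the one-layer encoder is a stand-in for the distilled language model, and the two task heads are hypothetical), both tasks reuse exactly one set of encoder weights, so memory grows only with the small per-task heads rather than with a full model per task.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_classes_a, n_classes_b = 16, 2, 3

# One shared encoder weight matrix (stand-in for a distilled BERT);
# hard parameter sharing means both tasks reuse it exactly.
W_shared = rng.normal(size=(d_model, d_model))

# Only the small task-specific heads differ.
head_a = rng.normal(size=(d_model, n_classes_a))  # e.g. claim verification
head_b = rng.normal(size=(d_model, n_classes_b))  # e.g. stance detection

def encode(x):
    return np.tanh(x @ W_shared)   # shared representation

def predict(x, head):
    logits = encode(x) @ head
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = rng.normal(size=(d_model,))
p_a = predict(x, head_a)   # task A class probabilities
p_b = predict(x, head_b)   # task B class probabilities
```

With a real encoder the saving is substantial: the shared backbone dominates the parameter count, and each additional task costs only one extra head.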
|
7 |
Internet e cidadania: o estímulo ao debate político por meio do jornalismo fact-checking: um estudo de caso do projeto “Truco!” / Internet and citizenship: fostering political debate through fact-checking journalism: a case study of the “Truco!” project Conceição, Desirèe Luíse Lopes 21 February 2018 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - CAPES / This dissertation analyzes the production and dissemination of political information on the digital platform “Truco!”, Agência Pública's fact-checking project developed for the 2014 elections, and identifies and examines the political debate that took place through the initiative. The methodology is based on Manuel Castells's concept of networked politics and consisted of developing indicators for primary data collection, defined from a pilot analysis and from patterns of interaction related to the research's initial proposal. The results reveal qualitative investigative journalism work, together with collaboration and political-participation activities, which points to the idea that the internet contains elements that contribute to education for citizenship / A dissertação tem por objetivo analisar a produção e divulgação de informação política na plataforma digital “Truco!”, um projeto de fact-checking da Agência Pública desenvolvido para as eleições de 2014, além de identificar e averiguar o debate político ocorrido por meio da iniciativa. A metodologia adotada é baseada no conceito de política em rede do autor Manuel Castells. A técnica metodológica consistiu na elaboração de indicadores para a coleta de dados primários, definidos a partir de uma análise piloto e da identificação de padrões de interação relacionados à proposta inicial da pesquisa. Os resultados permitem identificar um trabalho de jornalismo investigativo qualitativo, além da presença de atividades de colaboração e participação política, o que aponta à concepção de que a internet contém elementos para contribuir com a formação para a cidadania
|
8 |
Computational Journalism: from Answering Questions to Questioning Answers and Raising Good Questions Wu, You January 2015 (has links)
Our media is saturated with claims of "facts" made from data. Database research has in the past focused on how to answer queries, but has not devoted much attention to discerning more subtle qualities of the resulting claims, e.g., is a claim "cherry-picking"? This paper proposes a Query Response Surface (QRS) based framework that models claims based on structured data as parameterized queries. A key insight is that we can learn a lot about a claim by perturbing its parameters and seeing how its conclusion changes. This framework lets us formulate and tackle practical fact-checking tasks, such as reverse-engineering vague claims and countering questionable claims, as computational problems. Within the QRS-based framework, we take one step further and propose a problem, along with efficient algorithms, for finding high-quality claims of a given form from data, i.e., raising good questions in the first place. This is achieved by using a limited number of high-valued claims to represent high-valued regions of the QRS. Besides the general-purpose high-quality claim finding problem, lead-finding can be tailored to specific claim-quality measures, also defined within the QRS framework. An example of uniqueness-based lead-finding is presented for "one-of-the-few" claims, yielding interpretable high-quality claims and an adjustable mechanism for ranking objects, e.g., NBA players, based on what claims can be made for them. Finally, we study the use of visualization as a powerful way of conveying the results of a large number of claims. An efficient two-stage sampling algorithm is proposed for generating the input of a 2D scatter plot with heatmap, evaluating a limited amount of data while preserving the two essential visual features, namely outliers and clusters. For all the problems, we present real-world examples and experiments that demonstrate the power of our model, the efficiency of our algorithms, and the usefulness of their results. / Dissertation
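The key insight above, perturbing a claim's parameters and watching its conclusion, can be made concrete with a tiny example. In the sketch below (the yearly values and the threshold are invented for illustration, not from the dissertation), a claim about a carefully chosen two-year window looks strong, but it survives only a small fraction of nearby window choices, which is the signature of cherry-picking.

```python
# Toy yearly values for some measured quantity (invented data).
values = {2008: 12, 2009: 25, 2010: 26, 2011: 9, 2012: 11, 2013: 10}

def claim_holds(start, end, threshold):
    """Parameterized query: does the average over [start, end]
    meet the threshold? The claim is one point on the QRS."""
    years = [y for y in values if start <= y <= end]
    return sum(values[y] for y in years) / len(years) >= threshold

# Original (possibly cherry-picked) claim: the 2009-2010 average is >= 20.
original = claim_holds(2009, 2010, 20)

# Perturb the window endpoints and see how often the conclusion survives.
perturbations = [(s, e) for s in values for e in values if s < e]
support = sum(claim_holds(s, e, 20) for s, e in perturbations) / len(perturbations)
# A low support fraction means the conclusion is fragile under
# parameter perturbation, suggesting the window was cherry-picked.
```

Here `original` is true, yet `support` comes out at 0.2: only 3 of the 15 candidate windows uphold the claim, so a fact-checker would flag it as parameter-sensitive.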
|
9 |
Fake news : Kan korrekt information motverka lögner? / Fake news: Can correct information counteract lies? Eriksson, Joakim, Afanaseva, Anastasiya January 2018 (has links)
Sveriges regering och SÄPO har identifierat fake news som ett hot mot demokratin. I denna studie undersöker vi om fake news påverkar individer, trots att de vid samma tillfälle erhåller korrekt information inom ämnet. Detta gjordes genom en enkätundersökning på studenter vid Uppsala universitet. Vi fann att erhållandet av korrekt information inte är tillräckligt för att motverka effekten av att exponeras för falsk information. De studenter som fick läsa en mening med falsk information var 15 procentenheter mer sannolika att svara att de anser att staten lägger för mycket resurser på invandringen jämfört med kontrollgruppen. Resultatet tyder på att politiker, organisationer och privatpersoner kan dra nytta av att sprida fake news, att de kan göra så anonymt, och att faktagranskning ensamt inte kan stävja problemet med fake news. / The Swedish government and the Swedish Security Service have identified fake news as a threat to democracy. In this study, we investigate whether fake news affects individuals even when they simultaneously receive correct information on the subject. This was done through a survey of students at Uppsala University. We found that receiving correct information is insufficient to counteract the effect of being exposed to false information. The students who read a sentence containing false information were 15 percentage points more likely than the control group to answer that they believe the Swedish government allocates too many resources to immigration. The results indicate that politicians, organizations, and individuals can benefit from spreading fake news, that they can do so anonymously, and that fact-checking alone cannot curb the problem of fake news.
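A 15-percentage-point difference between treatment and control groups, as reported above, would typically be assessed with a two-proportion z-test. The sketch below uses hypothetical group sizes and answer counts (the study's actual sample sizes are not given here) purely to show the calculation.

```python
from math import sqrt, erf

# Hypothetical counts (assumptions, not the study's actual data).
n_treat, n_ctrl = 100, 100
yes_treat, yes_ctrl = 55, 40     # "too many resources" answers

p1, p2 = yes_treat / n_treat, yes_ctrl / n_ctrl
diff = p1 - p2                   # 0.15 = 15 percentage points

# Pooled two-proportion z-test for H0: p1 == p2.
p_pool = (yes_treat + yes_ctrl) / (n_treat + n_ctrl)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_treat + 1 / n_ctrl))
z = diff / se

# Two-sided p-value from the standard normal CDF via erf.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
```

With these assumed counts the difference is statistically significant at the 5% level; with smaller groups the same 15-point gap might not be.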
|
10 |
Explainable Fact Checking by Combining Automated Rule Discovery with Probabilistic Answer Set Programming January 2018 (has links)
abstract: The goal of fact checking is to determine if a given claim holds. A promising approach for this task is to exploit reference information in the form of knowledge graphs (KGs), a structured and formal representation of knowledge with semantic descriptions of entities and relations. KGs are successfully used in multiple applications, but the information stored in a KG is inevitably incomplete. In order to address the incompleteness problem, this thesis proposes a new method built on top of recent results in logical rule discovery in KGs, called RuDik, and a probabilistic extension of answer set programs called LPMLN.
This thesis presents the integration of RuDik, which discovers logical rules over a given KG, with LPMLN, which performs probabilistic inference to validate a fact. While automatically discovered rules over a KG are intended for human selection and revision, they can be turned into LPMLN programs with minor modification. Leveraging the probabilistic inference in LPMLN, it is possible to (i) derive new information that is not explicitly stored in a KG, with an associated probability, and (ii) provide supporting facts and rules as interpretable explanations for such decisions.
Also, this thesis presents experiments and results to show that this approach can label claims with high precision. The evaluation of the system also sheds light on the role played by the quality of the given rules and the quality of the KG. / Dissertation/Thesis / Masters Thesis Computer Science 2018
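The probabilistic inference step can be illustrated with the log-linear weighting at the heart of LPMLN. The toy sketch below enumerates just two worlds, one where a candidate fact holds and one where it does not, with made-up rule weights; the full LPMLN semantics, defined over stable models of an answer set program, is considerably richer than this.

```python
from math import exp

# Hypothetical weights of soft rules satisfied in each world
# (invented for illustration; in LPMLN these come from weighted rules).
rules_if_true = [1.5, 0.8]    # rules supporting the fact
rules_if_false = [0.4]        # rules contradicting the fact

w_true = sum(rules_if_true)   # total weight when the fact holds
w_false = sum(rules_if_false) # total weight when it does not

# Log-linear distribution over the two worlds: each world's probability
# is proportional to exp(total weight of satisfied rules).
z = exp(w_true) + exp(w_false)
p_fact = exp(w_true) / z
```

The satisfied rules and their weights are exactly the "supporting facts and rules" that make the label interpretable: the probability comes with a trace of which rules pushed it up or down.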
|