151 |
The Struggle Against Misinformation: Evaluating the Performance of Basic vs. Complex Machine Learning Models on Manipulated Data
Valladares Parker, Diego Gabriel, January 2024
This study investigates the application of machine learning (ML) techniques to detecting fake news, addressing the rapid spread of misinformation across social media platforms. Given the time-consuming nature of manual fact-checking, this research compares the robustness of basic machine learning models, such as Multinomial Naive Bayes classifiers, with complex models like DistilBERT in identifying fake news. Using the LIAR, ISOT, and GM datasets, the study evaluates these models on standard classification metrics in both single-domain and cross-domain scenarios, especially when processing linguistically manipulated data. Results indicate that while complex models like DistilBERT perform better in single-domain classification, the baseline models show competitive performance cross-domain and on the manipulated dataset. However, both models struggle with the manipulated dataset, highlighting a critical area for improvement in fake news detection algorithms and methods. In conclusion, the findings suggest that while both basic and complex models have strengths in particular settings, significant advances are needed to improve robustness against linguistic manipulation and to ensure reliable detection of fake news across varied contexts before automated classification can be considered for public use.
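As a hedged illustration of the kind of baseline compared here (not the thesis's actual pipeline: the texts, labels, and split below are invented stand-ins for corpora such as LIAR or ISOT), a Multinomial Naive Bayes classifier can be trained and scored with standard classification metrics in a few lines of scikit-learn:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented miniature corpus standing in for a real fake-news dataset.
train_texts = [
    "scientists confirm vaccine safety in peer-reviewed trial",
    "government report details modest economic growth",
    "shocking miracle cure doctors don't want you to know",
    "celebrity secretly controls world banks, insiders claim",
]
train_labels = ["real", "real", "fake", "fake"]

test_texts = [
    "peer-reviewed study reports trial results",
    "miracle cure suppressed by doctors, claim insiders",
]
test_labels = ["real", "fake"]

# Baseline: TF-IDF bag-of-words features feeding Multinomial Naive Bayes.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_texts, train_labels)
preds = model.predict(test_texts)

# Standard classification metrics (precision, recall, F1) per class.
print(classification_report(test_labels, preds))
```

Cross-domain evaluation amounts to fitting on one dataset and calling `predict` on another; the transformer side of the comparison would swap the pipeline for a fine-tuned DistilBERT model.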
|
152 |
An Exploratory Study on the Trust of Information in Social Media
Chih-Yuan Chou, 17 April 2020
This study examined the level of trust in information on social media. Specifically, I investigated the factors of performance expectancy, together with information-seeking motives, that appear to influence the level of trust in information on various social network sites. The study drew on the following theoretical models to build a conceptual research framework for an exploratory study: the elaboration likelihood model (ELM), the uses and gratifications theory (UGT), the unified theory of acceptance and use of technology (UTAUT), the consumption value theory (CVT), and the stimulus-organism-response (SOR) model. The research investigated the extent to which information quality and source credibility influence the level of trust in information by visitors to social network sites. An inductive content analysis of 189 respondents' responses addressed the proposed research questions and informed the development of a comprehensive framework. The findings contribute to the current research streams on information quality, fake news, and IT adoption as they relate to social media.
|
153 |
All Negative on the Western Front: Analyzing the Sentiment of the Russian News Coverage of Sweden with Generic and Domain-Specific Multinomial Naive Bayes and Support Vector Machines Classifiers / På västfronten intet gott: attitydanalys av den ryska nyhetsrapporteringen om Sverige med generiska och domänspecifika Multinomial Naive Bayes- och Support Vector Machines-klassificerare
Michel, David, January 2021
This thesis explores to what extent Multinomial Naive Bayes (MNB) and Support Vector Machines (SVM) classifiers can be used to determine the polarity of news, specifically the news coverage of Sweden by the Russian state-funded news outlets RT and Sputnik. Three experiments are conducted. In the first, an MNB and an SVM classifier are trained on the Large Movie Review Dataset (Maas et al., 2011) with a varying number of samples to determine how training data size affects classifier performance. In the second, the classifiers are trained on 300 positive, negative, and neutral news articles (Agarwal et al., 2019) and tested on 95 RT and Sputnik news articles about Sweden (Bengtsson, 2019) to determine whether the domain specificity of the training data outweighs its limited size. In the third, the movie-trained classifiers are put up against the domain-specific classifiers to determine whether well-trained classifiers from another domain perform better than relatively untrained, domain-specific classifiers. Four types of feature sets (unigrams, unigrams without stop-word removal, bigrams, trigrams) were used in the experiments. Some of the model parameters (TF-IDF vs. feature counts and SVM's C parameter) were optimized with 10-fold cross-validation. Beyond the superior performance of SVM, the results highlight the need for comprehensive, domain-specific training data in machine learning tasks, as well as the benefits of feature engineering and, to a limited extent, of stop-word removal. Interestingly, the classifiers performed best on the negative news articles, which made up most of the test set (and possibly of Russian news coverage of Sweden in general).
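The feature-set and parameter search described above can be sketched with a cross-validated grid search. This is an illustrative sketch only: the toy corpus is invented, `cv=2` stands in for the thesis's 10-fold cross-validation (which needs more data per class), and only the TF-IDF/SVM branch of the comparison is shown.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Invented toy corpus; the thesis trains on movie reviews and news articles.
texts = [
    "the movie was wonderful and deeply moving",
    "a brilliant film with a touching story",
    "i loved every minute of this great picture",
    "an excellent film with a memorable performance",
    "the plot was dull and the acting was terrible",
    "a boring film that wasted my entire evening",
    "i hated the clumsy dialogue and flat characters",
    "an awful and forgettable mess of a movie",
]
labels = ["pos", "pos", "pos", "pos", "neg", "neg", "neg", "neg"]

pipe = Pipeline([("vec", TfidfVectorizer()), ("clf", LinearSVC())])

# Mirror the thesis's feature sets (unigrams, bigrams, trigrams) and its
# search over SVM's C; it also compared raw counts against TF-IDF weighting.
param_grid = {
    "vec__ngram_range": [(1, 1), (2, 2), (3, 3)],
    "clf__C": [0.1, 1.0, 10.0],
}
search = GridSearchCV(pipe, param_grid, cv=2)
search.fit(texts, labels)
print(search.best_params_)
```

Swapping `TfidfVectorizer` for `CountVectorizer` in the pipeline covers the TF-IDF vs. raw-count comparison the thesis performed.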
|
154 |
When the state cannot deal with online content: Reviewing user-driven solutions that counter political disinformation on Facebook
Beridzishvili, Jumber, January 2020
The damage online disinformation inflicts on democracy worldwide has been critical. Yet states fail to address online content harms. Exempt from legal liability for hosted content, Facebook, used by a third of the world's population, operates 'duty-free' along with other social media companies. This has given rise to the idea in the literature that social resistance could be one of the most effective ways of combating disinformation. However, how exactly to resist remains an unsettled question. Are there any socially driven processes against disinformation already happening? This paper aimed to identify such processes in order to give a boost to theory-building around the topic. Two central evidence cases were developed: the #IAmHere digital movement fighting disinformation, and the innovative tool 'Who is Who' for distinguishing fake accounts. Based on the findings, I argue that the efforts of even a very small part of society can have a significant impact on defeating online disinformation. This is because digital activism has particular strengths in shaping online political discourse around disinformation. Tools such as 'Who is Who', on the other hand, build social resilience against the issue and also help digital activists mass-report disinformation content. User-driven solutions hold significant potential for further research. Keywords: online disinformation; algorithms; digital activism; user-driven solutions.
|
155 |
WHITE NOISE: ONLINE DISINFORMATION AS POLITICAL DOMINANCE
Samantha L Seybold, 10 July 2023
We cannot fully assess the normative and epistemic implications of online discourse, especially political discourse, without recognizing how it is being systematically leveraged to undermine the credibility and autonomy of those with marginalized identities. In the following chapters, I supplement social/feminist epistemological methodologies with norm theory to argue that online discourse entrenches the mechanisms of political dominance and cultural hegemony by ignoring and devaluing the experiences and struggles of marginalized individuals. Each chapter investigates a different, concrete manifestation of this dynamic. In Chapter 1, I argue that digital capitalist enterprises like Facebook facilitate the targeting of minoritized users with disproportionate instances of abuse, misinformation, and silencing, exemplified by the practice of using racial microtargeting to engage in Black, Indigenous, and People of Color (BIPOC) voter suppression. I contend in Chapter 2 that, given the exploitative nature of racially microtargeted political advertising campaigns, these social media companies are ultimately morally responsible for initiating and sustaining a burgeoning digital voter suppression industry. In Chapter 3, I argue that the presence of online disinformation, in tandem with key party figures' explicit endorsement of vicious group epistemic norms like close-mindedness and dogmatism, has directly contributed to the formation and epistemic isolation of conservative political factions in the US. Finally, I argue in Chapter 4 that social media and hostile-media-bias rhetoric directly reinforce sexist and racist credibility norms, effectively creating a toxic environment of misogynistic online discourse that hurts the perceived credibility of women journalists.
|
156 |
DS-Fake: a data stream mining approach for fake news detection
Mputu Boleilanga, Henri-Cedric, 08 1900
The advent of the internet, followed by online social networks, has allowed easy access to and rapid propagation of information by anyone with an internet connection. One of the harmful consequences of this is the spread of false information, widely known as "fake news". Fake news represents a major challenge because of its consequences. Some people still affirm that without the massive spread of fake news about Hillary Clinton during the 2016 presidential campaign, Donald Trump would not have won the 2016 United States presidential election. The subject of this thesis is the automatic detection of fake news.
Nowadays, there is a lot of research on this subject. The vast majority of the approaches presented in these works are based either on the content of the input text, on the social context of the text, or on a mixture of the two. Nevertheless, there are only a few practical tools or systems that detect false information in real life while accounting for the evolution of information over time. Moreover, no system yet offers guidance to help social network users adopt behaviour that would allow them to detect fake news themselves.
In order to mitigate this problem, we propose a system called DS-Fake. To the best of our knowledge, this system is the first to include data stream mining. A data stream is a potentially unbounded sequence of elements that represents data arriving over time. DS-Fake explores both the input and the contents of a data stream. The input is a Twitter post given to the system, which determines whether the tweet can be trusted. The data stream is built using web-scraping techniques; the content it delivers is related to the input through the topics or named entities mentioned in the input text. The system also helps users develop good reflexes when faced with information spreading on social networks.
DS-Fake assigns a credibility score to users of social networks. This score describes how likely a user is to publish false information. Most systems use features like the number of followers, location, job title, etc.; only a few use the history of a user's previous publications to assign a score, and to compute it most of those systems use a simple average. DS-Fake returns a confidence percentage that indicates how likely the input is to be reliable. Unlike the small number of systems that use publication history but consider only the previous tweets of a single user, DS-Fake computes the credibility score from the previous tweets of all users. We renamed the credibility score the legitimacy score. It is based on the Bayesian averaging technique, which attenuates the impact of previous posts according to the number of posts in the history. Given two users whose tweets are all true, the one with the larger tweet history is considered more credible and therefore receives a higher legitimacy score. To our knowledge, this work is the first to use a Bayesian average over the post history of all sources to assign a score to each source.
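The Bayesian-average idea described here can be sketched in a few lines. The prior mean and prior weight below are illustrative choices, not values from the thesis; the point is that a short history stays close to the prior while a long history dominates it, so a user with many true posts outscores one with few.

```python
def legitimacy_score(post_outcomes, prior_mean=0.5, prior_weight=10):
    """Bayesian-average legitimacy score: the user's share of trustworthy
    posts (1 = true, 0 = false), smoothed toward a prior mean. The prior
    acts like `prior_weight` virtual posts at rate `prior_mean`."""
    n = len(post_outcomes)
    return (prior_weight * prior_mean + sum(post_outcomes)) / (prior_weight + n)

# Two users whose posts are all judged true:
short_history = [1] * 5    # 5 true posts
long_history = [1] * 50    # 50 true posts

s_short = legitimacy_score(short_history)  # pulled strongly toward 0.5
s_long = legitimacy_score(long_history)    # much closer to 1.0

print(f"5 true posts:  {s_short:.3f}")   # 0.667
print(f"50 true posts: {s_long:.3f}")    # 0.917
```

A plain average would give both users a perfect 1.0; the Bayesian average separates them by history length, as the abstract describes.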
DS-Fake's modules can encapsulate the output of two tasks, namely text similarity and natural language inference (NLI). A model that combines these two NLP tasks is also new for the problem of fake news detection.
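How such a combination might look can be sketched very roughly. Everything below is hypothetical, not DS-Fake's actual architecture: the similarity measure is a simple bag-of-words cosine, the NLI entailment probability is a stubbed value (a real system would obtain it from a trained NLI model), and the fusion weight is arbitrary.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def combined_signal(similarity: float, entailment_prob: float, w: float = 0.5) -> float:
    """Fuse a similarity score with an NLI entailment probability into one
    support signal; the weight w is an arbitrary illustrative choice."""
    return w * similarity + (1 - w) * entailment_prob

claim = "vaccine trial results published"
stream_doc = "trial results for the vaccine published"

sim = cosine_similarity(claim, stream_doc)
signal = combined_signal(sim, entailment_prob=0.9)  # entailment prob. stubbed
```

The intuition: a stream document that is both topically similar to the claim and entails it lends the claim support; dissimilar or contradicting documents would not.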
There are very few complete datasets with a variety of attributes, which is one of the challenges of fake news research. Shu et al. introduced the FakeNewsNet dataset in 2018 to tackle this issue. Our work uses and enriches this dataset: the legitimacy score and the tweets retrieved for named entities mentioned in the input texts add features to FakeNewsNet. DS-Fake outperforms, on various metrics, all state-of-the-art approaches that have used FakeNewsNet.
|
157 |
What have we learned from the economic impact of the Covid-19 outbreak? Critical analysis of economic factors and recommendations for the future
Marco Franco, Julio Emilio, 18 October 2021
Thesis by compendium. The SARS-CoV-2 coronavirus outbreak posed a challenge to the economy, social life, and health services. Just when information was most needed for economic planning, the surveillance and reporting services were unable, despite extraordinary efforts, to provide consistent data, as government agencies themselves acknowledged.
This thesis includes three articles published during the COVID-19 outbreaks and additional research outside the publication set. The overall aim of the research is to provide information through alternative estimates. Several methodologies have been used, including mathematical models for epidemiological prediction, Best Adjustment of Related Values (BARV), analyses of different surveys, and bibliometric methodology, taking advantage of, or offering alternatives to, more complex options such as Bayesian methods, Monte Carlo simulations, or Markov chains, although some of the data obtained are partially supported by these methodologies. Each article addresses a key issue related to the COVID-19 pandemic.
The first publication focuses on basic epidemiological data. It refers to the first outbreak of COVID-19, estimating its duration, incidence, prevalence, Infection Fatality Rate (IFR) and Case Fatality Rate (CFR). As a highlight of this work, the seroprevalence was anticipated to be too low for herd immunity to play a role. Although the value obtained was approximately 2% lower than that subsequently demonstrated by a population-based study (Instituto Carlos III), the conclusion on herd immunity remained unchanged, and the results confirmed the appropriateness of the approach.
The second publication focuses on legal issues and fake news, analysing the population's reluctance to be vaccinated, the impact of fake news on these behaviours, the legal possibilities of making vaccination mandatory, and possible actions against health professionals who publish fake news. The main conclusion was that, although a legal avenue could be found for mandatory vaccination and for governmental prosecution of fake news, public opinion seems to prefer that the authorities not take the initiative; the thesis therefore recommends promoting and encouraging public awareness.
The third publication presented a simplified mathematical model for estimating the cost-effectiveness of the COVID-19 vaccine. Data from two dates were obtained for the estimation of the direct costs to the health system due to COVID-19, computing the cost per citizen and per Gross Domestic Product (GDP), as well as the cost-effectiveness of the vaccine. The estimated incremental cost-effectiveness ratio (ICER) was calculated for two doses per person at a cost of 30 euros per dose (including administration). Assuming 70% effectiveness and with 70% of the population vaccinated, it was found to be 5,132 euros (4,926 - 5,276) per quality-adjusted life year (QALY) gained (as of 17 February 2021). The figure decreases with each day of the active pandemic.
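The cost-effectiveness arithmetic behind such an estimate can be sketched as follows. The numbers below are purely illustrative and do not reproduce the thesis's epidemiological model (which arrived at €5,132 per QALY); a real estimate also needs averted treatment costs and a modelled QALY gain.

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    by the intervention relative to the comparator."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Illustrative inputs only: vaccinating 70% of a population of 1,000,000 at
# 60 EUR per person (two 30 EUR doses, administration included), assumed to
# gain 10,000 QALYs relative to no vaccination, ignoring averted care costs.
population = 1_000_000
vaccination_cost = 0.70 * population * 2 * 30  # EUR
qalys_gained = 10_000                          # hypothetical model output

ratio = icer(vaccination_cost, 0.0, qalys_gained, 0.0)
print(f"ICER: {ratio:,.0f} EUR per QALY")  # 4,200 EUR per QALY in this toy case
```

Because the averted costs and QALY gains accumulate while the vaccination cost is fixed, the ratio falls with each day of active pandemic, as the abstract notes.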
Additional research not included in the set of articles focuses on human resources and education. It analyses the concerns of frontline staff, i.e., nurses, and how the pandemic has affected their scientific publications, as an index of the changes in the work climate experienced by this group. Through a comparative bibliometric study of publications in 2019 and 2020, the change in topics and fields was analysed, as a reflection of the impact of COVID-19 on nursing staff. It was found that in the fields of specialised care nursing and above all in primary care, the main problems detected are those related to protective measures and psychological factors, while the publications of nursing staff in nursing homes showed an increase in topics related to management and organisation.
Finally, some aspects of the implementation of telecommuting and distance learning have been reviewed. Some of the advances in this field brought about by the pandemic, such as the incorporation of teleworking, could prove very useful and remain in place in the future. / Marco Franco, JE. (2021). What have we learned from the economic impact of the Covid-19 outbreak? Critical analysis of economic factors and recommendations for the future [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/174883 / Compendio
|
158 |
Educación en información comunicación: análisis y propuesta para capacitar el consumo responsable de la información
Climent Ferrer, Juan José, 06 September 2022
When I decided to dedicate myself academically to the world of communication, I began to become aware of the power that the field of information and communication holds at different levels. At that time, the technologies behind the great evolution and revolution we live with today were only beginning to emerge, and we were not yet fully aware of the consequences that poorly rigorous information can cause, all the more so given its wide dissemination and the high speed at which it can now be transmitted.
This thesis analyzes the evolution and revolution that digital information and communication technologies have brought to the media and how they have changed the paradigm. It analyzes how citizens change their role from passive agents who receive information to active agents with responsibilities, since they can also be creators and transmitters of information.
It analyzes the techniques and strategies that media professionals use when processing facts and data to turn them into information, and how this processing is often employed with hidden interests to favor political, economic, or social-opinion objectives.
This thesis concludes with a proposal for a teaching curriculum for education in information and communication, intended to help future citizens develop critical thinking toward any information they receive from any medium of communication or information transmission. / Climent Ferrer, JJ. (2022). Educación en información comunicación: análisis y propuesta para capacitar el consumo responsable de la información [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/185505
|