51 |
Primary stage Lung Cancer Prediction with Natural Language Processing-based Machine Learning / Tidig lungcancerprediktering genom maskininlärning för textbehandling
Sadek, Ahmad January 2022 (has links)
Early detection reduces mortality in lung cancer, but it remains a challenge for oncologists and for healthcare systems. In addition, screening modalities such as CT scans come with undesired effects: many suspected patients are wrongly diagnosed with lung cancer. This thesis contributes to the challenge of early lung cancer detection by utilizing unique data consisting of self-reported symptoms. The proposed method is a predictive machine learning algorithm based on natural language processing, which handles the data as an unstructured data set. For comparison, a previous study, in which a prediction model was built with conventional multivariate machine learning on the same data, is replicated and presented. After evaluation, validation, and interpretation, a set of variables were highlighted as early predictors of lung cancer. The performance of the proposed approach matched that of the conventional approach. This promising result opens the way for further development, where such an approach can be used in clinical decision support systems. Future work could then involve other modalities in a multimodal machine learning approach. / Early lung cancer diagnosis can increase the chances of survival for lung cancer patients, but detecting lung cancer at an early stage is one of the greater challenges for oncologists and healthcare. Today, patients with risk factors based on smoking and age are examined with the help of, among other things, medical imaging systems, most often CT images, which leads to incorrect and costly diagnoses. This work proposes a machine learning algorithm based on natural language processing which, through analysis and processing of unstructured data consisting of the patients' own medical histories, can predict lung cancer. The work carried out a comparison with a conventional machine learning algorithm, based on a replication of another study in which the same data were treated as structured. The proposed method showed similar results and performance, and identified risk factors and symptoms of lung cancer. This work opens the way for development towards clinical use in the form of decision support systems that can also handle electronic health records. Future work can extend the method to handle other kinds of data, such as medical images and biomarkers, and thereby improve performance.
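The NLP-based approach above treats self-reported symptoms as unstructured text. A minimal sketch of the first step in such a pipeline, turning free-text symptom reports into bag-of-words vectors for a downstream classifier (the tokenizer, vocabulary construction, and example reports are illustrative assumptions, not the thesis's actual code):

```python
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-letter characters.
    return [t for t in re.split(r"[^a-z]+", text.lower()) if t]

def build_vocabulary(reports):
    # Map each distinct token across all reports to a column index.
    vocab = sorted({tok for r in reports for tok in tokenize(r)})
    return {tok: i for i, tok in enumerate(vocab)}

def vectorize(report, vocab):
    # Term-frequency vector over the fixed vocabulary.
    counts = Counter(tokenize(report))
    return [counts.get(tok, 0) for tok in sorted(vocab, key=vocab.get)]

reports = [
    "persistent cough and chest pain",
    "shortness of breath, persistent cough",
]
vocab = build_vocabulary(reports)
vectors = [vectorize(r, vocab) for r in reports]
```

The resulting fixed-length vectors can then be fed to any standard classifier; the thesis's actual feature representation may differ.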
|
52 |
EXPLAINABLE AI METHODS FOR ENHANCING AI-BASED NETWORK INTRUSION DETECTION SYSTEMS
Osvaldo Guilherme Arreche (18569509) 03 September 2024 (has links)
<p dir="ltr">In network security, the exponential growth of intrusions stimulates research toward developing advanced artificial intelligence (AI) techniques for intrusion detection systems (IDS). However, the reliance on AI for IDS presents challenges, including the performance variability of different AI models and the lack of explainability of their decisions, hindering the comprehension of outputs by human security analysts. Hence, this thesis proposes end-to-end explainable AI (XAI) frameworks tailored to enhance the understandability and performance of AI models in this context.</p><p><br></p><p dir="ltr">The first chapter benchmarks seven black-box AI models across one real-world and two benchmark network intrusion datasets, laying the foundation for subsequent analyses. Subsequent chapters delve into feature selection methods, recognizing their crucial role in enhancing IDS performance by extracting the most significant features for identifying anomalies in network security. Leveraging XAI techniques, novel feature selection methods are proposed, showcasing superior performance compared to traditional approaches.</p><p><br></p><p dir="ltr">Also, this thesis introduces an in-depth evaluation framework for black-box XAI-IDS, encompassing global and local scopes. Six evaluation metrics are analyzed, including descriptive accuracy, sparsity, stability, efficiency, robustness, and completeness, providing insights into the limitations and strengths of current XAI methods.</p><p><br></p><p dir="ltr">Finally, the thesis addresses the potential of ensemble learning techniques in improving AI-based network intrusion detection by proposing a two-level ensemble learning framework comprising base learners and ensemble methods trained on input datasets to generate evaluation metrics and new datasets for subsequent analysis. 
Feature selection is integrated into both levels, leveraging XAI-based and Information Gain-based techniques.</p><p><br></p><p dir="ltr">Holistically, this thesis offers a comprehensive approach to enhancing network intrusion detection through the synergy of AI, XAI, and ensemble learning techniques by providing open-source codes and insights into model performances. Therefore, it contributes to the security advancement of interpretable AI models for network security, empowering security analysts to make informed decisions in safeguarding networked systems.<br></p>
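The XAI-based feature selection idea can be sketched as ranking features by their mean absolute attribution and keeping the top k. The attribution matrix and feature names below are made-up toy values; in the thesis the scores would come from an XAI method applied to a trained IDS model:

```python
def select_top_k_features(attributions, feature_names, k):
    # attributions: one row per sample, one column per feature,
    # holding signed importance scores from any XAI method.
    n_features = len(feature_names)
    mean_abs = [
        sum(abs(row[j]) for row in attributions) / len(attributions)
        for j in range(n_features)
    ]
    # Rank features by mean absolute attribution, descending.
    ranked = sorted(range(n_features), key=lambda j: mean_abs[j], reverse=True)
    return [feature_names[j] for j in ranked[:k]]

attributions = [
    [0.9, -0.1, 0.05, 0.4],
    [0.8, 0.2, -0.02, 0.5],
]
names = ["src_bytes", "duration", "flag", "dst_port"]
top2 = select_top_k_features(attributions, names, k=2)
```

The selected subset would then be used to retrain the detector, which is the point at which the thesis compares XAI-based selection against Information Gain-based selection.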
|
53 |
Evaluation of Explainable AI Techniques for Interpreting Machine Learning Models
Muhammad, Al Jaber Al Shwali January 2024 (has links)
This study evaluates approaches within Explainable Artificial Intelligence (XAI), in particular Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), by implementing them in machine learning models used in cybersecurity firewall systems. The priority is to improve the understanding of a multi-class classification task within firewall management. As today's AI systems evolve, spread, and take a larger role in critical decision processes, transparency and understandability become increasingly crucial. Through detailed analysis and methodical experimental evaluation, this study demonstrates how SHAP and LIME illuminate the effect of different features on the model's predictions, which in turn increases trust in decisions driven by AI. The results show how features such as "Elapsed Time (sec)", "Network Address Translation" (NAT) source, and "Destination ports" considerably influence the model's results, as demonstrated through analysis of SHAP values. In addition, LIME offers detailed insights into the local decision process, improving our understanding of the model's behaviour at the individual level. The study emphasises the importance of XAI in closing the gap between AI's operational mechanisms and the user's understanding, which is crucial for debugging and for ensuring fairness, accountability, and ethical integrity in AI implementations. This makes the study's implications significant, as it provides a foundation for future research on transparency in AI systems across different sectors. / This study evaluates the explainable artificial intelligence (XAI) methods, specifically Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), by applying them to machine learning models used in cybersecurity firewall systems and focusing on multi-class classification tasks within firewall management to improve their interpretability. 
As today's AI systems become more advanced, widespread, and involved in critical decision-making, transparency and interpretability have become essential. Through accurate analysis and systematic experimental evaluation, this study illustrates how SHAP and LIME clarify the impact of various features on model predictions, thereby building trust in AI-driven decisions. The results indicate that features such as Elapsed Time (sec), Network Address Translation (NAT) source, and Destination ports markedly affect model outcomes, as demonstrated by SHAP value analysis. Additionally, LIME offers detailed insights into the local decision-making process, enhancing our understanding of model behavior at the individual level. The research underlines the importance of XAI in reducing the gap between AI operational mechanisms and user understanding, which is critical for debugging and for ensuring fairness, responsibility, and ethical integrity in AI implementations. This makes the implications of this study substantial, providing a basis for future research into the transparency of AI systems across different sectors.
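The Shapley values behind SHAP can be illustrated with an exact, brute-force computation over all feature orderings, which is feasible only for a handful of features (SHAP itself approximates this efficiently). The toy model and baseline below are illustrative assumptions; for an additive model, each feature's Shapley value should equal its own contribution:

```python
from itertools import permutations

def shapley_values(model, x, baseline):
    # Average marginal contribution of each feature over all
    # orderings, with absent features held at baseline values.
    n = len(x)
    contrib = [0.0] * n
    orderings = list(permutations(range(n)))
    for order in orderings:
        current = list(baseline)
        prev = model(current)
        for j in order:
            current[j] = x[j]
            new = model(current)
            contrib[j] += new - prev
            prev = new
    return [c / len(orderings) for c in contrib]

# Toy additive model: 2*x0 + 3*x1.
def model(v):
    return 2.0 * v[0] + 3.0 * v[1]

phi = shapley_values(model, x=[1.0, 1.0], baseline=[0.0, 0.0])
```

By construction the values satisfy the efficiency property: they sum to the difference between the model's output at x and at the baseline.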
|
54 |
Visual Transformers for 3D Medical Images Classification: Use-Case Neurodegenerative Disorders
Khorramyar, Pooriya January 2022 (has links)
A Neurodegenerative Disease (ND) is progressive damage to brain neurons, which the human body cannot repair or replace. Well-known examples of such conditions are Dementia and Alzheimer's Disease (AD), which affect millions of lives each year. Despite numerous research efforts, there are no effective treatments for these diseases today. However, early diagnosis is crucial in disease management. Diagnosing NDs is challenging for neurologists and requires years of training and experience. There has therefore been a trend to harness the power of deep learning, including state-of-the-art Convolutional Neural Networks (CNNs), to assist doctors in diagnosing such conditions from brain scans. CNN models have produced promising results comparable to those of experienced neurologists. However, the advent of transformers in the Natural Language Processing (NLP) domain and their outstanding performance persuaded Computer Vision (CV) researchers to adapt them to various CV tasks in multiple areas, including the medical field. This research aims to develop Vision Transformer (ViT) models using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to classify NDs. More specifically, the models classify three categories (Cognitively Normal (CN), Mild Cognitive Impairment (MCI), Alzheimer's Disease (AD)) from brain Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) scans. We also take advantage of the Automated Anatomical Labeling (AAL) brain atlas and attention maps to develop explainable models. We propose three ViTs, the best of which obtains an accuracy of 82% on the test dataset with the help of transfer learning. We then encode the AAL brain atlas information into the best-performing ViT, so the model outputs the predicted label, the region most critical to its prediction, and an attention map overlaid on the input scan with the crucial areas highlighted. 
Furthermore, we develop two CNN models with 2D and 3D convolutional kernels as baselines to classify NDs, which achieve accuracies of 77% and 73%, respectively, on the test dataset. We also conduct a study of the importance of brain regions and their combinations in classifying NDs using ViTs and the AAL brain atlas. / <p>This thesis was awarded a prize of 50,000 SEK by Getinge Sterilization for projects within Health Innovation.</p>
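One building block of the attention-map overlay described above is upsampling a patch-level attention grid to scan resolution and locating the most attended region. A minimal nearest-neighbour sketch (the 2x2 grid and scale factor are made-up toy values; the thesis's actual overlay procedure for 3D PET volumes is more involved):

```python
def upsample_nearest(grid, scale):
    # Expand a patch-attention grid by an integer factor so it can
    # be overlaid on the input image, nearest-neighbour style.
    return [
        [grid[i // scale][j // scale] for j in range(len(grid[0]) * scale)]
        for i in range(len(grid) * scale)
    ]

def most_attended_patch(grid):
    # (row, col) of the patch with the highest attention weight.
    flat = [(v, i, j) for i, row in enumerate(grid) for j, v in enumerate(row)]
    _, i, j = max(flat)
    return i, j

attn = [[0.1, 0.7],
        [0.1, 0.1]]
heatmap = upsample_nearest(attn, scale=2)
peak = most_attended_patch(attn)
```

Mapping the peak patch back to an atlas region is then what lets the model report the region most critical to its prediction.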
|
55 |
Development of a Machine Learning Survival Analysis Pipeline with Explainable AI for Analyzing the Complexity of ED Crowding : Using Real World Data collected from a Swedish Emergency Department / Utveckling av en maskininlärningsbaserad överlevnadsanalyspipeline med förklarbar AI för att analysera komplexiteten av överbefolkning på akuten : Genom verklig data från en svensk akutmottagning
Haraldsson, Tobias January 2023 (has links)
One of the biggest challenges in healthcare is Emergency Department (ED) crowding, which puts high constraints on the whole healthcare system as well as the resources within it, and can be the cause of many adverse events. It is a well-known problem on which much research has been done and for which many solutions have been proposed, yet it remains unsolved. By analysing Real-World Data (RWD), complex problems like ED crowding can be better understood. Currently, very few applications of survival analysis have been adopted for production data in order to analyze the complexity of logistical problems. The aim of this thesis was to apply survival analysis, through advanced Machine Learning (ML) models, to RWD collected at a Swedish hospital, to see how the Length Of Stay (LOS) until admission or discharge was affected by different factors. This was done by formulating ED crowding as a survival analysis problem, with the LOS as the time and the decision regarding admission or discharge as the event, in order to unfold the clinical complexity of the system and help impact clinical practice and decision making.
By formulating the research as time-to-event in combination with ML, the complexity and non-linearity of the logistics in the ED is viewed from a time perspective, with the LOS acting as a Key Performance Indicator (KPI). This enables the researcher to look at the problem from a system perspective and shows how different features affect the time a patient spends being processed in the ED, highlighting eventual problems, and can therefore be useful for improving clinical decision making. Five models, Cox Proportional Hazards (CPH), Random Survival Forests (RSF), Gradient Boosting (GB), Extreme Gradient Boosting (XGB), and DeepSurv, were used and evaluated using the Concordance index (C-index), where GB was the best-performing model with a C-index of 0.7825, showing that the ML models can perform better than the commonly used CPH model. The models were then explained using SHapley Additive exPlanations (SHAP) values, where the importance of the features was shown together with how the different features impacted the LOS. The SHAP values also showed how GB handled the non-linearity of the features better than the CPH model. The five most important features impacting the LOS were whether the patient received a scan at the ED, whether they visited an emergency room, age, triage level, and the label indicating what type of medical team seems most fit for the patient. This is clinical information that could be implemented to reduce crowding through correct decision making. These results show that ML-based survival analysis models can be used for further investigation of the logistic challenges that healthcare faces and could be further used for data analysis with production data in similar cases. The ML survival analysis pipeline can also act as a first step to pinpoint important information in the data that could be interesting for deeper data analysis, making the process more efficient. / One of the biggest challenges in healthcare is ED crowding, which puts great strain on the healthcare system and its resources and can be the cause of many adverse events. It is a well-known problem on which much research has been done and many solutions have been proposed, but the problem remains unsolved. By analysing real-world data, complex problems such as ED crowding can be better explained. Currently, few applications of survival analysis have been applied to production data to analyse the complexity of logistical problems. The aim of this thesis was to apply survival analysis, using advanced machine learning methods, to real-world data collected at a Swedish hospital to see how the patient's length of stay until admission was affected by different factors. 
This was done by applying survival analysis to ED crowding, using the length of stay as the time and the decision on admission or discharge as the event, in order to analyse the clinical complexity of the system and help influence clinical practice and decision making. By formulating the research question as a survival analysis in combination with machine learning, the complexity and non-linearity of ED logistics can be studied from a time perspective, with the length of stay serving as a key performance indicator. This also makes it possible for the researcher to study the problem from a system perspective, and shows how different characteristics and situations affect the time a patient is processed in the ED. This draws attention to potential problems and can therefore be useful for improving clinical decision making. Five models, CPH, RSF, GB, XGB, and DeepSurv, were used and evaluated using the C-index, where GB was the best-performing model with a C-index of 0.7825, showing that the machine learning methods can perform better than the classical and commonly used CPH model. The models were then explained using SHAP values, where the importance of the different variables was shown together with their impact. SHAP also showed that the GB model handled the non-linearity better than the CPH model. The five most important variables affecting the length of stay until admission were whether the patient was scanned at the ED, whether they were received in an emergency room, age, triage level, and which medical team was considered best suited for the patient. This is clinical information that could be implemented through decision making to reduce ED crowding. These results show that machine learning methods for survival analysis can be used for further investigation of the logistical challenges that healthcare faces and can also be used for data analysis with production data in similar cases. 
The process with survival analysis and ML can also be used for further analysis and can act as a first step to highlight important information in the data that would be interesting for deeper data analysis. This could make the process more efficient.
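The C-index used to rank the five survival models can be computed directly: among comparable patient pairs, it is the fraction whose predicted risks are ordered consistently with their observed times. A minimal sketch with made-up LOS data (real implementations, e.g. in scikit-survival, also handle censoring more carefully):

```python
def concordance_index(times, events, risks):
    # A pair (i, j) is comparable when the patient with the shorter
    # time experienced the event; it is concordant when that patient
    # also has the higher predicted risk. Ties in risk count as 0.5.
    concordant, tied, comparable = 0, 0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    tied += 1
    return (concordant + 0.5 * tied) / comparable

times = [2.0, 4.0, 6.0, 8.0]   # length of stay (hours, toy values)
events = [1, 1, 1, 1]          # admission/discharge decision observed
risks = [0.9, 0.7, 0.8, 0.1]   # model risk scores, higher = shorter stay
cindex = concordance_index(times, events, risks)
```

A C-index of 0.5 corresponds to random ordering and 1.0 to perfect ordering, which is the scale on which the reported 0.7825 for GB should be read.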
|
56 |
Transparent ML Systems for the Process Industry : How can a recommendation system perceived as transparent be designed for experts in the process industry?
Fikret, Eliz January 2023 (has links)
Process monitoring is a field that can greatly benefit from the adoption of machine learning solutions like recommendation systems. However, for domain experts to embrace these technologies within their work processes, clear explanations are crucial. Therefore, it is important to adopt user-centred methods for designing more transparent recommendation systems. This study explores this topic through a case study in the pulp and paper industry. By employing a user-centred and design-first adaptation of the question-driven design process, this study aims to uncover the explanation needs and requirements of industry experts, as well as formulate design visions and recommendations for transparent recommendation systems. The results of the study reveal five common explanation types that are valuable for domain experts while also highlighting limitations in previous studies on explanation types. Additionally, nine requirements are identified and utilised in the creation of a prototype, which domain experts evaluate. The evaluation process leads to the development of several design recommendations that can assist HCI researchers and designers in creating effective, transparent recommendation systems. Overall, this research contributes to the field of HCI by enhancing the understanding of transparent recommendation systems from a user-centred perspective.
|
57 |
CondBEHRT: A Conditional Probability Based Transformer for Modeling Medical Ontology
Lerjebo, Linus, Hägglund, Johannes January 2022 (has links)
In recent years, the number of electronic health records (EHRs) has increased rapidly. An EHR is a systematized collection of patient health information in digital format. EHR systems maintain the diagnoses, medications, procedures, and lab tests associated with patients each time they visit a hospital or care center. Since the information spans multiple visits to hospitals or care centers, EHRs can be used to improve quality of care. This is especially useful when working with chronic diseases, because they tend to evolve. Many deep learning methods make use of these EHRs to solve different prediction tasks. Transformers have shown impressive results in many sequence-to-sequence tasks within natural language processing. This paper focuses on using transformers, specifically on using a sequence of visits for prediction tasks. The model presented in this paper is called CondBEHRT. Compared to previous state-of-the-art models, CondBEHRT focuses on using as much of the available data as possible to understand the patient's trajectory. Based on all patients, the model learns the medical ontology between diagnoses, medications, and procedures. The results show that the inferred medical ontology can simulate reality quite well. Having the medical ontology also gives insights into the explainability of model decisions. We also compare the proposed model with state-of-the-art methods on two different use cases: predicting the codes given in the next visit, and predicting whether the patient will be readmitted within 30 days.
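The conditional probabilities behind a learned medical ontology can be sketched as simple co-occurrence estimates over visits, e.g. P(medication | diagnosis). The visit records below are made-up examples; CondBEHRT itself learns such relations with a transformer rather than by counting:

```python
def conditional_probability(visits, diagnosis, medication):
    # P(medication | diagnosis), estimated from visit co-occurrence.
    with_dx = [v for v in visits if diagnosis in v["diagnoses"]]
    if not with_dx:
        return 0.0
    with_both = [v for v in with_dx if medication in v["medications"]]
    return len(with_both) / len(with_dx)

visits = [
    {"diagnoses": {"diabetes"}, "medications": {"metformin"}},
    {"diagnoses": {"diabetes"}, "medications": {"metformin", "statin"}},
    {"diagnoses": {"diabetes"}, "medications": {"insulin"}},
    {"diagnoses": {"hypertension"}, "medications": {"statin"}},
]
p = conditional_probability(visits, "diabetes", "metformin")
```

Inspecting such conditional links between diagnoses, medications, and procedures is one way an inferred ontology can be checked against clinical reality.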
|
58 |
Towards gradient faithfulness and beyond
Buono, Vincenzo, Åkesson, Isak January 2023 (has links)
The riveting interplay of industrialization, informatization, and exponential technological growth of recent years has shifted attention from classical machine learning techniques to more sophisticated deep learning approaches; yet deep learning's intrinsic black-box nature has been impeding its widespread adoption in transparency-critical operations. In this rapidly evolving landscape, where the symbiotic relationship between research and practical applications has never been more interwoven, the contribution of this paper is twofold: advancing the gradient faithfulness of CAM methods and exploring new frontiers beyond it. In the first part, we theorize three novel gradient-based CAM formulations, aimed at replacing and superseding traditional Grad-CAM-based methods by tackling the intricate and persistent vanishing- and saturating-gradient problems. Our work thereby introduces novel enhancements to Grad-CAM that reshape the conventional gradient computation by incorporating a customized and adapted technique inspired by the well-established Expected Gradients difference-from-reference approach. Our proposed techniques, Expected Grad-CAM, Expected Grad-CAM++, and Guided Expected Grad-CAM, operate directly on the gradient computation rather than on the recombination of the weighting factors, and are therefore designed as a direct and seamless replacement for Grad-CAM and any later work built upon it. In the second part, we build on our prior proposition and devise a novel CAM method that produces both high-resolution and class-discriminative explanations without fusing other methods, while addressing the issues of both gradient and CAM methods altogether. Our last and most advanced proposition, Hyper Expected Grad-CAM, challenges the current state and formulation of visual explanation and faithfulness and produces a new type of hybrid saliency that satisfies the notions of natural encoding and perceived resolution. 
By rethinking faithfulness and resolution, it is possible to generate saliencies that are more detailed, localized, and less noisy, and, most importantly, that are composed only of concepts encoded by the model's layerwise understanding. Both contributions have been quantitatively and qualitatively compared and assessed in a 5-to-10-times larger evaluation study on the ILSVRC2012 dataset against nine of the most recent and best-performing CAM techniques across six metrics. Expected Grad-CAM outperformed not only the original formulation but also more advanced methods, resulting in the second-best explainer with an Ins-Del score of 0.56. Hyper Expected Grad-CAM provided remarkable results across each quantitative metric, yielding a 0.15 increase in insertion when compared to the highest-scoring explainer, PolyCAM, totaling an Ins-Del score of 0.72.
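The Grad-CAM baseline that these methods aim to replace reduces to a few operations: pool the gradients per channel into scalar weights, take the weighted sum of the activation maps, and clip negatives. A minimal sketch with toy 2x2 maps (real implementations operate on tensors captured by framework hooks, not Python lists):

```python
def grad_cam(activations, gradients):
    # activations, gradients: per-channel spatial maps of equal shape.
    # Weight each channel by its spatially averaged gradient, sum the
    # weighted maps, then apply ReLU to keep only positive evidence.
    h, w = len(activations[0]), len(activations[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for act, grad in zip(activations, gradients):
        weight = sum(sum(row) for row in grad) / (h * w)
        for i in range(h):
            for j in range(w):
                cam[i][j] += weight * act[i][j]
    return [[max(0.0, v) for v in row] for row in cam]

acts = [[[1.0, 0.0], [0.0, 0.0]],
        [[0.0, 0.0], [0.0, 1.0]]]
grads = [[[1.0, 1.0], [1.0, 1.0]],      # mean gradient = 1.0
         [[-1.0, -1.0], [-1.0, -1.0]]]  # mean gradient = -1.0
cam = grad_cam(acts, grads)
```

The global average pooling of the gradient into a single weight per channel is precisely the step the paper's Expected Gradients-inspired reformulations replace.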
|
59 |
Is eXplainable AI suitable as a hypotheses generating tool for medical research? Comparing basic pathology annotation with heat maps to find out
Adlersson, Albert January 2023 (has links)
Hypothesis testing has long been a formal and standardized process. Hypothesis generation, on the other hand, remains largely informal. This thesis assesses whether eXplainable AI (XAI) can aid in the standardization of hypothesis generation through its use as a hypothesis-generating tool for medical research. We produce XAI heat maps for a Convolutional Neural Network (CNN) trained to classify Microsatellite Instability (MSI) in colon and gastric cancer with four different XAI methods: Guided Backpropagation, VarGrad, Grad-CAM, and Sobol Attribution. We then compare these heat maps with pathology annotations in order to look for differences to turn into new hypotheses. Our CNN successfully generates non-random XAI heat maps whilst achieving a validation accuracy of 85% and a validation AUC of 93%, compared to others who achieve an AUC of 87%. We find that Guided Backpropagation and VarGrad are better at explaining high-level image features, whereas Grad-CAM and Sobol Attribution are better at explaining low-level ones. This makes the two groups of XAI methods good complements to each other. Images of MSI with high differentiation are more difficult to analyse regardless of which XAI method is used, probably because they exhibit less regularity. Regardless of this drawback, our assessment is that XAI can be a useful hypothesis-generating tool for research in medicine. Our results indicate that our CNN utilizes the same features as our basic pathology annotations when classifying MSI, with some features of basic pathology missing, and these differences allowed us to generate new hypotheses.
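Comparing a heat map against a pathology annotation can be made quantitative by thresholding the heat map and computing its overlap (intersection over union) with the annotated mask. A minimal sketch with made-up masks; whether the thesis uses IoU or a visual comparison is an assumption here:

```python
def binarize(heatmap, threshold):
    # Turn a continuous attribution map into a binary mask.
    return [[1 if v >= threshold else 0 for v in row] for row in heatmap]

def iou(mask_a, mask_b):
    # Intersection-over-union of two binary masks of equal shape.
    inter = sum(a & b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    union = sum(a | b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    return inter / union if union else 1.0

heatmap = [[0.9, 0.2],
           [0.8, 0.1]]
annotation = [[1, 0],
              [0, 0]]
score = iou(binarize(heatmap, threshold=0.5), annotation)
```

Regions where the binarized heat map and the annotation disagree are exactly the candidate sites for new hypotheses in the workflow the thesis describes.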
|
60 |
Personalized fake news aware recommendation system
Sallami, Dorsaf 08 1900 (has links)
In today’s world, where online news is so widespread, various methods have been developed to provide users with personalized news recommendations. Great accomplishments have been made in providing readers with everything that could attract their attention. While accuracy is critical in news recommendation, other factors, such as diversity, novelty, and reliability, are essential to readers’ satisfaction. In fact, technological advancements bring additional challenges which might have a detrimental impact on the news domain. Therefore, researchers need to consider these new threats when developing news recommendations. Fake news, in particular, is a hot topic in the media today and a new threat to public safety.
This work presents a modularized system capable of recommending news to the user and detecting fake news, all while helping users become more aware of this issue. First, we suggest FANAR, a FAke News Aware Recommender system, a modification to news recommendation algorithms that removes untrustworthy persons from the candidate user’s neighbourhood. To do this, we created a probabilistic model, the Beta Trust model, to calculate user reputation. For the recommendation process, we employed Graph Neural Networks. Then, we propose EXMULF, an EXplainable MUltimodal Content-based Fake News Detection System. It is tasked with the veracity analysis of information based on its textual content and the associated image, together with an Explainable AI (XAI) assistant that is tasked with combating the spread of fake news. Finally, we try to raise awareness about fake news by providing personalized alerts based on user reliability.
To fulfill the objective of this work, we build a new dataset named FNEWR. Our experiments reveal that EXMULF outperforms 10 state-of-the-art fake news detection models in terms of accuracy. It is also worth mentioning that FANAR, which takes into account visual information in news, outperforms competing approaches based only on textual content. Furthermore, it reduces the amount of fake news found in the recommendations list. / In today's world, where online news is so widespread, various methods have been developed to provide users with personalized news recommendations. Great accomplishments have been made in providing readers with everything that could attract their attention. Although accuracy is essential in news recommendation, other factors, such as diversity, novelty, and reliability, are essential to readers' satisfaction. In fact, technological advances bring additional challenges that could have a negative impact on the news domain. Therefore, researchers must take the new threats into account when developing news recommendations. Fake news, in particular, is a hot topic in the media today and a new threat to public safety.
In view of the above, this work presents a modular system capable of detecting fake news, recommending news to the user, and helping users become more aware of this issue. First, we suggest FANAR, a FAke News Aware Recommender system, a modification of news recommendation algorithms that eliminates untrustworthy persons from the candidate user's neighbourhood. To this end, we created a probabilistic model, the Beta Trust model, to calculate user reputation. For the recommendation process, we used Graph Neural Networks. Then, we propose EXMULF, an EXplainable MUltimodal Content-based Fake News Detection System. It performs veracity analysis of information based on its textual content and the associated image, together with an Explainable AI (XAI) assistant to combat the spread of fake news. Finally, we try to raise awareness about fake news by providing personalized alerts based on user profiles.
To fulfill the objective of this work, we build a new dataset named FNEWR. Our experimental results show that EXMULF outperforms 10 state-of-the-art fake news detection models in terms of accuracy. Also, FANAR, which takes visual information in news into account, outperforms competing approaches based only on textual content. Furthermore, it reduces the number of fake news items in the recommendations list.
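The Beta Trust model for user reputation can be sketched with the standard Beta reputation estimate: with a positive interactions and b negative ones, expected trustworthiness is (a + 1) / (a + b + 2). Whether FANAR uses exactly this form is an assumption; the counts and threshold below are illustrative:

```python
def beta_reputation(positive, negative):
    # Mean of a Beta(positive + 1, negative + 1) distribution,
    # i.e. the expected probability of trustworthy behaviour.
    # With no evidence at all, this defaults to 0.5.
    return (positive + 1) / (positive + negative + 2)

def trusted_neighbours(interactions, threshold):
    # Keep only users whose reputation clears the threshold,
    # mimicking the removal of untrustworthy neighbours.
    return [
        user
        for user, (pos, neg) in interactions.items()
        if beta_reputation(pos, neg) >= threshold
    ]

interactions = {
    "alice": (8, 0),  # 8 reliable shares, 0 fake
    "bob": (1, 9),    # mostly fake news
}
keep = trusted_neighbours(interactions, threshold=0.5)
```

Filtering the candidate user's neighbourhood with such a score is the step that precedes the Graph Neural Network recommendation stage described above.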
|