
Explainable Neural Claim Verification Using Rationalization

The dependence on Natural Language Processing (NLP) systems has grown significantly in the last decade. Recent advances in deep learning have enabled language models to generate text of a quality comparable to human-written text. If this growth continues, it can lead to increased misinformation, which is a significant challenge. Although claim verification techniques exist, they lack proper explainability. Numerical scores such as attention weights and LIME values, and visualization techniques such as saliency heat maps, are insufficient because they require specialized knowledge; black-box NLP systems remain inaccessible and challenging for nonexperts to understand. We propose a novel approach called ExClaim for explainable claim verification using NLP rationalization. We demonstrate that our approach not only predicts a verdict for the claim but also justifies and rationalizes its output as a natural language explanation (NLE). We extensively evaluate the system using statistical and Explainable AI (XAI) metrics to ensure the outcomes are valid, verified, and trustworthy, helping to reinforce human-AI trust. We also propose a new subfield of XAI called Rational AI (RAI) to advance research on rationalization and NLE-based explainability techniques. Ensuring that claim verification systems are assured and explainable is a step toward trustworthy AI systems and ultimately helps mitigate misinformation.

Master of Science

The dependence on Natural Language Processing (NLP) systems has grown significantly in the last decade. Recent advances in deep learning have enabled text generation models to produce text of a quality comparable to human-written text. If this growth continues, it can lead to increased misinformation, which is a major societal challenge. Although claim verification techniques exist, they lack proper explainability, and it is difficult for the average user to understand a model's decision-making process. Numerical scores and visualization techniques exist to provide explainability, but they are insufficient because they require specialized domain knowledge, making black-box NLP systems inaccessible and challenging for nonexperts to understand. We propose a novel approach called ExClaim for explainable claim verification using NLP rationalization. We demonstrate that our approach not only predicts a verdict for the claim but also justifies and rationalizes its output as a natural language explanation (NLE). We extensively evaluate the system using statistical and Explainable AI (XAI) metrics to ensure the outcomes are valid, verified, and trustworthy, helping to reinforce human-AI trust. We also propose a new subfield of XAI called Rational AI (RAI) to advance research on rationalization and NLE-based explainability techniques. Ensuring that claim verification systems are assured and explainable is a step toward trustworthy AI systems and ultimately helps mitigate misinformation.
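The abstract describes a system that outputs both a verdict for a claim and a natural language explanation (NLE) that rationalizes that verdict. The thesis record does not specify the implementation, so the following is only a minimal sketch of what such a rationalization-style pipeline could look like, assuming a sequence-to-sequence model fine-tuned to emit a verdict followed by an explanation. The model name, prompt format, and helper function are illustrative assumptions and are not ExClaim's actual design.

```python
# Minimal sketch (not the thesis's ExClaim implementation): jointly producing a
# verdict and a natural language explanation (NLE) for a claim with a
# sequence-to-sequence model. Model name and prompt format are assumptions;
# a base checkpoint would need fine-tuning on claim/evidence pairs annotated
# with verdicts and human-written explanations to produce meaningful output.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "t5-small"  # placeholder backbone; ExClaim's actual model may differ

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)


def verify_claim(claim: str, evidence: str) -> str:
    """Generate a string expected to contain a verdict plus a rationale."""
    prompt = f"claim: {claim} evidence: {evidence} verdict and explanation:"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


print(verify_claim(
    "The Eiffel Tower is located in Berlin.",
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
))
```

In practice, a pipeline of this shape would be trained on a dataset that pairs claims and retrieved evidence with gold verdicts and human-written rationales, so that the generated explanation can be evaluated with the statistical and XAI metrics the abstract mentions.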

Identifier oai:union.ndltd.org:VTETD/oai:vtechworks.lib.vt.edu:10919/110790
Date 15 June 2022
Creators Gurrapu, Sai Charan
Contributors Computer Science, Batarseh, Feras A., Huang, Lifu, Freeman, Laura J., Lourentzou, Ismini
Publisher Virginia Tech
Source Sets Virginia Tech Theses and Dissertation
Language English
Detected Language English
Type Thesis
Format ETD, application/pdf
Rights Creative Commons Attribution-ShareAlike 4.0 International, http://creativecommons.org/licenses/by-sa/4.0/
