Academic Journal

Explainable Deep Learning for False Information Identification: An Argumentation Theory Approach

Bibliographic Details
Title: Explainable Deep Learning for False Information Identification: An Argumentation Theory Approach
Authors: Kyuhan Lee, Sudha Ram
Source: INFORMS, Information Systems Research, 35(2):890-907
Year of Publication: 2024
Description: In today’s world, where online information is proliferating in an unprecedented way, a significant challenge is deciding whether to believe the information we encounter. Ironically, this flood of information provides us with an opportunity to combat false claims by understanding their nature. That is, with the help of machine learning, it is now possible to effectively capture the characteristics of false information by analyzing massive amounts of false claims published online. These methods, however, have neglected the nature of human argumentation, delegating the process of making inferences about the truth to the black box of neural networks. This has created several challenges (namely, latent text representations containing entangled syntactic and semantic information, irrelevant parts of the text being considered when abstracting text as a latent vector, and counterintuitive model explanations). To resolve these issues, based on Toulmin’s model of argumentation, we propose a computational framework that helps machine learning for false information identification (FII) understand the connection between a claim (whose veracity needs to be verified) and evidence (which contains information to support or refute the claim). Specifically, we first build a word network of a claim and evidence reflecting their syntax and convert it into a signed word network using their semantics. The structural balance of this word network is then calculated as a proxy metric to determine the consistency between the claim and the evidence. The consistency level is fed into machine learning as input, providing information for verifying claim veracity and explaining the model’s decision making. Two experiments testing model performance and explainability reveal that our framework achieves stronger performance and better explainability, outperforming cutting-edge methods and showing positive effects on human task performance, trust in algorithms, and confidence in decision making.
Our results shed
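The consistency step described in the abstract rests on structural balance theory: a triangle in a signed network is balanced when the product of its three edge signs is positive, and the fraction of balanced triangles serves as a consistency proxy between claim and evidence. The sketch below illustrates only this general idea, not the paper's actual implementation; the function name, the toy word network, and its edge signs are all hypothetical.

```python
from itertools import combinations

def balance_score(edges):
    """Fraction of closed triangles whose edge-sign product is positive.

    edges: dict mapping frozenset({u, v}) -> +1 (semantically consistent
    pair) or -1 (contradictory pair). Per structural balance theory, a
    triangle is balanced when the product of its three signs is +1.
    """
    nodes = sorted({n for e in edges for n in e})
    balanced = total = 0
    for a, b, c in combinations(nodes, 3):
        try:
            s = (edges[frozenset((a, b))]
                 * edges[frozenset((b, c))]
                 * edges[frozenset((a, c))])
        except KeyError:
            continue  # not a closed triangle in the word network
        total += 1
        balanced += (s > 0)
    return balanced / total if total else 0.0

# Hypothetical signed word network linking claim and evidence terms:
# +1 edges mark supporting relations, -1 edges mark contradictions.
edges = {
    frozenset(("vaccine", "safe")): +1,
    frozenset(("safe", "study")): +1,
    frozenset(("vaccine", "study")): +1,   # (+,+,+): balanced
    frozenset(("vaccine", "harmful")): -1,
    frozenset(("harmful", "study")): -1,   # (-,-,+): also balanced
}
score = balance_score(edges)
```

A score near 1.0 indicates that the claim and evidence form a mutually consistent sign pattern; a low score signals contradiction and could be fed to a downstream classifier as a veracity feature.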
Document Type: redif-article
Language: English
DOI: 10.1287/isre.2020.0097
Availability: https://ideas.repec.org/a/inm/orisre/v35y2024i2p890-907.html
Accession Number: edsrep.a.inm.orisre.v35y2024i2p890.907
Database: RePEc