Investigating Bias in LLM-Based Bias Detection: Disparities between LLMs and Human Perception

Bibliographic Details
Title: Investigating Bias in LLM-Based Bias Detection: Disparities between LLMs and Human Perception
Authors: Lin, Luyang; Wang, Lingzhi; Guo, Jinsong; Wong, Kam-Fai
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computers and Society
Description: The pervasive spread of misinformation and disinformation on social media underscores the critical importance of detecting media bias. While robust Large Language Models (LLMs) have emerged as foundational tools for bias prediction, concerns about inherent biases within these models persist. In this work, we investigate the presence and nature of bias within LLMs and its consequential impact on media bias detection. Departing from conventional approaches that focus solely on bias detection in media content, we delve into biases within the LLM systems themselves. Through meticulous examination, we probe whether LLMs exhibit biases, particularly in political bias prediction and text continuation tasks. Additionally, we explore bias across diverse topics, aiming to uncover nuanced variations in bias expression within the LLM framework. Importantly, we propose debiasing strategies, including prompt engineering and model fine-tuning. Extensive analysis of bias tendencies across different LLMs sheds light on the broader landscape of bias propagation in language models. This study advances our understanding of LLM bias, offering critical insights into its implications for bias detection tasks and paving the way for more robust and equitable AI systems.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2403.14896
Accession Number: edsarx.2403.14896
Database: arXiv