From Bytes to Biases: Investigating the Cultural Self-Perception of Large Language Models

Bibliographic Details
Title: From Bytes to Biases: Investigating the Cultural Self-Perception of Large Language Models
Authors: Messner, Wolfgang; Greene, Tatum; Matalone, Josephine
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language, Computer Science - Artificial Intelligence, Computer Science - Human-Computer Interaction, Computer Science - Information Retrieval, Computer Science - Machine Learning
Description: Large language models (LLMs) are able to engage in natural-sounding conversations with humans, showcasing unprecedented capabilities for information retrieval and automated decision support. They have disrupted human-technology interaction and the way businesses operate. However, technologies based on generative artificial intelligence (GenAI) are known to hallucinate, misinform, and display biases introduced by the massive datasets on which they are trained. Existing research indicates that humans may unconsciously internalize these biases, which can persist even after they stop using the programs. This study explores the cultural self-perception of LLMs by prompting ChatGPT (OpenAI) and Bard (Google) with value questions derived from the GLOBE project. The findings reveal that their cultural self-perception is most closely aligned with the values of English-speaking countries and countries characterized by sustained economic competitiveness. Recognizing the cultural biases of LLMs and understanding how they work is crucial for all members of society, because one does not want the black box of artificial intelligence to perpetuate bias in humans, who might, in turn, inadvertently create and train even more biased algorithms.
Comment: 20 pages, 3 tables, 4 figures; Online Supplement: 10 pages, 5 tables, 3 figures
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2312.17256
Accession Number: edsarx.2312.17256
Database: arXiv