Report
Uncovering Latent Human Wellbeing in Language Model Embeddings
| Title: | Uncovering Latent Human Wellbeing in Language Model Embeddings |
|---|---|
| Authors: | Freire, Pedro; Tan, ChengCheng; Gleave, Adam; Hendrycks, Dan; Emmons, Scott |
| Publication Year: | 2024 |
| Collection: | Computer Science |
| Subject Terms: | Computer Science - Computation and Language; Computer Science - Artificial Intelligence; Computer Science - Machine Learning; I.2.7 |
| Description: | Do language models implicitly learn a concept of human wellbeing? We explore this through the ETHICS Utilitarianism task, assessing if scaling enhances pretrained models' representations. Our initial finding reveals that, without any prompt engineering or finetuning, the leading principal component from OpenAI's text-embedding-ada-002 achieves 73.9% accuracy. This closely matches the 74.6% of BERT-large finetuned on the entire ETHICS dataset, suggesting pretraining conveys some understanding about human wellbeing. Next, we consider four language model families, observing how Utilitarianism accuracy varies with increased parameters. We find performance is nondecreasing with increased model size when using sufficient numbers of principal components. Comment: 10 pages, 5 figures, 1 table |
| Document Type: | Working Paper |
| Access URL: | http://arxiv.org/abs/2402.11777 |
| Accession Number: | edsarx.2402.11777 |
| Database: | arXiv |
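The probing setup described in the abstract (projecting sentence embeddings onto their leading principal component and using that projection to rank scenario pairs by wellbeing) can be sketched as follows. This is not the authors' code: the embeddings below are synthetic stand-ins for text-embedding-ada-002 outputs, and the "wellbeing axis" is a fabricated latent direction used only to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 scenario pairs, where scenario A is the more
# pleasant one. We fabricate embeddings whose variance is dominated by a
# single latent "wellbeing" direction, mimicking the paper's finding.
dim = 64
wellbeing_axis = rng.normal(size=dim)
wellbeing_axis /= np.linalg.norm(wellbeing_axis)

def fake_embedding(score):
    # Signal along the latent axis plus isotropic noise.
    return score * wellbeing_axis + 0.5 * rng.normal(size=dim)

scores_a = rng.uniform(1, 2, size=200)    # pleasant scenarios
scores_b = rng.uniform(-2, -1, size=200)  # unpleasant scenarios
emb_a = np.stack([fake_embedding(s) for s in scores_a])
emb_b = np.stack([fake_embedding(s) for s in scores_b])

# Unsupervised PCA via SVD on the centered pool of all embeddings.
pool = np.concatenate([emb_a, emb_b])
mean = pool.mean(axis=0)
_, _, vt = np.linalg.svd(pool - mean, full_matrices=False)
pc1 = vt[0]  # leading principal component

# Score each scenario by its projection onto PC1; a pair counts as
# correct if the pleasant scenario scores higher. PC1's sign is
# arbitrary, so take the better of the two orientations (in practice
# one would calibrate the sign on a training split).
proj_a = (emb_a - mean) @ pc1
proj_b = (emb_b - mean) @ pc1
acc = max(np.mean(proj_a > proj_b), np.mean(proj_b > proj_a))
print(f"pairwise accuracy from PC1 alone: {acc:.2f}")
```

Because the synthetic embeddings place most of their variance along the wellbeing direction, PC1 recovers it and the pairwise accuracy is high; with real embeddings the signal is weaker, which is why the paper's reported accuracy sits near 74% rather than at ceiling.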