A Warm Start and a Clean Crawled Corpus -- A Recipe for Good Language Models

Bibliographic Details
Title: A Warm Start and a Clean Crawled Corpus -- A Recipe for Good Language Models
Authors: Snæbjarnarson, Vésteinn, Símonarson, Haukur Barri, Ragnarsson, Pétur Orri, Ingólfsdóttir, Svanhvít Lilja, Jónsson, Haukur Páll, Þorsteinsson, Vilhjálmur, Einarsson, Hafsteinn
Publication Year: 2022
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: We train several language models for Icelandic, including IceBERT, that achieve state-of-the-art performance in a variety of downstream tasks, including part-of-speech tagging, named entity recognition, grammatical error detection, and constituency parsing. To train the models we introduce a new corpus of Icelandic text, the Icelandic Common Crawl Corpus (IC3), a collection of high-quality texts found online by targeting the Icelandic top-level domain (TLD). Several other public data sources are also collected for a total of 16GB of Icelandic text. To enhance the evaluation of model performance and to raise the bar in baselines for Icelandic, we translate and adapt the WinoGrande dataset for coreference resolution. Through these efforts we demonstrate that a properly cleaned crawled corpus is sufficient to achieve state-of-the-art results in NLP applications for low- to medium-resource languages, by comparison with models trained on a curated corpus. We further show that initializing models using existing multilingual models can lead to state-of-the-art results for some downstream tasks.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2201.05601
Accession Number: edsarx.2201.05601
Database: arXiv