Bag of Lies: Robustness in Continuous Pre-training BERT

Bibliographic Details
Title: Bag of Lies: Robustness in Continuous Pre-training BERT
Authors: Gevers, Ine; Daelemans, Walter
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Computation and Language
Description: This study aims to gain deeper insight into the continuous pre-training phase of BERT with respect to entity knowledge, using the COVID-19 pandemic as a case study. Since the pandemic emerged after the last update of BERT's pre-training data, the model has little to no entity knowledge about COVID-19. Using continuous pre-training, we control what entity knowledge is available to the model. We compare the baseline BERT model with the further pre-trained variants on the fact-checking benchmark Check-COVID. To test the robustness of continuous pre-training, we experiment with several adversarial methods for manipulating the input data, such as training on misinformation and shuffling the word order until the input becomes nonsensical. Surprisingly, our findings reveal that these methods do not degrade, and sometimes even improve, the model's downstream performance. This suggests that continuous pre-training of BERT is robust against misinformation. Furthermore, we release a new dataset consisting of original texts from academic publications in the LitCovid repository and their AI-generated false counterparts.
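For illustration, a minimal sketch of what such adversarial continued pre-training could look like using the Hugging Face transformers masked-language-modeling setup; the shuffle_word_order helper, the toy corpus, and all training settings are assumptions for demonstration, not the authors' exact pipeline:

```python
import random
from transformers import (BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

rng = random.Random(13)

def shuffle_word_order(text: str) -> str:
    """Randomly permute the words of a text, one of the adversarial
    manipulations described in the abstract (exact procedure assumed)."""
    words = text.split()
    rng.shuffle(words)
    return " ".join(words)

# Hypothetical stand-in for the LitCovid-derived training texts.
corpus = ["SARS-CoV-2 spreads primarily through respiratory droplets."]
adversarial_corpus = [shuffle_word_order(t) for t in corpus]

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Tokenized examples; the collator applies dynamic masking for MLM.
train_dataset = [tokenizer(t, truncation=True, max_length=128)
                 for t in adversarial_corpus]
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm=True, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-covid-adv",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=train_dataset,
    data_collator=collator,
)
trainer.train()  # continued MLM pre-training on the manipulated texts
```

The further pre-trained model would then be fine-tuned and evaluated on Check-COVID and compared against the unmodified BERT baseline.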
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.09967
Accession Number: edsarx.2406.09967
Database: arXiv