Scaling Granite Code Models to 128K Context

Bibliographic Details
Title: Scaling Granite Code Models to 128K Context
Authors: Stallone, Matt; Saxena, Vaibhav; Karlinsky, Leonid; McGinn, Bridget; Bula, Tim; Mishra, Mayank; Soria, Adriana Meza; Zhang, Gaoyuan; Prasad, Aditya; Shen, Yikang; Surendran, Saptha; Guttula, Shanmukha; Patel, Hima; Selvam, Parameswaran; Dang, Xuan-Hong; Koyfman, Yan; Sood, Atin; Feris, Rogerio; Desai, Nirmit; Cox, David D.; Puri, Ruchir; Panda, Rameswar
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Artificial Intelligence, Computer Science - Computation and Language, Computer Science - Software Engineering
Description: This paper introduces long-context Granite code models that support effective context windows of up to 128K tokens. Our solution for scaling the context length of the Granite 3B/8B code models from 2K/4K to 128K consists of lightweight continual pretraining that gradually increases the RoPE base frequency, combined with repository-level file packing and length-upsampled long-context data. We also release instruction-tuned models with long-context support, derived by further finetuning the long-context base models on a mix of permissively licensed short- and long-context instruction-response pairs. Compared with the original short-context Granite code models, our long-context models achieve significant improvements on long-context tasks without any noticeable performance degradation on regular code completion benchmarks (e.g., HumanEval). We release all our long-context Granite code models under an Apache 2.0 license for both research and commercial use.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2407.13739
Accession Number: edsarx.2407.13739
Database: arXiv
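
The description above refers to gradually increasing the RoPE base frequency during lightweight continual pretraining. The sketch below is only an illustration of that knob, not the authors' training code; the head dimension and the base values used here are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of rotary position embeddings (RoPE) with a configurable
# base frequency. Raising the base above the conventional 10,000 lowers the
# per-channel rotation frequencies, so positional phases vary more slowly
# and positions far beyond the original 2K/4K window stay distinguishable.
# Illustrative only; not the Granite implementation.
import torch


def rope_angles(head_dim: int, positions: torch.Tensor, base: float = 10_000.0) -> torch.Tensor:
    """Rotation angles of shape (len(positions), head_dim // 2)."""
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    return torch.outer(positions.float(), inv_freq)


def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """Rotate channel pairs of x (..., seq, head_dim) by the given angles."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out


# Comparing a short-context base with a hypothetical larger one over 128K
# positions: the larger base produces much slower-varying angles, which is
# what continual pretraining gradually adapts the model to.
positions = torch.arange(131_072)  # 128K positions
angles_short = rope_angles(128, positions, base=10_000.0)
angles_long = rope_angles(128, positions, base=10_000_000.0)  # assumed larger base
```

In this sketch, the same query/key activations embedded with the larger base rotate far less per token, which is the mechanism the abstract's "gradually increasing its RoPE base frequency" relies on; the paper pairs this with repository-level file packing and length-upsampled long-context data during continual pretraining.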