Quantized Distributed Training of Large Models with Convergence Guarantees

Bibliographic Details
Title: Quantized Distributed Training of Large Models with Convergence Guarantees
Authors: Markov, Ilia; Vladu, Adrian; Guo, Qi; Alistarh, Dan
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning
Description: Communication-reduction techniques are a popular way to improve scalability in data-parallel training of deep neural networks (DNNs). The recent emergence of large language models such as GPT has created the need for new approaches to exploit data-parallelism. Among these, fully-sharded data parallel (FSDP) training is highly popular, yet it still encounters scalability bottlenecks. One reason is that applying compression techniques to FSDP is challenging: as the vast majority of the communication involves the model's weights, direct compression alters convergence and leads to accuracy loss. We present QSDP, a variant of FSDP which supports both gradient and weight quantization with theoretical guarantees, is simple to implement, and has essentially no overheads. To derive QSDP we prove that a natural modification of SGD achieves convergence even when we only maintain quantized weights, and thus the domain over which we train consists of quantized points and is, therefore, highly non-convex. We validate this approach by training GPT-family models with up to 1.3 billion parameters on a multi-node cluster. Experiments show that QSDP preserves model accuracy, while completely removing the communication bottlenecks of FSDP, providing end-to-end speedups of up to 2.2x.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2302.02390
Accession Number: edsarx.2302.02390
Database: arXiv
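
Note: The abstract describes a modified SGD scheme that converges while only ever storing quantized weights. The following is a minimal illustrative sketch, not the authors' QSDP implementation: it assumes a fixed uniform grid with hypothetical spacing `delta` and uses unbiased stochastic rounding to project weights back onto that grid after each SGD step, exercised on a toy quadratic objective.

```python
# Minimal sketch (not the paper's code): SGD over a quantized weight domain.
# After every update, weights are stochastically rounded back onto a uniform
# grid with spacing `delta`, so only quantized weights are ever maintained.
import torch

def stochastic_round(x: torch.Tensor, delta: float) -> torch.Tensor:
    """Round each entry of x to a neighboring multiple of delta, choosing the
    upper neighbor with probability proportional to the distance from the
    lower one, so that E[stochastic_round(x)] == x (unbiased)."""
    scaled = x / delta
    low = torch.floor(scaled)
    prob_up = scaled - low  # fractional part = probability of rounding up
    return (low + (torch.rand_like(x) < prob_up).float()) * delta

def quantized_sgd_step(w: torch.Tensor, grad: torch.Tensor,
                       lr: float, delta: float) -> torch.Tensor:
    """Take a plain SGD step, then project the result back onto the grid."""
    return stochastic_round(w - lr * grad, delta)

if __name__ == "__main__":
    torch.manual_seed(0)
    target = torch.randn(1000)
    # Start on the grid; all iterates remain quantized points.
    w = stochastic_round(torch.zeros(1000), delta=1e-2)
    for _ in range(500):
        grad = w - target  # gradient of the toy loss 0.5 * ||w - target||^2
        w = quantized_sgd_step(w, grad, lr=0.1, delta=1e-2)
    print(f"final loss: {0.5 * (w - target).pow(2).sum().item():.4f}")
```

In the FSDP setting described in the abstract, the point of keeping weights on such a grid is that the sharded weights can then be communicated in their compressed (quantized) form; the sketch above only illustrates the single-node optimization step, not the communication layer.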