Training and inference of large language models using 8-bit floating point

Bibliographic Details
Title: Training and inference of large language models using 8-bit floating point
Authors: Perez, Sergio P., Zhang, Yan, Briggs, James, Blake, Charlie, Levy-Kramer, Josh, Balanca, Paul, Luschi, Carlo, Barlow, Stephen, Fitzgibbon, Andrew William
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Hardware Architecture, Computer Science - Computation and Language, Computer Science - Emerging Technologies, Computer Science - Performance, I.2.7, B.2.4
Description: FP8 formats are gaining popularity as a way to boost the computational efficiency of training and inference for large deep learning models. Their main challenge is that a careful choice of scaling is needed to prevent degradation due to the reduced dynamic range compared to higher-precision formats. Although ample literature exists on selecting such scalings for INT formats, this critical aspect has yet to be addressed for FP8. This paper presents a methodology to select the scalings for FP8 linear layers, based on dynamically updating per-tensor scales for the weights, gradients and activations. We apply this methodology to train and validate large language models of the GPT and Llama 2 type using FP8, for model sizes ranging from 111M to 70B. To facilitate the understanding of FP8 dynamics, our results are accompanied by plots of the per-tensor scale distribution for weights, activations and gradients during both training and inference.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2309.17224
Accession Number: edsarx.2309.17224
Database: arXiv
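The per-tensor scaling idea from the abstract can be illustrated with a minimal sketch. This is a hypothetical example of the common amax-based scheme for the FP8 E4M3 format (max magnitude 448), not the paper's exact algorithm; the function names and the `margin` parameter are illustrative assumptions.

```python
import math

# Hypothetical sketch of per-tensor FP8 scaling (E4M3 format), assuming
# an amax-based scheme; not the paper's exact recipe.
FP8_E4M3_MAX = 448.0  # largest representable E4M3 magnitude

def compute_scale(values, margin=0):
    """Power-of-two scale that maps the tensor's abs-max near the FP8 max.

    A power-of-two scale only shifts the exponent, so it does not perturb
    mantissa bits when applied in higher precision.
    """
    amax = max((abs(v) for v in values), default=0.0)
    if amax == 0.0:
        return 1.0  # nothing to scale; identity is safe
    exp = math.floor(math.log2(FP8_E4M3_MAX / amax)) - margin
    return 2.0 ** exp

def fp8_round_trip(values):
    """Scale into the FP8 range, clip, then unscale.

    Rounding to the FP8 mantissa is omitted for brevity; this only models
    the dynamic-range aspect that the scaling is meant to protect.
    """
    s = compute_scale(values)
    return [max(-FP8_E4M3_MAX, min(FP8_E4M3_MAX, v * s)) / s for v in values]

# Small gradient values like these would underflow FP8's dynamic range
# without rescaling; with a per-tensor scale they survive the round trip.
grads = [1e-5, -3e-6, 7e-6]
recovered = fp8_round_trip(grads)
```

Because the scale is a power of two and no mantissa rounding is modeled here, the round trip is exact for in-range values; in a real FP8 pipeline the scale would be recomputed dynamically per tensor as the weight, gradient and activation distributions drift during training.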