Adaptive Top-K in SGD for Communication-Efficient Distributed Learning

Bibliographic Details
Title: Adaptive Top-K in SGD for Communication-Efficient Distributed Learning
Authors: Ruan, Mengzhe; Yan, Guangfeng; Xiao, Yuanzhang; Song, Linqi; Xu, Weitao
Publication Year: 2022
Collection: Computer Science; Mathematics
Subject Terms: Computer Science - Machine Learning; Computer Science - Distributed, Parallel, and Cluster Computing; Mathematics - Optimization and Control
Description: Distributed stochastic gradient descent (SGD) with gradient compression has become a popular communication-efficient approach for accelerating distributed learning. One commonly used compression method is Top-K sparsification, which sparsifies the gradients to a fixed degree throughout model training. However, existing schemes lack an adaptive mechanism for adjusting the sparsification degree to maximize model performance or training speed. This paper proposes a novel adaptive Top-K SGD framework that selects the degree of sparsification at each gradient descent step so as to optimize convergence by balancing the trade-off between communication cost and convergence error. First, an upper bound on the convergence error is derived for the adaptive sparsification scheme and the loss function. Second, an algorithm is designed to minimize this convergence error under a communication cost constraint. Finally, numerical results on the MNIST and CIFAR-10 datasets demonstrate that the proposed adaptive Top-K SGD algorithm achieves a significantly better convergence rate than state-of-the-art methods, even after accounting for error compensation.
Comment: 6 pages, 10 figures; accepted by GLOBECOM 2023
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2210.13532
Accession Number: edsarx.2210.13532
Database: arXiv
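
The abstract describes Top-K gradient sparsification with a degree k that varies per step. The following is a minimal NumPy sketch of that idea; the `adaptive_k` schedule shown here is a hypothetical linear decay used purely for illustration, not the schedule the paper derives from its convergence-error bound under a communication budget.

```python
import numpy as np

def top_k_sparsify(grad, k):
    """Keep the k largest-magnitude entries of a gradient tensor; zero the rest."""
    flat = grad.ravel()
    if k >= flat.size:
        return grad.copy()
    # Indices of the k largest-magnitude components.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.reshape(grad.shape)

def adaptive_k(step, total_steps, k_min, k_max):
    """Hypothetical schedule: spend more of the communication budget (larger k)
    early in training and less later. The paper instead chooses k by minimizing
    a convergence-error bound; this linear decay only illustrates a varying degree."""
    frac = 1.0 - step / max(total_steps - 1, 1)
    return int(k_min + frac * (k_max - k_min))

# Toy usage: sparsify a random "gradient" at a few training steps.
rng = np.random.default_rng(0)
g = rng.normal(size=(4, 8))
for t in (0, 50, 99):
    k = adaptive_k(t, total_steps=100, k_min=2, k_max=16)
    print(f"step {t}: k={k}, nonzeros={np.count_nonzero(top_k_sparsify(g, k))}")
```

In a distributed setting, each worker would transmit only the surviving entries (values plus indices), so a smaller k directly reduces communication at the cost of a larger compression error at that step.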