Communication Optimization for Distributed Training: Architecture, Advances, and Opportunities

Bibliographic Details
Title: Communication Optimization for Distributed Training: Architecture, Advances, and Opportunities
Authors: Wei, Yunze; Hu, Tianshuo; Liang, Cong; Cui, Yong
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Distributed, Parallel, and Cluster Computing; Computer Science - Machine Learning
Description: The past few years have witnessed the flourishing of large-scale deep neural network models with ever-growing numbers of parameters. Training such large-scale models typically requires massive memory and computing resources, necessitating distributed training. As GPU performance has rapidly evolved in recent years, computation time has shrunk, making communication a larger portion of the overall training time. Consequently, optimizing communication for distributed training has become crucial. In this article, we briefly introduce the general architecture of distributed deep neural network training and analyze the relationships among Parallelization Strategy, Collective Communication Library, and Network from the perspective of communication optimization, which together form a three-layer paradigm. We then review current representative research advances within this three-layer paradigm. We find that layers in the current three-layer paradigm are relatively independent, and that there is a rich design space for cross-layer collaborative optimization in distributed training scenarios. Therefore, we advocate "Vertical" and "Horizontal" co-designs, which extend the three-layer paradigm to a five-layer paradigm. We also advocate "Intra-Inter" and "Host-Net" co-designs to further utilize the potential of heterogeneous resources. We hope this article can shed some light on future research on communication optimization for distributed training.
Comment: 8 pages, 5 figures
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2403.07585
Accession Number: edsarx.2403.07585
Database: arXiv
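
Note: The "Collective Communication Library" layer named in the abstract refers to primitives such as all-reduce that synchronize gradients across workers. As an illustration only (not taken from the paper), the minimal PyTorch sketch below averages a toy gradient tensor across four CPU processes using the gloo backend; the tensor shape, port, and world size are arbitrary choices, and a real training job would typically run NCCL on GPUs.

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank: int, world_size: int) -> None:
    # Each process stands in for one worker/GPU in a data-parallel job.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")  # illustrative port choice
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    # Pretend each worker computed a different local gradient.
    local_grad = torch.full((4,), float(rank))

    # All-reduce: every worker ends up with the sum of all local gradients,
    # which is then divided by world_size to obtain the average.
    dist.all_reduce(local_grad, op=dist.ReduceOp.SUM)
    local_grad /= world_size

    print(f"rank {rank}: averaged gradient = {local_grad.tolist()}")
    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 4
    mp.spawn(worker, args=(world_size,), nprocs=world_size)

With four workers holding gradients of 0, 1, 2, and 3, every rank prints 1.5, showing the synchronization step whose cost the surveyed communication optimizations aim to reduce.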