Going Forward-Forward in Distributed Deep Learning

Bibliographic Details
Title: Going Forward-Forward in Distributed Deep Learning
Authors: Aktemur, Ege; Zorlutuna, Ege; Bilgili, Kaan; Bok, Tacettin Emre; Yanikoglu, Berrin; Mutluergil, Suha Orhun
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning; Computer Science - Distributed, Parallel, and Cluster Computing
Description: We introduce a new approach to distributed deep learning that uses Geoffrey Hinton's Forward-Forward (FF) algorithm to speed up the training of neural networks in distributed computing environments. Unlike traditional methods that rely on a forward and a backward pass, the FF algorithm employs a dual forward-pass strategy, diverging significantly from conventional backpropagation. This method aligns more closely with the human brain's processing mechanisms, potentially offering a more efficient and biologically plausible approach to neural network training. Our research examines different implementations of the FF algorithm in distributed settings to assess its capacity for parallelization. While the original FF work focused on matching the performance of backpropagation, our parallelization aims to reduce training times and resource consumption, thereby addressing the long training times associated with deep neural networks. Our evaluation shows a 3.75x speedup on the MNIST dataset, without compromising accuracy, when training a four-layer network on four compute nodes. The integration of the FF algorithm into distributed deep learning represents a significant step forward in the field, potentially revolutionizing the way neural networks are trained in distributed environments.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2404.08573
Accession Number: edsarx.2404.08573
Database: arXiv
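The dual forward pass mentioned in the abstract can be illustrated with a minimal single-layer sketch. In Hinton's FF scheme, each layer is trained locally: a "goodness" score (here, the sum of squared ReLU activations) is pushed above a threshold for positive data and below it for negative data, with no backward pass through the network. The layer shape, threshold, learning rate, and function names below are illustrative assumptions, not details taken from this paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ff_layer_step(W, x, is_positive, theta=2.0, lr=0.03):
    """One Forward-Forward update for a single ReLU layer (illustrative).

    Goodness = sum of squared activations. Positive samples are pushed
    above the threshold `theta`, negative samples below it, using a
    logistic objective. Gradients are local to the layer, so no
    backward pass through other layers is needed.
    """
    pre = W @ x
    h = np.maximum(pre, 0.0)          # ReLU activations
    g = np.sum(h * h)                 # layer "goodness"
    z = g - theta
    # d(loss)/d(goodness): loss = log(1 + exp(-z)) for positive data,
    # log(1 + exp(z)) for negative data.
    dg = -sigmoid(-z) if is_positive else sigmoid(z)
    # Chain rule: d(goodness)/dW = 2 * h * 1[pre > 0] (outer) x
    grad = dg * np.outer(2.0 * h * (pre > 0), x)
    return W - lr * grad, g
```

Because each layer's update depends only on its own input and activations, layers (or groups of layers) can in principle be trained on separate compute nodes, which is the property the paper exploits for parallelization.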