Multiplexed gradient descent: Fast online training of modern datasets on hardware neural networks without backpropagation

Bibliographic Details
Title: Multiplexed gradient descent: Fast online training of modern datasets on hardware neural networks without backpropagation
Authors: McCaughan, Adam N., Oripov, Bakhrom G., Ganesh, Natesh, Nam, Sae Woo, Dienstfrey, Andrew, Buckley, Sonia M.
Source: APL Machine Learning 1, 026118 (2023)
Publication Year: 2023
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Neural and Evolutionary Computing
Description: We present multiplexed gradient descent (MGD), a gradient descent framework designed to easily train analog or digital neural networks in hardware. MGD utilizes zero-order optimization techniques for online training of hardware neural networks. We demonstrate its ability to train neural networks on modern machine learning datasets, including CIFAR-10 and Fashion-MNIST, and compare its performance to backpropagation. Assuming realistic timescales and hardware parameters, our results indicate that these optimization techniques can train a network on emerging hardware platforms orders of magnitude faster than the wall-clock time of training via backpropagation on a standard GPU, even in the presence of imperfect weight updates or device-to-device variations in the hardware. We additionally describe how it can be applied to existing hardware as part of chip-in-the-loop training, or integrated directly at the hardware level. Crucially, the MGD framework is highly flexible, and its gradient descent process can be optimized to compensate for specific hardware limitations such as slow parameter-update speeds or limited input bandwidth. (A generic sketch of a zero-order update of this kind is included after this record.)
Document Type: Working Paper
DOI: 10.1063/5.0157645
Access URL: http://arxiv.org/abs/2303.03986
Accession Number: edsarx.2303.03986
Database: arXiv
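
The description above states that MGD relies on zero-order optimization for online training of hardware networks. As a rough illustration of that general family of methods, and not of the authors' specific multiplexing scheme, the Python sketch below estimates a gradient by perturbing all weights at once and observing the change in a measured cost (an SPSA-style update). The toy linear model, cost function, perturbation size, and learning rate are assumptions made purely for illustration.

import numpy as np

def cost(w, x, y):
    # Toy stand-in for a measured hardware cost: one linear layer scored with mean-squared error.
    return np.mean((x @ w - y) ** 2)

def zero_order_step(w, x, y, rng, eps=1e-3, lr=1e-2):
    # Perturb every weight simultaneously with a random +/-1 pattern, measure the
    # resulting change in cost, and convert that change into a gradient estimate.
    delta = rng.choice([-1.0, 1.0], size=w.shape)
    c_plus = cost(w + eps * delta, x, y)
    c_minus = cost(w - eps * delta, x, y)
    grad_est = (c_plus - c_minus) / (2 * eps) * delta
    return w - lr * grad_est

# Example usage on synthetic data: the network only ever sees cost measurements,
# never an analytic gradient, which is the point of a zero-order method.
rng = np.random.default_rng(0)
x = rng.normal(size=(64, 8))
w_true = rng.normal(size=(8, 1))
y = x @ w_true
w = np.zeros((8, 1))
for _ in range(2000):
    w = zero_order_step(w, x, y, rng)
print("final cost:", cost(w, x, y))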