Universal scaling laws in the gradient descent training of neural networks

Bibliographic details
Title: Universal scaling laws in the gradient descent training of neural networks
Authors: Velikanov, Maksim; Yarotsky, Dmitry
Publication year: 2021
Collection: Computer Science; Mathematics; Statistics
Subject terms: Computer Science - Machine Learning, Computer Science - Neural and Evolutionary Computing, Mathematics - Optimization and Control, Statistics - Machine Learning
Description: Current theoretical results on the optimization trajectories of neural networks trained by gradient descent typically take the form of rigorous but potentially loose bounds on the loss values. In the present work we take a different approach and show that the learning trajectory can be characterized by an explicit asymptotic form at large training times. Specifically, the leading term in the asymptotic expansion of the loss behaves as a power law $L(t) \sim t^{-\xi}$, with the exponent $\xi$ expressed only through the data dimension, the smoothness of the activation function, and the class of functions being approximated. Our results are based on spectral analysis of the integral operator representing the linearized evolution of a large network trained on the expected loss. Importantly, the techniques we employ do not require a specific form of the data distribution, such as Gaussian, which makes our findings fairly universal.
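The central claim of the abstract, a late-time power-law decay $L(t) \sim t^{-\xi}$, can be checked empirically on a recorded loss trajectory by a log-log fit of the tail. The sketch below is not from the paper; it is an illustrative estimator under the assumption that one has arrays of training times and loss values, and the synthetic trajectory and the cutoff `t_min` are hypothetical choices.

```python
import numpy as np

def estimate_power_law_exponent(t, loss, t_min=None):
    """Estimate xi in L(t) ~ t^(-xi) by least squares in log-log coordinates.

    Only the tail t >= t_min is used, since the power law is an asymptotic
    (large-t) statement; the early-time transient is discarded.
    """
    t = np.asarray(t, dtype=float)
    loss = np.asarray(loss, dtype=float)
    if t_min is not None:
        mask = t >= t_min
        t, loss = t[mask], loss[mask]
    # log L = -xi * log t + const, so the fitted slope is -xi.
    slope, intercept = np.polyfit(np.log(t), np.log(loss), deg=1)
    return -slope

# Synthetic illustration: a trajectory decaying as t^(-0.5) plus a fast transient.
t = np.arange(1, 10_001)
loss = 2.0 * t**-0.5 + 5.0 * np.exp(-t / 50.0)
print(estimate_power_law_exponent(t, loss, t_min=1_000))  # close to 0.5
```

Discarding the transient before fitting matters: the exponent in the paper describes only the leading large-$t$ behaviour, so including early iterations would bias the estimate.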
Document type: Working Paper
Access URL: http://arxiv.org/abs/2105.00507
Accession number: edsarx.2105.00507
Database: arXiv