Gradient flow in parameter space is equivalent to linear interpolation in output space

Bibliographic Details
Title: Gradient flow in parameter space is equivalent to linear interpolation in output space
Authors: Chen, Thomas; Ewald, Patrícia Muñoz
Publication Year: 2024
Collection: Computer Science; Mathematics; Mathematical Physics; Statistics
Subject Terms: Computer Science - Machine Learning, Computer Science - Artificial Intelligence, Mathematical Physics, Mathematics - Optimization and Control, Statistics - Machine Learning, 62M45, 37C10
Description: We prove that the usual gradient flow in parameter space that underlies many training algorithms for neural networks in deep learning can be continuously deformed into an adapted gradient flow which yields (constrained) Euclidean gradient flow in output space. Moreover, if the Jacobian of the outputs with respect to the parameters is full rank (for fixed training data), then the time variable can be reparametrized so that the resulting flow is simply linear interpolation, and a global minimum can be achieved.
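A minimal sketch of the mechanism summarized in the description, under notation assumed here rather than fixed by the record: $x(\theta)$ denotes the stacked network outputs on the training data, $y$ the targets, $D = \partial x / \partial \theta$ the Jacobian, and the loss is taken to be quadratic. With $\mathcal{L}(\theta) = \tfrac{1}{2}\|x(\theta) - y\|^2$, the usual parameter-space flow $\dot\theta = -\nabla_\theta \mathcal{L}(\theta) = -D^\top (x - y)$ induces $\dot x = D\dot\theta = -DD^\top (x - y)$ in output space. An adapted flow of the form $\dot\theta = -D^\top (DD^\top)^{-1}(x - y)$, well defined when $D$ has full rank, instead yields the Euclidean gradient flow $\dot x = -(x - y)$, so $x(t) = y + e^{-t}\,(x(0) - y)$. Reparametrizing time via $s = 1 - e^{-t}$ gives $x(s) = (1 - s)\,x(0) + s\,y$: the outputs trace the straight line from $x(0)$ to the targets $y$, with the global minimum reached at $s = 1$.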
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2408.01517
Accession Number: edsarx.2408.01517
Database: arXiv