Local Kernel Renormalization as a mechanism for feature learning in overparametrized Convolutional Neural Networks

Bibliographic Details
Title: Local Kernel Renormalization as a mechanism for feature learning in overparametrized Convolutional Neural Networks
Authors: Aiudi, R., Pacelli, R., Vezzani, A., Burioni, R., Rotondo, P.
Publication Year: 2023
Collection: Computer Science; Condensed Matter
Subject Terms: Computer Science - Machine Learning, Condensed Matter - Disordered Systems and Neural Networks
Description: Feature learning, or the ability of deep neural networks to automatically learn relevant features from raw data, underlies their exceptional capability to solve complex tasks. However, feature learning seems to be realized in different ways in fully-connected (FC) and convolutional (CNN) architectures. Empirical evidence shows that FC neural networks in the infinite-width limit eventually outperform their finite-width counterparts. Since the kernel that describes infinite-width networks does not evolve during training, whatever form of feature learning occurs in deep FC architectures is not very helpful in improving generalization. On the other hand, state-of-the-art architectures with convolutional layers achieve optimal performance in the finite-width regime, suggesting that an effective form of feature learning emerges in this case. In this work, we present a simple theoretical framework that provides a rationale for these differences in one-hidden-layer networks. First, we show that the generalization performance of a finite-width FC network can be obtained by an infinite-width network with a suitable choice of the Gaussian priors. Second, we derive a finite-width effective action for an architecture with one convolutional hidden layer and compare it with the result available for FC networks. Remarkably, we identify a completely different form of kernel renormalization: whereas the kernel of the FC architecture is just globally renormalized by a single scalar parameter, the CNN kernel undergoes a local renormalization, meaning that the network can select the local components that will contribute to the final prediction in a data-dependent way. This finding highlights a simple mechanism for feature learning that can take place in overparametrized shallow CNNs, but not in shallow FC architectures or in locally connected neural networks without weight sharing.
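The contrast between the two renormalization schemes can be written schematically as follows. This is a sketch based only on the description above: the symbols \bar{Q} and \bar{Q}_i are illustrative renormalization parameters, x^{(i)} denotes the i-th local patch of input x, and the exact effective actions are derived in the paper.

    K^{R}_{\mathrm{FC}}(x, x') = \bar{Q}\, K_{\mathrm{FC}}(x, x')
    \qquad \text{(global: a single scalar } \bar{Q} \text{ rescales the whole kernel)}

    K^{R}_{\mathrm{CNN}}(x, x') = \sum_{i} \bar{Q}_{i}\, K^{(i)}\!\big(x^{(i)}, x'^{(i)}\big)
    \qquad \text{(local: one learned factor per patch } i\text{)}

In the first case the data can only rescale the kernel as a whole; in the second, the learned factors \bar{Q}_i let the network up- or down-weight individual local components of the input, which is the data-dependent selection mechanism the description refers to.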
Comment: 22 pages, 5 figures, 2 tables. Comments are welcome
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2307.11807
Accession Number: edsarx.2307.11807
Database: arXiv