A Generic Shared Attention Mechanism for Various Backbone Neural Networks

Bibliographic details
Title: A Generic Shared Attention Mechanism for Various Backbone Neural Networks
Authors: Huang, Zhongzhan, Liang, Senwei, Liang, Mingfu, Lin, Liang
Publication year: 2022
Collection: Computer Science
Subject terms: Computer Science - Computer Vision and Pattern Recognition, Computer Science - Artificial Intelligence
Description: The self-attention mechanism has emerged as a critical component for improving the performance of various backbone neural networks. However, current mainstream approaches individually incorporate newly designed self-attention modules (SAMs) into each layer of the network, taking this design for granted without fully exploiting the modules' parameter potential. This leads to suboptimal performance and increased parameter consumption as the network depth increases. To improve this paradigm, in this paper, we first present a counterintuitive but inherent phenomenon: SAMs tend to produce strongly correlated attention maps across different layers, with an average Pearson correlation coefficient of up to 0.85. Inspired by this observation, we propose Dense-and-Implicit Attention (DIA), which directly shares SAMs across layers and employs a long short-term memory module to calibrate and bridge the highly correlated attention maps of different layers, thus improving the parameter utilization efficiency of SAMs. This design of DIA is also consistent with the dynamical-system perspective on neural networks. Through extensive experiments, we demonstrate that our simple yet effective DIA can consistently enhance various network backbones, including ResNet, Transformer, and UNet, across tasks such as image classification, object detection, and image generation using diffusion models.
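To make the core idea concrete, here is a minimal NumPy sketch of layer-shared attention: a single channel-attention module is reused at every layer, and a simple recurrent state stands in for the paper's LSTM calibration. All names (`SharedAttention`, the weight matrices `W` and `U`) and the exact update rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class SharedAttention:
    """One attention module shared by every layer (DIA-style sketch).

    Hypothetical minimal version: global average pooling feeds a
    recurrent state that is carried across layers, standing in for
    the paper's LSTM calibration; a sigmoid of that state gives
    per-channel attention weights.
    """
    def __init__(self, channels, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((channels, channels)) * 0.1
        self.U = rng.standard_normal((channels, channels)) * 0.1
        self.state = np.zeros(channels)  # carried between layers

    def __call__(self, feature_map):
        # feature_map: (channels, H, W) -> squeeze via global average pooling
        pooled = feature_map.mean(axis=(1, 2))
        # recurrent calibration: mix current pooling with carried state
        self.state = np.tanh(self.W @ pooled + self.U @ self.state)
        attn = sigmoid(self.state)  # per-channel weights in (0, 1)
        return feature_map * attn[:, None, None]

# The same module instance is reused at every "layer", so its
# parameter count does not grow with network depth.
att = SharedAttention(channels=4)
x = np.ones((4, 8, 8))
for _ in range(3):  # three layers sharing one attention module
    x = att(x)
```

Because one module serves all layers, the attention parameter cost stays constant as depth grows, which is the efficiency argument the abstract makes.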
Comment: Work in progress. arXiv admin note: text overlap with arXiv:1905.10671
Document type: Working Paper
Access URL: http://arxiv.org/abs/2210.16101
Accession number: edsarx.2210.16101
Database: arXiv