Report
Sparsifying Transformer Models with Trainable Representation Pooling
| Title | Sparsifying Transformer Models with Trainable Representation Pooling |
|---|---|
| Authors | Pietruszka, Michał; Borchmann, Łukasz; Garncarek, Łukasz |
| Publication Year | 2020 |
| Collection | Computer Science |
| Subject Terms | Computer Science - Computation and Language; Computer Science - Machine Learning |
| Description | We propose a novel method to sparsify attention in the Transformer model by learning to select the most informative token representations during the training process, thus focusing on the task-specific parts of an input. A reduction of quadratic time and memory complexity to sublinear was achieved due to a robust trainable top-$k$ operator. Our experiments on a challenging long document summarization task show that even our simple baseline performs comparably to the current SOTA, and with trainable pooling, we can retain its top quality, while being $1.8\times$ faster during training, $4.5\times$ faster during inference, and up to $13\times$ more computationally efficient in the decoder. Comment: Accepted at ACL 2022. (See the sketch below for an illustration of score-based token selection.) |
| Document Type | Working Paper |
| Access URL | http://arxiv.org/abs/2009.05169 |
| Accession Number | edsarx.2009.05169 |
| Database | arXiv |
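The description mentions pooling the most informative token representations with a trainable top-$k$ operator. Below is a minimal, hypothetical sketch of that general idea: a learned per-token scorer selects the $k$ highest-scoring representations and rescales them by a soft gate so gradients reach the scorer. The module name `TopKPooling`, the linear scorer, and the sigmoid gating are assumptions for illustration, not the paper's exact robust top-$k$ operator.

```python
import torch
import torch.nn as nn


class TopKPooling(nn.Module):
    """Hypothetical sketch: keep the k highest-scoring token representations."""

    def __init__(self, d_model: int, k: int):
        super().__init__()
        self.k = k
        # Learned per-token relevance score (assumption: a single linear scorer).
        self.scorer = nn.Linear(d_model, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        scores = self.scorer(x).squeeze(-1)                 # (batch, seq_len)
        gate = torch.sigmoid(scores)                        # soft gate in (0, 1)
        top_idx = torch.topk(scores, self.k, dim=-1).indices  # hard top-k indices
        # Gather the selected representations and multiply by their soft gates,
        # so the scorer receives gradients through the kept tokens.
        idx = top_idx.unsqueeze(-1).expand(-1, -1, x.size(-1))
        pooled = torch.gather(x, 1, idx)                    # (batch, k, d_model)
        kept_gate = torch.gather(gate, 1, top_idx).unsqueeze(-1)
        return pooled * kept_gate


# Usage: pool a 1024-token document down to 128 representations.
x = torch.randn(2, 1024, 512)
pool = TopKPooling(d_model=512, k=128)
print(pool(x).shape)  # torch.Size([2, 128, 512])
```

Because downstream attention then operates on only $k$ representations instead of the full sequence, the quadratic cost in sequence length is avoided, which is the motivation for the sublinear complexity claimed in the description.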