Faster Transformer Decoding: N-gram Masked Self-Attention

Bibliographic Details
Title: Faster Transformer Decoding: N-gram Masked Self-Attention
Authors: Chelba, Ciprian; Chen, Mia; Bapna, Ankur; Shazeer, Noam
Publication Year: 2020
Collection: Computer Science; Statistics
Subject Terms: Computer Science - Machine Learning, Computer Science - Computation and Language, Statistics - Machine Learning
Description: Motivated by the fact that most of the information relevant to the prediction of target tokens is drawn from the source sentence $S=s_1, \ldots, s_S$, we propose truncating the target-side window used for computing self-attention by making an $N$-gram assumption. Experiments on WMT EnDe and EnFr data sets show that the $N$-gram masked self-attention model loses very little in BLEU score for $N$ values in the range $4, \ldots, 8$, depending on the task.
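As a rough illustration of the $N$-gram assumption in the abstract (this sketch is not from the paper; the function name, NumPy usage, and mask convention are assumptions), the snippet below builds the banded causal mask under which each target position attends only to itself and the preceding $N-1$ target tokens, while source-side attention would be left unrestricted:

```python
# Minimal sketch, assuming a boolean mask convention where True = "may attend".
import numpy as np

def ngram_causal_mask(target_len: int, n: int) -> np.ndarray:
    """Return a (target_len, target_len) boolean self-attention mask.

    mask[i, j] is True when target position i may attend to position j,
    i.e. j <= i (causal) and i - j < n (N-gram window).
    """
    i = np.arange(target_len)[:, None]
    j = np.arange(target_len)[None, :]
    return (j <= i) & (i - j < n)

# Example: with N = 4, position 5 attends only to positions 2..5.
print(ngram_causal_mask(target_len=8, n=4).astype(int))
```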
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2001.04589
Accession Number: edsarx.2001.04589
Database: arXiv