Do language models plan ahead for future tokens?

Bibliographic Details
Title: Do language models plan ahead for future tokens?
Authors: Wu, Wilson, Morris, John X., Levine, Lionel
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning, Computer Science - Computation and Language
Description: Do transformers "think ahead" during inference at a given position? It is known that transformers prepare information in the hidden states of the forward pass at $t$ that is then used in future forward passes $t+\tau$. We posit two explanations for this phenomenon: pre-caching, in which off-diagonal gradient terms present in training result in the model computing features at $t$ irrelevant to the present inference task but useful for the future, and breadcrumbs, in which features most relevant to time step $t$ are already the same as those that would most benefit inference at time $t+\tau$. We test these hypotheses by training language models without propagating gradients to past timesteps, a scheme we formalize as myopic training. In a synthetic data setting, we find clear evidence for pre-caching. In the autoregressive language modeling setting, our experiments are more suggestive of the breadcrumbs hypothesis.
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2404.00859
Accession Number: edsarx.2404.00859
Database: arXiv
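
The abstract's central construct, myopic training, removes the off-diagonal gradient terms so that the loss at position $t+\tau$ cannot backpropagate into the computation performed at position $t$. The sketch below is a minimal PyTorch illustration of one way to block those gradient paths, by detaching the keys and values contributed by past positions inside causal self-attention; it is not the authors' implementation, and the module name and all implementation details are assumptions.

```python
import torch
import torch.nn as nn


class MyopicCausalSelfAttention(nn.Module):
    """Causal self-attention where keys/values from past positions are
    detached, so the loss at position t cannot backpropagate into the
    computation done at positions s < t (a sketch of myopic training)."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape                        # (batch, seq_len, d_model)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        k_past, v_past = k.detach(), v.detach()  # no gradient into past positions

        def split(t: torch.Tensor) -> torch.Tensor:
            return t.view(B, T, self.n_heads, D // self.n_heads).transpose(1, 2)

        q, k, v, k_past, v_past = map(split, (q, k, v, k_past, v_past))
        scale = (D // self.n_heads) ** -0.5

        # Scores against past positions use detached keys; the current
        # position (the "diagonal" term) keeps its normal gradient path.
        scores_past = (q @ k_past.transpose(-2, -1)) * scale     # (B, H, T, T)
        scores_self = (q * k).sum(dim=-1, keepdim=True) * scale  # (B, H, T, 1)

        past_mask = torch.tril(
            torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=-1
        )
        scores_past = scores_past.masked_fill(~past_mask, float("-inf"))

        attn = torch.cat([scores_past, scores_self], dim=-1).softmax(dim=-1)
        out = attn[..., :-1] @ v_past + attn[..., -1:] * v       # (B, H, T, head_dim)
        out = out.transpose(1, 2).reshape(B, T, D)
        return self.proj(out)
```

Dropping such a module into a GPT-style block and training with the usual next-token cross-entropy would approximate the "no gradients to past timesteps" setup the abstract describes; the exact scheme formalized in the paper may differ.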