Ofir Press

Shortformer: Better Language Modeling using Shorter Inputs

Everyone is trying to improve language models by having them look at more words; we show that we can improve them by giving them fewer words.

While progress in transformer language modeling is being driven by increasing input length, we find both perplexity and efficiency gains through two different methods that decrease input length. First, we show that initially training a model on short subsequences before moving on to longer ones both reduces overall training time and, surprisingly, substantially improves perplexity. Second, we show how to improve the efficiency of recurrence methods in transformers, which let models condition on previously processed tokens when generating sequences that exceed the maximal length the transformer can handle at once. Existing methods require computationally expensive relative position embeddings; we introduce a simple alternative of adding absolute position embeddings to queries and keys instead of to word embeddings, which efficiently produces superior results. We show that these recurrent models also benefit from short input lengths. Combining these techniques speeds training by a factor of 1.65, reduces memory usage, and substantially improves perplexity on WikiText-103, without adding any parameters.
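To make the second idea concrete, here is a minimal sketch (not the authors' implementation) of an attention layer that adds absolute position embeddings to the queries and keys rather than to the word embeddings at the model input. The class name, dimensions, learned-embedding choice, and caching details are illustrative assumptions; because the cached representations carry no position information, they can be reused when continuing past the length the model processes at once.

```python
# Illustrative sketch: absolute position embeddings are added to queries and
# keys inside attention, not to the word embeddings at the input.
# Names, shapes, and the learned position table are assumptions for the example.
import math
import torch
import torch.nn as nn

class PositionInfusedAttention(nn.Module):
    def __init__(self, d_model, n_heads, max_len=3072):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.out_proj = nn.Linear(d_model, d_model)
        # Learned absolute position embeddings (a sinusoidal table would also work).
        self.pos_emb = nn.Embedding(max_len, d_model)

    def forward(self, x, cache=None):
        # x: (batch, seq_len, d_model) token representations that do NOT have
        # position information added at the input layer.
        bsz, seq_len, _ = x.shape
        prev_len = 0 if cache is None else cache.shape[1]

        # Positions cover the cached tokens followed by the current tokens.
        positions = torch.arange(prev_len + seq_len, device=x.device)
        pos = self.pos_emb(positions)  # (prev_len + seq_len, d_model)

        keys_in = x if cache is None else torch.cat([cache, x], dim=1)
        q = self.q_proj(x + pos[prev_len:])  # queries: current positions only
        k = self.k_proj(keys_in + pos)       # keys: cached + current positions
        v = self.v_proj(keys_in)             # values: left position-free

        def split(t):
            return t.view(bsz, -1, self.n_heads, self.d_head).transpose(1, 2)

        q, k, v = split(q), split(k), split(v)
        attn = (q @ k.transpose(-2, -1)) / math.sqrt(self.d_head)
        # A causal mask over the key axis would be applied here in a real model.
        out = attn.softmax(dim=-1) @ v
        out = out.transpose(1, 2).reshape(bsz, seq_len, -1)
        return self.out_proj(out)
```

Because the values (and therefore the cached hidden states) never have positions baked in, the same cache can be attended to from any later offset, which is what lets this replace the more expensive relative position embeddings used by existing recurrence methods.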


Ofir is a PhD student at the University of Washington, advised by Noah Smith. He works on better understanding the basic building blocks of neural NLP so that we can make them faster, smaller, and more accurate.
