At this week’s MCQLL meeting on Tuesday, January 25, from 3:00 to 4:00, Jacob Louis Hoover will give a talk titled ‘Processing time is a superlinear function of surprisal.’ If you’d like to attend, please register for the Zoom meeting here if you haven’t already.
The incremental processing difficulty of a linguistic item is related to its predictability. Surprisal theory (Hale, 2001; Levy, 2008) posits that the processing cost of a word in context is a linear function of its surprisal. This prediction has received considerable attention and broad support from empirical studies using a variety of language models to estimate surprisal. However, no algorithmic theory of processing has been proposed which scales linearly in surprisal, and recent empirical work has begun to raise questions about the assumption of linearity. We present a study specifically aimed at discerning the general shape of the linking function, using a collection of modern pretrained language models (LMs) to estimate surprisal. We find evidence of a superlinear effect on reading time. We also find that the better a language model’s predictions are on average, the more clearly superlinear the relationship between surprisal and processing time appears. These results suggest revising the linearity hypothesis of surprisal theory, and provide support for algorithmic theories of human language processing which scale faster than linearly in surprisal.
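As a rough illustration of the quantities involved (not taken from the talk itself): the surprisal of a word w in context c is −log₂ p(w | c), and a linking function maps surprisal to predicted reading time. A linear linking function predicts time growing in direct proportion to surprisal, while a superlinear one penalizes highly surprising words disproportionately. A minimal sketch, using a hypothetical toy bigram model in place of a real pretrained LM:

```python
import math

# Hypothetical bigram probabilities p(word | previous word), for
# illustration only -- a real study would query a pretrained LM.
bigram_probs = {
    ("the", "cat"): 0.20,      # predictable continuation
    ("the", "theorem"): 0.01,  # surprising continuation
}

def surprisal_bits(context, word):
    """Surprisal in bits: -log2 p(word | context)."""
    return -math.log2(bigram_probs[(context, word)])

def predicted_rt(surprisal, baseline_ms=200.0, slope_ms=25.0, power=1.0):
    """Illustrative linking function: reading time as a power function
    of surprisal. power=1.0 is the linear linking function of surprisal
    theory; power>1.0 is a superlinear alternative. The baseline and
    slope values here are arbitrary placeholders."""
    return baseline_ms + slope_ms * surprisal ** power

s_low = surprisal_bits("the", "cat")       # about 2.32 bits
s_high = surprisal_bits("the", "theorem")  # about 6.64 bits

# Under a superlinear linking function, the gap in predicted reading
# time between surprising and predictable words widens.
linear_gap = predicted_rt(s_high) - predicted_rt(s_low)
superlinear_gap = predicted_rt(s_high, power=1.5) - predicted_rt(s_low, power=1.5)
```

The question the talk addresses is which exponent-like shape best fits human reading-time data when surprisal estimates come from modern LMs.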