At this week’s MCQLL meeting, Jacob Hoover will be presenting “Not just surprisal: Towards a theory of incremental processing cost.” An abstract follows.

We will be meeting this Tuesday, April 4, at 3:00 PM. The meeting will be held both in person, in room 117 of the McGill Linguistics Department at 1085 Dr Penfield, and on Zoom here.

Title: Not just surprisal: Towards a theory of incremental processing cost
Abstract: When a human comprehends language (such as during reading), the amount of computational effort required per input item is not constant. In particular, words that are more surprising tend to take longer to read. Yet while this empirical fact is well recognized, most theories of incremental processing do not directly predict it. In this talk, I will begin by discussing recent work (Hoover et al. 2022, under review), which argues that sampling algorithms provide a plausible framework for formalizing theories of incremental processing, since their runtime can scale naturally with surprisal.
Building on this work, in the second part of the talk I will explore the idea that processing cost should be understood as a function of a KL divergence (between approximate and true posterior distributions). This divergence equals surprisal only under simplifying assumptions, and using it as a model of processing time makes predictions that differ from surprisal theory (for example, faster processing for items that are surprising but do not cause a change in beliefs). I will offer speculative ideas for future experiments in this space and welcome feedback on them.
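To make the first point a bit more concrete, here is a small toy illustration (not the specific algorithms analyzed in Hoover et al.) of why sampling-based inference naturally takes longer for surprising input: if a comprehender conditions on a word by rejection sampling from its prior predictions, the expected number of samples needed is 1/P(word), i.e., exponential in the word's surprisal.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_runtime(p_word, n_trials=5_000):
    """Average number of prior samples drawn before one is consistent with the
    observed word. The count is geometrically distributed with mean 1 / p_word,
    i.e., exp(surprisal in nats)."""
    counts = []
    for _ in range(n_trials):
        steps = 1
        while rng.random() >= p_word:  # reject samples inconsistent with the word
            steps += 1
        counts.append(steps)
    return float(np.mean(counts))

for p in (0.5, 0.1, 0.01):
    print(f"P(word) = {p:<5}  surprisal = {-np.log(p):.2f} nats  "
          f"mean samples ~ {rejection_runtime(p):.1f}  (1/p = {1 / p:.0f})")
```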
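And as a toy illustration of the second point, the hypothetical numbers below show two words with identical surprisal: one shifts the comprehender's beliefs about the interpretation, the other leaves them unchanged. A cost measure based on belief change (here simply the KL divergence from prior to posterior over interpretations, rather than the approximate-vs-true-posterior divergence proposed in the talk) assigns the second word near-zero cost despite its high surprisal.

```python
import numpy as np

def surprisal(p_word):
    """Surprisal of the observed word, -log2 P(word | context), in bits."""
    return -np.log2(p_word)

def kl(p, q):
    """D_KL(p || q) in bits, for discrete distributions over the same support."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / q[nz])))

# Hypothetical numbers: two candidate interpretations, equally likely a priori.
prior = [0.5, 0.5]

# A rare word that fully disambiguates the interpretation ...
post_disambiguating, p_disambiguating = [1.0, 0.0], 0.05
# ... versus an equally rare word that leaves beliefs untouched.
post_uninformative, p_uninformative = [0.5, 0.5], 0.05

for label, post, p_w in [("disambiguating", post_disambiguating, p_disambiguating),
                         ("uninformative", post_uninformative, p_uninformative)]:
    print(f"{label:>15}: surprisal = {surprisal(p_w):.2f} bits, "
          f"belief change (KL) = {kl(post, prior):.2f} bits")
```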