Several members of the Montreal Computational and Quantitative Linguistics Laboratory (MCQLL) have had papers accepted at major conferences.

Benjamin LeBrun, Alessandro Sordoni, and Timothy J. O’Donnell’s paper entitled Evaluating Distributional Distortion in Neural Language Modeling has been accepted to appear at the International Conference on Learning Representations (ICLR). The paper investigates whether neural language models accurately approximate the heavy tail of rare sequences characteristic of natural language distributions.

Emily Goodwin, Siva Reddy, Timothy J. O’Donnell, and Dzmitry Bahdanau’s paper entitled Compositional Generalization in Dependency Parsing has been accepted to appear at the 2022 Annual Meeting of the Association for Computational Linguistics. This paper introduces CFQ-DEP, a compositional generalization challenge for dependency parsing, and shows that a state-of-the-art dependency parser struggles with more compositionally challenging generalizations.

Michaela Socolof, Jackie CK Cheung, Michael Wagner, and Timothy J. O’Donnell’s paper entitled Characterizing Idioms: Conventionality and Contingency has been accepted to appear at the 2022 Annual Meeting of the Association for Computational Linguistics. The paper defines two cognitively motivated measures that together characterize idioms, providing evidence that linguistic theories requiring special machinery for idioms may be unwarranted.