At next week’s MCQLL meeting, Jacob will present a new research project on finding syntactic structures in contextual embeddings.

A recent paper by Hewitt and Manning shows that the pretrained embeddings of contextual neural networks (ELMo, BERT) encode information about dependency structure. More concretely, a learned linear transformation (a “probe”) on top of the pretrained embeddings is able to reconstruct the dependency trees of the Penn Treebank. Several questions arise from this result: Does BERT have a theory of syntax? What would that even mean? What structure or information is the probe extracting from the embeddings? Jacob will introduce the paper’s results for a general audience and discuss some of these questions as directions for current research.
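To give a rough sense of the idea, here is a minimal sketch (in NumPy) of how such a structural probe can be used: a linear map B is applied to each word’s embedding, squared distances between the transformed vectors are treated as predicted tree distances, and a minimum spanning tree over those distances yields an undirected dependency tree. The embeddings and the matrix B below are random placeholders, purely for illustration; in the paper B is learned from treebank supervision.

```python
import numpy as np

def probe_distances(H, B):
    """Squared L2 distances between words after the linear map B.

    H: (n, d) array of contextual embeddings for one sentence.
    B: (k, d) probe matrix (learned in the paper; random here).
    Returns an (n, n) matrix D where D[i, j] plays the role of a
    predicted syntactic tree distance between words i and j.
    """
    T = H @ B.T                          # (n, k) transformed embeddings
    diff = T[:, None, :] - T[None, :, :]
    return (diff ** 2).sum(-1)

def min_spanning_tree(D):
    """Prim's algorithm: the MST over probe distances serves as the
    predicted (undirected, unlabeled) dependency tree."""
    n = D.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        _, i, j = min((D[i, j], i, j)
                      for i in in_tree
                      for j in range(n) if j not in in_tree)
        edges.append((i, j))
        in_tree.add(j)
    return edges

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 16))   # toy "embeddings" for a 5-word sentence
B = rng.normal(size=(8, 16))   # untrained probe, just to show the shapes
D = probe_distances(H, B)
tree = min_spanning_tree(D)
print(len(tree))               # a tree over n words has n - 1 edges
```

With an untrained B the recovered tree is of course meaningless; the surprising finding of the paper is that after training B on gold trees, the same recipe recovers much of the Penn Treebank dependency structure from BERT’s embeddings.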

The meeting will be in room 117 at 14:30 on Wednesday.