At this week's MCQLL meeting, Siva Reddy will discuss ongoing work on Measuring Stereotypical Bias in Pretrained Neural Network Models of Language.

A key ingredient behind the success of neural network models for language is pretrained representations: word embeddings, contextual embeddings, and pretrained architectures. Since pretrained representations are learned from massive text corpora, there is a danger that unwanted societal biases are reflected in these models. I will discuss ideas on how to assess these biases in popular pretrained language models.
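As a rough illustration of the kind of probing this line of work involves (not necessarily the method presented in the talk), one common technique compares the probability a masked language model assigns to stereotypical versus anti-stereotypical completions of the same sentence. A minimal sketch, assuming a standard BERT model via the Hugging Face transformers library; the model name and example template are illustrative choices only:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Illustrative model choice; any masked language model would work similarly.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def fill_probability(template: str, candidate: str) -> float:
    """Probability the model assigns to `candidate` at the [MASK] position."""
    inputs = tokenizer(template, return_tensors="pt")
    # Locate the masked position in the tokenized input.
    mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits[0, mask_index], dim=-1)
    return probs[tokenizer.convert_tokens_to_ids(candidate)].item()

# Hypothetical example: a large gap between the two probabilities
# suggests the model encodes a gendered stereotype for this occupation.
template = "The nurse said that [MASK] would be back soon."
for pronoun in ("she", "he"):
    print(pronoun, fill_probability(template, pronoun))
```

Asymmetries in such probabilities across many templates are one way to quantify the biases a model has absorbed from its training corpus.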

This meeting will be in room 117 at 14:30 on Wednesday, October 9th.