At next week’s meeting, Wilfred will be presenting the following paper: “Learning Semantic Correspondence with Less Supervision” by Liang et al. (2009). Please find the abstract below:

A central problem in grounded language acquisition is learning the correspondences between a rich world state and a stream of text which references that world state. To deal with the high degree of ambiguity present in this setting, we present a generative model that simultaneously segments the text into utterances and maps each utterance to a meaning representation grounded in the world state. We show that our model generalizes across three domains of increasing difficulty: Robocup sportscasting, weather forecasts (a new domain), and NFL recaps.

The meeting will be on Wednesday, Oct 17, from 5:30pm to 7:30pm in room 117.