McGILL UNIVERSITY DEPARTMENT OF LINGUISTICS AND SCHOOL OF COMPUTER SCIENCE
Computing devices such as smartphones are already ubiquitous, and smart home appliances, self-driving cars, and robots will soon join them. Equipping these machines with natural language understanding opens up opportunities for society at large, e.g., in accessing the world's knowledge or in controlling complex machines with little effort.
In this talk, we will focus on the task of accessing knowledge stored in knowledge bases and text documents in a colloquial manner. First, we will see how brittle current models are in the face of compositional and conversational language. Then we will explore how linguistic knowledge and inductive biases on neural architectures can circumvent these problems.
The scientific questions we will address are: (1) Are linguistically informed models better than uninformed models? (2) How can inductive biases help machine learning? (3) What are the challenges in enabling conversational interactions? To build linguistically informed models, I will propose a novel syntax-semantics interface based on typed lambda calculus for converting dependency syntax into formal semantic representations.
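To give a flavor of what such a syntax-semantics interface does, here is a minimal, self-contained sketch. It is not the system the talk describes: the neo-Davidsonian encoding (an event variable with an agent role) and the composition rule for the nsubj dependency edge are illustrative assumptions, and the example sentence is invented.

```python
# Toy sketch: mapping one dependency edge to a lambda-calculus-style
# semantic form. Assumes a simplified neo-Davidsonian encoding in which
# every word contributes a unary predicate (nouns over entities, verbs
# over events) and the nsubj edge fills the verb's agent role.

def word_semantics(word):
    """Lexical entry: the word denotes a unary predicate over its
    variable (an entity for nouns, an event for verbs)."""
    return lambda v: f"{word}({v})"

def compose_nsubj(verb_sem, subj_sem):
    """Compose a verb with its nsubj dependent: conjoin the verb's
    event predicate, an agent role linking event and subject, and the
    subject's entity predicate."""
    return lambda e, x: f"{verb_sem(e)} ∧ agent({e},{x}) ∧ {subj_sem(x)}"

# Sentence "Pixar grew", dependency edge: grew --nsubj--> Pixar
verb = word_semantics("grew")
noun = word_semantics("Pixar")
sentence = compose_nsubj(verb, noun)
print(sentence("e", "x"))  # grew(e) ∧ agent(e,x) ∧ Pixar(x)
```

A real interface would assign typed lambda terms per dependency label and compose them bottom-up over the whole parse; this sketch shows only the single-edge case.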
Siva Reddy is a postdoc in the Computer Science Department at Stanford University, working with Chris Manning. His research goal is to understand universal semantic structures in languages and to build linguistically informed machine learning models that enable natural language interaction between humans and machines. His research is supported by grants from Amazon and Facebook. Before his postdoc, he was a Google PhD Fellow at the University of Edinburgh, working with Mirella Lapata and Mark Steedman. His work experience includes an internship at Google and a research position at Sketch Engine.