Spandana Gella, research scientist at Amazon AI, will present on “Robust Natural Language Processing with Multi-task Learning” at this week’s Montreal Computational and Quantitative Linguistics Lab meeting. We are meeting Wednesday, April 22nd, at 2:00 via Zoom (to be added to the MCQLL listserv, please contact Jacob Hoover at jacob.hoover@mail.mcgill.ca).
Abstract:

In recent years, we have seen major improvements on a wide range of Natural Language Processing tasks. Despite their human-level performance on benchmark datasets, recent studies have shown that these models are vulnerable to adversarial examples: they rely on spurious correlations that hold for the majority of training examples, and consequently suffer under distribution shift and fail on atypical or challenging test sets. Recent work has shown that large pre-trained models are more robust to spurious associations in the training data. We observe that the superior performance of large pre-trained language models comes from their better generalization from the minority of training examples that resemble the challenging test sets. Our study shows that multi-task learning with the right auxiliary tasks improves accuracy on adversarial examples without hurting in-distribution performance. We show that this holds for the multi-modal task of Referring Expression Recognition and the text-only tasks of Natural Language Inference and Paraphrase Identification.
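As a rough illustration of the setup the abstract describes (not the speaker's actual implementation), below is a minimal PyTorch sketch of multi-task learning: a shared encoder with one head for a main task and one for an auxiliary task, trained on a weighted sum of the two losses. The toy encoder, label counts, and the aux_weight coefficient are hypothetical stand-ins; in the work being presented, the encoder would be a large pre-trained language model.

import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    """Shared encoder with one linear head per task (a common MTL setup).
    The EmbeddingBag encoder is a toy stand-in for a pre-trained LM."""
    def __init__(self, vocab_size=30000, hidden=256, n_main=3, n_aux=2):
        super().__init__()
        self.encoder = nn.EmbeddingBag(vocab_size, hidden)  # mean-pools token embeddings
        self.main_head = nn.Linear(hidden, n_main)  # e.g. NLI (3 labels)
        self.aux_head = nn.Linear(hidden, n_aux)    # e.g. paraphrase identification (2 labels)

    def forward(self, tokens):
        h = self.encoder(tokens)                    # shared representation
        return self.main_head(h), self.aux_head(h)

model = MultiTaskModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
aux_weight = 0.5  # hypothetical weight on the auxiliary loss

# One illustrative training step on random data.
tokens = torch.randint(0, 30000, (8, 16))  # batch of 8 token-id sequences
y_main = torch.randint(0, 3, (8,))
y_aux = torch.randint(0, 2, (8,))

main_logits, aux_logits = model(tokens)
loss = loss_fn(main_logits, y_main) + aux_weight * loss_fn(aux_logits, y_aux)
opt.zero_grad()
loss.backward()
opt.step()

Weighting the auxiliary loss (aux_weight above) lets the auxiliary task regularize the shared representation without letting it dominate the main objective; how that weight and the choice of auxiliary task affect robustness is exactly the kind of question the talk addresses.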