Evaluating Natural Language Understanding

It is common to hear that certain natural language understanding (NLU) tasks have been "solved". These claims are often misconstrued as being about general human capabilities (e.g., to answer questions), but they are always actually about how systems performed on narrowly defined evaluations. Recently, adversarial testing methods have begun to expose how narrow many of these successes are. In this talk, I'll discuss what these results tell us about progress in the field, and I'll argue that they should prompt us to move beyond standard accuracy-based evaluations, to ask deeper questions of our NLU models.
Speaker: Chris Potts, Stanford
Editor's Note: Stuart Russell, Stanford, who was originally scheduled to speak on this date, will not give his talk. Chris Potts replaces Stuart Russell.
Monday, 11/18/19
Cost: Free
Stanford Symbolic Systems Forum
460-126
Stanford, CA 94305
