Evaluating Natural Language Understanding

Chris Potts

It is common to hear that certain natural language understanding (NLU) tasks have been "solved". These claims are often misconstrued as being about general human capabilities (e.g., to answer questions), but they are always actually about how systems performed on narrowly defined evaluations. Recently, adversarial testing methods have begun to expose how narrow many of these successes are. In this talk, I'll discuss what these results tell us about progress in the field, and I'll argue that they should prompt us to move beyond standard accuracy-based evaluations, to ask deeper questions of our NLU models.

Speaker: Chris Potts, Stanford

Editor's Note: Stuart Russell, Stanford, originally scheduled to speak on this date, will not give his talk. Chris Potts replaces Stuart Russell.

Monday, 11/18/19


Stanford Symbolic Systems Forum

Margaret Jacks Hall
Stanford, CA 94305