Lessons from scale for large language models and quantitative reasoning

Large language models trained on diverse data have shown impressive results on many natural-language tasks, in many cases matching or exceeding human performance. Some measures of progress exhibit remarkably robust power-law improvement over many orders of magnitude in dataset, model, and compute scale, while other capabilities remain difficult to extrapolate. One domain that has traditionally been challenging for such models is multi-step quantitative reasoning in mathematics and science. I will discuss recent progress in understanding and extrapolating model capabilities with scale, as well as Minerva, a large language model designed to perform multi-step STEM problem solving.

Speaker: Ethan Dyer, Google Blueshift

This speaker was originally scheduled to present on November 1.

Tuesday, 11/15/22

Cost: Free

Hewlett Teaching Center

370 Jane Stanford Way, Room 200
Stanford University
Stanford, CA 94305

Website: Click to Visit