
Theories of Neural Computation Underlying Learning, Imagination, Reasoning and Scaling: of Mice and Machines

Surya Ganguli

Four remarkable abilities of brains and machines are to: (1) learn new behaviors from a single example, (2) creatively imagine new possibilities, (3) learn language, and (4) perform mathematical reasoning.  I will discuss simple analytic yet quantitatively predictive theories of (1) how mice learn to navigate accurately on their first encounter with a new environment; (2) how diffusion models creatively imagine exponentially many new images; (3) how the structure of natural language governs how much data is required to learn it; and (4) how language models can better perform mathematical reasoning.  Theoretical physics approaches are essential in deriving all of these theories, spanning techniques like statistical mechanics, pattern formation, nonlinear dynamics, high dimensional geometry, scaling analysis, and control of entropy.  More generally, just as biology once provided a new frontier of complexity for physics to study, I suggest that AI now provides a new frontier in which physics can expand to yield a new, fundamental scientific understanding of intelligence.

Speaker: Surya Ganguli, Stanford University

Tuesday, 02/10/26


Hewlett Teaching Center

370 Jane Stanford Way, Room 201
Stanford University
Stanford, CA 94305
