
From Generative Models to Control: Representation-Based Reinforcement Learning in Physical Systems

Na Li

The explosive growth of machine learning and data-driven methodologies has revolutionized numerous fields. Yet, translating these successes to dynamical physical systems remains a significant challenge, hindered by the complexity, uncertainty, and safety-critical nature of such environments. In this talk, we present a unified framework that bridges this gap by introducing novel generative representations for reinforcement learning and control. On the critic side, we develop a structured representation of system dynamics that focuses on modeling how actions influence future state distributions. This transition-based perspective enables the design of nonlinear stochastic control and reinforcement learning algorithms that are efficient, safe, robust, and scalable, with provable guarantees. On the actor side, we represent stochastic feedback policies using diffusion-based generative models, treating control as a generative process. This approach leads to new methods for policy optimization, while providing a flexible and expressive framework for decision-making in dynamical systems. We further demonstrate how these representations help close the sim-to-real gap, improve data efficiency in imitation learning, and enable scalable computation of localized policies for large-scale nonlinear networked systems, with applications including robotics and energy systems.

Speaker: Na Li, Harvard University

Thursday, 04/23/26


Cost: Free


Environment and Energy Building (Y2E2), Room 292A
Stanford University
Stanford, CA 94305