
Machine Learning: Why Do Simple Algorithms Work So Well?

While state-of-the-art machine learning models are deep, large-scale, sequential, and highly nonconvex, the backbone of modern learning remains a handful of simple algorithms such as stochastic gradient descent or, for reinforcement learning tasks, Q-learning. A basic question endures: why do simple algorithms work so well even in these challenging settings?

This talk focuses on two fundamental problems: (1) in nonconvex optimization, can gradient descent escape saddle points efficiently? (2) in reinforcement learning, is Q-learning sample-efficient? We will provide the first line of provably positive answers to both questions. In particular, we will show that simple modifications to these classical algorithms guarantee significantly better properties, which helps explain their favorable performance in practice.
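To give a concrete flavor of what such "simple modifications" can look like, here is a minimal sketch, in the spirit of the speaker's published work on escaping saddle points: plain gradient descent that adds a small random perturbation whenever the gradient is tiny. All step sizes, thresholds, and the toy objective below are illustrative assumptions, not the theoretically prescribed quantities.

```python
import numpy as np

def perturbed_gradient_descent(grad, x0, eta=0.01, radius=0.1,
                               g_thresh=1e-3, t_noise=20, n_iters=1000, seed=0):
    """Gradient descent plus occasional random perturbations.
    When the gradient is tiny (a potential saddle point) and no kick
    was added recently, sample a point uniformly from a small ball and
    add it to the iterate; a strict saddle has a direction of negative
    curvature, so subsequent gradient steps can then escape."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    last_kick = -t_noise
    for t in range(n_iters):
        if np.linalg.norm(grad(x)) <= g_thresh and t - last_kick >= t_noise:
            d = rng.normal(size=x.shape)                        # random direction
            d *= radius * rng.random() ** (1 / x.size) / np.linalg.norm(d)
            x = x + d                                           # uniform kick in a ball
            last_kick = t
        x = x - eta * grad(x)                                   # plain gradient step
    return x

# Toy check: f(x, y) = x**4/4 - x**2/2 + y**2 has a strict saddle at the
# origin and minima at (+1, 0) and (-1, 0). Plain GD started at the origin
# stays stuck there; the perturbed variant escapes toward a minimum.
grad_f = lambda v: np.array([v[0]**3 - v[0], 2.0 * v[1]])
print(perturbed_gradient_descent(grad_f, np.zeros(2)))  # roughly (+1, 0) or (-1, 0)
```

For the second question, a correspondingly hedged sketch of episodic Q-learning with optimistic initialization and a UCB-style exploration bonus; the `env.reset()`/`env.step()` interface and all constants are assumptions for illustration, not a specific library's API.

```python
import numpy as np

def q_learning_ucb(env, n_states, n_actions, horizon, n_episodes, c=1.0):
    """Episodic Q-learning with optimism and a UCB-style bonus.
    Assumed interface: env.reset() returns an integer state;
    env.step(a) returns (next_state, reward, done)."""
    H = horizon
    Q = np.full((H, n_states, n_actions), float(H))   # optimistic initialization
    N = np.zeros((H, n_states, n_actions))            # visit counts per (h, s, a)
    iota = np.log(max(n_episodes, 2))                 # stand-in log factor
    for _ in range(n_episodes):
        s = env.reset()
        for h in range(H):
            a = int(np.argmax(Q[h, s]))               # greedy on optimistic Q
            s2, r, done = env.step(a)
            N[h, s, a] += 1
            t = N[h, s, a]
            alpha = (H + 1) / (H + t)                 # rescaled learning rate
            bonus = c * np.sqrt(H**3 * iota / t)      # exploration bonus
            v_next = 0.0 if (done or h + 1 == H) else min(H, Q[h + 1, s2].max())
            Q[h, s, a] = (1 - alpha) * Q[h, s, a] + alpha * (r + v_next + bonus)
            if done:
                break
            s = s2
    return Q
```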

Speaker: Chi Jin, UC Berkeley

Wednesday, May 8, 2019


Cost: Free

Location:

Cory Hall, Room 540 A/B
UC Berkeley
Berkeley, CA 94720
