
People and Robots Seminar

Before learning robots can be deployed in the real world, it is critical to be able to make probabilistic guarantees about the safety and performance of such systems. In recent years, so-called "high-confidence" reinforcement learning algorithms have enjoyed success in application areas with high-quality models and plentiful data, but robotics remains a challenging domain for scaling up such approaches. Furthermore, very little work has been done on the even more difficult problem of safe imitation learning, in which the demonstrator's reward function is not known. This talk focuses on new developments in three key areas for scaling safe learning to robotics: (1) a theory of safe imitation learning; (2) scalable inverse reinforcement learning in the absence of models; and (3) efficient policy evaluation. The proposed algorithms offer a blend of safety and practicality, taking a significant step toward high-confidence robot learning with modest amounts of real-world data.

Speaker: Scott Niekum, The University of Texas at Austin

This event was originally scheduled for March 11.

Monday, 04/15/19

Cost: Free

Sutardja Dai Hall, Room 250
UC Berkeley
Berkeley, CA 94720