
How do models learn? - Livestream

How can we explain how deep neural networks arrive at their decisions? Learned feature representations are complex and opaque to the human eye; instead, a set of interpretability tools approximates what a model has learned by examining which inputs it pays attention to. This talk covers interpretability tools for deep neural networks, how we assess the reliability of explanation tools, and new research directions that aim to automatically surface the subset of examples that are more challenging for a model to learn. Papers that will be discussed: https://arxiv.org/abs/2008.11600, https://arxiv.org/abs/1911.05248
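
The "which inputs does the model pay attention to" idea behind many of these tools can be sketched in a few lines. Below is a minimal illustration of vanilla gradient saliency, assuming a PyTorch image classifier; the model, input tensor, and class index are placeholders for illustration, not code from the papers above.

    import torch

    def gradient_saliency(model, image, target_class):
        # Vanilla gradient saliency: how strongly does each input pixel
        # influence the model's score for the target class?
        model.eval()
        image = image.clone().requires_grad_(True)   # track gradients w.r.t. pixels
        score = model(image.unsqueeze(0))[0, target_class]
        score.backward()                             # d(score) / d(pixels)
        # Collapse color channels: one importance value per pixel location.
        return image.grad.abs().max(dim=0).values

Pixels with large gradient magnitude are the inputs the model "pays attention to" for that prediction; whether such maps are actually reliable explanations is one of the themes of the talk.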

Agenda:

4:50 pm - 5:00 pm Arrival and socializing
5:00 pm - 5:10 pm Opening
5:10 pm - 6:50 pm Sara Hooker, "How do models learn?"
6:50 pm - 7:00 pm Q&A

About Sara Hooker:

Sara Hooker is a researcher at Google Brain.

Zoom Webinar ID: 870-8971-1972

Wednesday, 09/30/20

Contact:

Enes

Phone: +1 (408) 475-4348

Cost:

Free

Magnimind Academy, CA