Making neural net classifiers more robust and explainable: Lessons from Adversarial AI

As deep neural nets achieve ever greater success, efforts to break them and to understand their failure modes are ramping up as well. Security experts and malicious actors are interested in the weaknesses per se; we scientists are more interested in what those weaknesses teach us about robustness to inputs somewhat different from the training data. I will give examples of attacks and defenses, and discuss a measure of classifier credibility at inference time.
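
The abstract does not name specific attacks, so as a concrete illustration only (not necessarily an example from the talk), here is a minimal sketch of one well-known attack, the fast gradient sign method (FGSM; Goodfellow et al., 2015), in PyTorch. The toy model, image shape, label, and epsilon value are assumptions chosen for illustration.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, label, epsilon):
    """Fast gradient sign method: nudge every input pixel by +/- epsilon
    in the direction that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step along the sign of the loss gradient, then keep pixels in [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy demo with an untrained linear classifier standing in for a real net.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)      # fake 28x28 grayscale "image"
label = torch.tensor([3])         # arbitrary true label
x_adv = fgsm_attack(model, x, label, epsilon=0.1)
print((x_adv - x).abs().max())    # perturbation is bounded by epsilon
```

The point of the sketch is that the perturbation is imperceptibly small (bounded by epsilon per pixel) yet chosen adversarially, which is what makes such inputs a useful probe of how a classifier behaves off its training distribution.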

Speaker: Doug Finkbeiner, Harvard

Room 50-5132

Friday, November 9, 2018

Cost: Free

Lawrence Berkeley National Laboratory

1 Cyclotron Road
Berkeley, CA 94720
USA