As deep neural nets achieve ever greater successes, efforts to break them and learn about their failure modes are also ramping up. Security experts and malicious actors are interested in the weaknesses themselves; as scientists, we are more interested in what those weaknesses can teach us about robustness to inputs that differ somewhat from the training data. I will give examples of attacks and defenses, and discuss a measure of credibility at inference time.
Speaker: Doug Finkbeiner, Harvard
Location: Berkeley, CA 94720