People and Robots Seminar
Although wildly successful, deep learning systems are also extremely brittle, as evidenced, for example, by the widespread possibility of adversarial attacks: specially crafted inputs meant to fool deep classifiers. This talk will discuss our recent work on developing deep classifiers that are provably robust to (certain classes of) perturbation attacks. Our methods work by considering a convex relaxation of the "adversarial polytope," the set of last-layer activations achievable under some norm-bounded perturbation of the input, and using this relaxation to derive very efficient methods for computing (and then minimizing) upper bounds on the adversarial loss that can be suffered under such attacks. The method leads to some of the largest verified networks of which we are currently aware, including a convolutional MNIST classifier with a provable bound of 3.7% error under L_infinity perturbations of size epsilon=0.1. I'll relate our work to similar ongoing directions and discuss the main challenges it faces: scaling to significantly larger networks, e.g. at ImageNet scale, and better characterizing the "correct" set of perturbations we would like to be robust to. I'll also connect the work to efforts to prove properties about deep networks in other settings, such as control and general verification.
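For context on what a certified bound of this kind looks like, the sketch below uses simple interval arithmetic (a looser relaxation than the convex adversarial polytope the talk describes, not the speaker's actual method) to propagate an L_infinity ball of radius epsilon through a small ReLU network and bound every output logit. All network weights, sizes, and the example input here are hypothetical.

```python
import numpy as np

def interval_bounds(x, eps, weights, biases):
    """Propagate the L_infinity ball [x - eps, x + eps] through a ReLU net,
    returning elementwise lower/upper bounds on the output logits."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(zip(weights, biases)):
        # Split W into positive and negative parts so the affine layer's
        # interval image is exact: positive weights map lo -> lo and hi -> hi,
        # negative weights swap them.
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        lo, hi = (W_pos @ lo + W_neg @ hi + b,
                  W_pos @ hi + W_neg @ lo + b)
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi

# Hypothetical 2-layer network: 4-dimensional input, 8 hidden units, 3 classes.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((3, 8))]
biases = [np.zeros(8), np.zeros(3)]

x, eps, true_class = rng.standard_normal(4), 0.1, 0
lo, hi = interval_bounds(x, eps, weights, biases)

# The prediction is certified if the true logit's lower bound beats every
# other logit's upper bound over the entire epsilon-ball.
margin = lo[true_class] - np.max(np.delete(hi, true_class))
print(f"certified: {margin > 0} (worst-case margin >= {margin:.3f})")
```

If the lower bound on the true class's logit exceeds the upper bound on every other logit, no perturbation within the epsilon-ball can change the prediction; the approach described in the talk computes tighter bounds of this form and trains the network to minimize the resulting upper bound on the adversarial loss.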
Speaker: Zico Kolter, Carnegie Mellon University
This event was originally scheduled for November 5.
Monday, 10/01/18
Cost: Free
