Squeezing down the computing requirements of deep neural networks
Deep Neural Networks (DNNs) have enabled breakthrough levels of accuracy on a variety of tasks in vision, audio, and text. However, DNNs can be quite computationally intensive, and highly accurate DNNs often require a full-sized GPU server for real-time inference. To squeeze DNNs into smaller computing footprints, there are a number of techniques, including better DNN design, DNN quantization, better implementations of DNNs, and better utilization of specialized computing hardware. This talk touches on all of these techniques, with particular focus on better DNN design for computer vision. Recently, Neural Architecture Search (NAS) technologies have begun to make significant progress in automating the process of designing "squeezed" DNNs, and we cover some of the latest work on NAS in this talk.
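One of the techniques the abstract mentions is DNN quantization. As a minimal illustrative sketch (not taken from the talk), symmetric 8-bit quantization maps floating-point weights onto small integers with a single per-tensor scale factor, trading a small amount of precision for a 4x reduction in storage versus float32:

```python
def quantize_int8(weights):
    # Symmetric per-tensor quantization: map floats into [-127, 127]
    # using a single scale derived from the largest-magnitude weight.
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Approximate recovery of the original floats.
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.01, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Rounding error is bounded by half the quantization step.
err = max(abs(a - b) for a, b in zip(weights, restored))
```

This toy example illustrates only the storage side of quantization; real deployments also run the arithmetic in integer form on hardware that supports it.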
Speaker: Forrest Iandola, Global Network Engineering
Wednesday, 07/24/19
Cost: Free
