Squeezing down the computing requirements of deep neural networks

Forrest Iandola

Deep Neural Networks (DNNs) have enabled breakthrough levels of accuracy on a variety of tasks in vision, audio, and text. However, DNNs can be quite computationally intensive, and highly accurate DNNs often require a full-sized GPU server for real-time inference. To squeeze DNNs into smaller computing footprints, there are a number of techniques, including better DNN design, DNN quantization, better implementations of DNNs, and better utilization of specialized computing hardware. This talk touches on all of these techniques, with a particular focus on better DNN design for computer vision. Recently, Neural Architecture Search (NAS) technologies have begun to make significant progress in automating the process of designing “squeezed” DNNs, and we cover some of the latest work on NAS in this talk.
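For readers unfamiliar with DNN quantization, one of the techniques named above, the sketch below shows post-training dynamic quantization in PyTorch. The toy model, layer sizes, and layer choices are illustrative assumptions only, not material from the talk.

import torch
import torch.nn as nn

# Toy classifier head; the layer sizes are arbitrary placeholders.
model = nn.Sequential(
    nn.Linear(512, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)
model.eval()

# Post-training dynamic quantization: weights of the Linear layers are
# stored as int8, shrinking the model and speeding up CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    y = quantized(x)  # same interface as the original model

Dynamic quantization of this kind trades a small amount of accuracy for roughly 4x smaller weight storage on the quantized layers.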

Speaker: Forrest Iandola, Global Network Engineering

Wednesday, 07/24/19

Cost: Free

IEEE

673 S Milpitas Blvd
Milpitas, CA 95035
