
AI Inference with Intel® FPGA AI Suite - Livestream

Kevin Drake

Intel® FPGAs enable real-time, low-latency, low-power deep learning inference, along with the following advantages:

  • I/O flexibility
  • Reconfiguration
  • Ease of integration into custom platforms
  • Long lifetime

The Intel® FPGA AI Suite was developed with the goal of making artificial intelligence (AI) inference on Intel® FPGAs easy to use. The suite enables FPGA designers, machine learning engineers, and software developers to create optimized FPGA AI platforms efficiently.

Utilities in the Intel FPGA AI Suite speed up FPGA development for AI inference using familiar, popular industry frameworks such as TensorFlow* or PyTorch* together with the OpenVINO toolkit, while also leveraging robust, proven FPGA development flows with the Intel® Quartus® Prime software. The Intel FPGA AI Suite tool flow works with the OpenVINO toolkit, an open-source project for optimizing inference across hardware architectures. The OpenVINO toolkit takes deep learning models from the major deep learning frameworks (such as TensorFlow, PyTorch, and Keras*) and optimizes them for inference on a variety of targets, including CPUs, CPU+GPU combinations, and FPGAs.
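
To give a sense of that flow, below is a minimal sketch of running inference through the OpenVINO Python runtime on a model that has already been converted to OpenVINO IR. The model path, input shape, and output handling are placeholders, and "CPU" is used as a generic stand-in device; targeting an FPGA would require the device plugin configured by the Intel FPGA AI Suite.

    # Minimal OpenVINO runtime sketch. Assumptions: an IR file "model.xml"
    # exists and accepts a 1x3x224x224 float32 input (hypothetical values).
    import numpy as np
    from openvino.runtime import Core

    core = Core()

    # Load a model previously converted to OpenVINO IR.
    model = core.read_model("model.xml")  # hypothetical path

    # Compile for a target device. "CPU" is a stand-in; an FPGA target
    # would use the device name provided by the Intel FPGA AI Suite plugin.
    compiled = core.compile_model(model, "CPU")

    # Run inference on a dummy input tensor.
    input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
    results = compiled([input_tensor])

    # Results are keyed by the model's output nodes.
    output = results[compiled.output(0)]
    print(output.shape)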

Speaker: Kevin Drake, Intel

Watch the lecture here

Thursday, 09/15/22

Cost:

Free

Sonoma State Engineering Colloquium

