
Scaling ML/AI workloads with Ray Ecosystem - Livestream

Jules Damji

Modern machine learning (ML) workloads, such as deep learning and large-scale model training, are compute-intensive and require distributed execution. Ray is an open-source, distributed framework from U.C. Berkeley’s RISELab that easily scales Python applications and ML workloads from a laptop to a cluster, with an emphasis on the unique performance challenges of ML/AI systems. It is now used in many production deployments.

This talk will give an overview of Ray, its architecture, core concepts, and primitives such as remote Tasks and Actors; briefly introduce Ray's native libraries (Ray Tune, Ray Train, Ray Serve, Ray Datasets, RLlib); and touch on Ray's growing ecosystem.
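
For a flavor of the primitives the talk covers, here is a minimal sketch of a remote Task and an Actor using Ray's core API (the function and class names are illustrative, not taken from the talk):

```python
import ray

ray.init()  # start Ray locally; on a cluster, pass the cluster address instead

# A remote Task: a stateless function executed asynchronously on the cluster.
@ray.remote
def square(x):
    return x * x

# A remote Actor: a stateful worker process whose methods run remotely.
@ray.remote
class Counter:
    def __init__(self):
        self.count = 0

    def increment(self):
        self.count += 1
        return self.count

# Launch tasks in parallel and fetch results with ray.get().
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]

counter = Counter.remote()
print(ray.get(counter.increment.remote()))  # 1
```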

Through a demo using XGBoost for classification, we will show how you can scale training, hyperparameter tuning, and inference from a single node to a cluster, with a tangible performance difference when using Ray.
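
The demo's exact code is not shown here, but a short sketch along these lines, using the xgboost_ray package, illustrates how distributed XGBoost training looks on Ray (the dataset and parameters are placeholders):

```python
from sklearn.datasets import load_breast_cancer
from xgboost_ray import RayDMatrix, RayParams, train

# A small classification dataset standing in for the demo's data.
data = load_breast_cancer()
dtrain = RayDMatrix(data.data, data.target)

# RayParams controls how training is sharded across Ray actors.
ray_params = RayParams(num_actors=2, cpus_per_actor=1)

booster = train(
    {"objective": "binary:logistic", "eval_metric": "logloss"},
    dtrain,
    num_boost_round=10,
    ray_params=ray_params,
)
booster.save_model("model.xgb")
```

The same script runs unchanged on a laptop or a Ray cluster; only the Ray cluster you connect to and the number of actors change.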

The takeaways from this talk are:

  • Learn Ray's architecture, core concepts, primitives, and patterns
  • Why distributed computing will be the norm, not an exception
  • How to scale your ML workloads with Ray libraries:
      • Training on a single node vs. a Ray cluster, using XGBoost with/without Ray
      • Hyperparameter search and tuning, using XGBoost with Ray Tune (see the sketch after this list)
      • Inference at scale, using XGBoost with/without Ray
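
As a hedged illustration of the tuning step above, a minimal Ray Tune sketch might look like the following; the search space, metric names, and trial counts are placeholders, not the demo's actual configuration:

```python
import xgboost as xgb
from ray import tune
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split


def train_xgboost(config):
    # Each Tune trial trains one XGBoost model with a sampled config.
    data = load_breast_cancer()
    X_train, X_test, y_train, y_test = train_test_split(
        data.data, data.target, test_size=0.25
    )
    dtrain = xgb.DMatrix(X_train, label=y_train)
    dtest = xgb.DMatrix(X_test, label=y_test)
    results = {}
    xgb.train(
        config,
        dtrain,
        num_boost_round=10,
        evals=[(dtest, "eval")],
        evals_result=results,
    )
    # Report the final evaluation log loss back to Tune.
    tune.report(logloss=results["eval"]["logloss"][-1])


analysis = tune.run(
    train_xgboost,
    config={
        "objective": "binary:logistic",
        "eval_metric": "logloss",
        "max_depth": tune.randint(2, 10),
        "eta": tune.loguniform(1e-3, 0.3),
    },
    num_samples=8,
    metric="logloss",
    mode="min",
)
print(analysis.best_config)
```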

Speaker: Jules Damji, Anyscale Inc

Register to receive connection information


Monday, 07/25/22


Cost: Free


SF Bay Association of Computing Machinery


