Chapters:
1. Intro & Outline
2. What is JAX
3. Speed comparison
4. Drop-in Replacement for NumPy
5. jit: just-in-time compiler
6. Limitations of JIT
7. grad: Automatic Gradients
8. vmap: Automatic Vectorization
9. pmap: Automatic Parallelization
10. Example Training Loop
11. What's the catch?
Description:
Dive into a comprehensive 27-minute crash course on JAX, exploring its capabilities as a NumPy-compatible library for accelerating machine learning code on CPU, GPU, and TPU. Learn about JAX's key features, including its drop-in replacement for NumPy, just-in-time compilation with jit(), automatic gradient computation using grad(), vectorization with vmap(), and parallelization through pmap(). Discover how JAX compares to other libraries in terms of speed and examine its limitations. Follow along with practical examples, including an implementation of a training loop, and gain insights into when and why to use JAX for high-performance machine learning research.
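The transformations listed above compose freely, which is JAX's central idea. A minimal sketch of jit(), grad(), and vmap() on a toy function (the function and values below are illustrative, not taken from the video; assumes `jax` is installed):

```python
import jax
import jax.numpy as jnp  # drop-in replacement for NumPy

def loss(w, x):
    # toy quadratic "loss", used only to demonstrate the transformations
    return jnp.sum((w * x) ** 2)

grad_loss = jax.grad(loss)                    # grad(): gradient w.r.t. first argument w
fast_loss = jax.jit(loss)                     # jit(): just-in-time compile via XLA
batched = jax.vmap(loss, in_axes=(None, 0))   # vmap(): vectorize over a batch of x

w = jnp.array([1.0, 2.0])
x = jnp.array([3.0, 4.0])
print(fast_loss(w, x))   # same value as loss(w, x), but compiled
print(grad_loss(w, x))   # elementwise 2 * w * x**2
print(batched(w, jnp.stack([x, 2 * x])))  # one loss value per batch row
```

`pmap()` has the same calling convention as `vmap()` but maps the batch dimension over multiple devices (e.g. TPU cores) instead of vectorizing on one device.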

JAX Crash Course - Accelerating Machine Learning Code

AssemblyAI