5. starting the core Value object of micrograd and its visualization
6. manual backpropagation example #1: simple expression
7. preview of a single optimization step
8. manual backpropagation example #2: a neuron
9. implementing the backward function for each operation
10. implementing the backward function for a whole expression graph
11. fixing a backprop bug when one node is used multiple times
12. breaking up a tanh, exercising with more operations
13. doing the same thing but in PyTorch: comparison
14. building out a neural net library (multi-layer perceptron) in micrograd
15. creating a tiny dataset, writing the loss function
16. collecting all of the parameters of the neural net
17. doing gradient descent optimization manually, training the network
18. summary of what we learned, how to go towards modern neural nets
19. walkthrough of the full code of micrograd on github
20. real stuff: diving into PyTorch, finding their backward pass for tanh
21. conclusion
22. outtakes :)
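
To make the outline concrete, a few illustrative sketches follow; none of them are the lecture's verbatim code. First, the "manual backpropagation example #1" chapter works the chain rule by hand through a small expression. The input values here are chosen for illustration:

```python
# Manual backpropagation through L = (a*b + c) * f: apply the chain rule
# node by node, from the output L back to each input.
a, b, c, f = 2.0, -3.0, 10.0, -2.0

# forward pass
e = a * b      # e = -6.0
d = e + c      # d = 4.0
L = d * f      # L = -8.0

# backward pass, done entirely by hand
dL_dL = 1.0              # base case: dL/dL = 1
dL_dd = f * dL_dL        # L = d * f  =>  dL/dd = f
dL_df = d * dL_dL        #            =>  dL/df = d
dL_de = 1.0 * dL_dd      # d = e + c: '+' just routes the gradient through
dL_dc = 1.0 * dL_dd
dL_da = b * dL_de        # e = a * b  =>  de/da = b
dL_db = a * dL_de        #            =>  de/db = a

print(dL_da, dL_db, dL_dc, dL_df)  # 6.0 -4.0 -2.0 4.0
```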
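The chapters on the Value object and on the backward functions (per operation, then for a whole expression graph) come together in a class like the one below. This is a minimal sketch modeled on micrograd's design, not the actual library source; it supports just enough operations (+, *, **, tanh) for the later examples:

```python
import math

class Value:
    """A scalar with autograd support -- a minimal sketch modeled on micrograd."""

    def __init__(self, data, _children=(), _op=''):
        self.data = data
        self.grad = 0.0                # dL/d(this node), filled in by backward()
        self._backward = lambda: None  # closure that pushes grad to the children
        self._prev = set(_children)
        self._op = _op

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other), '+')
        def _backward():
            # '+=' (not '=') so gradients accumulate when a node is reused
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other), '*')
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def __pow__(self, k):  # k is a plain int/float exponent
        out = Value(self.data ** k, (self,), f'**{k}')
        def _backward():
            self.grad += k * self.data ** (k - 1) * out.grad
        out._backward = _backward
        return out

    def __neg__(self):
        return self * -1.0

    def __sub__(self, other):
        return self + (-other)  # works for Value (via __neg__) and plain numbers

    def tanh(self):
        t = math.tanh(self.data)
        out = Value(t, (self,), 'tanh')
        def _backward():
            self.grad += (1 - t ** 2) * out.grad  # d/dx tanh(x) = 1 - tanh(x)^2
        out._backward = _backward
        return out

    def backward(self):
        # topological sort so each node's grad is complete before it is used
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for node in reversed(topo):
            node._backward()
```

Running a neuron-style expression through it, in the spirit of "manual backpropagation example #2" (input, weight, and bias values are illustrative):

```python
x1, x2 = Value(2.0), Value(0.0)
w1, w2 = Value(-3.0), Value(1.0)
b = Value(6.8813735870195432)
o = (x1 * w1 + x2 * w2 + b).tanh()
o.backward()
print(x1.grad, w1.grad)  # ~ -1.5 and 1.0
```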
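The "fixing a backprop bug when one node is used multiple times" chapter is the reason for the '+=' in the _backward closures above. A two-line check, assuming the Value sketch above:

```python
a = Value(3.0)
out = a + a        # 'a' feeds into the graph twice
out.backward()
print(a.grad)      # 2.0 -- with '=' instead of '+=' this would wrongly be 1.0
```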
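For the "doing the same thing but in PyTorch" chapter, the same neuron can be expressed with torch tensors, and the gradients should match the micrograd-style result above:

```python
import torch

x1 = torch.tensor(2.0, requires_grad=True)
w1 = torch.tensor(-3.0, requires_grad=True)
x2 = torch.tensor(0.0, requires_grad=True)
w2 = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(6.8813735870195432, requires_grad=True)

o = torch.tanh(x1 * w1 + x2 * w2 + b)
o.backward()
print(x1.grad.item(), w1.grad.item())  # ~ -1.5 and 1.0, matching the sketch above
```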
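The neural-net-library chapters wrap Value into Neuron, Layer, and MLP classes, with the parameters() helpers from the "collecting all of the parameters" chapter. Again a sketch in micrograd's spirit, building on the Value class sketched above rather than the actual repo source:

```python
import random

class Neuron:
    def __init__(self, nin):
        self.w = [Value(random.uniform(-1, 1)) for _ in range(nin)]
        self.b = Value(0.0)

    def __call__(self, x):
        # w . x + b, squashed through tanh
        act = sum((wi * xi for wi, xi in zip(self.w, x)), self.b)
        return act.tanh()

    def parameters(self):
        return self.w + [self.b]

class Layer:
    def __init__(self, nin, nout):
        self.neurons = [Neuron(nin) for _ in range(nout)]

    def __call__(self, x):
        outs = [n(x) for n in self.neurons]
        return outs[0] if len(outs) == 1 else outs

    def parameters(self):
        return [p for n in self.neurons for p in n.parameters()]

class MLP:
    def __init__(self, nin, nouts):
        sizes = [nin] + nouts
        self.layers = [Layer(sizes[i], sizes[i + 1]) for i in range(len(nouts))]

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

    def parameters(self):
        return [p for layer in self.layers for p in layer.parameters()]
```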
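Finally, the dataset, loss-function, and gradient-descent chapters combine into a short training loop. The four-sample dataset, step count, and learning rate here are illustrative placeholders, not necessarily the lecture's exact values:

```python
# tiny binary-classification dataset: 3 inputs -> 1 target each
xs = [[2.0, 3.0, -1.0],
      [3.0, -1.0, 0.5],
      [0.5, 1.0, 1.0],
      [1.0, 1.0, -1.0]]
ys = [1.0, -1.0, -1.0, 1.0]

model = MLP(3, [4, 4, 1])  # 3 inputs, two hidden layers of 4, 1 output

for step in range(20):
    # forward: sum of squared errors over the whole dataset
    ypred = [model(x) for x in xs]
    loss = sum(((yp - yt) ** 2 for yp, yt in zip(ypred, ys)), Value(0.0))

    # backward: zero the grads first (they accumulate!), then backprop
    for p in model.parameters():
        p.grad = 0.0
    loss.backward()

    # update: one manual gradient-descent step
    for p in model.parameters():
        p.data += -0.05 * p.grad

    print(step, loss.data)
```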
Description:
Dive into a comprehensive 2.5-hour video tutorial on neural networks and backpropagation, focused on building micrograd from scratch. Follow step-by-step explanations of key concepts, starting with basic derivatives and progressing to a full neural network library. The instructor demonstrates manual backpropagation, builds the core components, and compares the implementation with PyTorch. Gain practical experience by creating a tiny dataset, writing a loss function, and performing gradient descent optimization. The tutorial concludes with a summary pointing toward modern neural networks and a walkthrough of the complete micrograd codebase on GitHub. Suitable for anyone with basic Python knowledge and a general understanding of calculus.
Intro to Neural Networks and Backpropagation - Building Micrograd