Chapters:
1. Intro
2. Contrasting features in ViTs vs. CNNs
3. Global vs. local receptive fields
4. Data matters, Mr. Obvious
5. Contrasting receptive fields
6. Data flow through CLS vs. spatial tokens (see the token-flow sketch after this list)
7. Skip connections matter a lot in ViTs
8. Spatial information is preserved in ViTs
9. Feature evolution with the amount of data
10. Outro
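
Chapter 6 contrasts how information moves through the CLS token versus the spatial (patch) tokens. For orientation, here is a minimal sketch of how a ViT builds those tokens from an image; all sizes, weights, and variable names below are illustrative assumptions, not details taken from the video or the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: a 224x224 RGB image, 16x16 patches, 768-dim embeddings.
img = rng.normal(size=(224, 224, 3))
patch, dim = 16, 768
n_patches = (224 // patch) ** 2                      # 196 spatial tokens

# Cut the image into non-overlapping patches and flatten each: (196, 768).
patches = img.reshape(224 // patch, patch, 224 // patch, patch, 3)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(n_patches, -1)

# Linear patch embedding (random weights here; learned in a real ViT).
W = rng.normal(size=(patches.shape[1], dim)) * 0.02
spatial_tokens = patches @ W                         # (196, 768)

# Prepend the CLS token and add positional embeddings (both learned in practice).
cls_token = rng.normal(size=(1, dim)) * 0.02
pos_embed = rng.normal(size=(n_patches + 1, dim)) * 0.02
tokens = np.concatenate([cls_token, spatial_tokens]) + pos_embed

# Every transformer block then updates all 197 tokens jointly via global
# self-attention; classification typically reads out tokens[0], the CLS token.
print(tokens.shape)  # (197, 768)
```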
Description:
Explore a detailed analysis of the paper "Do Vision Transformers See Like Convolutional Neural Networks?" in this 35-minute video. Dive into a dissection of Vision Transformers (ViTs) and ResNets, examining how their learned features differ and which factors drive those differences. Investigate the contrast between global and local receptive fields, the impact of training-data quantity, and the importance of skip connections in ViTs. Gain insight into how spatial information is preserved in ViTs and how features evolve as the amount of training data grows, all through clear explanations and visual intuitions.
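
The feature comparisons described above rest on centered kernel alignment (CKA), the representational-similarity measure the paper uses to compare layers within and across models. Below is a minimal NumPy sketch of linear CKA, assuming activation matrices with examples as rows; the shapes and names in the demo are made up for illustration.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activation matrices.

    X: (n_examples, n_features_x), Y: (n_examples, n_features_y).
    Returns a similarity in [0, 1]; invariant to orthogonal transforms
    and isotropic scaling of the features.
    """
    X = X - X.mean(axis=0, keepdims=True)   # center each feature dimension
    Y = Y - Y.mean(axis=0, keepdims=True)
    num = np.linalg.norm(X.T @ Y, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return float(num / den)

rng = np.random.default_rng(0)
layer_a = rng.normal(size=(1000, 64))  # hypothetical activations from one layer
layer_b = rng.normal(size=(1000, 32))  # hypothetical activations from another
print(linear_cka(layer_a, layer_a))    # 1.0 for identical representations
print(linear_cka(layer_a, layer_b))    # near 0 for independent random features
```

Computing this score for every pair of layers yields the similarity heatmaps the video walks through, where ViTs show far more uniform layer-to-layer similarity than ResNets.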

Do Vision Transformers See Like Convolutional Neural Networks? - Paper Explained

Aleksa Gordić - The AI Epiphany