1. Introduction
2. Evolution of Vision Architectures
3. Hierarchy of Swin vs. CNNs
4. Modernizing ConvNets
5. Modernizing ResNet
6. Macro Design Changes
7. Changing the Stage Compute Ratio
8. Changing the Stem to "Patchify"
9. Depthwise Convolution vs. Self-Attention
10. Improvements
11. Inverted Bottleneck
12. Larger Kernel Sizes
13. Micro Designs (mD)
14. Replacing ReLU with GELU
15. Fewer Activation Functions
16. Fewer Normalization Layers
17. Substituting BN with LN
18. Visualization
19. mD4 - Improvement
20. Separate Downsampling Layer
21. Final ConvNeXt Block
22. Networks for Evaluation
23. Training Settings
24. Machine Performance Comparison
Description:
Explore the evolution of vision architectures and the modernization of convolutional neural networks in this 33-minute lecture from the University of Central Florida. Delve into the hierarchy of Swin Transformers versus CNNs, macro design changes to ResNet, and improvements such as inverted bottlenecks and larger kernel sizes. Examine micro designs, including activation function replacements and normalization layer adjustments. Learn about the final ConvNeXt block and network evaluation techniques, and compare machine performance across different architectures.
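
To make the listed design steps concrete, below is a minimal sketch in PyTorch of a "patchify" stem and a single ConvNeXt-style block (7x7 depthwise convolution, LayerNorm, inverted bottleneck with a single GELU). This is an illustrative assumption of how the pieces fit together, not the lecture's or the paper's reference code; the class names and dimensions are my own choices.

```python
# Illustrative sketch only; names (PatchifyStem, ConvNeXtBlock) are assumptions,
# not the reference implementation discussed in the lecture.
import torch
import torch.nn as nn


class PatchifyStem(nn.Module):
    """'Patchify' stem: a non-overlapping 4x4, stride-4 convolution, like a ViT patch embedding."""
    def __init__(self, in_channels: int = 3, dim: int = 96):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, dim, kernel_size=4, stride=4)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.proj(x)               # (N, C, H/4, W/4)
        x = x.permute(0, 2, 3, 1)      # channels-last so LayerNorm acts per position
        x = self.norm(x)
        return x.permute(0, 3, 1, 2)   # back to channels-first


class ConvNeXtBlock(nn.Module):
    """One block: 7x7 depthwise conv -> LayerNorm -> inverted-bottleneck MLP with GELU."""
    def __init__(self, dim: int = 96, expansion: int = 4):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)  # large-kernel depthwise conv
        self.norm = nn.LayerNorm(dim)                   # single LN instead of several BNs
        self.pwconv1 = nn.Linear(dim, expansion * dim)  # inverted bottleneck: expand channels
        self.act = nn.GELU()                            # single GELU instead of several ReLUs
        self.pwconv2 = nn.Linear(expansion * dim, dim)  # project back down

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)      # (N, C, H, W) -> (N, H, W, C) for LN / Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        x = x.permute(0, 3, 1, 2)      # back to (N, C, H, W)
        return residual + x


if __name__ == "__main__":
    stem = PatchifyStem()
    block = ConvNeXtBlock(dim=96)
    img = torch.randn(1, 3, 224, 224)
    out = block(stem(img))
    print(out.shape)                   # torch.Size([1, 96, 56, 56])
```

In a full network, stages of such blocks would be separated by dedicated downsampling layers (a LayerNorm followed by a 2x2, stride-2 convolution), corresponding to the "Separate Downsampling Layer" item in the outline above.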

Computer Vision Architecture Evolution: ConvNets to Transformers - Lecture 21

University of Central Florida