1. intro
2. clone Torch-TensorRT
3. install and set up Docker
4. install NVIDIA Container Toolkit & NVIDIA Docker 2
5. Torch-TensorRT container, option #1
6. Torch-TensorRT NVIDIA NGC container, option #2
7. import PyTorch
8. load ResNet50
9. load sample image
10. sample image transforms
11. batch size
12. prediction with ResNet50
13. softmax function
14. ImageNet class number to name mapping
15. predict top 5 classes of sample image (topk)
16. speed test benchmark function
17. CPU benchmarks
18. CUDA benchmarks
19. trace model
20. convert traced model into a Torch-TensorRT model
21. TensorRT benchmarks
22. download Jupyter Notebook
23. HOW DID I MISS THIS???
24. thanks for watching!
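Chapters 13 and 15 turn the model's raw logits into class probabilities and pick the five most likely ImageNet classes. A minimal pure-Python sketch of those two steps (the video uses torch's built-in `softmax` and `topk`; this dependency-free version just illustrates the math):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def topk(probs, k=5):
    # Return the k highest-probability (class_index, probability) pairs.
    ranked = sorted(enumerate(probs), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# Hypothetical logits for a 5-class example; real ResNet50 output has 1000.
logits = [2.0, 1.0, 0.1, 3.5, -1.0]
probs = softmax(logits)
top3 = topk(probs, k=3)
print(top3)
```

In the video, the same top-k indices are then mapped to human-readable labels via the ImageNet class-number-to-name file (chapter 14).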
Description:
Explore deep learning prediction using Torch-TensorRT in this comprehensive tutorial video. Learn to accelerate inference speed by comparing CPU, CUDA, and TensorRT implementations. Set up the development environment with Docker and Nvidia tools, then dive into using PyTorch to load and utilize the ResNet50 neural network for image classification. Discover techniques for image preprocessing, batch processing, and interpreting model predictions. Implement and analyze benchmarks to compare performance across different hardware configurations. Follow along to trace models, convert to TensorRT, and optimize inference speed. Gain practical insights into deep learning deployment and performance optimization for beginners and intermediate practitioners alike.
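The CPU/CUDA/TensorRT comparison rests on a reusable speed-test function (chapter 16). A stdlib-only sketch of that pattern, with a hypothetical `benchmark` helper standing in for the video's version (there the callable is a model forward pass, and `torch.cuda.synchronize()` brackets the timed region for CUDA and TensorRT models):

```python
import time

def benchmark(fn, warmup=3, runs=10):
    # Warm up first so one-time costs (caching, lazy init) don't skew timing.
    for _ in range(warmup):
        fn()
    # Time the steady-state runs and return average seconds per call.
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Example: time an arbitrary workload the same way you would a model call.
avg = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"{avg * 1e3:.3f} ms per run")
```

Running the same helper against the CPU model, the CUDA model, and the traced-and-compiled Torch-TensorRT model gives the side-by-side numbers shown in chapters 17, 18, and 21.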

Inference with Torch-TensorRT Deep Learning Prediction for Beginners - CPU vs CUDA vs TensorRT

Python Simplified