PURPOSE-BUILT PRE-TRAINED NETWORKS - Highly Accurate, Re-Trainable, Out-of-the-Box Deployment
QUANTIZATION-AWARE TRAINING - Maintain Comparable Performance & Speed Up Inference Using INT8 Precision
AUTOMATIC MIXED PRECISION (AMP) - Train with Half Precision While Maintaining the Same Network Accuracy as Single Precision
INSTANCE SEGMENTATION - MASK R-CNN
PEOPLENET
FACE MASK DETECTION
TRAINING WORKFLOW
CONVERT TO KITTI
TLT SPEC FILES
PREPARE THE DATASET
TRAIN - PRUNE - EVALUATE
TRAINING SPEC - DATASET AND MODEL
EVALUATION SPEC
TRAINING & EVALUATION
MODEL PRUNING
RE-TRAIN & EVALUATE
TRAINING KPI
QUANTIZATION & EXPORT
INFERENCE SPEC
DEPLOY USING DEEPSTREAM
SUMMARY
Description:
Discover how to accelerate the creation of vision AI models using NVIDIA Transfer Learning Toolkit and pre-trained models in this 47-minute video. Explore the wide adoption of vision AI across industries and learn how to overcome the challenges of training accurate and performant deep learning models. Dive into the Transfer Learning Toolkit, a simplified AI toolkit for developers to train models without coding. Examine pre-trained models available on NGC and learn how to fine-tune them for specific use cases. Explore techniques like model pruning and INT8 quantization to build high-performance models for inference, reducing development effort and speeding time to market. Cover topics such as quantization-aware training, automatic mixed precision, instance segmentation, and specific models like PeopleNet and Face Mask Detection. Walk through the training workflow, including data preparation, model specification, training, evaluation, and deployment using DeepStream.
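The workflow in the description is driven by plain-text spec files rather than code. As a rough sketch of what such a training spec can look like, here is a minimal DetectNet_v2-style fragment combining the dataset and model sections the outline covers; all paths, class names, and values are illustrative assumptions, not taken from the video:

```protobuf
# Illustrative TLT training spec fragment (hypothetical paths and values)
dataset_config {
  data_sources {
    tfrecords_path: "/workspace/tfrecords/kitti_trainval/*"   # from the KITTI-converted dataset
    image_directory_path: "/workspace/data/training"
  }
  target_class_mapping {
    key: "car"
    value: "car"
  }
}
model_config {
  arch: "resnet"        # pre-trained backbone to fine-tune
  num_layers: 18
}
training_config {
  batch_size_per_gpu: 4
  num_epochs: 120
  enable_qat: true      # quantization-aware training for INT8 export
}
```

With a spec like this, training, pruning, re-training, evaluation, and export are each invoked as separate toolkit commands that read the relevant spec sections, which is why the outline devotes individual steps to the training spec, evaluation spec, and inference spec.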
Accelerating Vision AI Applications Using NVIDIA Transfer Learning Toolkit and Pre-Trained Models