1. Introduction
2. Welcome
3. Challenges
4. Qualcomm's research
5. AIMET overview
6. AIMET features
7. AIMET quantization library
8. GitHub
9. Snapchat
10. Quantization performance
11. Q&A
12. RNN support
13. Presentation
14. Why quantize
15. Quantization model
16. Post-training techniques
17. Questions
18. Data Free Quantization
19. Cross Layer Equalization
20. Results
Description:
Explore advanced network quantization and compression techniques in this comprehensive tutorial from the tinyML Summit 2021. Dive into the challenges of implementing AI on end devices with limited power and thermal budgets. Learn about Qualcomm's research in novel quantization and compression methods to overcome these obstacles. Discover how to implement these techniques using the AI Model Efficiency Toolkit (AIMET). Gain insights into existing challenges, Qualcomm's innovative solutions, and the practical application of AIMET features. Understand the importance of quantization, various quantization models, and post-training techniques. Explore concepts such as Data Free Quantization and Cross Layer Equalization, and examine their performance results. Perfect for developers and researchers looking to optimize AI models for resource-constrained environments.
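The description points to AIMET's post-training workflow; the minimal sketch below (not taken from the talk itself) shows roughly how Cross Layer Equalization and quantization simulation are invoked through the aimet_torch package. Class and function signatures differ between AIMET releases, so the exact arguments here are assumptions rather than a definitive recipe.

```python
# Sketch of an AIMET post-training flow on a PyTorch model:
# cross-layer equalization followed by INT8 quantization simulation.
# Argument names/order vary across AIMET versions; treat them as illustrative.
import torch
from torchvision.models import resnet18

from aimet_torch.cross_layer_equalization import equalize_model
from aimet_torch.quantsim import QuantizationSimModel

model = resnet18().eval()
input_shape = (1, 3, 224, 224)
dummy_input = torch.randn(input_shape)

# Cross Layer Equalization: rescales weights of adjacent layers so that
# per-tensor quantization loses less accuracy (one piece of Data Free Quantization).
equalize_model(model, input_shape)

# Quantization simulation: inserts fake-quantization ops to mimic 8-bit inference.
sim = QuantizationSimModel(model, dummy_input=dummy_input,
                           default_param_bw=8, default_output_bw=8)

# Calibrate quantizer encodings (ranges) with a small forward-pass callback.
def forward_pass(m, _):
    with torch.no_grad():
        m(dummy_input)

sim.compute_encodings(forward_pass, None)

# sim.model can now be evaluated to estimate the quantized model's accuracy.
```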

Advanced Network Quantization and Compression Using AIMET - tinyML Summit 2021

tinyML