Chapters:
1. Introduction
2. Agenda
3. Hardware detector
4. Streaming
5. Subclass API
6. Edge cases
7. Quantization
8. Post-training Quantization
9. Fake Quantization
10. Native Quantization
11. Observations
12. Local Self-attention
13. Multi-Head Self-attention
14. Real Model
15. Models
Description:
Explore on-device speech model optimization and deployment in this tinyML Summit 2022 presentation. Dive into the challenges of real-time execution on mobile hardware, focusing on latency and memory footprint constraints. Learn about streaming-aware model design using functional and subclass TensorFlow APIs, and discover various quantization techniques including post-training quantization and quantization-aware training. Compare the pros and cons of different approaches and understand selection criteria based on specific ML problems. Examine benchmarks of popular speech processing model topologies, including residual convolutional and transformer neural networks, as demonstrated on mobile devices. Gain insights into local self-attention, multi-head self-attention, and real-world model implementations to enhance your understanding of efficient on-device speech processing.
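The "fake quantization" used in quantization-aware training can be sketched as a quantize-dequantize step: values are rounded to a uniform integer grid and mapped back to float, so the model sees quantization error during training. The following is a minimal pure-Python illustration; the function name, 8-bit default, and fixed clipping range are assumptions for the sketch, not code from the presentation.

```python
def fake_quantize(x, num_bits=8, x_min=-1.0, x_max=1.0):
    """Simulate uniform quantization (quantize-dequantize):
    clip to [x_min, x_max], round to the nearest of 2**num_bits - 1
    grid steps, then map back to a float value."""
    levels = 2 ** num_bits - 1
    scale = (x_max - x_min) / levels          # step size of the integer grid
    x_clipped = min(max(x, x_min), x_max)     # saturate out-of-range values
    q = round((x_clipped - x_min) / scale)    # integer index on the grid
    return x_min + q * scale                  # dequantized float value

# The rounding error is bounded by half a quantization step:
y = fake_quantize(0.123)
assert abs(y - 0.123) <= (2.0 / 255) / 2
```

In quantization-aware training this operation is applied in the forward pass while gradients flow through it unchanged (the straight-through estimator), so the weights adapt to the precision loss before deployment.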

On-Device Speech Models Optimization and Deployment for Mobile Hardware
