Chapters:
1. Intro
2. Today's AI is Too Big
3. Deep Compression
4. Pruning & Sparsity
5. Once-for-All Network: Roofline Analysis
6. OFA Designs Light-weight Models, Bringing AI to Mobile Devices
7. NAAS: Neural Accelerator Architecture Search
8. Application-Specific Optimizations
9. TinyML for Video Recognition
10. TinyML for Point Cloud & LiDAR Processing
11. SpAtten: Sparse Attention Accelerator
12. TinyML for Natural Language Processing
13. Tiny Transfer Learning
Description:
Explore cutting-edge techniques for efficient deep learning and TinyML in this plenary talk from tinyML Asia 2021. Discover how to put AI on a diet as MIT EECS Assistant Professor Song Han presents innovative approaches to model compression, neural architecture search, and new design primitives. Learn about MCUNet, which enables ImageNet-scale inference on microcontrollers with only 1MB of Flash, and the Once-for-All Network, an elastic neural architecture search method adaptable to various hardware constraints. Gain insights into advanced primitives for video understanding and point cloud recognition, including award-winning solutions from low-power computer vision challenges. Understand how these TinyML techniques can make AI greener, faster, and more accessible, addressing the global silicon shortage and enabling practical deployment of AI applications across various domains.

Putting AI on a Diet: TinyML and Efficient Deep Learning
