1. Intro
2. Next tinyML Talks
3. Computing Hardware Has Been in Every Corner
4. Today's New Challenges
5. One Network Cannot Work for All Platforms
6. Datasets/Applications, Hardware, and Neural Networks
7. Outline of Talk
8. AutoML: Neural Architecture Search (NAS)
9. AutoML: Differentiable Architecture Search
10. AutoML: Hardware-Aware NAS
11. AutoML: Network-FPGA Co-Design Using NAS
12. Two Paths from Cloud to Tiny ML
13. Motivation: Template Pool
14. Motivation: Heterogeneous ASICs
15. Problem Statement
16. ASICNAS Framework
17. ASICNAS: Controller and Selector
18. ASICNAS: Evaluator
19. Results: Design Space Exploration
20. Comparison Results on Multi-Dataset Workloads
21. Future Work: Network-CIM Co-Design to Resolve Memory Bottleneck
22. Conclusion: Take Away (1)
23. Arm: The Software and Hardware Foundation for tinyML
24. TinyML for All Developers
25. Qeexo AutoML for Embedded AI
Description:
Explore a novel machine learning-driven hardware and software co-exploration framework for designing energy-efficient AI accelerators for edge devices in this tinyML Talk. Delve into Dr. Weiwen Jiang's presentation on overcoming the challenge of automating the design of hardware accelerators for neural networks. Learn how this framework simultaneously explores both the architecture search space and the hardware design space to identify optimal neural architecture and hardware pairs, maximizing accuracy and hardware efficiency. Discover how this approach significantly advances the Pareto frontier between hardware efficiency and model accuracy, enabling better design tradeoffs and faster time to market for flexible accelerators designed from the ground up. Gain insights into the importance of this practice for running machine learning on resource-constrained edge devices and its potential to revolutionize the field of tiny machine learning.
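The description above outlines a co-exploration loop: sample candidate (neural architecture, hardware design) pairs, evaluate both accuracy and hardware efficiency, and keep the non-dominated pairs that form the Pareto frontier. A minimal, hypothetical sketch of that idea is below; the design-space choices, the accuracy/efficiency proxy functions, and random sampling are all simplified stand-ins (the actual framework uses a learned controller and real hardware evaluators), but the Pareto-filtering structure matches the approach described.

```python
import random

# Hypothetical sketch of hardware/software co-exploration: jointly sample
# (network, hardware) pairs and keep the Pareto frontier over
# (accuracy, efficiency). Both objectives here are toy stand-ins for a
# trained accuracy predictor and a hardware cost model.

ARCH_CHOICES = {"depth": [4, 8, 12], "width": [16, 32, 64]}
HW_CHOICES = {"pe_array": [8, 16, 32], "buffer_kb": [64, 128, 256]}

def sample_pair(rng):
    """Draw one point from the joint architecture x hardware search space."""
    arch = {k: rng.choice(v) for k, v in ARCH_CHOICES.items()}
    hw = {k: rng.choice(v) for k, v in HW_CHOICES.items()}
    return arch, hw

def evaluate(arch, hw):
    """Toy proxies: bigger networks score higher accuracy but cost more
    energy, especially on small hardware configurations."""
    accuracy = 0.5 + 0.004 * arch["depth"] + 0.003 * arch["width"]
    compute_cost = arch["depth"] * arch["width"] / (hw["pe_array"] * 4.0)
    energy = compute_cost + hw["buffer_kb"] / 256.0
    efficiency = 1.0 / energy
    return accuracy, efficiency

def dominates(a, b):
    """True if point a is at least as good as b on both objectives
    and strictly better on at least one."""
    return a[0] >= b[0] and a[1] >= b[1] and a != b

def co_explore(n_samples=200, seed=0):
    """Random co-exploration; returns the non-dominated (score, arch, hw)
    triples, i.e. the discovered Pareto frontier."""
    rng = random.Random(seed)
    points = []
    for _ in range(n_samples):
        arch, hw = sample_pair(rng)
        points.append((evaluate(arch, hw), arch, hw))
    return [p for p in points
            if not any(dominates(q[0], p[0]) for q in points)]
```

A smarter search (reinforcement-learning controller, differentiable relaxation) replaces the random sampling, but the output is the same kind of object: a frontier of architecture/hardware pairs from which a designer picks the desired accuracy/efficiency trade-off.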

Using AI to Design Energy-Efficient AI Accelerators for the Edge

tinyML