Chapters:
1. Intro
2. The Age of the Intended Machine
3. Basic Solutions
4. Integration
5. Computation
6. Common Practice
7. Level-Based Solution
8. Accuracy Engineering
9. Vertical Integration
10. Chips
11. Architecture
12. Natural Quality Stream
13. Tensor Partitions
14. Dynamic Programming
15. Advanced Neural Network
16. Graph Computing
17. Zero Computation
18. Zero Pruning
19. Israelites
20. Distributed Learning
21. Distributed Mobile Training
22. Clustering
23. Lottery Ticket Hypothesis
24. Deep Learning
25. Delivery Network
26. Privacy
27. Neural Network Research
28. Neural Architecture Search
29. Topology Awareness
30. Predictor
31. DAG
32. Neural Network Design
33. Summary
34. Nonconventional Powers
35. Questions
36. Edge Impulse
Description:
Explore a comprehensive tinyML talk on software-hardware co-design for tiny AI systems. Delve into efficient AI models built through hardware-friendly model compression and topology-aware Neural Architecture Search, optimizing quality-efficiency trade-offs. Learn about cross-layer optimization and efficient distributed learning for fast, scalable AI systems on specialized hardware. Discover improved quality-efficiency trade-offs in adjacent applications such as Electronic Design Automation (EDA) and Adversarial Machine Learning. Gain insights into the future of full-stack tiny AI solutions, covering topics such as intended machines, integration, computation, accuracy engineering, neural networks, distributed learning, privacy, and edge computing. Join Yiran Chen, Chair of ACM SIGDA, as he presents a vision for the future of tiny AI systems in this hour-long exploration of cutting-edge technologies and methodologies.

TinyML Talks - Software-Hardware Co-design for Tiny AI Systems
