1. Intro
2. ML collaboration with
3. Success of Deep Learning / AI
4. AI Algorithm & Edge Hardware
5. Typical DNN Accelerators
6. Eyeriss (JSSC 2017)
7. MCM Accelerator (JSSC 2020)
8. Bottleneck of All-Digital DNN HW Energy/Power
9. In-Memory Computing for DNNs
10. Analog IMC for SRAM Column
11. Analog SRAM IMC - Resistive
12. Analog SRAM IMC - Capacitive
13. ADC Optimization for IMC
14. Proposed IMC SRAM Macro Prototypes
15. Going Beyond IMC Macro Design
16. PIMCA: Programmable IMC Accelerator
17. IMC Modeling Framework
18. IMC HW Noise-Aware Training & Inference
19. Black-box Adversarial Input Attack
20. Pruning of Crossbar-based IMC Hardware
21. Acknowledgements
22. Contact Information
Description:
Explore SRAM-based in-memory computing for energy-efficient AI inference in this tinyML talk. Delve into recent silicon demonstrations, innovative memory bitcell circuits, peripheral circuits, and architectures designed to improve upon conventional row-by-row memory operations. Learn about a modeling framework for design parameter optimization and discover how these advancements address limitations in memory access and footprint for low-power AI processors. Gain insights into analog computation inside memory arrays, ADC optimization, programmable IMC accelerators, and noise-aware training and inference techniques. The talk also covers topics such as black-box adversarial input attacks and pruning of crossbar-based IMC hardware, providing a comprehensive overview of cutting-edge developments in energy-efficient AI inference.

TinyML Talks - SRAM Based In-Memory Computing for Energy-Efficient AI Inference

tinyML