1. Intro
2. Outline
3. Why on-device?
4. TensorFlow vs. TensorFlow Lite
5. Why not TensorFlow Mobile?
6. What is TensorFlow Lite?
7. Interpreter
8. Acceleration (delegates)
9. Model conversion (Python)
10. Inference (Java)
11. Selective registration (Bazel)
12. Selective registration (C++)
13. Performance
14. Benchmarking (Android)
15. Inference with NNAPI
16. Inference with GPU passthrough
17. Fast execution
18. Model conversion with post-training quantization (hybrid)
19. Optimization
20. Model conversion with post-training quantization (full)
21. Documentation
22. Model repository
23. TensorFlow Lite roadmap
24. Questions?
Description:
A 38-minute technical deep dive into TensorFlow Lite, presented by software engineer Jared Duke from the TensorFlow team. The talk explains how TensorFlow Lite enables deployment of machine learning models on mobile and IoT devices: the differences between TensorFlow and TensorFlow Lite, the interpreter, acceleration with delegates, model conversion in Python, and inference in Java. It also covers selective registration in Bazel and C++, performance optimization strategies, benchmarking on Android, inference with NNAPI and GPU passthrough, fast execution, and post-training quantization (both hybrid and full), along with the available documentation resources. It closes with a look at the TensorFlow Lite roadmap and a Q&A session.
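The model-conversion and post-training-quantization steps the talk walks through can be sketched with the Python `tf.lite.TFLiteConverter` API. This is a minimal sketch, not the presenter's exact code: the tiny Keras model is a stand-in for whatever trained model you would actually convert, and `Optimize.DEFAULT` here enables hybrid (dynamic-range) quantization.

```python
import tensorflow as tf

# Tiny stand-in model; in practice you would convert your own trained
# SavedModel or Keras model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(1),
])

# Hybrid (dynamic-range) post-training quantization: weights are stored
# as 8-bit integers, while activations remain float at runtime.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # returns the FlatBuffer as bytes

# Write the .tflite file that would ship inside the mobile app.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

Full integer quantization (chapter 20) additionally requires a representative dataset so the converter can calibrate activation ranges.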
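The talk demonstrates inference from Java; the Python `tf.lite.Interpreter` exposes the same load / allocate / bind / invoke flow and is convenient for illustrating it. A minimal sketch (the throwaway in-memory model exists only to make the example self-contained):

```python
import numpy as np
import tensorflow as tf

# Stand-in: convert a tiny model in memory. On Android this conversion
# happens offline and the .tflite file ships with the app.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load the model and allocate input/output tensor buffers.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Bind input data, run the graph, and read the result back out.
input_data = np.random.rand(*input_details[0]["shape"]).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], input_data)
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]["index"])
```

The Java `Interpreter.run(input, output)` call shown in the talk wraps the same underlying C++ interpreter; NNAPI and GPU acceleration (chapters 15–16) plug into this flow as delegates.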

Inside TensorFlow - TensorFlow Lite
