1. Intro
2. Why Conversational AI/chatbots?
3. Chatbot Conversation Framework
4. Use-case in hand
5. Chatbot Flow Diagram
6. Components of NLU Engine
7. Transformers for Intent Classification
8. BERT: Bidirectional Encoder Representations from Transformers
9. Masked Language Model
10. Next Sentence Prediction
11. BERT: CLS token for classification
12. Different models with accuracy and size over time
13. Use-case data summary
14. Model Training
15. Efficient Model Inference
16. Knowledge Distillation
17. Quantization
18. No padding
19. Productizing BERT for CPU Inference
20. Ensembling LUIS and DistilBERT
21. Team behind the project
Description:
Explore the world of conversational AI and transformer models in this 36-minute video from Databricks. Delve into the building blocks of conversational agents and natural language understanding engines, focusing on the advantages of transformer models over traditional RNN/LSTM approaches. Learn about knowledge distillation and model compression techniques for deploying parameter-heavy models in resource-limited production environments. Gain insights into the flow of conversational agents, the benefits of transformer-based models, and various model compression techniques including quantization. Discover practical applications with sample code in PyTorch and TensorFlow 2, and understand key concepts such as BERT, masked language models, and next sentence prediction. By the end of this talk, you'll have a comprehensive understanding of how to build and optimize conversational AI systems using cutting-edge transformer models.
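The knowledge distillation mentioned above trains a small "student" model to match the temperature-softened output distribution of a large "teacher" (the idea behind DistilBERT). As a minimal, dependency-free sketch of that core mechanism — the logit values and temperature here are illustrative, not taken from the talk:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: higher T flattens the distribution,
    exposing the teacher's relative confidence in non-top classes."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and the
    student's: the soft-target term of the distillation objective."""
    p = softmax(teacher_logits, temperature)   # teacher's soft targets
    q = softmax(student_logits, temperature)   # student's predictions
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# Illustrative teacher logits for a 3-intent classifier: raising T from
# 1 to 2 moves probability mass onto the near-miss classes.
teacher = [4.0, 1.5, 0.2]
print(softmax(teacher, temperature=1.0))
print(softmax(teacher, temperature=2.0))
print(distillation_loss(teacher, [3.0, 1.0, 0.5]))
```

In a full training loop this soft-target loss is typically mixed with the ordinary hard-label cross-entropy, so the student learns both the true labels and the teacher's "dark knowledge" about how classes relate.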

Conversational AI with Transformer Models - Building Blocks and Optimization Techniques

Databricks