Chapters:
1 - Intro & High-Level Overview
2 - The Problem with Evaluating Machine Translation
3 - Task Evaluation as a Learning Problem
4 - Naive Fine-Tuning BERT
5 - Pre-Training on Synthetic Data
6 - Generating the Synthetic Data
7 - Priming via Auxiliary Tasks
8 - Experiments & Distribution Shifts
9 - Concerns & Conclusion
Description:
Explore a comprehensive video explanation of the BLEURT paper, which proposes a learned evaluation metric for text generation models. Dive into the challenges of evaluating machine translation systems and learn how BLEURT addresses these issues through a novel pre-training scheme using synthetic data. Discover the key components of the approach, including fine-tuning BERT, generating synthetic data, and priming via auxiliary tasks. Examine the experimental results, distribution shifts, and potential concerns associated with this innovative metric. Gain insights into the state-of-the-art performance of BLEURT on recent WMT Metrics shared tasks and the WebNLG Competition dataset.
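For a concrete feel for the "fine-tuning BERT" step the video discusses, the sketch below shows the core idea: encode a (reference, candidate) pair with BERT and regress the [CLS] representation onto a human quality rating. This is a minimal illustration under assumptions, not the official BLEURT code; the model name, toy sentence pair, and rating value are placeholders chosen for the example.

```python
# Minimal sketch of a learned metric in the BLEURT style (illustrative only).
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
regressor = torch.nn.Linear(bert.config.hidden_size, 1)  # maps [CLS] to a scalar score

references = ["The cat sat on the mat."]            # toy reference sentence
candidates = ["A cat was sitting on the mat."]      # toy system output
human_ratings = torch.tensor([[0.8]])               # assumed human judgment for the pair

# Encode the pair jointly (reference and candidate in one sequence).
inputs = tokenizer(references, candidates, return_tensors="pt",
                   padding=True, truncation=True)
cls = bert(**inputs).last_hidden_state[:, 0]        # [CLS] vector per pair
pred = regressor(cls)                               # predicted quality score

# Regression loss against human ratings, as in the final fine-tuning stage.
loss = torch.nn.functional.mse_loss(pred, human_ratings)
loss.backward()
```

In the paper, this supervised fine-tuning is preceded by pre-training the same model on millions of synthetic (reference, perturbed-reference) pairs with auxiliary signals such as BLEU, ROUGE, and entailment scores, which is what makes the learned metric robust when human-rated data is scarce.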

BLEURT - Learning Robust Metrics for Text Generation

Yannic Kilcher