Fine-tuning Llama 3 on Wikipedia Datasets for Low-Resource Languages
Trelis Research

Chapters:
1. Fine-tuning Llama 3 for a low-resource language
2. Overview of Wikipedia Dataset and Loss Curves
3. Video overview
4. HuggingFace Dataset creation with WikiExtractor
5. Llama 3 fine-tuning setup, incl. LoRA
6. Dataset blending to avoid catastrophic forgetting
7. Trainer setup and parameter selection
8. Inspection of losses and results
9. Learning Rates and Annealing
10. Further tips and improvements
Description:
Explore the process of fine-tuning Llama 3 for low-resource languages using Wikipedia datasets in this comprehensive 44-minute tutorial. Learn how to create a HuggingFace dataset using WikiExtractor, set up Llama 3 fine-tuning with LoRA, and implement dataset blending to prevent catastrophic forgetting. Dive into trainer setup, parameter selection, and loss inspection. Gain insights on learning rates, annealing, and additional tips for improving your fine-tuning results. Access the provided resources, including slides, dataset links, and code repositories, to enhance your learning experience.
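
For the dataset-creation step (chapter 4), a minimal sketch of going from a raw Wikipedia dump to a HuggingFace dataset might look like the following; the dump filename, output directory, and Hub repo id are placeholders, not the video's exact values.

```python
# Sketch: Wikipedia dump -> HuggingFace dataset via WikiExtractor.
# First extract plain-text articles from the dump (run once, on the CLI):
#   python -m wikiextractor.WikiExtractor gawiki-latest-pages-articles.xml.bz2 \
#       --json -o extracted
from datasets import load_dataset

# WikiExtractor's --json mode writes JSON-lines files (one article per line,
# with "id", "url", "title", and "text" fields) under extracted/AA/wiki_00, ...
dataset = load_dataset("json", data_files="extracted/**/wiki_*", split="train")

# Drop very short stubs, which add little signal for language modelling.
dataset = dataset.filter(lambda ex: len(ex["text"]) > 200)

# Optionally publish for reuse (repo id is hypothetical).
dataset.push_to_hub("your-username/wiki-low-resource-lang")
```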

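The LoRA setup (chapter 5) is typically done with HuggingFace PEFT; a sketch follows, where the rank, alpha, and target modules are common defaults assumed here rather than taken from the video.

```python
# Sketch: attach LoRA adapters to Llama 3 with PEFT.
# Hyperparameters are illustrative, not necessarily the video's.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "meta-llama/Meta-Llama-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

lora_config = LoraConfig(
    r=16,                      # adapter rank
    lora_alpha=32,             # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```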
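
Dataset blending (chapter 6) mixes new-language text with data closer to the model's original training distribution, so the fine-tune does not overwrite existing abilities. A sketch with `datasets.interleave_datasets`, using WikiText-103 as a stand-in English source and an assumed 80/20 ratio:

```python
# Sketch: blend target-language Wikipedia with general English text to
# mitigate catastrophic forgetting. Ratio and English corpus are assumptions.
from datasets import load_dataset, interleave_datasets

target_lang = load_dataset(
    "json", data_files="extracted/**/wiki_*", split="train"
).select_columns(["text"])
english = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")

# Sample ~80% target language / ~20% English; stop once one source runs out
# so the stream never degenerates into a single language.
blended = interleave_datasets(
    [target_lang, english],
    probabilities=[0.8, 0.2],
    seed=42,
    stopping_strategy="first_exhausted",
)
```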
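
For trainer setup and learning-rate annealing (chapters 7 and 9), a cosine schedule that decays the learning rate toward zero is a common choice. This sketch continues from the ones above (it reuses `model`, `tokenizer`, and `blended`); the hyperparameters are illustrative starting points, not the video's exact settings.

```python
# Sketch: Trainer with a cosine learning-rate schedule (annealing).
from transformers import (
    Trainer,
    TrainingArguments,
    DataCollatorForLanguageModeling,
)

tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = blended.map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="llama3-lowres-lora",   # hypothetical output path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,    # effective batch size of 16
    num_train_epochs=1,
    learning_rate=1e-4,                # LoRA tolerates higher LRs than full fine-tuning
    lr_scheduler_type="cosine",        # anneal the LR toward zero over training
    warmup_ratio=0.03,
    logging_steps=10,
    bf16=True,
)

trainer = Trainer(
    model=model,                       # the PEFT-wrapped model from the LoRA sketch
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# To inspect losses afterwards (chapter 8): trainer.state.log_history is a
# list of dicts with "loss" and "learning_rate" entries per logging step.
```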