Fine-tuning Llama 3 on Wikipedia Datasets for Low-Resource Languages

Explore the process of fine-tuning Llama 3 for low-resource languages using Wikipedia datasets in this comprehensive 44-minute tutorial. Learn how to create a HuggingFace dataset using WikiExtractor, set up Llama 3 fine-tuning with LoRA, and implement dataset blending to prevent catastrophic forgetting. Dive into trainer setup, parameter selection, and loss inspection. Gain insights into learning rates, annealing, and additional tips for improving your fine-tuning results. Access the provided resources, including slides, dataset links, and code repositories, to enhance your learning experience.
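The LoRA technique mentioned above trains only a low-rank update to each frozen weight matrix rather than the full matrix. As a minimal sketch of that idea (toy shapes and values, not the tutorial's actual Llama 3 configuration, which would use a library such as PEFT):

```python
# Minimal sketch of the LoRA idea: keep the base weight W frozen and
# train two small matrices A (r x d) and B (d x r); the adapted weight
# is W + (alpha / r) * B @ A. All shapes/values here are toy examples.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_update(W, A, B, alpha):
    """Return W + (alpha / r) * B @ A, the LoRA-adapted weight."""
    r = len(A)                # rank of the adapter
    scale = alpha / r
    delta = matmul(B, A)      # d x d low-rank update
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Toy example: d = 2, r = 1 — only 2*d*r = 4 values are trained,
# instead of d*d for a full fine-tune.
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight
A = [[1.0, 2.0]]               # r x d, trained
B = [[0.5], [0.25]]            # d x r, trained
W_adapted = lora_update(W, A, B, alpha=1.0)
```

At rank r much smaller than d, the number of trainable parameters drops from d² to 2·d·r, which is what makes LoRA practical on modest hardware.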
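Dataset blending, used in the tutorial to prevent catastrophic forgetting, mixes the new low-resource Wikipedia data with general-purpose data so the model keeps its original abilities. A minimal sketch of probabilistic interleaving (the sampling ratio and helper names are illustrative assumptions; HuggingFace `datasets` offers `interleave_datasets` for the real thing):

```python
import random

def blend(datasets, probs, n, seed=0):
    """Draw n examples; each draw picks a source dataset according to
    probs, cycling within a dataset if it runs out of examples."""
    rng = random.Random(seed)
    cursors = [0] * len(datasets)
    out = []
    for _ in range(n):
        i = rng.choices(range(len(datasets)), weights=probs)[0]
        ds = datasets[i]
        out.append(ds[cursors[i] % len(ds)])
        cursors[i] += 1
    return out

# Toy stand-ins for the two corpora being blended.
wiki_lowres = ["lr_0", "lr_1", "lr_2"]   # new-language Wikipedia text
general = ["gen_0", "gen_1", "gen_2"]    # general-purpose text
mixed = blend([wiki_lowres, general], probs=[0.7, 0.3], n=10)
```

Weighting the new-language data more heavily drives adaptation, while the general slice keeps gradients anchored to the model's original distribution.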
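On learning rates and annealing: a common schedule for this kind of fine-tune is a short linear warmup followed by cosine decay. A self-contained sketch (the specific peak rate and step counts are illustrative, not the tutorial's values):

```python
import math

def lr_at_step(step, total_steps, warmup_steps, peak_lr, min_lr=0.0):
    """Linear warmup to peak_lr, then cosine annealing down to min_lr."""
    if step < warmup_steps:
        return peak_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

# Example: 100 training steps, 10 warmup steps, peak LR 2e-4.
schedule = [lr_at_step(s, total_steps=100, warmup_steps=10, peak_lr=2e-4)
            for s in range(100)]
```

Warmup avoids destabilizing the pretrained weights early on, and the cosine tail lets the loss settle; `transformers` provides the equivalent via `get_cosine_schedule_with_warmup`.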