1. Why fine-tuning?
2. Text tutorial on MLExpert.io
3. Fine-tuning process overview
4. Dataset
5. Llama 3 8B Instruct
6. Google Colab Setup
7. Loading model and tokenizer
8. Create custom dataset
9. Establish baseline
10. Training on completions
11. LoRA setup
12. Training
13. Load model and push to HuggingFace hub
14. Evaluation against the base model
15. Conclusion
Description:
Learn how to fine-tune Llama 3 on a custom dataset for a RAG Q&A use case using a single GPU in this comprehensive 33-minute tutorial. Explore the benefits of fine-tuning, understand the process overview, and dive into practical steps including dataset preparation, model loading, custom dataset creation, and LoRA setup. Follow along with Google Colab setup, establish a baseline, train the model, and evaluate its performance against the base model. Gain insights into pushing the fine-tuned model to the HuggingFace hub and discover how even smaller models can outperform larger ones when properly fine-tuned for specific tasks.
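The steps in the description map onto a short, self-contained sketch like the one below. This is not the tutorial's exact notebook: the model id, LoRA hyperparameters, toy dataset, and hub repo name are illustrative assumptions, and it presumes recent versions of transformers, peft, and trl.

```python
import torch
from datasets import Dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM, SFTConfig, SFTTrainer

MODEL_NAME = "meta-llama/Meta-Llama-3-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # Llama 3 ships without a pad token

model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    torch_dtype=torch.bfloat16,  # bf16 + LoRA keeps the 8B model on one GPU
    device_map="auto",
)

# Toy stand-in for the custom RAG Q&A dataset built in the tutorial.
def to_text(example):
    messages = [
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset = Dataset.from_list(
    [{"question": "What does LoRA train?", "answer": "Small low-rank adapter matrices."}]
).map(to_text)

# LoRA setup: learn low-rank updates on the attention projections instead
# of all 8B weights. Rank, alpha, and target modules here are illustrative.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Training on completions: mask everything before the assistant header so
# the loss is computed only on the answer, not on the prompt.
collator = DataCollatorForCompletionOnlyLM(
    response_template="<|start_header_id|>assistant<|end_header_id|>",
    tokenizer=tokenizer,
)

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    data_collator=collator,
    args=SFTConfig(
        output_dir="llama3-rag-qa",
        dataset_text_field="text",
        num_train_epochs=1,
        per_device_train_batch_size=1,
    ),
)
trainer.train()

# Merge the adapter into the base weights and push the result to the
# HuggingFace hub (repo id is a placeholder).
merged = trainer.model.merge_and_unload()
merged.push_to_hub("your-username/llama3-rag-qa")
tokenizer.push_to_hub("your-username/llama3-rag-qa")
```

The tutorial itself runs in Google Colab; on smaller GPUs this recipe is typically combined with 4-bit quantization via bitsandbytes to fit the 8B model in memory.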

Fine-Tuning Llama 3 on a Custom Dataset for RAG Q&A - Training LLM on a Single GPU

Venelin Valkov