Chapters:
1. How to fine-tune on a custom dataset
2. What dataset should I use for fine-tuning?
3. Fine-tuning in Google Colab
4. Loading Llama 2 with bitsandbytes
5. Fine-tuning with LoRA
6. Target modules for fine-tuning
7. Loading data for fine-tuning
8. Training Llama 2 with a validation set
9. Setting training parameters for fine-tuning
10. Choosing batch size for training
11. Setting gradient accumulation for training
12. Using an eval dataset for training
13. Setting warm-up parameters for training
14. Using AdamW for optimisation
15. Fix for when commands don't work in Colab
16. Evaluating training loss
17. Running inference after training
Description:
Learn how to fine-tune the Llama 2 language model for tone or style using a custom dataset in this 18-minute video tutorial. Explore the process of adapting the model to mimic Shakespearean language as an example. Discover techniques for loading Llama 2 with bitsandbytes, implementing LoRA for efficient fine-tuning, and selecting appropriate target modules. Gain insights into setting optimal training parameters, including batch size, gradient accumulation, and warm-up settings. Master the use of the AdamW optimizer and learn to evaluate training loss effectively. Troubleshoot common issues in Google Colab and run inference with your newly fine-tuned model. Access additional resources for embedding creation, supervised fine-tuning, and advanced scripts to enhance your language model customization skills.
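
As a rough companion to the loading and LoRA steps described above, here is a minimal Python sketch using the transformers, bitsandbytes, and peft libraries. The model ID, quantization settings, LoRA rank, and target modules are illustrative assumptions, not values taken from the video.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"  # assumed checkpoint; requires gated access on Hugging Face

# Quantize the base model to 4-bit with bitsandbytes so it fits a single Colab GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapter: only the listed target modules receive trainable low-rank updates.
lora_config = LoraConfig(
    r=8,                                  # rank chosen for illustration
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # common minimal choice; more modules can be targeted
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # shows only a small fraction of weights are trainable
```

Restricting target_modules to the attention projections keeps the trainable parameter count small, which is the usual trade-off between adapter capacity and memory when choosing target modules.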

Fine-tuning Llama 2 for Tone or Style Using Shakespeare Dataset

Trelis Research
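
The training and inference chapters map onto the Trainer API in a similar way. The sketch below continues from the model and tokenizer in the previous snippet and assumes a hypothetical tokenized_dataset with train and validation splits; every hyperparameter (batch size, gradient-accumulation steps, warm-up steps, learning rate, the paged 8-bit AdamW variant) and the prompt are placeholders for illustration rather than the video's settings.

```python
from transformers import Trainer, TrainingArguments, DataCollatorForLanguageModeling

# tokenized_dataset is a placeholder: a tokenized text dataset with
# "train" and "validation" splits prepared from your custom data.
training_args = TrainingArguments(
    output_dir="llama2-shakespeare-lora",  # assumed output path
    per_device_train_batch_size=4,         # small batch to fit GPU memory
    gradient_accumulation_steps=4,         # effective batch size of 4 x 4 = 16
    warmup_steps=10,                       # brief learning-rate warm-up
    num_train_epochs=1,
    learning_rate=2e-4,
    optim="paged_adamw_8bit",              # AdamW variant commonly paired with 4-bit models
    evaluation_strategy="steps",           # "eval_strategy" in newer transformers releases
    eval_steps=20,                         # report eval loss alongside training loss
    logging_steps=10,
    report_to="none",                      # avoid external logging prompts in Colab
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset["train"],
    eval_dataset=tokenized_dataset["validation"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Inference with the fine-tuned adapter: generate a short completion in the new style.
prompt = "Shall I compare thee to"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Gradient accumulation multiplies the effective batch size without increasing per-step memory, and the eval dataset lets the evaluation loss be tracked alongside the training loss to spot overfitting.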