Chapters:
1. Top Ten Fine-tuning Tips
2. Tip 1: Start with a Small Model
3. Tip 2: Use LoRA or QLoRA (a minimal training sketch follows this chapter list)
4. Tip 3: Create 10 manual questions
5. Tip 4: Create datasets manually
6. Tip 5: Start training with just 100 rows
7. Tip 6: Always create a validation data split
8. Tip 7: Start by only training on one GPU
9. Tip 8: Use Weights & Biases for logging
10. Scale up rows, tuning type, then model size
11. Tip 9: Consider unsupervised fine-tuning if you have lots of data
12. Tip 10: Use preference fine-tuning (ORPO)
13. Recap of the ten tips
14. Ten tips applied to multi-modal fine-tuning
15. Playlists to watch
16. Trelis repo overview
17. ADVANCED Fine-tuning repo: Trelis.com/ADVANCED-fine-tuning
18. Training on completions only
19. ADVANCED fine-tuning repo CONTINUED
20. ADVANCED vision: Trelis.com/ADVANCED-vision
21. ADVANCED inference: trelis.com/enterprise-server-api-and-inference-guide/
22. ADVANCED transcription: trelis.com/ADVANCED-transcription
23. Support + Resources: Trelis.com/About
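The chapters above only name the techniques, so here is a minimal sketch of what tips 1, 2, 5, 6 and 8 look like together: a small model, LoRA adapters, roughly 100 rows, a held-out validation split, and Weights & Biases logging. It assumes the Hugging Face trl and peft libraries and an illustrative public dataset; the video's actual scripts live in the paid Trelis repos linked above and may differ.

```python
# Minimal LoRA SFT sketch (tips 1, 2, 5, 6 and 8). All names and
# hyperparameters below are illustrative assumptions, not the video's.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Tip 5: start with roughly 100 training rows.
dataset = load_dataset("trl-lib/Capybara", split="train[:100]")
# Tip 6: always hold out a validation split.
splits = dataset.train_test_split(test_size=0.1, seed=42)

# Tip 2: LoRA adapters instead of full fine-tuning.
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")

args = SFTConfig(
    output_dir="lora-sft-out",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    eval_strategy="steps",        # "evaluation_strategy" on older transformers
    eval_steps=10,
    logging_steps=10,
    report_to="wandb",            # Tip 8: log metrics to Weights & Biases
)

trainer = SFTTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",   # hypothetical small base model (Tip 1)
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    peft_config=peft_config,
)
trainer.train()
```

Once this runs cleanly on a single GPU (Tip 7), the same script scales in the order suggested in chapter 10: more rows first, then a change of tuning type (QLoRA with 4-bit quantization, or full fine-tuning), and only then a larger base model.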
Description:
Discover ten essential tips for fine-tuning machine learning models in this informative video. Learn strategies like starting with small models, using LoRA or QLoRA, creating manual datasets, and implementing validation splits. Explore advanced techniques such as unsupervised fine-tuning and preference fine-tuning (ORPO). Gain insights on scaling up your training process, using logging tools, and applying these tips to multi-modal fine-tuning. Access additional resources including code repositories, advanced guides for vision, inference, and transcription, as well as support channels to enhance your fine-tuning skills.
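For the preference fine-tuning step mentioned above (Tip 10, ORPO), a comparable minimal sketch using the trl library's ORPOTrainer follows. The model name and preference dataset are placeholder assumptions, not the ones used in the video.

```python
# Minimal ORPO (preference fine-tuning) sketch with Hugging Face trl.
# Assumptions: model and dataset are illustrative; the video's own code may differ.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "Qwen/Qwen2-0.5B-Instruct"      # hypothetical small model (Tip 1)
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# ORPO trains on preference pairs: each row carries a prompt plus a
# "chosen" and a "rejected" response.
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train[:100]")

args = ORPOConfig(
    output_dir="orpo-out",
    per_device_train_batch_size=1,
    num_train_epochs=1,
    logging_steps=10,
    report_to="wandb",                       # Tip 8 again: W&B logging
)

trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    processing_class=tokenizer,              # "tokenizer=" on older trl versions
)
trainer.train()
```

Because ORPO folds the preference objective into a single supervised-style pass, it does not need a separate reference model the way DPO does, so its memory footprint stays close to ordinary supervised fine-tuning.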

Top Ten Tips for Fine-tuning Large Language Models

Trelis Research