1. Comparing full fine-tuning and LoRA fine-tuning
2. Video Overview
3. Comparing VRAM, Training Time + Quality
4. How full fine-tuning works
5. How LoRA works
6. How QLoRA works
7. How to choose learning rate, rank and alpha
8. Choosing hyperparameters for Mistral 7B fine-tuning
9. Specific tips for QLoRA, regularization and adapter merging
10. Tips for using Unsloth
11. LoftQ - LoRA-aware quantization
12. Step-by-step TinyLlama QLoRA
13. Mistral 7B fine-tuning results comparison
14. Wrap up
Description:
Explore the differences between full fine-tuning and (Q)LoRA techniques in this comprehensive 53-minute video from Trelis Research. Learn about VRAM requirements, training time, and quality comparisons for various fine-tuning methods. Dive into the mechanics of full fine-tuning, LoRA, and QLoRA, and discover how to select optimal learning rates, ranks, and alpha values. Gain insights on hyperparameter selection for Mistral 7B fine-tuning, along with specific tips for QLoRA, regularization, and adapter merging. Explore the benefits of Unsloth and LoftQ for LoRA-aware quantization. Follow a step-by-step guide for TinyLlama QLoRA implementation and compare Mistral 7B fine-tuning results across different methods.
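To make the rank, alpha, and learning-rate choices discussed above concrete, here is a minimal QLoRA sketch for TinyLlama using the Hugging Face transformers, peft, and bitsandbytes stack. This is not the video's exact code; the checkpoint name, rank, alpha, target modules, and learning rate below are illustrative assumptions.

```python
# Minimal QLoRA setup sketch (illustrative values, not the video's exact recipe).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed base checkpoint

# 4-bit NF4 quantization of the frozen base weights -- the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

# LoRA adapters: only these small low-rank matrices are trained.
lora_config = LoraConfig(
    r=16,             # rank of the low-rank update (illustrative)
    lora_alpha=32,    # scaling factor; effective scale is alpha / r
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Train with a standard Trainer / SFT loop at a LoRA-style learning rate
# (typically higher than full fine-tuning, e.g. around 1e-4 -- illustrative).

# After training, the adapter can be merged back into the base weights:
# merged_model = model.merge_and_unload()
```

The commented merge step at the end corresponds to the adapter-merging topic in the chapter list; in practice merging is usually done against a non-quantized copy of the base model.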

Full Fine-Tuning vs LoRA and QLoRA - Comparison and Best Practices

Trelis Research