Fine-tuning Optimizations - DoRA, NEFT, LoRA+, and Unsloth
Trelis Research

Chapters:
1. Improving on LoRA
2. Video Overview
3. How does LoRA work?
4. Understanding DoRA
5. NEFT - Adding Noise to Embeddings
6. LoRA Plus
7. Unsloth for fine-tuning speedups
8. Comparing LoRA+, Unsloth, DoRA, NEFT
9. Notebook Setup and LoRA
10. DoRA Notebook Walk-through
11. NEFT Notebook Example
12. LoRA Plus
13. Unsloth
14. Final Recommendation
Description:
Explore advanced fine-tuning optimization techniques for large language models in this video tutorial. It covers LoRA (Low-Rank Adaptation) and several improvements on it: DoRA (Weight-Decomposed Low-Rank Adaptation), NEFT (Noisy Embedding Fine-Tuning, from the NEFTune paper), LoRA+, and Unsloth. Learn how each method works, what it improves, and how to apply it in practice through detailed explanations and notebook walk-throughs. The video compares the effectiveness of the techniques and closes with a recommendation on choosing an approach for your fine-tuning needs. Linked resources include GitHub repositories, slides, and the underlying research papers.
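
To make the description concrete, here is a minimal sketch, not taken from the video's notebooks, of how LoRA, DoRA, NEFT, and LoRA+ slot into a standard Hugging Face fine-tuning script. The base model name, rank, noise level, and learning rates below are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, TrainingArguments
from peft import LoraConfig, get_peft_model

# Illustrative base model; any causal LM with q_proj/v_proj modules works.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# LoRA trains small low-rank matrices A and B on top of frozen weights.
# DoRA (use_dora=True, peft >= 0.9) additionally decomposes each adapted
# weight into a magnitude vector and a direction that the LoRA update adjusts.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    use_dora=True,  # set False for plain LoRA
)
model = get_peft_model(model, config)

# NEFT adds scaled uniform noise to embedding outputs during training;
# the Hugging Face Trainer supports it directly via this argument.
args = TrainingArguments(output_dir="out", neftune_noise_alpha=5.0)

# LoRA+ trains the B matrices with a larger learning rate than the A
# matrices (the paper suggests a ratio around 16). Optimizer parameter
# groups suffice; pass the optimizer to Trainer via optimizers=(opt, None).
base_lr = 2e-4
opt = torch.optim.AdamW([
    {"params": [p for n, p in model.named_parameters()
                if "lora_A" in n and p.requires_grad], "lr": base_lr},
    {"params": [p for n, p in model.named_parameters()
                if "lora_B" in n and p.requires_grad], "lr": base_lr * 16},
    {"params": [p for n, p in model.named_parameters()
                if p.requires_grad and "lora_A" not in n and "lora_B" not in n],
     "lr": base_lr},  # e.g. DoRA magnitude vectors
])
```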
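
Unsloth takes a different route: it patches the model's forward and backward passes with hand-written Triton kernels for speed. A minimal loading sketch follows, with a checkpoint name and settings that are assumptions rather than the video's exact configuration:

```python
from unsloth import FastLanguageModel

# Load a 4-bit base model through Unsloth's patched classes.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-2-7b-bnb-4bit",  # illustrative checkpoint
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters via Unsloth's faster PEFT wrapper.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_alpha=16,
)
```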
