Explore advanced techniques for fine-tuning large language models with minimal trainable parameters in this 55-minute video from Trelis Research. Delve into the ReFT (Representation Fine-Tuning) and LoRA (Low-Rank Adaptation) methodologies, starting with a review of the transformer architecture. Learn the practical aspects of weight fine-tuning with LoRA, followed by an in-depth look at Representation Fine-Tuning, then compare the two approaches and understand their respective strengths. Get hands-on experience with step-by-step walkthroughs of both the LoRA and ReFT fine-tuning processes, including GPU setup considerations. Discover techniques for combining ReFT fine-tunes and explore the role of orthogonality in fine-tuning. Gain insight into the limitations of LoReFT and LoRA fine-tuning, and conclude with practical tips to improve your fine-tuning results. Access additional resources, including complete scripts, one-click fine-tuning templates, and community support to further your learning journey.
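For orientation before watching, the two core ideas can be sketched in a few lines of NumPy. This is a minimal illustration, not the video's own scripts: the toy dimensions, variable names, and initializations below are assumptions. LoRA adds a trainable low-rank update to a frozen weight matrix, while a LoReFT-style intervention edits the hidden representation directly via a low-rank projection with orthonormal rows.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 16, 4                        # toy hidden size and low-rank dimension
W = rng.standard_normal((d, d))     # frozen pretrained weight
x = rng.standard_normal(d)          # an input activation

# --- LoRA: W_eff = W + (alpha / r) * B @ A ---
# B is zero-initialized, so at the start of training the model's
# behaviour is identical to the pretrained one.
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))
alpha = 8.0
lora_out = W @ x + (alpha / r) * (B @ (A @ x))

# --- LoReFT-style intervention on a hidden representation h ---
# phi(h) = h + R^T (W_e h + b - R h), where R has orthonormal rows
# (obtained here via a QR decomposition).
h = W @ x
Q, _ = np.linalg.qr(rng.standard_normal((d, r)))
R = Q.T                             # shape (r, d), rows orthonormal
W_e = rng.standard_normal((r, d)) * 0.01
b = np.zeros(r)
h_edited = h + R.T @ (W_e @ h + b - R @ h)
```

Note the contrast this makes concrete: LoRA changes the effective weights, whereas ReFT leaves the weights untouched and intervenes on the representation itself; the orthonormality of `R`'s rows is what the video's discussion of orthogonality refers to.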