Explore advanced fine-tuning optimization techniques for large language models in this video tutorial. Delve into LoRA (Low-Rank Adaptation) and its refinements, including DoRA (Weight-Decomposed Low-Rank Adaptation), NEFT (Noisy Embedding Fine-Tuning), LoRA+, and Unsloth. Learn how each method works, what advantages it offers, and how to apply it in practice through detailed explanations and notebook walk-throughs. Compare the effectiveness of the techniques and gain insight into choosing the best approach for your fine-tuning needs. Provided resources, including GitHub repositories, slides, and research papers, support further study and application of these optimization strategies.
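As a quick orientation to the core idea the video builds on, here is a minimal NumPy sketch of the LoRA forward pass: the pretrained weight W is frozen, and a low-rank update (alpha / r) * B @ A is learned on top of it. All names, dimensions, and initializations below are illustrative assumptions, not taken from the video's notebooks.

```python
import numpy as np

# LoRA sketch: freeze W, train low-rank factors A (r x d_in) and
# B (d_out x r), so the adapted layer computes
#   y = W x + (alpha / r) * B (A x)
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 8, 2, 16   # illustrative sizes, not from the video

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-initialized

def lora_forward(x):
    # Because B starts at zero, the adapter contributes nothing at
    # initialization, so the adapted model exactly matches the base model.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
assert np.allclose(lora_forward(x), W @ x)  # identical to base output at init
```

The zero-initialized B is what makes LoRA safe to bolt onto a pretrained model: training starts from the unmodified base behavior, and the variants covered in the video (DoRA, LoRA+, NEFT, Unsloth) change how this update is parameterized, scaled, or trained rather than this basic structure.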
Fine-tuning Optimizations - DoRA, NEFT, LoRA+, and Unsloth