Enhance Cost Efficiency in Domain Adaptation with PruneMe
Description:
Discover a layer pruning technique for Large Language Models (LLMs) that enhances cost efficiency in domain adaptation, covered in this 17-minute conference talk from MLOps World: Machine Learning in Production. Explore the PruneMe repository, inspired by "The Unreasonable Ineffectiveness of the Deeper Layers," and learn how removing redundant layers enables continual pre-training on streamlined models. Understand how these models are then merged into a top-performing general model using advanced techniques such as Evolve Merging. Gain insights into this cost-effective approach to model optimization and adaptation, presented by Shamane Siri, Ph.D., Head of Applied NLP Research at Arcee.ai.
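
The core idea behind PruneMe-style pruning is to measure how little a contiguous block of layers changes the hidden representation on calibration data, then drop the most redundant block before continual pre-training. Below is a minimal sketch of that block-selection step using Hugging Face transformers. The model name, calibration texts, and block size `n` are illustrative assumptions, not the repository's actual configuration.

```python
# Sketch of similarity-based layer-block selection, in the spirit of
# "The Unreasonable Ineffectiveness of the Deeper Layers" / PruneMe.
# Assumptions: "gpt2" stands in for the target LLM, `texts` for a real
# calibration set, and n=4 for the block size to prune.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

n = 4  # number of consecutive decoder layers considered for removal
texts = ["Example calibration text from the target domain."]

def angular_distance(a, b):
    # Angular distance between hidden states, averaged over the batch.
    cos = torch.nn.functional.cosine_similarity(a, b, dim=-1).clamp(-1.0, 1.0)
    return torch.arccos(cos).mean() / torch.pi

distances = None
with torch.no_grad():
    for text in texts:
        inputs = tokenizer(text, return_tensors="pt")
        hidden = model(**inputs, output_hidden_states=True).hidden_states
        # hidden[l] is the input to decoder layer l; hidden[l + n] is the
        # output after layers l .. l+n-1, so comparing them scores that block.
        last = [h[:, -1, :].float() for h in hidden]  # final-token states
        num_layers = len(last) - 1
        d = torch.stack([angular_distance(last[l], last[l + n])
                         for l in range(num_layers - n + 1)])
        distances = d if distances is None else distances + d

start = int(torch.argmin(distances))
print(f"Most redundant block: decoder layers {start}..{start + n - 1} "
      f"(mean angular distance {distances[start] / len(texts):.4f})")
```

After identifying the block with the lowest average distance, the corresponding layers are removed and the smaller model undergoes continual pre-training on domain data to recover any lost quality, which is what makes the overall adaptation cheaper.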
