OpenAI Fine-tuning vs Distillation - Techniques and Implementation

Trelis Research

Chapters:
1. Fine-tuning and Distilling with OpenAI
2. Video Overview and Colab Notebook
3. Why bother with distilling or fine-tuning?
4. How is distillation different to fine-tuning?
5. Simple Approach to Distillation
6. Installation, student and teacher model selection
7. Set up training questions
8. Setting up simple evaluation
9. Generating and storing a distilled dataset
10. Running fine-tuning on the distilled dataset
11. Evaluating the fine-tuned model
12. Advanced techniques to generate larger datasets
13. Results from comprehensive fine-tuning and distillation from gpt-4o to gpt-4o-mini
14. Notes on OpenAI Evals and Gemini fine-tuning
Description:
Explore the differences between OpenAI fine-tuning and distillation in this 23-minute video tutorial. Learn why these techniques are valuable and how distillation differs from fine-tuning. Follow along with a free Colab notebook to implement a simple distillation approach, including model selection, training-question setup, evaluation, dataset generation, and fine-tuning. Discover advanced techniques for creating larger datasets and review comprehensive results from fine-tuning and distilling gpt-4o to gpt-4o-mini. Gain insights on OpenAI Evals and Gemini fine-tuning, with additional resources provided for synthetic data preparation and model distillation from scratch.
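
To make the workflow concrete, here is a minimal sketch of the kind of distillation loop the video walks through, written against the official `openai` Python SDK (v1+). This is not the video's notebook: the question list, file name, and model snapshot names are illustrative assumptions, and a real run needs a larger question set since OpenAI fine-tuning requires at least 10 training examples.

```python
# Minimal distillation sketch: a teacher model answers training questions,
# the pairs are saved in OpenAI's chat-format JSONL, and a fine-tuning job
# is launched on the student. All names below are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TEACHER = "gpt-4o"                  # large teacher model
STUDENT = "gpt-4o-mini-2024-07-18"  # fine-tunable student snapshot (assumed)

# 1. Training questions (hand-written here; OpenAI requires at least
#    10 examples, so a real run needs a longer list).
questions = [
    "What is the capital of France?",
    "Explain gradient descent in one sentence.",
]

# 2. Generate and store the distilled dataset: teacher answers become
#    the assistant turns in chat-format JSONL.
with open("distilled.jsonl", "w") as f:
    for q in questions:
        answer = client.chat.completions.create(
            model=TEACHER,
            messages=[{"role": "user", "content": q}],
        ).choices[0].message.content
        f.write(json.dumps({"messages": [
            {"role": "user", "content": q},
            {"role": "assistant", "content": answer},
        ]}) + "\n")

# 3. Upload the dataset and launch fine-tuning on the student.
training_file = client.files.create(
    file=open("distilled.jsonl", "rb"), purpose="fine-tune"
)
job = client.fine_tuning.jobs.create(
    training_file=training_file.id, model=STUDENT
)
print("Fine-tuning job started:", job.id)

# 4. Simple evaluation idea: once the job succeeds, compare the base
#    and fine-tuned students on a held-out question.
job = client.fine_tuning.jobs.retrieve(job.id)
if job.status == "succeeded":
    for model in (STUDENT, job.fine_tuned_model):
        reply = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Name the capital of Spain."}],
        ).choices[0].message.content
        print(model, "->", reply)
```

The same loop scales to the larger datasets covered in chapter 12 by replacing the hand-written questions with synthetically generated ones.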
