9. How to fit finetuning settings into a 24 GB VRAM consumer GPU
10. Local finetune with Adafactor settings
11. Min SNR Gamma paper
12. Installing local Tensorboard to view event logs
13. Runpod overview
14. How much Runpod costs
15. Runpod finetune settings
16. Weights and Biases overview
17. Determining the initial learning rate for AdamW finetune
18. Adding a sample prompt to training settings to visually gauge training progress
19. Checking AdamW finetune sample images
20. Efficiency nodes for XY plot
21. How to retrieve your models from Runpod
22. Evaluating finetune XY plot
23. D-Adaptation overview
24. D-Adaptation training settings
25. Decoupled Weight Decay Regularization paper
26. What does weight decay do?
27. Betas and Growth Rate
28. drhead's choice of hyperparameters
29. LoRA network dimensions and alpha
30. Tensorboard analysis for D-Adaptation LoRA
31. D-Adaptation sample images analysis
32. Prodigy repository
33. Prodigy training settings
34. How to enable cosine annealing
35. Prodigy training settings version 2
36. Prodigy code deep dive
37. Why I didn't use any warmup for Prodigy training settings
38. Weights and Biases analysis for Prodigy
39. Prodigy sample images analysis
40. Prodigy XY plot
41. Prodigy AdamW and Higher Weight Decay analysis
42. Prodigy final version XY plot
43. Closing thoughts
44. CivitAI SDXL Competition
Description:
Dive into an in-depth tutorial on training SDXL 1.0 using various techniques including finetuning, LoRA, D-Adaptation, and Prodigy. Explore the differences between SDXL 1.0 and SD 1.5 models, learn about dataset preparation, and master ComfyUI node setup. Discover how to implement local finetuning with Adafactor, utilize Runpod for cloud-based training, and leverage Weights and Biases for progress tracking. Gain insights into advanced concepts like Min SNR Gamma, Decoupled Weight Decay Regularization, and optimal hyperparameter selection. Compare different training approaches, analyze sample images, and create XY plots to evaluate results. Perfect for those looking to enhance their Stable Diffusion skills and take part in competitions like the CivitAI SDXL contest.
Stable Diffusion - Training SDXL 1.0 - Finetune, LoRA, D-Adaptation, Prodigy
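
For reference, here is a minimal PyTorch sketch of the Prodigy-plus-cosine-annealing idea covered in the chapters above. It is only an illustration under assumptions: the prodigyopt package is the public Prodigy implementation, and every value shown (weight decay, step count, the dummy model and loss) is a placeholder, not the settings used in the video.

```python
# Minimal sketch of a Prodigy + cosine-annealing setup in plain PyTorch.
# All values below (weight_decay, total_steps, the dummy model) are illustrative,
# not the settings used in the video.
import torch
from torch.optim.lr_scheduler import CosineAnnealingLR
from prodigyopt import Prodigy  # pip install prodigyopt

model = torch.nn.Linear(128, 128)   # stand-in for the UNet / LoRA parameters
total_steps = 1000                  # stand-in for the real number of training steps

# Prodigy estimates its own step size, so lr is conventionally left at 1.0.
optimizer = Prodigy(
    model.parameters(),
    lr=1.0,
    weight_decay=0.01,          # illustrative; see the weight-decay chapters above
    use_bias_correction=True,
    safeguard_warmup=True,
)

# Cosine annealing scales the learning-rate multiplier from 1.0 down to 0 over training.
scheduler = CosineAnnealingLR(optimizer, T_max=total_steps)

for step in range(total_steps):
    loss = model(torch.randn(4, 128)).pow(2).mean()  # dummy loss for the sketch
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
```

Note that with Prodigy the learning rate is conventionally left at 1.0, since the optimizer adapts the effective step size itself; the scheduler only shapes how that step size decays toward the end of training.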