1 - Intro/Preview of character artwork created with the help of Stable Diffusion
2 - Some context about the environment/models I'm using
3 - Short rundown on how to use BIRME for preprocessing
4 - How to train a hypernetwork
5 - Temporarily pausing the training to change preview options
6 - Initial analysis of results from the hypernetwork
7 - Comparing textual inversion and hypernetworks on TXT2IMG results
8 - How to enable hypernetworks
9 - Comparing textual inversion and hypernetworks on IMG2IMG
10 - Lighting in Photoshop
11 - Bringing the lightly edited image back into img2img inpaint
12 - Short break to show the current steps in the work in progress
13 - Turning imperfections into new features using img2img inpaint
14 - Updated composition
15 - How to light the final image
16 - Changing the face
17 - Changing the face again!
18 - Closing thoughts
Description:
Explore the differences between hypernetwork and textual inversion techniques in Stable Diffusion for creating stylized character art in this comprehensive 33-minute video tutorial. Learn how to train a hypernetwork, preprocess images using BIRME, and compare the results of both techniques in TXT2IMG and IMG2IMG workflows. Follow along as the process of generating, refining, and editing character designs is demonstrated using Stable Diffusion, Photoshop, and various AI tools. Gain insights into lighting techniques, composition updates, and face alterations to achieve the desired final artwork. Perfect for digital artists, concept designers, and AI art enthusiasts looking to enhance their character creation skills.
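Since the tutorial uses BIRME to batch-preprocess the training images before training the hypernetwork, here is a minimal Python/Pillow sketch of that step for anyone who prefers to script it: center-crop each image to a square, then resize it to 512x512 (the usual SD 1.x training resolution). The folder names and the 512px target are assumptions for illustration, not settings taken from the video.

```python
# Minimal BIRME-style preprocessing sketch: center-crop each image to a square
# and resize to 512x512 before hypernetwork/embedding training.
# Folder names below are placeholders -- adjust to your own dataset layout.
from pathlib import Path
from PIL import Image

SRC = Path("raw_images")       # assumed input folder
DST = Path("processed_512")    # assumed output folder
SIZE = 512                     # assumed training resolution for SD 1.x

DST.mkdir(exist_ok=True)

for path in SRC.iterdir():
    if path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
        continue
    img = Image.open(path).convert("RGB")

    # Center-crop to the largest square, similar to BIRME's default crop mode.
    w, h = img.size
    side = min(w, h)
    left = (w - side) // 2
    top = (h - side) // 2
    img = img.crop((left, top, left + side, top + side))

    # Scale to the target training resolution and save as PNG.
    img = img.resize((SIZE, SIZE), Image.LANCZOS)
    img.save(DST / f"{path.stem}.png")
```

BIRME also offers other crop and focus options; this sketch only reproduces the basic center-crop-and-resize behavior shown in the video.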
Stable Diffusion Style Technique Comparison - Hypernetwork vs. Textual Inversion