Train Your Own LORA Model for Video-Making | A1111 Dreambooth Extension | 6GB VRAM
This article is a summary of the YouTube video "LORA for Stable Diffusion - A1111 Dreambooth Extension - 6GB VRAM!" by Nerdy Rodent.
TLDR Train your own LoRA model to generate images, exploring the various training options and parameters.
Installing the LoRA extension requires setting a custom requirements file and an environment variable to skip the default install; the trained LoRA file is only about 6MB and can later be merged with another checkpoint to produce a full 2.6GB checkpoint.
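The requirements-file and skip-install settings go into the webui launch script before starting A1111. A sketch of the Linux/macOS shell form (the extension folder name and paths are illustrative and may differ in your install; Windows users would use `set` in `webui-user.bat` instead):

```shell
# webui-user.sh additions: point A1111 at the Dreambooth extension's
# own requirements file, and skip the default dependency install step.
export REQS_FILE="./extensions/sd_dreambooth_extension/requirements.txt"
export COMMANDLINE_ARGS="--skip-install"
```

With these set, relaunching the webui installs the extension's dependencies instead of the defaults.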
Using Dreambooth with minimal settings and the 8-bit Adam optimizer reduces VRAM usage to about 5GB, reaching a loss of 0.244 in 20 minutes.
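8-bit Adam saves memory by storing its optimizer state (momentum and variance) as single bytes instead of 32-bit floats. A minimal sketch of the underlying absmax-quantization idea, not the actual bitsandbytes implementation (which uses a more refined block-wise scheme):

```python
def quantize8(values):
    """Absmax-quantize a list of floats into 0..255 ints plus one scale.

    Each state value then costs 1 byte instead of 4, roughly a 4x
    saving on optimizer memory.
    """
    scale = max(abs(v) for v in values) / 127 or 1.0
    return [round(v / scale) + 128 for v in values], scale

def dequantize8(codes, scale):
    """Recover approximate float values from the 8-bit codes."""
    return [(code - 128) * scale for code in codes]

# Optimizer state round-trips through 8 bits with small error.
momentum = [0.5, -1.27, 0.003, 1.27]
codes, scale = quantize8(momentum)
restored = dequantize8(codes, scale)
```

The quantization error per value is bounded by the scale, which is why training quality stays close to full-precision Adam.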
Create a model with the desired settings, using a concept list and LoRA for Stable Diffusion 2.1 at a resolution of 768.
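The concept list is a JSON file describing each subject to train. A sketch that writes one (the key names follow the commonly used Dreambooth concepts-list format; verify them against your extension version, and the prompts, instance token, and paths are illustrative):

```python
import json

# One concept: an instance (subject) plus an optional class for
# prior preservation. "nrdyrdnt" is a hypothetical instance token.
concepts = [
    {
        "instance_prompt": "photo of nrdyrdnt person",
        "class_prompt": "photo of a person",
        "instance_data_dir": "/path/to/training/images",
        "class_data_dir": "/path/to/classification/images",
    }
]

with open("concepts_list.json", "w") as f:
    json.dump(concepts, f, indent=2)
```

Point the Dreambooth extension's concepts field at this file instead of filling in a single concept in the UI.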
Set up Dreambooth with fp16, xformers memory attention, cached latents, text-encoder training, and a dataset directory of 10 images with corresponding caption text files; optionally specify a directory for the classification dataset.
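The dataset directory pairs each image with a same-named `.txt` caption file. A small helper that generates those caption files (the filenames, captions, and directory are illustrative):

```python
from pathlib import Path

def write_captions(dataset_dir, captions):
    """Create one UTF-8 .txt caption file per image, sharing its stem.

    `captions` maps image filename -> caption text, e.g.
    {"img001.png": "photo of nrdyrdnt person, outdoors"}.
    """
    root = Path(dataset_dir)
    root.mkdir(parents=True, exist_ok=True)
    for image_name, caption in captions.items():
        txt_name = Path(image_name).with_suffix(".txt").name
        (root / txt_name).write_text(caption, encoding="utf-8")

write_captions("dataset", {"img001.png": "photo of nrdyrdnt person, outdoors"})
```

For the 10-image set described above, you would call this once with all ten filename/caption pairs.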
Train your own LoRA model to generate images with various options and parameters.
The EMA model successfully generated a variety of different prompts with realistic results, showing no significant difference in quality with EMA turned on or off in the checkpoint.
An impressionist art-style painting preserves features from the trained concepts even on low VRAM, but the result was not preferred.
Decide which version of the model works best for you - the original, EMA, or LoRA.