LoRA (Low-Rank Adaptation)
LoRA (Low-Rank Adaptation) is a parameter-efficient fine-tuning technique that adapts a pre-trained AI model to new tasks by injecting small trainable matrices into the model's layers while keeping the original weights frozen. LoRA can reduce the number of trainable parameters by up to 10,000 times compared to full fine-tuning, making it possible to customize large models on consumer-grade hardware.
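The idea above can be sketched in a few lines of numpy. This is an illustrative toy, not any specific library's API: the frozen weight W is adapted as W + (alpha / r) · B · A, where only the small matrices A and B are trained. The dimensions, scaling factor, and initialization scheme here are assumptions chosen for clarity.

```python
import numpy as np

# Minimal LoRA sketch (illustrative). W stays frozen; only A and B train.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 512, 512, 8, 16

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def lora_forward(x):
    # Base path plus low-rank update; W itself is never modified.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted model starts identical to the base.
assert np.allclose(lora_forward(x), W @ x)

full_params = W.size             # 262,144 if fully fine-tuned
lora_params = A.size + B.size    # 8,192 trainable parameters
print(f"trainable fraction: {lora_params / full_params:.1%}")
```

The parameter count drops because A and B together hold r·(d_in + d_out) values instead of d_in·d_out; with r = 8 here, that is about 3% of the full matrix, and the ratio shrinks further as the layers grow.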
Example
An artist wants Stable Diffusion to generate images in their unique painting style. Instead of retraining the entire 1-billion-parameter model (which would require expensive GPUs), they use LoRA to train a 4 MB adapter file on 50 example paintings. The adapter plugs into the base model at inference time, producing images in the artist's style.
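"Plugging in" an adapter can be sketched as merging the low-rank product into the frozen weight once at load time, so inference runs at full speed with no extra matrix multiplies. This is a hedged numpy illustration with made-up shapes, not Stable Diffusion's actual loading code.

```python
import numpy as np

# Sketch of merging a LoRA adapter into a base weight at load time.
rng = np.random.default_rng(1)
d, r, alpha = 64, 4, 8

W = rng.standard_normal((d, d))   # base model weight (unchanged on disk)
A = rng.standard_normal((r, d))   # adapter matrices shipped in the small file
B = rng.standard_normal((d, r))

# One-off merge: afterwards the adapted layer is a single ordinary matrix.
W_merged = W + (alpha / r) * (B @ A)

x = rng.standard_normal(d)
# Merged weight produces the same output as running the two paths separately.
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * (B @ (A @ x)))
```

Because the merge is additive, it can also be undone (subtract the same product), which is why one base model can host many interchangeable style adapters.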