Prompt Tuning

Prompt tuning is a parameter-efficient technique that adapts a large language model to specific tasks by training small learnable vectors, called "soft prompts," that are prepended to the input. Unlike full fine-tuning, which updates billions of model weights, prompt tuning keeps the entire model frozen and optimizes only these compact vectors, often less than 0.1% of the model's total parameters.
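The mechanics can be sketched in a few lines. This is a minimal conceptual illustration in plain Python, not a training loop: the vocabulary size, embedding dimension, and prompt length are hypothetical toy values chosen so the code runs instantly (real models are orders of magnitude larger, which is what pushes the trainable fraction below 0.1%).

```python
import random

# Hypothetical toy dimensions for illustration only:
# a 1,000-token vocabulary, 64-d embeddings, 8 soft-prompt vectors.
VOCAB_SIZE, EMBED_DIM, PROMPT_LEN = 1_000, 64, 8

random.seed(0)

# Frozen embedding table: stands in for the pretrained model's weights,
# which prompt tuning never updates.
frozen_embeddings = [[random.gauss(0, 0.02) for _ in range(EMBED_DIM)]
                     for _ in range(VOCAB_SIZE)]

# Soft prompt: the ONLY trainable parameters in this setup.
soft_prompt = [[random.gauss(0, 0.02) for _ in range(EMBED_DIM)]
               for _ in range(PROMPT_LEN)]

def embed_with_soft_prompt(token_ids):
    """Prepend the learnable soft-prompt vectors to the token embeddings."""
    token_vecs = [frozen_embeddings[t] for t in token_ids]
    return soft_prompt + token_vecs

seq = embed_with_soft_prompt([1, 42, 7])
print(len(seq))  # 11: 8 soft-prompt vectors + 3 token embeddings

trainable = PROMPT_LEN * EMBED_DIM
total = trainable + VOCAB_SIZE * EMBED_DIM
print(f"trainable fraction: {trainable / total:.2%}")  # ~0.79% at toy scale
```

During training, gradients flow back through the frozen model into `soft_prompt` alone; frameworks such as PyTorch express this by setting `requires_grad=False` on everything except the prompt vectors.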

Example

A company wants its LLM to excel at classifying customer feedback into categories. Instead of fine-tuning the entire model, it trains a soft prompt (a small set of numerical vectors) that guides the frozen model to classify feedback accurately. The soft prompt file is only a few kilobytes, compared to gigabytes for a fully fine-tuned model.
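The kilobytes-versus-gigabytes claim is easy to check with back-of-the-envelope arithmetic. The numbers below are hypothetical, assuming a 3B-parameter model and a 10-vector soft prompt with 768-dimensional embeddings, both stored as 16-bit floats:

```python
# Size comparison with hypothetical numbers: a 3B-parameter model vs. a
# 10-vector soft prompt with 768-d embeddings, stored as 16-bit floats.
BYTES_PER_PARAM = 2
model_params = 3_000_000_000
soft_prompt_params = 10 * 768

model_gb = model_params * BYTES_PER_PARAM / 1e9
prompt_kb = soft_prompt_params * BYTES_PER_PARAM / 1e3

print(f"full fine-tuned checkpoint: ~{model_gb:.0f} GB")  # ~6 GB
print(f"soft prompt file: ~{prompt_kb:.1f} KB")           # ~15.4 KB
```

Because the artifact is so small, a single frozen model can serve many tasks at once, with one tiny soft prompt stored and swapped in per task.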
