Logits

Logits are the raw, unnormalized numerical scores that a language model assigns to each token in its vocabulary as the potential next token. Before being converted into probabilities through a softmax function, logits represent the model's relative confidence in each option. Accessing logits directly enables advanced techniques like constrained decoding, custom sampling strategies, and classifier-free guidance.
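The conversion from logits to probabilities can be sketched with a minimal softmax implementation (a standard numerically stable version; the example values are hypothetical):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    # Each probability is that token's share of the total exponentiated mass.
    return [e / total for e in exps]

# Hypothetical logits for four candidate next tokens.
probs = softmax([12.4, 5.1, 3.8, 2.1])
```

Note that softmax depends only on the differences between logits, not their absolute values, which is why shifting all logits by a constant leaves the probabilities unchanged.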

Example

When generating the next word after "The capital of France is", the model produces logits such as: "Paris" → 12.4, "Lyon" → 5.1, "the" → 3.8, "Berlin" → 2.1. After softmax over the full vocabulary, "Paris" receives roughly 95% probability. Sampling settings then reshape the outcome: temperature rescales the logits before softmax, while top-p truncates the resulting probability distribution before the final token is selected.
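Temperature's effect on the logits can be sketched as follows (an illustrative example over the four hypothetical candidates above, not a full vocabulary, so the exact probabilities differ from a real model's):

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def apply_temperature(logits, temperature):
    # Temperature divides the logits before softmax:
    # T > 1 flattens the distribution, T < 1 sharpens it.
    return [x / temperature for x in logits]

logits = [12.4, 5.1, 3.8, 2.1]  # hypothetical values from the example
sharp = softmax(apply_temperature(logits, 0.5))  # more peaked on "Paris"
flat = softmax(apply_temperature(logits, 2.0))   # probability mass spreads out
```

Because temperature acts on the logits rather than the probabilities, it changes the relative gaps between candidates before any top-p or top-k filtering is applied.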
