Essential terms and concepts every prompt engineer should know. Browse 25 key definitions with examples and practical tips.
Chain of thought prompting is a technique that encourages an AI model to break down complex reasoning into sequential, intermediate steps before arriving at a final answer.
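A minimal sketch of a chain-of-thought prompt: the trailing cue asks the model to show its intermediate reasoning before the final answer (the question text here is purely illustrative).

```python
# Build a chain-of-thought prompt: the closing instruction nudges the
# model to reason step by step instead of answering immediately.
question = "A train travels 60 km in 45 minutes. What is its speed in km/h?"
prompt = (
    f"Question: {question}\n"
    "Let's think step by step, then state the final answer on its own line."
)
print(prompt)
```

The explicit "step by step" cue is the core of the technique; without it, models often jump straight to an answer on multi-step problems.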
A context window is the maximum amount of text (measured in tokens) that an AI model can process in a single interaction, including both the input prompt and the generated output.
Few-shot prompting is a technique where you provide the AI model with a small number of examples (typically 2-5) within the prompt to demonstrate the desired format, style, or reasoning pattern.
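A minimal few-shot sketch, assuming a simple sentiment-labeling task: two worked examples demonstrate the expected label format before the new input is appended.

```python
# Two labeled examples establish the Review/Sentiment pattern;
# the final unlabeled review invites the model to complete it.
examples = [
    ("The battery died after one day.", "negative"),
    ("Setup took under a minute.", "positive"),
]
shots = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
prompt = f"{shots}\nReview: The screen is bright and sharp.\nSentiment:"
print(prompt)
```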
Fine-tuning is the process of further training a pre-trained AI model on a specific dataset to specialize its behavior for particular tasks or domains.
Grounding is the practice of anchoring AI responses to specific, verifiable sources of information such as documents, databases, or real-time data.
A hallucination occurs when an AI model generates information that sounds plausible but is factually incorrect, fabricated, or unsupported by its training data.
In-context learning is the ability of a large language model to learn and adapt its behavior based on examples or instructions provided directly within the prompt, without any changes to the model's underlying weights.
Instruction tuning is a training technique where a pre-trained language model is further trained on a curated dataset of instruction-response pairs to improve its ability to follow natural language instructions.
A large language model (LLM) is an AI system trained on massive amounts of text data that can understand, generate, and reason about natural language.
Multi-modal AI refers to artificial intelligence systems that can process and generate content across multiple types of data — such as text, images, audio, and video — within a single model.
Negative prompting is a technique where you explicitly tell the AI model what to avoid, exclude, or not do in its response.
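A short illustrative example of a negative prompt: the second line names exactly what the response must exclude.

```python
# The "Do not" clause carries the negative constraint.
prompt = (
    "Summarize the attached release notes in three bullet points.\n"
    "Do not include version numbers, internal ticket IDs, or marketing language."
)
print(prompt)
```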
Persona prompting is a technique where you ask the AI to adopt a specific identity, personality, or character to shape the tone, vocabulary, and perspective of its responses.
Prompt chaining is a strategy where you break a complex task into a sequence of simpler prompts, feeding the output of one step as input to the next.
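The flow above can be sketched as two chained calls. Here `ask` is a hypothetical stand-in for any LLM API call, stubbed so the example runs; the key point is that step 2's prompt embeds step 1's output.

```python
# `ask` is a stub standing in for a real model call.
def ask(prompt: str) -> str:
    return f"<model output for: {prompt[:40]}...>"

article = "Example article text goes here."
# Step 1: extract structured claims from the source text.
outline = ask(f"Extract the key claims from this article:\n{article}")
# Step 2: feed step 1's output into the next prompt.
summary = ask(f"Write a one-paragraph summary based on these claims:\n{outline}")
```

Splitting the task this way lets you inspect and correct the intermediate output before it feeds the next step.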
Prompt engineering is the practice of designing, refining, and optimizing the text inputs (prompts) given to AI models to elicit the most useful, accurate, and relevant outputs.
Prompt injection is a security vulnerability where a malicious user crafts input that overrides or manipulates the AI model's original instructions, causing it to ignore its guidelines or perform unintended actions.
A prompt template is a reusable, pre-structured prompt with placeholder variables that can be filled in with specific details for each use.
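A minimal template sketch using Python format placeholders; the variable names (`role`, `audience`, `word_limit`, `text`) are illustrative, not a standard.

```python
# Reusable template with placeholder variables filled per use.
TEMPLATE = (
    "You are a {role}. Rewrite the following text for a {audience} audience, "
    "keeping it under {word_limit} words:\n{text}"
)
prompt = TEMPLATE.format(
    role="technical editor",
    audience="general",
    word_limit=100,
    text="Quantum entanglement links the states of two particles.",
)
print(prompt)
```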
Retrieval-augmented generation (RAG) is an architecture that enhances AI model responses by first retrieving relevant information from an external knowledge base and then including that information in the prompt for the model to reference.
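A toy RAG sketch: retrieval here is naive keyword overlap over an in-memory list (real systems typically use vector search), but the retrieve-then-prompt shape is the same.

```python
import re

# Tiny in-memory "knowledge base" for illustration.
docs = [
    "The refund window is 30 days from delivery.",
    "Shipping to EU countries takes 3-5 business days.",
]

def retrieve(query: str) -> str:
    """Return the document sharing the most words with the query."""
    words = set(re.findall(r"\w+", query.lower()))
    return max(docs, key=lambda d: len(words & set(re.findall(r"\w+", d.lower()))))

question = "How long is the refund window?"
context = retrieve(question)
# Ground the model's answer in the retrieved passage.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```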
Role prompting is a technique where you assign the AI model a specific professional role or area of expertise to shape the depth, vocabulary, and perspective of its responses.
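A one-line illustrative example of role prompting; the role chosen here is arbitrary.

```python
# The opening "You are a..." clause assigns the expert role.
prompt = (
    "You are a board-certified dermatologist.\n"
    "Explain, in plain language, the difference between eczema and psoriasis."
)
print(prompt)
```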
Self-consistency is a prompting strategy where you generate multiple responses to the same question using chain-of-thought reasoning, then select the most common answer among them.
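The selection step can be sketched as a majority vote. The `samples` list stands in for final answers drawn from several chain-of-thought runs at temperature > 0.

```python
from collections import Counter

# Hypothetical final answers from five independent reasoning samples.
samples = ["42", "42", "41", "42", "40"]
# Keep the most common answer across samples.
final = Counter(samples).most_common(1)[0][0]
print(final)  # prints 42
```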
A system prompt is a special set of instructions provided to an AI model before the user's message that defines the model's behavior, personality, constraints, and response format for the entire conversation.
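In chat-style APIs this usually takes the shape of a messages list, where the system message is set once and applies to every turn. The field names below follow the widely used role/content messages format.

```python
# The system message comes first and governs the whole conversation.
messages = [
    {
        "role": "system",
        "content": "You are a concise support agent. Answer in two sentences or fewer.",
    },
    {"role": "user", "content": "How do I reset my password?"},
]
```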
Temperature is a parameter that controls the randomness of an AI model's output: values near 0 make responses focused and deterministic, while higher values produce more varied, creative text.
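Mechanically, temperature divides the model's logits before the softmax, so low values sharpen the next-token distribution and high values flatten it. A minimal sketch:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax(logits, temperature=0.2)  # mass concentrates on the top token
hot = softmax(logits, temperature=2.0)   # distribution is much flatter
```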
A token is the basic unit of text that AI models use to process and generate language — roughly a word or word fragment. Common English words are often a single token, while longer or rarer words are split into several.
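For quick budgeting, a common rule of thumb for English text is roughly 4 characters per token; exact counts require the model's own tokenizer.

```python
# Rough heuristic only (~4 characters per token for English text);
# real counts come from the model's tokenizer.
text = "Prompt engineering is the practice of designing effective prompts."
approx_tokens = max(1, len(text) // 4)
```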
Top-P, also known as nucleus sampling, is a parameter that restricts generation to the smallest set of candidate tokens whose cumulative probability reaches P, so the model samples each token only from that "nucleus" of likely options.
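The filtering step can be sketched as follows: keep the highest-probability tokens until their cumulative probability reaches P, then renormalize over the kept set. The token probabilities below are illustrative.

```python
def nucleus(probs: dict, p: float) -> dict:
    """Keep the smallest top-probability set reaching p, then renormalize."""
    kept, total = {}, 0.0
    for token, prob in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[token] = prob
        total += prob
        if total >= p:
            break
    return {t: pr / total for t, pr in kept.items()}

probs = {"the": 0.5, "a": 0.3, "this": 0.15, "qux": 0.05}
filtered = nucleus(probs, p=0.9)  # keeps "the", "a", "this"; drops "qux"
```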
Tree of thought prompting is an advanced reasoning technique where the AI model explores multiple branching solution paths simultaneously, evaluates each branch, and backtracks from dead ends before selecting the best path to the answer.
Zero-shot prompting is the simplest prompting approach where you give the AI model a task instruction without providing any examples.
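For contrast with the few-shot example above, a zero-shot prompt states the task directly, with no worked examples.

```python
# Instruction only — no demonstrations of the expected output.
prompt = (
    "Classify the sentiment of this review as positive, negative, or neutral:\n"
    '"The screen is bright but the battery barely lasts a day."'
)
print(prompt)
```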