In-context learning is the ability of a large language model to adapt its behavior based on examples or instructions provided directly within the prompt, without any change to the model's underlying weights. This lets users teach the model a new task on the fly simply by demonstrating the desired behavior in the input.
For example, you show the model a pattern: "EN: Hello → FR: Bonjour. EN: Thank you → FR: Merci. EN: Good morning → FR:" and the model responds with "Bonjour", having inferred the translation task from the two demonstrations alone.
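The prompt format above can be sketched in code. This is a minimal illustration of assembling a few-shot prompt; the example pairs mirror the ones in the text, and any actual call to a language-model API is omitted, since the point is only the prompt structure the model completes.

```python
# Few-shot in-context learning: demonstrations followed by an
# unanswered query, which the model is expected to complete.
FEW_SHOT_EXAMPLES = [
    ("Hello", "Bonjour"),
    ("Thank you", "Merci"),
]

def build_prompt(query: str) -> str:
    """Concatenate labeled demonstrations, then the query to translate."""
    lines = [f"EN: {en} → FR: {fr}" for en, fr in FEW_SHOT_EXAMPLES]
    # The prompt deliberately ends after "FR:" so the model's
    # continuation is the translation itself.
    lines.append(f"EN: {query} → FR:")
    return "\n".join(lines)

print(build_prompt("Good morning"))
```

Running this prints the three-line prompt ending in "EN: Good morning → FR:"; sent to an LLM, the likely continuation is "Bonjour", learned from the in-prompt examples with no weight updates.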