Catastrophic Forgetting
Catastrophic forgetting is a phenomenon in which a neural network rapidly loses previously learned knowledge when it is trained on new data or tasks. Unlike humans, who can learn new skills while retaining old ones, neural networks tend to overwrite earlier patterns because gradient updates on the new data shift the same internal weights that encoded the old knowledge. This is a core challenge in building AI systems that can learn continuously over time, and techniques such as LoRA and replay-based methods aim to mitigate it.
Example
A language model fine-tuned on medical data becomes excellent at answering healthcare questions but suddenly performs poorly on the general knowledge tasks it previously handled well. The medical fine-tuning overwrote the weights responsible for general knowledge — the model "forgot" what it knew in order to learn the new domain.
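The mechanism can be sketched with a deliberately tiny model: a single-parameter regressor pre-trained on an old task, then fine-tuned on a new one. Fine-tuning on the new task alone overwrites the old solution, while mixing replayed samples from the old task into the fine-tuning data (a minimal version of the replay-based methods mentioned above) keeps the parameter closer to both tasks. All task definitions, sample counts, and the learning rate here are illustrative choices, not a fixed recipe.

```python
import random

def train(w, samples, lr=0.1):
    # One SGD pass over (x, y) pairs; the model is y_hat = w * x
    # with squared-error loss, so the gradient is 2 * (w*x - y) * x.
    for x, y in samples:
        w -= lr * 2 * (w * x - y) * x
    return w

def task_data(slope, n=200):
    # Toy "task": learn y = slope * x from sampled inputs.
    return [(x, slope * x) for x in (random.uniform(0.5, 1.5) for _ in range(n))]

def error_on(w, samples):
    # Mean squared error of the current weight on a task's data.
    return sum((w * x - y) ** 2 for x, y in samples) / len(samples)

random.seed(0)
task_a = task_data(2.0)  # old task: y = 2x (stands in for "general knowledge")
task_b = task_data(3.0)  # new task: y = 3x (stands in for "medical fine-tuning")

# Pre-train on the old task.
w = train(0.0, task_a)

# Naive fine-tuning on the new task alone: the old solution is overwritten.
w_naive = train(w, task_b)

# Replay-based fine-tuning: interleave stored old-task samples with new data.
replay_buffer = random.sample(task_a, 50)
mixed = task_b + replay_buffer
random.shuffle(mixed)
w_replay = train(w, mixed)

print("old-task error, naive fine-tune:", error_on(w_naive, task_a))
print("old-task error, with replay:   ", error_on(w_replay, task_a))
```

With one shared parameter the replayed variant can only settle on a compromise between the two slopes, but that is exactly the point: the replay buffer keeps old-task gradients in the loss, so the old task degrades far less than under naive sequential fine-tuning.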