
Fine-tuning related (Catastrophic forgetting due to continual instruction fine-tuning)

Risk Domain

AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.

"Catastrophic forgetting occurs when a model loses its ability to retain previously learned tasks (or factual information) after being trained on new ones. In language models, this can occur due to continual instruction tuning. This tendency may become more pronounced as the model’s size increases [127]."
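The phenomenon described above can be illustrated with a toy sketch (an illustration assumed here, not taken from the cited source): a single-parameter linear model is fit to task A, then trained only on task B, after which its error on task A rises sharply because the new training overwrites the previously learned weight.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true, n=100):
    # toy regression task: y = w_true * x
    x = rng.normal(size=(n, 1))
    return x, x * w_true

def train(w, x, y, lr=0.1, steps=200):
    # plain gradient descent on mean squared error
    for _ in range(steps):
        grad = 2 * np.mean(x * (x * w - y))
        w = w - lr * grad
    return w

def mse(w, x, y):
    return float(np.mean((x * w - y) ** 2))

# Task A: y = 2x; Task B: y = -x (hypothetical tasks for illustration)
xa, ya = make_task(2.0)
xb, yb = make_task(-1.0)

w = 0.0
w = train(w, xa, ya)
loss_a_before = mse(w, xa, ya)   # low: model has learned task A

w = train(w, xb, yb)             # continual training on task B only
loss_a_after = mse(w, xa, ya)    # task A error rises: forgetting

print(loss_a_before, loss_a_after)
```

Real language models forget more gradually because their many parameters can partially accommodate both tasks, but the mechanism sketched here (later gradients overwriting earlier solutions, with no rehearsal of old data) is the same one continual instruction tuning triggers at scale.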
