
Fine-tuning related (Unexpected competence in fine-tuned versions of the upstream model)

Sub-category
Risk Domain

AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may cause mass harm due to malicious human actors, misaligned AI systems, or failure in the AI system.

"Downstream deployers may often fine-tune a GPAI model with specific deployment-related datasets, to better suit the task. Fine-tuned upstream models can gain new or unexpected capabilities that the underlying upstream models did not exhibit [202, 126, 137]. These new capabilities may be unanticipated by the original model developer." (p. 14)
