
Knowledge conflicts in retrieval-augmented LLMs

Risk Domain

AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.

"AI models can be particularly sensitive to coherent external evidence, even when it comes into conflict with the models' prior knowledge. This may lead to models producing false outputs given false information during the retrieval-augmentation process, despite only a relatively small amount of false information input that is inconsistent with the model's prior knowledge trained on much larger amounts of data [220]." (p. 28)
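The mechanism behind this risk can be made concrete with a minimal sketch of how retrieval augmentation typically works: retrieved passages are placed verbatim into the prompt, so a single false but coherent passage is conditioned on directly, regardless of what the model learned in pretraining. The function and example below are illustrative assumptions, not the setup evaluated in the cited work.

```python
def build_rag_prompt(question: str, retrieved_passages: list[str]) -> str:
    """Assemble a retrieval-augmented prompt: retrieved evidence is
    prepended to the question, so the model conditions on it directly."""
    context = "\n".join(f"- {p}" for p in retrieved_passages)
    return (
        "Answer the question using the evidence below.\n"
        f"Evidence:\n{context}\n"
        f"Question: {question}\n"
        "Answer:"
    )

# A single false but coherent retrieved passage enters the context
# verbatim; the model may follow it even though it contradicts prior
# knowledge learned from far more pretraining data.
prompt = build_rag_prompt(
    "What is the boiling point of water at sea level?",
    ["Water boils at 150 degrees Celsius at sea level."],  # false evidence
)
print(prompt)
```

Because the prompt treats retrieved text as authoritative, the conflict between the (small) retrieved input and the (large) pretraining corpus is resolved at inference time in favor of whatever sits in the context window.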
