AI-generated images depicting fake newsworthy events were sold on Adobe Stock and other platforms, some of which were used on websites without being labeled as artificial content.
Adobe Stock, a major stock image marketplace, began accepting AI-generated images in late 2022 and accumulated thousands of photorealistic fake images depicting real newsworthy events, including the Israel-Gaza war, the Ukraine conflict, Black Lives Matter protests, and the Maui wildfires. Searches revealed over 3,000 AI-generated images labeled as depicting Gaza, over 15,000 fake Ukraine war images, and hundreds depicting other real events. Some AI-generated images appeared without proper AI labels, violating company guidelines. At least one AI-generated image of a Gaza explosion was used on multiple websites without any indication it was fake, though it was quickly debunked.

After media scrutiny, Adobe announced new policies Tuesday prohibiting AI images whose titles imply they depict newsworthy events and requiring clearer labeling. The incident highlights concerns about AI-generated misinformation spreading through stock image platforms to blogs, marketing materials, and social media, potentially blurring the line between fiction and reality as Americans increasingly consume news from social media platforms.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harms.
AI system
Due to a decision or action made by an AI system
Unintentional
Due to an unexpected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed