Generative AI is trained on vast bodies of internet data, including text and images. This data frequently contains original, copyright-protected works obtained without authorization. This presents a risk to authors if users can extract these works verbatim from the system. Relatedly, models may produce content that does not, strictly speaking, unlawfully copy an author's work but benefits substantially from its distinctive style, method, or genre. If models can produce synthetic replacements for such work at a speed and scale that surpass human output, they may jeopardize creators' ability to earn an income and stymie human innovation and creativity.
Several related risks arise from the widespread dissemination and use of AI-generated cultural products. Because AIs optimize for repeated patterns in their training data, their outputs may lack the diversity and unpredictability often celebrated in human works. Where synthetic works are adopted on a large enough scale, this could homogenize cultural experiences. Similarly, AIs do not understand the contextual significance of the cultural elements they use. If AI enables the extensive commodification of certain products, it may strip those products of their cultural value.
Excerpt from the MIT AI Risk Repository full report
AI systems capable of creating economic or cultural value, including through reproduction of human innovation or creativity (e.g., art, music, writing, coding, invention), thereby destabilizing economic and social systems that rely on human effort. The ubiquity of AI-generated content may lead to reduced appreciation for human skills, disruption of creative and knowledge-based industries, and homogenization of cultural experiences.
Figure: incident volume relative to governance coverage; each dot is one of 24 subdomains.
Entity: who or what caused the harm
Intent: whether the harm was intentional or accidental
Timing: whether the risk arises pre- or post-deployment
ByteDance released Seedance 2.0, an AI video generation tool that created realistic deepfake videos of celebrities like Tom Cruise and Brad Pitt without authorization, prompting widespread condemnation from Hollywood organizations and legal action for copyright infringement and unauthorized use of likenesses.
Developers: ByteDance
Deployers: ByteDance, Ruairi Robinson
The New York Times sued Perplexity AI for repeatedly violating its copyrights by using the publication's content without permission to generate responses and compete with The Times' offerings.
Developers: Perplexity AI
Deployers: Perplexity AI
Two books by distinguished New Zealand authors were disqualified from the 2026 Ockham New Zealand Book Awards because their covers were created using AI-generated artwork, violating the competition's new AI regulations.
Developers: Unknown AI Image Generator Developer
Deployers: Sugarcube Studios
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, assist identity theft, or cause loss of confidential intellectual property.
33 shared governance docs
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional, or material harm.
28 shared governance docs
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
27 shared governance docs
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and inappropriate relationships with or expectations of AI systems. Trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or result in harm from inappropriate use of AI in critical situations (e.g., medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.
26 shared governance docs
Guides AI developers and users in California on compliance with existing laws governing consumer protections, data protections, civil rights protections, and competition, as well as with new AI-specific laws.
Requires the Superintendent to convene a working group to develop guidance and a model policy on AI's safe use in education by 2026. Mandates reporting findings by 2027 and repeals provisions by 2031.
Amends California Civil Code to require consent for using deceased personalities' likenesses in AI-generated replicas. Exempts certain expressive works. Imposes liability for unauthorized use, with fines up to $10,000. Specifies consent requirements and transferable rights for digital likenesses.