AI systems are increasingly automating many human tasks, potentially leading to significant job losses. If AI can provide large-scale labor that is less expensive and more effective than human labor, it could take over major industries (e.g., manufacturing, crowdwork platforms, software engineering), causing mass unemployment. This displacement of labor could worsen existing social and economic inequalities, as those most vulnerable to automation tend to already occupy positions of disadvantage. New disparities may also arise between those who can adapt their skills to complement AI systems and those who cannot.
Aside from the availability of jobs, AI automation may negatively impact job quality and security. The roles that remain after widespread automation could become more monotonous and less engaging as AI takes on the more complex tasks. Furthermore, the threat of replacement by AI could create exploitative dependencies between human workers and their employers: to remain competitive with faster, more knowledgeable AI assistants, workers may be pressured to accept lower wages, fewer benefits, and poorer working conditions. Generative AI companies also have a history of exploiting workers treated as dispensable (e.g., refugees, prisoners, low-income individuals) for crowdwork under precarious and unfair conditions.
Excerpt from the MIT AI Risk Repository full report
Social and economic inequalities caused by widespread use of AI, such as by automating jobs, reducing the quality of employment, or producing exploitative dependencies between workers and their employers.
Incident volume relative to governance coverage — each dot is one of 24 subdomains
Entity: Who or what caused the harm
Intent: Whether the harm was intentional or accidental
Timing: Whether the risk arises pre- or post-deployment
A Polish radio station replaced human journalists with AI-generated presenters and conducted fake interviews with deceased cultural figures, including Nobel Prize winner Wisława Szymborska, sparking public outrage before ending the experiment.
Developers: OpenAI, ElevenLabs, Leonardo AI
Deployers: OFF Radio Kraków, Mariusz Marcin Pulit
AI chatbots used in hiring processes at multiple fast food restaurants created barriers to employment by failing to properly schedule interviews, miscommunicating applicant availability, and creating confusing multi-step application processes.
Developers: Wendy's, McDonald's, Hardee's
Deployers: Wendy's, McDonald's, Hardee's
OpenAI used outsourced Kenyan workers earning less than $2 per hour to label extremely disturbing content including child sexual abuse, violence, and torture to build safety filters for ChatGPT, causing severe psychological trauma to workers.
Developers: OpenAI
Deployers: OpenAI
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can violate users' expectations of privacy, facilitate identity theft, or cause loss of confidential intellectual property.
73 shared governance docs
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
69 shared governance docs
AI systems that fail to perform reliably or effectively under varying conditions, making them prone to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
64 shared governance docs
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
62 shared governance docs
Encourages AI innovation by removing regulations, revising funding based on states' AI climate, and reviewing FTC actions. Promotes free speech in AI systems, revises procurement guidelines, and evaluates international AI models. Supports open-source AI use, workforce retraining, and safeguards against deepfakes. Advances AI infrastructure development, cybersecurity, international diplomacy, and semiconductor manufacturing. Prioritizes AI R&D, interpretability, evaluations, national security assessments, and biosecurity measures.
Establishes a Task Force to implement AI education policy, promoting AI literacy and training for educators. Launches a Presidential AI Challenge and prioritizes AI in K-12 instruction through partnerships. Enhances teacher training and apprenticeships related to AI across various sectors.
Guides AI developers and users in California on compliance with existing laws governing consumer protections, data protections, civil rights protections, and competition, as well as with new AI-specific laws.