Students at multiple universities discovered that their professors were using ChatGPT to generate course materials, grade assignments, and provide feedback without properly disclosing this use, prompting formal complaints about academic dishonesty and concerns about educational quality.
Multiple incidents occurred at universities where professors used ChatGPT and other AI tools in ways that violated institutional policies or student expectations.

At Northeastern University, student Ella Stapleton discovered that her business professor had used ChatGPT to generate lecture notes and slides; visible prompts like 'expand on all areas. Be more detailed and specific' remained in the documents, which also contained AI-generated artifacts such as distorted text and misspellings. Stapleton filed a formal complaint requesting over $8,000 in tuition reimbursement, citing undisclosed AI use and a violation of the course syllabus, which prohibited unauthorized use of AI. Her complaint was ultimately denied.

At Southern New Hampshire University, a student named Marie received an A on an essay but discovered that her professor had accidentally posted a ChatGPT conversation showing the AI had been used to grade her work and generate the feedback; the professor had asked ChatGPT to provide 'really nice feedback' for Marie using a specific grading rubric. Marie confronted her professor and later transferred universities after a second professor also appeared to use ChatGPT for grading.

A 2024 survey found that 18% of higher-education instructors described themselves as frequent AI users, nearly double the share from the previous year. The incidents highlight growing tensions between students' expectations of human instruction and professors' increasing reliance on AI tools for efficiency.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and inappropriate relationships with or expectations of AI systems. Trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or result in harm from inappropriate use of AI in critical situations (e.g., medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.
Human
Due to a decision or action made by humans
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed