Houston Independent School District used a proprietary AI algorithm called EVAAS to evaluate teacher performance based on student test scores, but teachers could not verify the accuracy of their ratings due to the algorithm's trade secret status, leading to a federal lawsuit alleging due process violations.
Houston Independent School District (HISD), the seventh-largest school district in the United States with over 215,000 students and 283 schools, implemented the SAS Institute's Educational Value-Added Assessment System (EVAAS) in 2012 under a license agreement signed in 2011. The system used a proprietary algorithm to estimate each teacher's impact by comparing their students' test results to statewide averages for students in the same grade or course. Shortly after implementation, HISD announced a goal of firing 85 percent of teachers rated ineffective by the system.

The Houston Federation of Teachers Local 2415 and six teachers filed a federal lawsuit in April 2014, arguing that the system violated their Fourteenth Amendment procedural due process rights: the SAS Institute deemed its algorithms trade secrets and refused to share them with HISD or with teachers, which prevented teachers from learning whether errors in the program had lowered their scores and from challenging inaccurate evaluations. U.S. Magistrate Judge Stephen Smith ruled in May 2017 that the procedural due process claims could proceed, noting that algorithms are subject to error and that teachers deserved the opportunity to independently test the accuracy of their evaluations. HISD terminated its contract with SAS in 2016 but did not replace the system; the district indicated it was investigating options for future value-added modeling.
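The entry above describes the mechanism only at a high level: EVAAS compared a teacher's students' test results against statewide averages for the same grade or course. The actual EVAAS algorithm is a trade secret and is not reproduced here; the following is a purely hypothetical sketch of what a naive value-added comparison of this kind might look like (all function names, averages, and scores are invented for illustration):

```python
from statistics import mean

# Hypothetical statewide average test scores by grade (invented numbers,
# not actual Texas data).
STATEWIDE_AVG = {4: 70.0, 5: 72.0}

def value_added_score(student_results):
    """Naive value-added score for one teacher: the mean difference
    between each student's score and the statewide average for that
    student's grade. `student_results` is a list of (grade, score) pairs.

    This is a toy model; the proprietary EVAAS system is far more
    complex and its internals were never disclosed.
    """
    gains = [score - STATEWIDE_AVG[grade] for grade, score in student_results]
    return mean(gains)

# Students scoring above the statewide averages yield a positive score;
# below-average results yield a negative one.
score = value_added_score([(4, 75.0), (4, 68.0), (5, 80.0)])
print(round(score, 2))
```

Even in this toy form, the model shows why independent verification mattered to the plaintiffs: a single wrong input (a misattributed student, a mistyped score, or a stale statewide average) silently shifts the final rating, and without access to the calculation a teacher cannot trace where an error entered.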
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and an inability to identify and correct errors.
AI system: due to a decision or action made by an AI system
Intentional: due to an expected outcome from pursuing a goal
Post-deployment: occurring after the AI model has been trained and deployed