In 2018, Microsoft developed an AI algorithm called the Technology Platform for Social Intervention that predicted which low-income girls and women in Salta, Argentina, would become pregnant as teenagers. Using demographic data including ethnicity, disability status, and household conditions, the system identified individuals as '86 percent predestined' for adolescent pregnancy.
In 2018, Microsoft partnered with the Ministry of Early Childhood in the Argentine province of Salta to develop an AI system called the Technology Platform for Social Intervention that predicted teenage pregnancy. The system analyzed data from 200,000 residents of Salta, including 12,000 women and girls aged 10-19, using demographic variables such as age, ethnicity, country of origin, disability status, and household conditions like access to hot water. Governor Juan Manuel Urtubey announced on national television that the technology could predict 'five or six years in advance, with first name, last name, and address, which girl—future teenager—is 86 percent predestined to have an adolescent pregnancy.'

The system deployed 'territorial agents' who visited the homes of identified individuals, conducted surveys, took photographs, and recorded GPS locations. The targets were predominantly poor, migrant families from Bolivia and other South American countries, and Indigenous communities including the Wichí, Qulla, and Guaraní peoples. Microsoft described this as 'one of the pioneering cases in the use of AI data' in state programs. The system was deployed during Argentina's national abortion debate and was supported by the anti-abortion Conin Foundation led by Dr. Abel Albino.

No formal assessment of the system's impact was conducted, and technical reviews by the Applied Artificial Intelligence Laboratory at the University of Buenos Aires found serious methodological errors, including artificially inflated accuracy claims of 98.2 percent. Feminist activists and academics successfully challenged the program through media campaigns and technical critiques, highlighting its violation of women's rights and lack of transparency.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes for and unfair representation of those groups.
AI system
Due to a decision or action made by an AI system
Intentional
Due to an expected outcome from pursuing a goal
Post-deployment
Occurring after the AI model has been trained and deployed