A team of personal assistant software agents called Electric Elves were deployed in an office environment for seven months, raising unanticipated research issues around privacy, adjustable autonomy, and social norms that the project had to address.
The Electric Elves (EElves) project deployed a team of almost a dozen personal assistant agents at the Information Sciences Institute at the University of Southern California from June 2000 to December 2000. Each agent acted as an assistant to one person and aided in daily activities in an actual office environment. The project was originally designed to focus on team coordination among software agents, but several unanticipated research issues emerged during deployment. The report specifically mentions that several things 'went wrong' during the project, including issues with privacy, adjustable autonomy (agents dynamically adjusting their own level of autonomy), and social norms in office environments. The report does not detail the specific problems or their impacts, but indicates they were significant enough to shift the research focus and to inspire continued research addressing the concerns raised.
Domain classification, causal taxonomy, severity scores, and national security assessments were LLM-classified and may contain errors.
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise user expectation of privacy, assist identity theft, or cause loss of confidential intellectual property.
Entity: AI system — due to a decision or action made by an AI system
Intent: Unintentional — due to an unexpected outcome from pursuing a goal
Timing: Post-deployment — occurring after the AI model has been trained and deployed