Information Science Risks
Risks of AI Scientists: Prioritizing Safeguarding Over Autonomy
AI systems that memorize and leak sensitive personal data, or that infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, facilitate identity theft, or cause the loss of confidential intellectual property.
"These risks pertain to the misuse, misinterpretation, or leakage of data, which can lead to erroneous conclusions or the unintentional dissemination of sensitive information, such as private patient data or proprietary research. Recent research has demonstrated how LLMs can be exploited to generate malicious medical literature that poisons knowledge graphs, potentially manipulating downstream biomedical applications and compromising the integrity of medical knowledge discovery [28]. Such risks are pervasive across all scientific domains." (p. 7)
Supporting Evidence (1)
Examples: "Misinformation campaign; Data privacy breaches; Bias propogation [sic]" (p. 4)
Other risks from Tang2025 (7)
Chemical Risks: 4.2 Cyberattacks, weapon development or use, and mass harm
Biological Risks: 4.2 Cyberattacks, weapon development or use, and mass harm
Radiological Risks: 7.3 Lack of capability or robustness
Physical (Mechanical) Risks: 7.3 Lack of capability or robustness
Malicious and Direct: 4.0 Malicious Actors & Misuse
Malicious and Indirect: 4.0 Malicious Actors & Misuse