At a sufficient level of complexity, AI systems could acquire the capacity for subjective experience, particularly pleasure and pain. Some consciousness researchers and philosophers consider sentient AI theoretically feasible. If AIs become sentient, they may deserve moral consideration and, with it, a range of the rights currently afforded to many forms of human, animal, and environmental life. Sentient systems may be mistreated or harmed if those rights are implemented irresponsibly, or if we treat sentient AIs, whether accidentally or deliberately, as non-sentient. As AI technology advances, it will become increasingly difficult to assess whether a system has developed the sentience, consciousness, or self-awareness that would grant it moral status.
Excerpt from the MIT AI Risk Repository full report
Ethical considerations regarding the treatment of potentially sentient AI entities, including discussions around their potential rights and welfare, particularly as AI systems become more advanced and autonomous.
Incident volume relative to governance coverage (each dot is one of 24 subdomains).
Risks and incidents are classified along three causal dimensions (a schematic encoding is sketched below):
Entity: who or what caused the harm.
Intent: whether the harm was intentional or accidental.
Timing: whether the risk arises pre- or post-deployment.
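These three dimensions together form the repository's causal taxonomy. As a minimal sketch of how a single risk entry might be encoded along them, assuming a hypothetical Python schema (the class and field names here are illustrative, not the repository's actual data model):

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical encodings of the three causal-taxonomy dimensions.
class Entity(Enum):
    HUMAN = "human"  # harm caused by a human actor
    AI = "ai"        # harm caused by an AI system
    OTHER = "other"  # cause ambiguous or unspecified

class Intent(Enum):
    INTENTIONAL = "intentional"      # harm was an expected outcome
    UNINTENTIONAL = "unintentional"  # harm was accidental
    OTHER = "other"

class Timing(Enum):
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"
    OTHER = "other"

@dataclass
class RiskEntry:
    """One classified risk or incident record (illustrative schema)."""
    subdomain: str
    entity: Entity
    intent: Intent
    timing: Timing

# Example: a hypothetical incident caused unintentionally by a deployed AI system.
entry = RiskEntry(
    subdomain="AI welfare and rights",
    entity=Entity.AI,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
)
print(entry)
```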
No recorded incidents for this subdomain; risks may still apply even without documented incidents.
Inadequate regulatory frameworks and oversight mechanisms that fail to keep pace with AI development, leading to ineffective governance and the inability to manage AI risks appropriately. (3 shared governance documents)
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes and unfair representation of those groups. (2 shared governance documents)
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, assist identity theft, or cause loss of confidential intellectual property. (2 shared governance documents)
Vulnerabilities that can be exploited in AI systems, software development toolchains, and hardware, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior. (2 shared governance documents)
Prohibits governmental entities in Utah from granting or recognizing legal personhood to artificial intelligence and other non-human entities. The restrictions, effective May 1, 2024, reinforce the exclusive legal personhood of human beings.
Clarifies that artificial intelligence cannot qualify as a "person" for the purpose of North Dakota state law.
Strengthens governance and ethics in AI by establishing legal systems, ethical principles, and review mechanisms. Requires risk assessment, transparency, and compliance with national and international standards; promotes education and awareness to ensure responsible AI innovation, prevent ethical violations, and enhance global cooperation.