
Loss of control

Capabilities and Risks from Frontier AI

DSIT (2023)

Category
Risk Domain

Humans delegating key decisions to AI systems, or AI systems making decisions that diminish human control and autonomy, potentially leading to humans feeling disempowered, losing the ability to shape a fulfilling life trajectory, or becoming cognitively enfeebled.

"Humans may increasingly hand over control of important decisions to AI systems, due to economic and geopolitical incentives. Some experts are concerned that future advanced AI systems will seek to increase their own influence and reduce human control, with potentially catastrophic consequences - although this is contested."(p. 25)

Supporting Evidence (1)

1.
"There are broadly two factors that could contribute to loss of control: ● Humans increasingly hand over control of important decisions to AIs. It becomes increasingly difficult for humans to take back control. ● AI systems actively seek to increase their own influence and reduce human control."(p. 26)

Sub-categories (10)

Degradation of the information environment

"Frontier AI can cheaply generate realistic content which can falsely portray people and events. There is potential risk of compromised decision-making by individuals and institutions who rely on inaccurate or misleading publicly available information, as well as lower overall trust in true information."

3.2 Pollution of information ecosystem and loss of consensus reality
Other · Other · Other

Labour market disruption

"Economists view disruption and displacement in labour markets as one of the risks through which rapid advances in AI may affect citizens and reduce social welfare.170"

6.2 Increased inequality and decline in employment quality
Other · Other · Other

Dual Use Science risks

"Frontier AI systems have the potential to accelerate advances in the life sciences, from training new scientists to enabling faster scientific workflows. While these capabilities will have tremendous beneficial applications, there is a risk that they can be used for malicious purposes, such as for the development of biological or chemical weapons."

4.2 Cyberattacks, weapon development or use, and mass harm
Human · Intentional · Post-deployment

Cyber

"As the programming abilities of AI systems continue to expand, frontier AI is likely to significantly exacerbate existing cyber risks. Most notably, AI systems can be used by potentially anyone to create faster paced, more effective and larger scale cyber intrusion via tailored phishing methods or replicating malware. Frontier AI’s effect on the overall balance between cyber offence and defence is uncertain, as these tools also have many applications in improving the cybersecurity of systems and defenders are mobilising significant resources to utilise frontier AI for defensive purposes.209 In the future, we may see AI systems both conducting and defending against cyberattacks with reduced human oversight at each step."

4.2 Cyberattacks, weapon development or use, and mass harm
Human · Intentional · Post-deployment

Disinformation and Influence Operations

"In addition to unintentional degradation of the information environment (discussed in the section on Societal Harms above), frontier AI can be misused to deliberately spread false information to create disruption, persuade people on political issues, or cause other forms of harm or damage."

4.1 Disinformation, surveillance, and influence at scale
Human · Intentional · Post-deployment

Humans might increasingly hand over control to misaligned AI systems

"Organisations around the world are already deploying misaligned AI systems that are causing harm in unexpected ways.250 Recommendation algorithms increase the consumption of extremist content.251 Medical algorithms have been known to misdiagnose US patients,252 and recommend incorrect prescriptions.253 Still, we hand over more control to them, often because they are still as - or more - effective than human decision making, or because they are cheaper."

5.2 Loss of human agency and autonomy
Human · Unintentional · Other

Future AI systems might actively reduce human control

"Loss of control could be accelerated if AI systems take actions to increase their own influence and reduce human control. This threat model is controversial - experts in AI significantly disagree on how likely it is and those who deem it is likely disagree on the timeframe."

7.1 AI pursuing its own goals in conflict with human goals or values
AI system · Other · Post-deployment

Capabilities that could be used to reduce human control - Manipulation

"There is evidence that language models tend to respond as though they share the user’s stated views, and larger models do this more than smaller ones.276 The ability to predict people’s views and generate text that they will endorse could be useful for manipulation."

7.2 AI possessing dangerous capabilities
Other · Intentional · Post-deployment

Capabilities that could be used to reduce human control - Cyber offence

"Instead of - or in addition to - manipulating humans, AI systems could acquire influence by exploiting vulnerabilities in computer systems. Offensive cyber capabilities could allow AI systems to gain access to money, computing resources, and critical infrastructure. As discussed earlier in this report, frontier AI is already lowering the barrier for threat actors and future AI agents may be able to execute cyber attacks autonomously."

7.2 AI possessing dangerous capabilities
AI system · Intentional · Post-deployment

Capabilities that could be used to reduce human control - Autonomous replication and adaptation

"Controlling AI systems could become much harder if they could autonomously persist, replicate, and adapt in cyberspace. No current AI systems have this capability, but recent research found that frontier AI agents can perform some relevant tasks.279"

7.2 AI possessing dangerous capabilities
AI system · Other · Other
