
How the UK should prepare for AI risks

To ensure AI truly benefits the UK, we must prepare for the risks it poses

Jess Whittlestone, Jade Leung and Markus Anderljung

As the UK slowly begins to emerge from the tragedy of the Covid-19 pandemic, which has cost tens of thousands of lives and over £300 billion in 2020 alone, we need to ask ourselves an important question - what other extreme risks lie on the horizon? 

Climate change is a clear contender. The threat of a nuclear exchange is another. But what about the risks posed by emerging technologies, and specifically artificial intelligence? 

If anyone you know doubts the astonishing increase in the capability of AI systems in recent years, ask them to watch DeepMind’s AlphaGo machine defeat the world champion in 2016. Or to play around with OpenAI’s new language software, GPT-3, which generates human-like written content in a way that nothing ever has before. 

AI progress is rapid, unprecedented, and irreversible. Its capacity to shape human progress is profound - both for better and for worse. On the risk side of the ledger, what is perhaps most striking is that we do not even need to look at what AI might do in the future to see the extreme risks it poses. The widespread deployment of even our current AI capabilities could lead or contribute to extreme risks. 

The risks posed by today’s AI break down into three broad categories. Firstly, there are misuse risks, which result from using AI in an unethical manner – such as generating fake video content to sow the seeds of political disruption. Secondly, there are accident risks - think self-driving car collisions, or the failure of an energy system into which AI has been integrated. Finally, there are structural risks. Widespread use of AI systems could exacerbate existing inequalities by locking in patterns of historical discrimination, provoking rapid and wide-scale unemployment, or dramatically concentrating power in the hands of a few companies and states.

And then, of course, come the risks posed by the AI of tomorrow. If AI reaches general human-level intelligence in the coming decades, what level of risk will it pose then? Such technology will likely be highly beneficial to humanity in countless ways, but a human-level AI that is not aligned with human objectives and values will clearly constitute an extreme risk.

The UK Government has so far demonstrated laudable proactivity in the field of AI policy and governance, establishing the Office for Artificial Intelligence, the Centre for Data Ethics and Innovation (CDEI) and the AI Council, and becoming a member of the Global Partnership on AI. It is also developing the UK’s first AI strategy.  

These initiatives, and many others, represent an encouraging start. But the UK’s efforts to mitigate AI risks remain incomplete in some areas, and embryonic in others. As we noted in Future Proof, a report launched today by the Centre for Long-Term Resilience and Oxford’s Toby Ord, policymakers need to act now to ensure that AI is developed, used and governed responsibly.

Firstly, the UK Government should establish its own capacity to anticipate and monitor AI progress and its implications for society. As AI capabilities grow, the UK risks falling behind if it does not develop this capability. Such monitoring will inform future AI policy and regulation, helping to manage the societal implications of AI and to mitigate the risks of AI applications that are increasingly widely deployed in critical areas.

Secondly, the Government urgently needs to bring more technical AI expertise into the civil service. As AI systems become more capable, their impacts will grow and become more cross-cutting, increasing the need for technical expertise across the UK Government. Such expertise is currently sorely lacking. It could be built up in a number of ways, including by setting up a TechCongress-equivalent scheme that enables the UK Government to recruit and gain access to expertise in fields like AI governance and ethics. The new Number 10 Innovation Fellowship represents an encouraging start, but much more needs to be done to plug the skills gap that currently exists.

Thirdly, the UK needs to become a world leader in the development of safe and responsible AI - whether via the Alan Turing Institute, through the new research agency ARIA, or through the autonomous systems research hub at Southampton University. Promoting research, including technical AI safety research, is critically important - not only because of the dangers of unsafe systems, but because it will bolster the UK’s competitiveness as states and companies seek to acquire safe and beneficial AI systems.

The profound risks we face from AI may sound abstract and far-fetched, but so did the concept of a global pandemic at the start of last year. There is an old adage that the unfamiliar is not the same as the improbable. As we build back from Covid-19, we have a remarkable opportunity to boost our resilience not just to the next pandemic, but to the range of other extreme risks we face. We must seize this opportunity. 

.     .     .

Dr Jess Whittlestone is a Senior Research Fellow in AI Ethics and Policy at the Centre for the Study of Existential Risk, University of Cambridge

Dr Jade Leung is a Governance and Policy Advisor at OpenAI and Board Member of the Centre for Long-Term Resilience

Markus Anderljung is Project Manager, Operations & Policy Engagement, at the Centre for the Governance of AI, University of Oxford
