
Leverhulme CFI on House of Lords Select Committee Report: 'AI in the UK: Ready, Willing, and Able?'

The Leverhulme Centre for the Future of Intelligence at the University of Cambridge congratulates the House of Lords Select Committee on Artificial Intelligence on the release of its report, 'AI in the UK: Ready, Willing, and Able?'

16 April 2018

The House of Lords Select Committee on Artificial Intelligence has released a 180-page report that considers the economic, ethical and social implications of advances in artificial intelligence.


The report was informed by experts from academia, industry, and policy. The Leverhulme Centre for the Future of Intelligence (LCFI) was closely involved with the Committee’s work, submitting five written evidence documents in 2017 and 2018, and receiving the Committee on a visit to Cambridge in December 2017.


The report references LCFI’s work and accomplishments in several key areas: bias in algorithms, the use of AI in manipulation, malicious use of AI, and the dangers of a potential AI arms race. It also emphasises the exemplary status of LCFI as an interdisciplinary research centre: “Institutes such as the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, and the Oxford Internet Institute at the University of Oxford are excellent, existing, examples of this collaboration, and universities across the UK should encourage the development of their own such centres.”


LCFI’s Executive Director, Dr Stephen Cave, was invited to respond to the Chair of the Committee, Lord Tim Clement-Jones, at the launch of the report today at the Royal Society. He said:


“It is particularly welcome that this report recognises that harnessing the benefits of AI and managing its impact will require a holistic approach: one that both supports the development of the technology, and at the same time looks to channel it to the broadest possible good.


The report recognises that this is an area in which the UK can show real, global leadership. We are leading the way in academic research into ethical AI; we are fortunate to have leading companies that take social responsibility seriously; and this Government has been the first to announce an official Centre for Data Ethics and Innovation.


The tech entrepreneur mantra of the last decade was 'move fast and break things'. But some things are too important to be broken: like democracy, or equality, or social cohesion. The only kind of innovation that will bring us closer to the society we want, and the only kind a government should support, is therefore responsible innovation. I'm delighted to see the report endorse this so strongly."

The full text of Dr Cave’s response can be found below.

For more information, please contact CFI's administrator, Susan Gowans, at skg41@cam.ac.uk or on +44 (0)1223 765470, or go to www.lcfi.ac.uk.


Note to Editors
LCFI is a collaboration between the University of Cambridge, the University of Oxford, Imperial College London and the University of California, Berkeley, and is funded by a £10 million grant from the Leverhulme Trust. Its mission is to create the interdisciplinary community that will be needed to make the AI revolution go as well as possible for humanity. At the Centre's launch in 2016, the late Professor Stephen Hawking said: "The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity. We do not yet know which. The research done by this centre will be crucial to the future of our civilisation and of our species."


This is the full text of Dr Cave's response to the report, as given today at the launch at the Royal Society:


"First, I'd like to congratulate the Committee and the Clerks on a hugely impressive achievement. The report provides a comprehensive overview of the pressing issues around the development and deployment of AI, which is sure to be widely appreciated. And it contains no fewer than 74 thoughtful recommendations for how the UK can harness AI's benefits and manage its impact.

It is particularly welcome that this report recognises that harnessing the benefits of AI and managing its impact will require a holistic approach: one that both supports the development of the technology, and at the same time looks to channel it to the broadest possible good.

Taking this kind of responsible, foresightful approach is, as the report says, "not only an ethical matter, but also good business sense." Recent examples in social media show why: if a technology is not developed responsibly and with foresight, there will be a backlash, and that precious trust with users on which businesses depend will be broken.

So I was very happy to see that this report notes in its very first recommendation that public trust is essential if AI is to fulfil its potential. But building trust isn't a PR exercise. Trust is earned.

So while there are many excellent ideas here for expanding the pipeline of talent that industry needs, or for supporting Small and Medium-sized Enterprises in harnessing AI, what makes this report so timely is the measures it suggests for ensuring AI is developed and deployed in the UK in a way that would deserve that public trust.

And I particularly want to highlight the report's commitment to engaging actively and openly with the international community. It rightly highlights that we should not see the development of AI as a competitive, winner-takes-all race. If we do, that will only prove to be a race to the bottom: it would lead to cutting corners on safety, to closing down inclusive debate and consultation, and ultimately, therefore, to undermining what is necessary for public trust.

The report offers an alternative vision: it recommends that the Government and other stakeholders work towards an AI code, which could be applied across public and private organisations.  

And it recognises that this is an area in which the UK can show real, global leadership.  We are leading the way in academic research into ethical AI; we are fortunate to have leading companies that take social responsibility seriously;  and this Government has been the first to announce an official Centre for Data Ethics and Innovation.

And I hope that that Centre will pick up on this Committee’s call to convene a global summit which can set the agenda for a truly international common framework for the ethical development and deployment of AI.


But behind all these many excellent ideas, I believe there is something deeper. I've looked carefully at the report's 74 recommendations in recent days. And it seems to me that underpinning them are three core values.

The first I’ve already mentioned: responsibility.  

The tech entrepreneur mantra of the last decade was 'move fast and break things'. But some things are too important to be broken: like democracy, or equality, or social cohesion. The only kind of innovation that will bring us closer to the society we want, and the only kind a government should support, is therefore responsible innovation.

I've also alluded to the second value that permeates this report: openness. An openness to innovation, of course, which is crucial. But also an openness to the wider world: whether through bringing in workers with the skills that we need, or through working with other countries to establish common standards.

Third, and most important of all, this report emphasises the value of diversity. We all know that the development of AI, and the debates around it, are disproportionately dominated by one particular gender, one particular ethnicity, and one particular class and cultural worldview. Now this would be a problem in any sector. But for a technology as impactful as AI, the consequences for fairness and equality are potentially calamitous. As this report recognises, this must be urgently addressed.

We are fortunate that these values — responsibility, openness and diversity — are British values. Of course, we often don't manage to live up to them. But we will need to now.

They are the values that will enable us to lead the global debate on ethical AI.  

They are the values that will be required to underpin public trust in this technology that has so much potential for good.  

And so I’m delighted to say that, in my view, they are the values that underpin this report, which I wholeheartedly commend to you.”
