
Why Value Judgements Should Not Be Automated

Evidence Submission by Rune Nyrup, Jess Whittlestone and Stephen Cave

DOI: https://doi.org/10.17863/CAM.41552

Submitted as evidence for the Committee on Standards in Public Life’s review into artificial intelligence and its impact on standards across the public sector.

Abstract
AI technologies are already being used for a range of purposes in public services, including to automate (parts of) decision processes and to make recommendations and predictions in support of human decisions. The increasing application of AI in public services therefore has the potential to affect several of the Seven Principles of Public Life, presenting new challenges for public servants in upholding those values. We believe AI is particularly likely to affect the principles of Objectivity, Openness, Accountability and Leadership. Algorithmic bias has the potential to threaten the objectivity of public sector decisions, while several forms of opacity in AI systems raise challenges for openness in public services; these challenges, in turn, undermine the ability of public servants to be accountable and to exercise proper leadership.
