
Supporting human autonomy in AI systems: A framework for ethical enquiry

Chapter in academic handbook by Rafael Calvo, Dorian Peters, Karina Vold, Richard M. Ryan

Supporting human autonomy in AI systems: A framework for ethical enquiry, Rafael A. Calvo, Dorian Peters, Karina Vold, Richard M. Ryan. In Christopher Burr & Luciano Floridi (eds.), Ethics of Digital Well-being: A Multidisciplinary Approach. (forthcoming).

Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and the ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects of digital experiences on human autonomy are neither straightforward nor consistent, and are complicated by commercial interests and tensions around compulsive overuse. This multi-layered reality requires an analysis that is itself multidimensional and that takes into account human experience at various levels of resolution. We borrow from HCI and psychological research to apply a model ("METUX") that identifies six distinct spheres of technology experience. We demonstrate the value of the model for understanding human autonomy in a technology ethics context at multiple levels by applying it to the real-world case study of an AI-enhanced video recommender system. In the process we argue for the following three claims: 1) There are autonomy-related consequences to algorithms representing the interests of third parties; such algorithms are not impartial and rational extensions of the self, as is often perceived; 2) Designing for autonomy is an ethical imperative critical to the future design of responsible AI; and 3) Autonomy-support must be analysed from at least six spheres of experience in order to appropriately capture contradictory and downstream effects.
