
Medical Black Boxes and AI Explainability

While modern AI systems can display impressive accuracy in medical tasks, it can be difficult to fully explain how they work or why they make individual decisions. The development of techniques for making systems “intelligible” or “explainable” is a growing focus of research within machine learning. However, there is still no consensus on what “understanding” or “explanation” amounts to in this context.

Moreover, many of the processes and decisions already involved in medicine are not fully explainable. Randomised controlled trials can provide evidence for the efficacy of a drug even if researchers do not fully understand the underlying physiological mechanism. Physicians can trust the expert judgment of their radiologist colleagues, even if the radiologists cannot fully explain how they recognise tumours as malignant. Patients do not need a detailed understanding of a proposed treatment, or of their physicians’ decision-making processes, to trust the treatment and consent to it.

Given the ubiquity of medical “black boxes”, it is unclear whether a lack of AI explainability poses any novel problems in the medical domain. This project leverages theoretical tools from philosophy of science, philosophy of medicine, and medical ethics in order to enhance our understanding of the nature and importance of AI explainability in medical contexts.

This project is supported by the Wellcome Trust.

