
Yaqub Chaudhary

Visiting Scholar

Biography

Yaqub Chaudhary is a Visiting Scholar at the Leverhulme Centre for the Future of Intelligence. He is an interdisciplinary scholar whose central research themes include the philosophy of AI, ML, and digital computation, and the use of AI/ML in science and the humanities. His wider research interests span the philosophy of science, philosophy of technology, philosophy of mind, history, religious studies, the social and political sciences, and other fields. He completed his doctoral studies in Physics at Imperial College London in the field of Plastic Electronics. He later undertook a Fellowship at Cambridge Muslim College, where he began philosophical, theological, and metaphysical inquiries into the nature of AI and digital technologies. He has written on Islam and AI, the use of AI in the ecological sciences and climate change research, the metaphysics of AI, and emerging technologies, including the philosophy of augmented reality.

Yaqub is also an AI/ML practitioner. As an AI/ML & Data Science Fellow at Faculty Science in London, he recently worked on novel methods for grounding the generative capabilities of large language models (LLMs). He is currently an independent researcher, prototyping in the emerging space of privacy-preserving AI/ML and private computation.

At LCFI, he is undertaking research in two areas within the History of AI stream. The first project aims to widen the scope of discourse in the critical reception of LLMs and their use in areas such as social science research and policymaking. It seeks to develop an account of the historical, political, epistemological, and philosophical contexts underlying the development and deployment of LLMs, in order to contribute insights to the ongoing discussion on the adoption of these technologies in socially and politically delicate areas. The second project is a historical and epistemological study of the conceptual advances between 1990 and 2010 that were realised in hardware in the form of graphics processing units (GPUs), and which led to the rapid development of contemporary AI systems in the early 2010s. This project aims to contribute toward understanding the material and conceptual preconditions that underlie the accelerating development and adoption of AI, deep learning, machine learning, and computational methods in science and society.
