Mateja Jamnik

Associate Fellow


Mateja Jamnik is a Professor of Artificial Intelligence at the Department of Computer Science and Technology, University of Cambridge, UK. She recently served as a Specialist Adviser to the House of Lords Select Committee on Artificial Intelligence, advising the UK government on policy direction, priorities and focus in relation to the impact of AI on society. Previously, she held an EPSRC Advanced Research Fellowship.

Mateja’s research develops AI techniques for human-like computing. Her work focusses on how people solve problems using informal techniques such as diagrams; she then computationally models this kind of reasoning to enable machines to reason in a similar way to humans. She is essentially trying to humanise computer thinking. Recently she has started to apply AI and reasoning techniques to medical data to advance personalised cancer medicine.

Her PhD work at the University of Edinburgh focussed on particular forms of mathematical reasoning and was published by Stanford University’s CSLI Press. At the start of the millennium, she was one of the founders of “Diagrams”, a new interdisciplinary research area and conference series on the theory and application of diagrams. Mateja’s research bridges theoretical computer science (such as automated reasoning) and artificial intelligence, and has been supported by the UK Engineering & Physical Sciences Research Council (EPSRC), the Leverhulme Trust, the Mark Foundation, and the European Research Council.

Mateja is passionate about bringing science closer to the general public and engages frequently with the media and at public science events. She is an active supporter of women scientists and in 2003 founded women@CL, a national network for women in computing research. In recognition of these contributions, she was awarded the Royal Society Athena Prize in 2016.

Mateja’s website



You shouldn’t trust me: Learning models which conceal unfairness from multiple explanation methods

European Conference on Artificial Intelligence (ECAI), 2020. Transparency of algorithmic systems has been discussed as a way for end-users and regulators to develop appropriate trust in machine learning models. One popular approach, LIME (Ribeiro, Singh, and Guestrin 2016), even suggests that model […]