Umang Bhatt

Student Fellow


Umang Bhatt is an incoming Assistant Professor/Faculty Fellow at the Center for Data Science at New York University and a Research Associate in Safe and Ethical AI at the Alan Turing Institute. He was previously a PhD Candidate in the Machine Learning Group at the University of Cambridge, and a Student Fellow at the Leverhulme CFI from 2019 to 2023. His research spans human-AI collaboration, AI governance, and algorithmic transparency. Umang builds tools for routing decision-makers to appropriate forms of decision support and for capturing how AI systems are used in decision-making contexts around the world. His work has been supported by a JP Morgan PhD Fellowship and a Mozilla Fellowship. Previously, he was a Research Fellow at the Partnership on AI, a Fellow at Harvard’s Center for Research on Computation and Society, and an Advisor to the Responsible AI Institute. Umang received his MS and BS in Electrical and Computer Engineering from Carnegie Mellon University.



Counterfactual Accuracies of Alternative Models

Counterfactual Accuracies of Alternative Models. ML-IRL: Machine Learning in Real Life Workshop at ICLR 2020. Abstract: Typically we fit a model by optimizing performance on training data. Here we focus on the case of a binary classifier that predicts ‘yes’ or ‘no’ for any given test point. We explore a notion of confidence in a particular prediction […]

Evaluating and Aggregating Feature-based Model Explanations

Evaluating and Aggregating Feature-based Model Explanations. International Joint Conference on Artificial Intelligence (IJCAI-PRICAI), 2020. A feature-based model explanation denotes how much each input feature contributes to a model’s output for a given data point. As the number of proposed explanation functions grows, we lack quantitative evaluation criteria to help practitioners know when to use which explanation […]

You shouldn’t trust me: Learning models which conceal unfairness from multiple explanation methods

You shouldn’t trust me: Learning models which conceal unfairness from multiple explanation methods. European Conference on Artificial Intelligence (ECAI), 2020. Transparency of algorithmic systems has been discussed as a way for end-users and regulators to develop appropriate trust in machine learning models. One popular approach, LIME (Ribeiro, Singh, and Guestrin 2016), even suggests that model […]

Explainable Machine Learning in Deployment

Explainable Machine Learning in Deployment. NeurIPS Workshop on Human-centric Machine Learning, 2019; FAT* ’20: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, January 2020, pp. 648–657. Abstract: Explainable machine learning seeks to provide various stakeholders with insights into model behavior via feature importance scores, counterfactual explanations, and influential samples, among other techniques. Recent advances […]



Trust and Transparency

This project is developing processes to ensure that AI systems are transparent, reliable, and trustworthy. As AI systems are widely deployed in real-world settings, it is critical for us to understand the mechanisms by which they make decisions, when they can be trusted to perform well, and when they may fail. This project addresses these […]