
In this stream of work, my co-authors and I focus on productive social values: how they can be effectively integrated into machine learning and AI systems and methods, and how ML can augment humans' fairness in decision-making. Given that humans exhibit bias but are often unaware of their biases, we consider new ML frameworks that augment human decision-makers and make decisions fairer.
Augmented Fairness, with Tong Wang
In this research, Tong Wang and I focus on developing ML to augment human fairness. In contrast to most prior work, which focuses on the (important!) problem of algorithmic fairness, we consider settings where human decision-makers exhibit bias, and we propose a machine learning framework that augments humans so that the final decisions achieve a superior fairness-accuracy tradeoff.
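To make the idea concrete, here is a minimal illustrative sketch (not the paper's actual method): a simple override rule replaces a simulated biased human's decision only on a small, well-defined subset of cases, and we compare the accuracy and demographic-parity gap of the human alone versus the augmented decisions. The synthetic data, the stand-in model score, and the override rule are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: a protected attribute and a true qualification label.
group = rng.integers(0, 2, size=n)        # 0 = majority, 1 = protected group
qualified = rng.binomial(1, 0.5, size=n)  # ground-truth label (same base rate)

# Simulated biased human: accurate overall, but under-approves qualified
# members of the protected group.
approve_prob = np.where(qualified == 1, 0.85, 0.15) - 0.25 * group * (qualified == 1)
human = rng.binomial(1, np.clip(approve_prob, 0, 1))

# Hypothetical interpretable override rule: on protected-group cases the human
# rejected, defer to a (noisy) model score instead of the human decision.
model_score = qualified * 0.8 + rng.normal(0, 0.3, size=n)  # stand-in model
override_region = (group == 1) & (human == 0)
augmented = np.where(override_region, (model_score > 0.4).astype(int), human)

def accuracy(decision):
    return (decision == qualified).mean()

def dp_gap(decision):
    # Demographic-parity gap: difference in approval rates across groups.
    return abs(decision[group == 0].mean() - decision[group == 1].mean())

print(f"human alone: acc={accuracy(human):.3f}  DP gap={dp_gap(human):.3f}")
print(f"augmented:   acc={accuracy(augmented):.3f}  DP gap={dp_gap(augmented):.3f}")
```

In this toy setup the augmenting model only intervenes on a small, explicitly described region of the input space, which is the kind of interpretability the augmented-fairness framing emphasizes: the human remains the decision-maker everywhere else.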
Publications:
"Augmented Fairness: An Interpretable Model Augmenting Decision-Makers' Fairness", with Tong Wang, Best Paper Award, INFORMS Workshop on Data Science, 2020.
NeurIPS 2020 Workshop on Algorithmic Fairness through the Lens of Causality and Interpretability