Making hospital readmission classifier fair – What is the cost?

Abstract

Creating predictive models using machine learning algorithms is often understood as a task in which the data scientist provides data to the algorithm without much intervention. With the rise of ethics in machine learning, predictive models also need to be made fair. In this paper, we inspect the effects of pre-processing, in-processing, and post-processing techniques for making predictive models fair. These techniques are applied to the hospital readmission prediction problem, where gender is considered a sensitive attribute. The goal of the paper is to check whether unwanted discrimination between female and male patients exists in the logistic regression model and, if it does, to alleviate this problem by making the classifier fair. We employed a logistic regression model, which obtained AUC = 0.7959 and AUPRC = 0.5263. We show that the reweighting strategy offers a good trade-off between fairness and predictive performance: fairness is greatly improved without sacrificing much predictive performance. We also show that adversarial debiasing is a good technique for combining predictive performance and fairness, and that the Equality of Odds technique optimizes the Theil index.
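
The pre-processing finding can be illustrated with a minimal sketch of Kamiran-Calders reweighing, the scheme that reweighting pre-processing is typically based on (and which libraries such as IBM's AIF360 implement). The synthetic data, the gender coding, and the statistical parity difference metric below are illustrative assumptions, not the paper's actual dataset, pipeline, or fairness metric:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical toy data: binary sensitive attribute "gender" in the first
# column, y is the readmission label (1 = readmitted within 30 days).
n = 5000
gender = rng.integers(0, 2, size=n)          # 0 = female, 1 = male (arbitrary coding)
other = rng.normal(size=(n, 3))
y = (0.8 * other[:, 0] + 0.4 * gender + rng.normal(size=n) > 0.5).astype(int)
X = np.column_stack([gender, other])

# Kamiran & Calders reweighing: weight each (group, label) cell by
# P(group) * P(label) / P(group, label), so that the sensitive attribute
# and the label become statistically independent in the weighted sample.
weights = np.empty(n)
for g in (0, 1):
    for lbl in (0, 1):
        mask = (gender == g) & (y == lbl)
        weights[mask] = ((gender == g).mean() * (y == lbl).mean()) / mask.mean()

# Plain vs. reweighted logistic regression.
clf_plain = LogisticRegression(max_iter=1000).fit(X, y)
clf_fair = LogisticRegression(max_iter=1000).fit(X, y, sample_weight=weights)

# Statistical parity difference: P(yhat = 1 | female) - P(yhat = 1 | male).
def spd(clf):
    yhat = clf.predict(X)
    return yhat[gender == 0].mean() - yhat[gender == 1].mean()

print("SPD before reweighing:", round(spd(clf_plain), 3))
print("SPD after  reweighing:", round(spd(clf_fair), 3))
```

In this sketch the reweighted model keeps the same hypothesis class (logistic regression), which is why the fairness gain comes at only a modest cost in predictive performance; in-processing (adversarial debiasing) and post-processing (Equality of Odds) intervene on the learner and on the predictions instead.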

Publication
In Central European Conference on Information and Intelligent Systems 2019
Sandro Radovanović
Assistant Professor at the University of Belgrade

My research interests include machine learning, development and design of decision support systems, decision theory, and fairness and justice concepts in algorithmic decision making.