Do we Reach Desired Disparate Impact with In-Processing Fairness Techniques?

Abstract

Using machine learning algorithms in social environments and systems requires stricter and more detailed control, in particular because the cost of error in such systems is much higher. Therefore, one should ensure that important decisions, such as whether to convict a person based on their previous criminal record, comply with legal requirements and are not biased against a group of people. Many papers in the literature aim at mitigating or eliminating unwanted bias in machine learning models. A significant part of these efforts adds a fairness constraint to the mathematical model or a regularization term to the loss function. In this paper, we show that optimizing the loss function under a fairness constraint or an unfairness regularization term can, surprisingly, yield unfair solutions. This is due to the linear relaxation of the fairness function. By analyzing the gap between the true value of fairness and the one obtained using linear relaxation, we find that the gap can be as high as roughly 21% on the COMPAS dataset and roughly 35% on the Adult dataset. In addition, we show that the fairness gap persists regardless of the strength of the fairness constraint or regularization.
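To illustrate the kind of gap the abstract describes, the sketch below trains a logistic regression whose loss is augmented with a common linear relaxation of disparate impact: the covariance between the sensitive attribute and the signed distance to the decision boundary. It then compares that relaxed proxy with the true disparate impact ratio computed from the resulting decisions. The synthetic data, the covariance proxy, the function names (`train`, `disparate_impact`), and the regularization strengths are illustrative assumptions for this sketch; they are not the datasets, formulation, or experiments reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a binary sensitive attribute s and a feature correlated with it.
n = 2000
s = rng.integers(0, 2, size=n)                        # sensitive group (0/1)
x1 = rng.normal(loc=1.0 * s, scale=1.0, size=n)       # feature correlated with s
x2 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])             # intercept + two features
y = (x1 + x2 + rng.normal(scale=0.5, size=n) > 0.5).astype(float)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def train(lam, lr=0.1, epochs=2000):
    """Logistic regression whose loss adds lam * (relaxed fairness proxy)^2.

    The proxy is the covariance between the sensitive attribute and the
    signed distance to the decision boundary -- a widely used linear
    relaxation of disparate impact, used here only for illustration.
    """
    w = np.zeros(X.shape[1])
    s_centered = s - s.mean()
    for _ in range(epochs):
        z = X @ w
        p = sigmoid(z)
        grad_ll = X.T @ (p - y) / n                    # logistic-loss gradient
        cov = s_centered @ z / n                       # relaxed fairness proxy
        grad_fair = 2 * cov * (X.T @ s_centered) / n   # gradient of proxy^2
        w -= lr * (grad_ll + lam * grad_fair)
    return w


def disparate_impact(w):
    """True disparate impact: ratio of positive-decision rates across groups."""
    yhat = (X @ w > 0).astype(float)
    rate0, rate1 = yhat[s == 0].mean(), yhat[s == 1].mean()
    if max(rate0, rate1) == 0:
        return 1.0                                     # no positive decisions at all
    return min(rate0, rate1) / max(rate0, rate1)


for lam in [0.0, 1.0, 10.0, 100.0]:
    w = train(lam)
    proxy = abs((s - s.mean()) @ (X @ w) / n)
    print(f"lambda={lam:6.1f}  relaxed proxy={proxy:.4f}  "
          f"true disparate impact={disparate_impact(w):.3f}")
```

Even as the regularization strength drives the relaxed covariance proxy toward zero, the true disparate impact ratio of the resulting classifier can remain well below 1, which is the kind of relaxation gap the paper quantifies on COMPAS and Adult.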

Publication
In Procedia Computer Science
Sandro Radovanović
Assistant Professor at University of Belgrade

My research interests include machine learning, development and design of decision support systems, decision theory, and fairness and justice concepts in algorithmic decision making.