Eliminating Disparate Impact in MCDM: The case of TOPSIS

Abstract

In today’s business environment, decision-making is heavily dependent on algorithms. These algorithms may originate from operational research, machine learning, or decision theory. Regardless of their origin, the decision-maker who relies on them may create unwanted disparities regarding race, gender, or religion, and such disparities may in turn lead to legal consequences. To mitigate these unwanted consequences, one must adjust either the algorithms or the decisions. In this paper, we adjust the popular decision-making method TOPSIS to produce utility scores without disparate impact. This is done by introducing a “fairness weight” that is used in the calculation of the utility function of the TOPSIS method. The fairness weight should provide the smallest possible intervention needed for a decision without disparate impact. The effectiveness of the proposed solution is demonstrated on a synthetic dataset, as well as on an exemplar dataset from criminal justice.
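To make the setting concrete, below is a minimal sketch of the standard TOPSIS utility score that a fairness weight would adjust. It is an illustration, not the paper’s method: it assumes benefit-type criteria only, uses illustrative names (`decision_matrix`, `weights`), and the paper’s contribution would replace the fixed criteria weights with a fairness weight chosen so that the resulting scores exhibit no disparate impact.

```python
# Sketch of standard TOPSIS scoring; assumes all criteria are benefit-type.
# In the paper's approach, `weights` would be replaced by a "fairness weight"
# chosen to remove disparate impact with the smallest possible intervention.
import numpy as np

def topsis_scores(decision_matrix: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Return TOPSIS relative-closeness scores for each alternative (row)."""
    # Vector-normalize each criterion (column), then apply the criteria weights.
    norm = decision_matrix / np.linalg.norm(decision_matrix, axis=0)
    weighted = norm * weights

    # Ideal and anti-ideal reference points over the alternatives.
    ideal = weighted.max(axis=0)
    anti_ideal = weighted.min(axis=0)

    # Euclidean distances of each alternative to both reference points.
    d_plus = np.linalg.norm(weighted - ideal, axis=1)
    d_minus = np.linalg.norm(weighted - anti_ideal, axis=1)

    # Relative closeness: 1 means the ideal alternative, 0 the anti-ideal.
    return d_minus / (d_plus + d_minus)

# Example: three alternatives, two criteria, equal weights.
scores = topsis_scores(np.array([[7.0, 9.0], [8.0, 7.0], [9.0, 6.0]]),
                       np.array([0.5, 0.5]))
```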

Publication
In Central European Conference on Information and Intelligent Systems 2021
Sandro Radovanović
Assistant Professor at the University of Belgrade

My research interests include machine learning, development and design of decision support systems, decision theory, and fairness and justice concepts in algorithmic decision making.