In today’s business environment, decision-making relies heavily on algorithms. These algorithms may originate from operational research, machine learning, or decision theory. Regardless of their origin, algorithmic decisions may create unwanted disparities with respect to race, gender, or religion, and such disparities may in turn lead to legal consequences. To mitigate them, one must adjust either the algorithms or the decisions themselves. In this paper, we adjust the popular decision-making method TOPSIS so that it produces utility scores without disparate impact. This is done by introducing a “fairness weight” that enters the calculation of the TOPSIS utility function. The fairness weight is chosen to provide the smallest possible intervention needed to reach a decision without disparate impact. The effectiveness of the proposed solution is demonstrated on a synthetic dataset, as well as on an exemplar dataset from criminal justice.
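To make the setting concrete, the following is a minimal sketch of the standard TOPSIS utility calculation, with a hypothetical `fairness_weight` parameter marking where a fairness intervention could enter the weighting step. The abstract does not specify the paper's exact formulation, so both the parameter name and the rescaling shown here are illustrative assumptions, not the authors' method.

```python
import numpy as np

def topsis_scores(X, weights, benefit, fairness_weight=None):
    """Standard TOPSIS utility scores. `fairness_weight` is a hypothetical
    per-criterion multiplier illustrating one place a fairness intervention
    could enter; the paper's actual formulation may differ."""
    # Step 1: vector-normalize each criterion column.
    R = X / np.linalg.norm(X, axis=0)
    # Step 2: apply criterion weights, optionally rescaled by the
    # (hypothetical) fairness weight, renormalized to sum to one.
    w = np.asarray(weights, dtype=float)
    if fairness_weight is not None:
        w = w * np.asarray(fairness_weight, dtype=float)
        w = w / w.sum()
    V = R * w
    # Step 3: ideal and anti-ideal points (max for benefit criteria,
    # min for cost criteria, and vice versa).
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    # Step 4: Euclidean distances to both reference points.
    d_plus = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    # Step 5: relative closeness, the TOPSIS utility score in [0, 1].
    return d_minus / (d_plus + d_minus)

# Toy example: 4 alternatives, 3 criteria (first two are benefits).
X = np.array([[7., 9., 9.],
              [8., 7., 8.],
              [9., 6., 8.],
              [6., 7., 8.]])
weights = np.array([0.4, 0.3, 0.3])
benefit = np.array([True, True, False])

print(topsis_scores(X, weights, benefit))
# An intervention might, e.g., down-weight a criterion that proxies a
# protected attribute (purely illustrative values):
print(topsis_scores(X, weights, benefit, fairness_weight=[1.0, 1.0, 0.5]))
```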