Enforcing fairness in logistic regression algorithm


Machine learning has been the subject of legal and ethical debate in recent years. Automating the decision-making process can lead to unethical acts with legal consequences. There are examples where decisions made by machine learning systems were unfairly biased against certain groups of people. This is mainly because the data used for model training were biased, so the resulting predictive model inherited that bias. Therefore, the process of learning a predictive model must be aware of and account for possible bias in the data. In this paper, we propose a modification of the logistic regression algorithm that adds one known and one novel fairness constraint to the model-learning process, thereby forcing the predictive model not to create disparate impact and to allow equal opportunity for every subpopulation. We demonstrate our model on real-world problems and show that a small reduction in predictive performance can yield a large improvement in disparate impact and equality of opportunity.
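As a rough illustration of the general idea (not the paper's exact formulation), one common way to build a fairness constraint into logistic regression is to penalize the covariance between the sensitive attribute and the model's decision scores while minimizing the usual log-loss. The sketch below, with a hypothetical penalty weight `lam`, shows how such a penalty can be folded into plain gradient descent:

```python
import numpy as np

# Hypothetical sketch: logistic regression with an added penalty on the
# covariance between the sensitive attribute s and the decision scores,
# one common way to push a model toward lower disparate impact. The
# paper's actual constraints may differ in form.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, s, lam=0.0, lr=0.1, epochs=2000):
    """Minimize log-loss + lam * cov(s, X @ w)^2 by gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    s_c = s - s.mean()  # centered sensitive attribute
    for _ in range(epochs):
        scores = X @ w
        p = sigmoid(scores)
        grad_loss = X.T @ (p - y) / n                  # log-loss gradient
        cov = s_c @ scores / n                         # cov(s, scores)
        grad_fair = 2.0 * lam * cov * (X.T @ s_c) / n  # penalty gradient
        w -= lr * (grad_loss + grad_fair)
    return w

# Toy data: the informative feature is shifted for the s = 1 group, so an
# unconstrained model's scores correlate with group membership.
rng = np.random.default_rng(0)
n = 400
s = rng.integers(0, 2, n).astype(float)
X = np.column_stack([rng.normal(size=n) + s, np.ones(n)])  # feature + bias
y = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

w_plain = fit_logreg(X, y, s, lam=0.0)   # unconstrained baseline
w_fair = fit_logreg(X, y, s, lam=50.0)   # with fairness penalty

def score_cov(w):
    """Absolute covariance between group membership and scores."""
    return abs((s - s.mean()) @ (X @ w) / n)

print("score/group covariance:", score_cov(w_plain), "->", score_cov(w_fair))
```

With a large `lam`, the penalized model trades some of the informative feature's weight for a score distribution that is nearly independent of group membership, mirroring the accuracy-for-fairness trade-off described in the abstract.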

In International Conference on INnovations in Intelligent SysTems and Applications 2020
Sandro Radovanović
Assistant Professor at the University of Belgrade

My research interests include machine learning, development and design of decision support systems, decision theory, and fairness and justice concepts in algorithmic decision making.