Please use this identifier to cite or link to this item: https://rfos.fon.bg.ac.rs/handle/123456789/2130
Title: Enforcing fairness in logistic regression algorithm
Authors: Radovanović, Sandro 
Petrović, A.
Delibašić, Boris 
Suknović, Milija
Keywords: logistic regression; equality of opportunity; disparate impact; algorithmic fairness
Issue Date: 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Abstract: Machine learning has been the subject of legal and ethical discussion in recent years. Automating the decision-making process can lead to unethical acts with legal consequences. There are examples where decisions made by machine learning systems were unfairly biased against some group of people, mainly because the data used for model training were biased, so the resulting predictive model inherited that bias. The process of learning a predictive model must therefore be aware of, and account for, possible bias in the data. In this paper, we propose a modification of the logistic regression algorithm that adds one known and one novel fairness constraint to the model-learning process, forcing the predictive model not to create disparate impact and to allow equal opportunity for every subpopulation. We demonstrate our model on real-world problems and show that a small reduction in predictive performance can yield a large improvement in disparate impact and equality of opportunity.
URI: https://rfos.fon.bg.ac.rs/handle/123456789/2130
Appears in Collections: Radovi istraživača / Researchers’ publications
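The abstract describes adding fairness constraints to logistic regression training. As a rough illustration only, the sketch below trains a logistic regression by gradient descent with a penalty on the covariance between a binary sensitive attribute and the signed distance to the decision boundary (a common convex surrogate for disparate impact constraints; the paper's actual constraints are not reproduced here), plus a helper computing the two fairness measures the abstract mentions. All names here (fit_fair_logreg, fairness_metrics, lam) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_fair_logreg(X, y, s, lam=1.0, lr=0.1, n_iter=5000):
    """Logistic regression with a fairness penalty (illustrative sketch).

    X: (n, d) features; y: (n,) labels in {0, 1};
    s: (n,) binary sensitive attribute; lam: fairness penalty weight.
    Penalty = squared covariance between s and the boundary score X @ w,
    a common surrogate for disparate impact (not the paper's exact form).
    """
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])          # append intercept column
    w = np.zeros(d + 1)
    s_c = s - s.mean()                            # centered sensitive attribute
    for _ in range(n_iter):
        z = Xb @ w
        grad_ll = Xb.T @ (sigmoid(z) - y) / n     # logistic-loss gradient
        cov = s_c @ z / n                         # boundary/attribute covariance
        grad_fair = 2.0 * cov * (Xb.T @ s_c) / n  # gradient of cov ** 2
        w -= lr * (grad_ll + lam * grad_fair)
    return w

def fairness_metrics(y_true, y_pred, s):
    """Disparate impact ratio and equal-opportunity (TPR) gap."""
    g0, g1 = (s == 0), (s == 1)
    di = y_pred[g1].mean() / max(y_pred[g0].mean(), 1e-12)
    tpr0 = y_pred[g0 & (y_true == 1)].mean()      # true positive rate, group 0
    tpr1 = y_pred[g1 & (y_true == 1)].mean()      # true positive rate, group 1
    return di, abs(tpr1 - tpr0)

# Tiny synthetic demo: features deliberately correlated with s.
rng = np.random.default_rng(0)
n = 2000
s = rng.integers(0, 2, n).astype(float)
X = rng.normal(size=(n, 3)) + s[:, None]
y = (X[:, 0] + rng.normal(size=n) > 0.5).astype(float)
w = fit_fair_logreg(X, y, s, lam=10.0)
y_hat = (sigmoid(np.hstack([X, np.ones((n, 1))]) @ w) > 0.5).astype(int)
print(fairness_metrics(y, y_hat, s))
```

In this sketch, raising the penalty weight lam trades some predictive accuracy for a disparate impact ratio closer to 1 and a smaller true-positive-rate gap, which mirrors the trade-off the abstract reports.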

Files in This Item:
2126.pdf (Restricted Access), 289 kB, Adobe PDF

Scopus™ Citations: 13 (checked on Nov 17, 2025)
