Please use this identifier to cite or link to this item: https://rfos.fon.bg.ac.rs/handle/123456789/2408
Title: Do we Reach Desired Disparate Impact with In-Processing Fairness Techniques?
Authors: Radovanović, Sandro 
Delibašić, Boris 
Suknović, Milija
Keywords: Machine Learning; Logistic Regression; Fairness; Fairness Gap; Disparate Impact; Algorithmic decision-making
Issue Date: 2022
Publisher: Elsevier B.V.
Abstract: Using machine learning algorithms in social environments and systems requires stricter and more detailed control. More specifically, the cost of error in such systems is much higher. Therefore, one should ensure that important decisions, such as whether to convict a person or not based on the previous criminal record, comply with the legal requirements and are not biased toward a group of people. One can find many papers in the literature aimed at mitigating or eliminating unwanted bias in machine learning models. A significant part of these efforts adds a fairness constraint to the mathematical model or a regularization term to the loss function. In this paper, we show that optimizing the loss function given the fairness constraint or regularization for unfairness can surprisingly yield unfair solutions. This is due to the linear relaxation of the fairness function. By analyzing the gap between the true value of fairness and the one obtained using linear relaxation, we found that the gap can be as high as around 21% for the COMPAS dataset, and around 35% for the Adult dataset. In addition, we show that the fairness gap is consistent regardless of the strength of the fairness constraint or regularization.
URI: https://rfos.fon.bg.ac.rs/handle/123456789/2408
ISSN: 1877-0509
Appears in Collections: Radovi istraživača / Researchers’ publications

This item is licensed under a Creative Commons License.