Please use this identifier to cite or link to this item: https://rfos.fon.bg.ac.rs/handle/123456789/2408
Full metadata record
DC Field | Value | Language
dc.creator | Radovanović, Sandro | -
dc.creator | Delibašić, Boris | -
dc.creator | Suknović, Milija | -
dc.date.accessioned | 2023-05-12T11:45:53Z | -
dc.date.available | 2023-05-12T11:45:53Z | -
dc.date.issued | 2022 | -
dc.identifier.issn | 1877-0509 | -
dc.identifier.uri | https://rfos.fon.bg.ac.rs/handle/123456789/2408 | -
dc.description.abstract | Using machine learning algorithms in social environments and systems requires stricter and more detailed control. More specifically, the cost of error in such systems is much higher. Therefore, one should ensure that important decisions, such as whether or not to convict a person based on their previous criminal record, comply with legal requirements and are not biased toward a group of people. One can find many papers in the literature aimed at mitigating or eliminating unwanted bias in machine learning models. A significant part of these efforts adds a fairness constraint to the mathematical model or a regularization term to the loss function. In this paper, we show that optimizing the loss function given the fairness constraint or regularization for unfairness can, surprisingly, yield unfair solutions. This is due to the linear relaxation of the fairness function. By analyzing the gap between the true value of fairness and the one obtained using linear relaxation, we found that the gap can be as high as around 21% for the COMPAS dataset and around 35% for the Adult dataset. In addition, we show that the fairness gap is consistent regardless of the strength of the fairness constraint or regularization. | en
dc.publisher | Elsevier B.V. | -
dc.relation | This research was funded by the Office of Naval Research, grant number ONR N62909-19-1-2008, project title: "Aggregating computational algorithms and human decision-making preferences in multi-agent settings". | -
dc.rights | openAccess | -
dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | -
dc.source | Procedia Computer Science | -
dc.subject | Machine Learning | en
dc.subject | Logistic Regression | en
dc.subject | Fairness | en
dc.subject | Fairness Gap | en
dc.subject | Disparate Impact | en
dc.subject | Algorithmic decision-making | en
dc.title | Do we Reach Desired Disparate Impact with In-Processing Fairness Techniques? | en
dc.type | conferenceObject | -
dc.rights.license | BY | -
dc.citation.epage | 264 | -
dc.citation.issue | C | -
dc.citation.other | 214(C): 257-264 | -
dc.citation.spage | 257 | -
dc.citation.volume | 214 | -
dc.identifier.doi | 10.1016/j.procs.2022.11.173 | -
dc.identifier.rcub | conv_3756 | -
dc.identifier.scopus | 2-s2.0-85146119712 | -
dc.type.version | publishedVersion | -
item.cerifentitytype | Publications | -
item.fulltext | No Fulltext | -
item.grantfulltext | none | -
item.openairetype | conferenceObject | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
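
The abstract above refers to a gap between the true value of a fairness measure (disparate impact) and the linear relaxation that in-processing methods optimize. As a minimal illustrative sketch, the Python snippet below contrasts the two quantities on synthetic data, using a covariance-based relaxation in the style of Zafar et al.; the data, the model weights, the bias term, and the specific choice of relaxation are all assumptions made for illustration, not the paper's actual experimental setup.

```python
import numpy as np

# Illustrative sketch only: synthetic data and hypothetical model weights,
# not the paper's experiments (COMPAS / Adult with logistic regression).
rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 2))              # two numeric features
s = (rng.random(n) < 0.5).astype(int)    # binary sensitive attribute
w = np.array([1.0, -0.5])                # hypothetical linear model weights
scores = X @ w + 0.8 * s                 # decision scores, deliberately biased by s
y_hat = (scores > 0).astype(int)         # hard predictions

# True disparate impact: ratio of positive prediction rates between groups
# (the symmetric min/max form, so 1.0 means perfectly equal rates).
p1 = y_hat[s == 1].mean()
p0 = y_hat[s == 0].mean()
disparate_impact = min(p0, p1) / max(p0, p1)

# Linear relaxation: covariance between the sensitive attribute and the
# signed decision score, which is the quantity a constrained or regularized
# optimizer actually sees instead of the rate ratio above.
relaxation = np.mean((s - s.mean()) * scores)

print(f"true disparate impact: {disparate_impact:.3f}")
print(f"linear relaxation    : {relaxation:.3f}")
```

Because the relaxation is a covariance rather than a ratio of selection rates, driving it toward zero does not guarantee that the ratio approaches 1; the distance between these two readings is what the abstract calls the fairness gap.
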
Appears in Collections: Radovi istraživača / Researchers' publications

SCOPUS™ Citations: 1 (checked on Nov 17, 2025)

This item is licensed under a Creative Commons License (CC BY 4.0).