Please use this identifier to cite or link to this item: https://rfos.fon.bg.ac.rs/handle/123456789/2370
Full metadata record
DC Field | Value | Language
dc.creator | Petrović, Andrija
dc.creator | Nikolić, Mladen
dc.creator | Radovanović, Sandro
dc.creator | Delibašić, Boris
dc.creator | Jovanović, Miloš
dc.date.accessioned | 2023-05-12T11:43:59Z | -
dc.date.available | 2023-05-12T11:43:59Z | -
dc.date.issued | 2022
dc.identifier.issn | 0925-2312
dc.identifier.uri | https://rfos.fon.bg.ac.rs/handle/123456789/2370 | -
dc.description.abstract | With growing awareness of the societal impact of artificial intelligence, fairness has become an important aspect of machine learning algorithms. The issue is that human biases towards certain groups of the population, defined by sensitive features like race and gender, are introduced into the training data through data collection and labeling. Two important directions of fairness-ensuring research have focused on (i) instance weighting, in order to decrease the impact of more biased instances, and (ii) adversarial training, in order to construct data representations informative of the target variable but uninformative of the sensitive attributes. In this paper we propose a Fair Adversarial Instance Re-weighting (FAIR) method, which uses adversarial training to learn an instance weighting function that ensures fair predictions. Merging the two paradigms, it inherits desirable properties from both: the interpretability of reweighting and the end-to-end trainability of adversarial training. We propose four different variants of the method and, among other things, demonstrate how the method can be cast in a fully probabilistic framework. Additionally, a theoretical analysis of the FAIR models' properties is provided. We compare FAIR models to ten other related and state-of-the-art models and demonstrate that FAIR is able to achieve a better trade-off between accuracy and unfairness. To the best of our knowledge, this is the first model that merges reweighting and adversarial approaches by means of a weighting function that can provide interpretable information about the fairness of individual instances. | en
dc.publisher | Elsevier, Amsterdam
dc.relation | ONR/ONR Global [N62909-19-1-2008]
dc.relation | company Saga New Frontier Group Belgrade
dc.relation | info:eu-repo/grantAgreement/MESTD/Basic Research (BR or ON)/174021/RS//
dc.relation | info:eu-repo/grantAgreement/MESTD/Technological Development (TD or TR)/35004/RS//
dc.relation | info:eu-repo/grantAgreement/MESTD/Integrated and Interdisciplinary Research (IIR or III)/41008/RS//
dc.rights | openAccess
dc.source | Neurocomputing
dc.subject | Instance reweighting | en
dc.subject | Fairness | en
dc.subject | Deep learning | en
dc.subject | Classification | en
dc.subject | Adversarial training | en
dc.title | FAIR: Fair adversarial instance re-weighting | en
dc.type | article
dc.rights.license | ARR
dc.citation.epage | 37
dc.citation.other | 476: 14-37
dc.citation.rank | M21~
dc.citation.spage | 14
dc.citation.volume | 476
dc.identifier.doi | 10.1016/j.neucom.2021.12.082
dc.identifier.fulltext | http://prototype2.rcub.bg.ac.rs/bitstream/id/875/2366.pdf
dc.identifier.rcub | conv_2621
dc.identifier.scopus | 2-s2.0-85122232315
dc.identifier.wos | 000761652400002
dc.type.version | publishedVersion
item.cerifentitytype | Publications | -
item.fulltext | With Fulltext | -
item.grantfulltext | open | -
item.openairetype | article | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
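
The abstract above describes the core mechanism: adversarial training of an instance weighting function, so that the weighted classifier stays accurate on the target label while its predictions carry little information about the sensitive attribute. The sketch below is a rough illustration of that general min-max structure in PyTorch; it is not the authors' FAIR implementation. The network architectures, the weight normalization, and the trade-off parameter `lam` are all illustrative assumptions.

```python
# Illustrative sketch only: a simplified adversarial instance re-weighting
# loop, NOT the FAIR method from the paper. Sizes and hyperparameters are
# assumptions for demonstration.
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, d_in, d_out, d_hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, d_out),
        )

    def forward(self, x):
        return self.net(x)

d = 10
clf = MLP(d, 1)         # predicts the target label y
weight_net = MLP(d, 1)  # learns a per-instance weight
adv = MLP(1, 1)         # tries to recover the sensitive attribute s

bce = nn.BCEWithLogitsLoss(reduction="none")
opt_main = torch.optim.Adam(
    list(clf.parameters()) + list(weight_net.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
lam = 1.0  # assumed accuracy/fairness trade-off parameter

# toy batch: features x, binary labels y, binary sensitive attribute s
x = torch.randn(256, d)
y = torch.randint(0, 2, (256, 1)).float()
s = torch.randint(0, 2, (256, 1)).float()

for step in range(200):
    logits = clf(x)
    w = torch.sigmoid(weight_net(x))
    w = w / (w.mean() + 1e-8)  # keep the average weight at 1

    # adversary step: learn to predict s from the classifier's output
    adv_loss = (w.detach() * bce(adv(logits.detach()), s)).mean()
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # main step: minimize the weighted task loss while maximizing the
    # adversary's (weighted) loss, so the weights shift mass toward
    # instances whose predictions reveal little about s
    task_loss = (w * bce(logits, y)).mean()
    fair_loss = task_loss - lam * (w * bce(adv(logits), s)).mean()
    opt_main.zero_grad()
    fair_loss.backward()
    opt_main.step()
```

FAIR itself is proposed in four variants with a fully probabilistic formulation; the sketch only conveys how a weighting network, a classifier, and an adversary can be trained jointly end to end.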
Appears in Collections: Radovi istraživača / Researchers’ publications
Files in This Item:
File | Size | Format
2366.pdf | 2.57 MB | Adobe PDF

Scopus™ citations: 28 (checked on Nov 17, 2025)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.