Please use this identifier to cite or link to this item:
https://rfos.fon.bg.ac.rs/handle/123456789/2130

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.creator | Radovanović, Sandro | |
| dc.creator | Petrović, A. | |
| dc.creator | Delibašić, Boris | |
| dc.creator | Suknović, Milija | |
| dc.date.accessioned | 2023-05-12T11:31:55Z | - |
| dc.date.available | 2023-05-12T11:31:55Z | - |
| dc.date.issued | 2020 | |
| dc.identifier.uri | https://rfos.fon.bg.ac.rs/handle/123456789/2130 | - |
| dc.description.abstract | Machine learning has been subject to discussion from legal and ethical points of view in recent years. Automation of the decision-making process can lead to unethical acts with legal consequences. There are examples where decisions made by machine learning systems were unfairly biased toward some group of people, mainly because the data used for model training were biased, so the resulting predictive model inherited that bias. Therefore, the process of learning a predictive model must be aware of and account for possible bias in the data. In this paper, we propose a modification of the logistic regression algorithm that adds one known and one novel fairness constraint to the process of model learning, forcing the predictive model not to create disparate impact and to allow equal opportunity for every subpopulation. We demonstrate our model on real-world problems and show that a small reduction in predictive performance can yield a large improvement in disparate impact and equality of opportunity. | en |
| dc.publisher | Institute of Electrical and Electronics Engineers Inc. | |
| dc.relation | This work was supported in part by the ONR/ONR Global under Grant N62909-19-1-2008. | |
| dc.rights | restrictedAccess | |
| dc.source | INISTA 2020 - 2020 International Conference on INnovations in Intelligent SysTems and Applications, Proceedings | |
| dc.subject | logistic regression | en |
| dc.subject | equality of opportunity | en |
| dc.subject | disparate impact | en |
| dc.subject | algorithmic fairness | en |
| dc.title | Enforcing fairness in logistic regression algorithm | en |
| dc.type | conferenceObject | |
| dc.rights.license | ARR | |
| dc.identifier.doi | 10.1109/INISTA49547.2020.9194676 | |
| dc.identifier.rcub | conv_3636 | |
| dc.identifier.scopus | 2-s2.0-85092006276 | |
| dc.type.version | publishedVersion | |
| item.cerifentitytype | Publications | - |
| item.fulltext | With Fulltext | - |
| item.grantfulltext | restricted | - |
| item.openairetype | conferenceObject | - |
| item.openairecristype | http://purl.org/coar/resource_type/c_18cf | - |
| Appears in Collections: | Radovi istraživača / Researchers’ publications | |
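The abstract describes adding fairness constraints to logistic regression so the model avoids disparate impact. The paper's exact formulation is behind restricted access, so the following is only a minimal sketch of the general technique: a covariance-based relaxation of the disparate-impact constraint added as a penalty to the logistic loss (the synthetic data, penalty weight `lam`, and helper names are illustrative assumptions, not the authors' method).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: binary sensitive attribute s, with feature 0 correlated with s,
# so an unconstrained model will predict positives at different rates per group.
n = 2000
s = rng.integers(0, 2, n)
X = np.column_stack([rng.normal(s * 1.0, 1.0, n), rng.normal(0.0, 1.0, n)])
y = (X[:, 0] + X[:, 1] + rng.normal(0.0, 0.5, n) > 0.5).astype(float)
Xb = np.column_stack([np.ones(n), X])  # prepend intercept column


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def fit(Xb, y, s, lam, lr=0.1, iters=2000):
    """Logistic regression by gradient descent, plus lam times a penalty on the
    squared covariance between the sensitive attribute and the signed distance
    to the decision boundary (a common relaxation of disparate impact)."""
    w = np.zeros(Xb.shape[1])
    sc = s - s.mean()  # centered sensitive attribute
    for _ in range(iters):
        p = sigmoid(Xb @ w)
        grad_ll = Xb.T @ (p - y) / len(y)           # log-loss gradient
        cov = sc @ (Xb @ w) / len(y)                # cov(s, w·x)
        grad_pen = 2.0 * cov * (Xb.T @ sc) / len(y) # gradient of cov^2
        w -= lr * (grad_ll + lam * grad_pen)
    return w


def disparate_impact(w, Xb, s):
    """Ratio of positive-prediction rates between the two groups (1.0 = parity)."""
    yhat = sigmoid(Xb @ w) >= 0.5
    r0, r1 = yhat[s == 0].mean(), yhat[s == 1].mean()
    hi = max(r0, r1)
    return 1.0 if hi == 0 else min(r0, r1) / hi


w_plain = fit(Xb, y, s, lam=0.0)    # unconstrained baseline
w_fair = fit(Xb, y, s, lam=100.0)   # fairness-penalized model
print(disparate_impact(w_plain, Xb, s), disparate_impact(w_fair, Xb, s))
```

On this synthetic data the penalized model trades some accuracy for a disparate-impact ratio much closer to 1, mirroring the trade-off the abstract reports; the equality-of-opportunity constraint from the paper would be handled analogously on the positive-labeled subset.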
Files in This Item:
| File | Description | Size | Format | |
|---|---|---|---|---|
| 2126.pdf (Restricted Access) | | 289 kB | Adobe PDF | View/Open / Request a copy |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.