Please use this identifier to cite or link to this item: https://rfos.fon.bg.ac.rs/handle/123456789/2967
Full metadata record
DC Field: Value (Language)
dc.creator: Lukić, Matija (en_US)
dc.creator: Poledica, Ana (en_US)
dc.creator: Milošević, Pavle (en_US)
dc.date.accessioned: 2025-12-04T10:10:30Z
dc.date.available: 2025-12-04T10:10:30Z
dc.date.issued: 2025
dc.identifier.uri: https://rfos.fon.bg.ac.rs/handle/123456789/2967
dc.description.abstract: As generative AI systems grow in adoption and complexity, they introduce novel security, safety, and alignment risks that challenge traditional evaluation and defense paradigms. To address these, we focus on a structured five-phase red teaming workflow, consisting of reconnaissance, enumeration, exploitation, impact realization, and persistence, tailored to GenAI's unique threat landscape. Through real-world case studies and examples, we illustrate how adversaries exploit model vulnerabilities, bypass alignment mechanisms, and cause persistent harm. We also identify emerging GenAI security tools and map each red teaming phase to actionable mitigations that support safe deployment. Our goal is to connect AI safety theory with practical adversarial resilience for researchers, developers, and policymakers. (en_US)
dc.language.iso: en (en_US)
dc.publisher: Univerzitet u Beogradu – Fakultet organizacionih nauka (en_US)
dc.rights: openAccess (en_US)
dc.source: 52nd International Symposium on Operational Research (SYM-OP-IS 2025) Symposium Proceedings (en_US)
dc.title: Red teaming generative AI applications: threat modeling and mitigation strategies (en_US)
dc.type: conferenceObject (en_US)
dc.citation.epage: 47 (en_US)
dc.citation.spage: 42 (en_US)
dc.identifier.doi: 10.5281/zenodo.17533784
dc.type.version: publishedVersion (en_US)
item.fulltext: No Fulltext
item.openairetype: conferenceObject
item.grantfulltext: none
item.cerifentitytype: Publications
item.openairecristype: http://purl.org/coar/resource_type/c_18cf
item.languageiso639-1: en
Appears in Collections:Radovi istraživača / Researchers’ publications
