Please use this identifier to cite or link to this item: https://rfos.fon.bg.ac.rs/handle/123456789/2226
Title: Text mining to support abstract screening for knowledge syntheses: a semi-automated workflow
Authors: Pham, Ba
Jovanović, Jelena 
Bagheri, Ebrahim
Antony, Jesmin
Ashoor, Huda
Nguyen, Tam T.
Rios, Patricia
Robson, Reid
Thomas, Sonia M.
Watt, Jennifer
Straus, Sharon E.
Tricco, Andrea C.
Keywords: Text mining; Systematic review; Scoping review; Natural language processing; Machine learning; Classification model; Automation; Abstract screening
Issue Date: 2021
Publisher: BMC, London
Abstract: Background: Current text-mining tools supporting abstract screening in systematic reviews are not widely used, in part because they lack sensitivity and precision. We set out to develop an accessible, semi-automated "workflow" to conduct abstract screening for systematic reviews and other knowledge syntheses.
Methods: We adopted widely recommended text-mining and machine-learning methods to (1) process title-abstracts into numerical training data and (2) train a classification model to predict eligible abstracts. The predicted abstracts are screened by human reviewers for ("true") eligibility, and the newly eligible abstracts are used to identify similar abstracts, using near-neighbor methods, which are also screened. These abstracts, together with their eligibility results, are used to update the classification model, and the above steps are iterated until no new eligible abstracts are identified. The workflow was implemented in R and evaluated using a systematic review of insulin formulations for type 1 diabetes (14,314 abstracts) and a scoping review of knowledge-synthesis methods (17,200 abstracts). Workflow performance was evaluated against the recommended practice of screening abstracts by 2 reviewers, independently. Standard measures were examined: sensitivity (inclusion of truly eligible abstracts), specificity (exclusion of truly ineligible abstracts), precision (the proportion of truly eligible abstracts among all abstracts screened as eligible), F1-score (harmonic average of sensitivity and precision), and accuracy (correctly predicted eligible or ineligible abstracts). Workload reduction was measured as the screening hours saved, given that only a subset of abstracts required human screening.
Results: With respect to the systematic and scoping reviews, respectively, the workflow attained 88%/89% sensitivity, 99%/99% specificity, 71%/72% precision, an F1-score of 79%/79%, 98%/97% accuracy, and 63%/55% workload reduction, with 12%/11% fewer abstracts for full-text retrieval and screening, and 0%/1.5% of studies missed in the completed reviews.
Conclusion: The workflow was a sensitive, precise, and efficient alternative to the recommended practice of screening abstracts with 2 reviewers. All eligible studies were identified in the first case, while 6 studies (1.5%) were missed in the second; these would likely not change the review's conclusions. We have described the workflow in language accessible to reviewers with limited exposure to natural language processing and machine learning, and have made the code available to reviewers.
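
Illustrative sketch: the iterative screening loop from the Methods can be approximated as below. The published workflow was implemented in R, so this Python/scikit-learn version is only a rough illustration of the described steps; human_screen, seed_labels, and the particular vectorizer, classifier, and near-neighbor settings are assumptions, not the authors' implementation.

    # Sketch of the semi-automated screening loop described in the abstract.
    # `human_screen` is a placeholder for the human-reviewer step; all model
    # and parameter choices here are assumptions, not the published R code.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    def human_screen(indices):
        """Placeholder: return the subset of `indices` judged eligible by reviewers."""
        raise NotImplementedError

    def screen(abstracts, seed_labels):
        # (1) Process title-abstracts into numerical training data (here: TF-IDF).
        X = TfidfVectorizer(stop_words="english", max_features=20000).fit_transform(abstracts)
        labels = dict(seed_labels)  # abstract index -> 0/1 eligibility, from an initial screened sample
        knn = NearestNeighbors(n_neighbors=10, metric="cosine").fit(X)

        while True:
            # (2) Train a classification model on all abstracts screened so far.
            idx = np.fromiter(labels, dtype=int)
            clf = LogisticRegression(max_iter=1000).fit(X[idx], [labels[i] for i in idx])

            # Predict eligibility for the remaining, unscreened abstracts.
            unscreened = np.array([i for i in range(X.shape[0]) if i not in labels])
            if unscreened.size == 0:
                break
            predicted = unscreened[clf.predict(X[unscreened]) == 1].tolist()

            # (3) Human reviewers screen the predicted abstracts for "true" eligibility.
            newly_eligible = set(human_screen(predicted))
            labels.update({i: int(i in newly_eligible) for i in predicted})
            if not newly_eligible:
                break  # stop once an iteration identifies no new eligible abstracts

            # (4) Find abstracts similar to the newly eligible ones (near-neighbor
            #     search) and have reviewers screen those too; their results feed
            #     back into the next training round.
            _, nbrs = knn.kneighbors(X[sorted(newly_eligible)])
            similar = sorted({int(j) for row in nbrs for j in row if j not in labels})
            if similar:
                eligible_similar = set(human_screen(similar))
                labels.update({j: int(j in eligible_similar) for j in similar})

        return labels
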
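The evaluation measures follow their standard definitions. A minimal sketch, assuming reference labels y_true from independent dual-reviewer screening and workflow decisions y_pred (both names illustrative), with 1 = eligible:

    # Standard screening metrics, assuming 1 = eligible and 0 = ineligible.
    from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

    def screening_metrics(y_true, y_pred):
        return {
            "sensitivity": recall_score(y_true, y_pred),               # truly eligible abstracts correctly included
            "specificity": recall_score(y_true, y_pred, pos_label=0),  # truly ineligible abstracts correctly excluded
            "precision": precision_score(y_true, y_pred),              # truly eligible among those screened as eligible
            "f1": f1_score(y_true, y_pred),                            # harmonic average of sensitivity and precision
            "accuracy": accuracy_score(y_true, y_pred),                # correctly classified abstracts overall
        }

Workload reduction is not derived from these classification measures; per the abstract, it reflects the screening hours saved because only a subset of abstracts required human review.
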
URI: https://rfos.fon.bg.ac.rs/handle/123456789/2226
ISSN: 2046-4053
Appears in Collections: Radovi istraživača / Researchers’ publications

Files in This Item:
2222.pdf (1.38 MB, Adobe PDF)

SCOPUS Citations: 32 (checked on Nov 17, 2025)

This item is licensed under a Creative Commons License.