Generalizing machine learning models from clinical free text.

Title: Generalizing machine learning models from clinical free text.
Publication Type: Journal Article
Year of Publication: 2025
Authors: Pandian B, Vandervest J, Mentz G, Varghese J, Steadman SD, Kheterpal S, Makar M, Vydiswaran VGVinod, Burns ML
Journal: Sci Rep
Volume: 15
Issue: 1
Pagination: 31668
Date Published: 2025 Aug 28
ISSN: 2045-2322
Keywords: Current Procedural Terminology; Humans; Machine Learning; Neural Networks, Computer
Abstract

To assess strategies for enhancing the generalizability of healthcare artificial intelligence models, we analyzed the impact of preprocessing approaches applied to medical free text, compared single- versus multiple-institution data models, and evaluated data divergence metrics. From 1,607,393 procedures across 44 U.S. institutions, deep neural network models were created to classify anesthesiology Current Procedural Terminology codes from medical free text. Three levels of text preprocessing were analyzed, ranging from minimal to automated (cSpell) with comprehensive physician review. Kullback-Leibler divergence and k-medoid clustering were used to predict single- versus multiple-institution model performance. Single-institution models showed a mean accuracy of 92.5% [2.8% SD] and an F1 of 0.923 [0.029] on internal data but generalized poorly to external data (-22.4% [7.0%]; -0.223 [0.081]). Free-text preprocessing minimally altered performance (+0.51% [2.23%]; +0.004 [0.020]). An all-institution model performed worse on internal data (-4.88% [2.43%]; -0.045 [0.020]) but generalized better to external data (+17.1% [8.7%]; +0.182 [0.073]). Compared with vocabulary overlap and Jaccard similarity, Kullback-Leibler divergence correlated best with model performance (R² of 0.41 vs. 0.16 vs. 0.08, respectively) and was successful in clustering institutions and identifying outlier data. Overall, preprocessing medical free text showed limited utility in improving the generalization of machine learning models; single-institution models performed best internally but generalized poorly, while combined-data models improved generalization but never matched the internal performance of single-institution models. Kullback-Leibler divergence served as a reliable heuristic for evaluating generalizability. These results have important implications for developing broadly applicable artificial intelligence healthcare applications, providing valuable insight into their development and evaluation.
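The two corpus-divergence metrics compared in the abstract can be illustrated over unigram token distributions: Kullback-Leibler divergence is D_KL(P‖Q) = Σ_w P(w) · log(P(w)/Q(w)), and Jaccard similarity is |V_A ∩ V_B| / |V_A ∪ V_B| over the two corpora's vocabularies. The sketch below is not from the paper: whitespace tokenization and add-one smoothing are illustrative assumptions, and the authors' exact preprocessing pipeline is not reproduced here.

# Minimal sketch of the corpus-divergence metrics named in the abstract,
# comparing smoothed unigram token distributions from two institutions'
# free-text corpora. Whitespace tokenization and add-one smoothing are
# illustrative assumptions, not the paper's actual pipeline.
from collections import Counter
import math

def kl_divergence(texts_p, texts_q):
    """D_KL(P || Q) between add-one-smoothed unigram distributions."""
    counts_p = Counter(tok for text in texts_p for tok in text.lower().split())
    counts_q = Counter(tok for text in texts_q for tok in text.lower().split())
    vocab = set(counts_p) | set(counts_q)
    total_p = sum(counts_p.values()) + len(vocab)  # +1 count per vocab word
    total_q = sum(counts_q.values()) + len(vocab)
    kl = 0.0
    for w in vocab:
        p = (counts_p[w] + 1) / total_p
        q = (counts_q[w] + 1) / total_q
        kl += p * math.log(p / q)
    return kl

def jaccard_similarity(texts_a, texts_b):
    """Vocabulary overlap ratio |A ∩ B| / |A ∪ B| between two corpora."""
    vocab_a = {tok for text in texts_a for tok in text.lower().split()}
    vocab_b = {tok for text in texts_b for tok in text.lower().split()}
    return len(vocab_a & vocab_b) / len(vocab_a | vocab_b)

# Toy example: two hypothetical institutions' procedure descriptions.
site_a = ["lap chole general anesthesia", "total knee arthroplasty spinal"]
site_b = ["laparoscopic cholecystectomy general", "knee replacement total"]
print(kl_divergence(site_a, site_b))       # higher = more divergent
print(jaccard_similarity(site_a, site_b))  # higher = more vocabulary overlap

Smoothing is needed because D_KL is undefined whenever a word appears in one corpus but not the other; note also that KL divergence is asymmetric (D_KL(P‖Q) ≠ D_KL(Q‖P)), so the direction of comparison matters when scoring an external institution against a training corpus.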

DOI: 10.1038/s41598-025-17197-6
Alternate Journal: Sci Rep
PubMed ID: 40866580
PubMed Central ID: PMC12391454
Grant List: 2153083 / National Science Foundation