Imaging-Based Surgical Site Infection Detection Using Artificial Intelligence
*Hala Muaddi1, *Frank Lee1, *Ashok Choudhary2, *Stephanie Anderson2, *Elizabeth Habermann2, *David Etzioni1, Sarah McLaughlin1, *Michael Kendrick1, *Hojjat Salehinejad2, *Cornelius Thiels1
1Surgery, Mayo Clinic, Rochester, MN; 2Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN
Background: Early identification of wound infections is essential for reducing postoperative morbidity. The rise of outpatient surgeries, remote monitoring, and patient-submitted wound images via online portals has placed a growing administrative workload on healthcare providers.
Objective: To develop an AI-based pipeline to assess and triage patient-submitted wound photos after surgery.
Methods: Patients 18 years or older who underwent surgery at nine academic and tertiary centers across North America and were captured in the National Surgical Quality Improvement Program (NSQIP) were included. Images submitted to the patient portal within 30 days of surgery were reviewed and categorized by two surgeons for the presence of surgical wounds. Superficial and deep surgical site infection outcomes were obtained from NSQIP data. Two models were developed leveraging four pretrained architectures: Vision Transformer, ResNet50, MobileNetV4, and U-Net. The first model was designed to detect surgical wounds using all images submitted through the portal; the second detected surgical site infections using images with confirmed surgical wounds. The dataset was split using 10-fold cross-validation (80:20 split). Upsampling and data augmentation (random flips and grayscale) were used to address class imbalance, and early stopping was applied to reduce overfitting. Performance was measured and compared across the four architectures using accuracy, precision, recall, and F1-score. An end-to-end pipeline (combining incision and infection detection) was developed using the natural distribution of the data.
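The abstract does not include implementation details beyond the architectures and augmentations named above; as a rough illustration only, a two-stage pipeline of this kind could be wired up as in the following PyTorch sketch. The timm checkpoint name, label conventions, and transforms are assumptions for illustration, not the authors' code.

```python
# Illustrative two-stage triage sketch; NOT the authors' implementation.
# Assumptions: timm/torchvision are available, images are PIL images,
# label 1 = "wound present" (stage 1) and label 1 = "infected" (stage 2).
import timm
import torch
from torchvision import transforms

# Training-time augmentation mirroring the abstract: random flips and grayscale.
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomVerticalFlip(),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

# Deterministic transform for inference.
eval_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def build_classifier() -> torch.nn.Module:
    # Pretrained Vision Transformer with a 2-class head; ResNet50 or
    # MobileNetV4 would swap in via their timm model names.
    return timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=2)

wound_model = build_classifier().eval()      # stage 1: is a surgical wound visible?
infection_model = build_classifier().eval()  # stage 2: does the wound look infected?

@torch.no_grad()
def triage(image) -> str:
    """End-to-end inference: only images with a detected wound are
    forwarded to the infection classifier, as in the abstract's pipeline."""
    x = eval_tf(image).unsqueeze(0)
    if wound_model(x).argmax(dim=1).item() == 0:
        return "no surgical wound detected"
    if infection_model(x).argmax(dim=1).item() == 1:
        return "possible surgical site infection: flag for clinician review"
    return "wound present, no infection detected"
```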
Results: Among 6,060 patients and 6,199 surgical procedures, the median patient age was 54 years (IQR 40-65), and 61.4% (n=3,805) were female. Most patients (92.5%, n=5,731) identified as white. The most common surgery type was orthopedic surgery (23.9%), followed by general surgery (19.8%) and plastic surgery (15.1%). A total of 20,902 images were submitted, and 6.2% of patients (n=386) had a confirmed surgical site infection.
For incision detection, Vision Transformer performed best, with an accuracy of 0.93±0.01, precision of 0.92±0.01, recall of 0.94±0.01, F1-score of 0.93±0.01, and AUC of 0.98 (Figure). For infection detection, Vision Transformer also led, with an accuracy of 0.78±0.01, recall of 0.60±0.03, and F1-score of 0.73±0.03. MobileNetV4 and ResNet50 performed similarly to each other but were inferior to Vision Transformer, while U-Net was inferior to all other architectures. In the end-to-end model, Vision Transformer continued to perform strongly, achieving an accuracy of 0.76±0.02, precision of 0.43±0.04, F1-score of 0.60±0.04, and AUC of 0.83.
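For readers unfamiliar with the reporting convention, fold-wise metrics of the kind quoted above (mean ± standard deviation over the 10 cross-validation folds) can be computed as in the following sketch; the function names and aggregation are illustrative assumptions built on scikit-learn, not the authors' evaluation code.

```python
# Illustrative metric aggregation across cross-validation folds;
# not the authors' evaluation code.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

def fold_metrics(y_true, y_pred) -> dict:
    # Per-fold binary classification metrics reported in the abstract.
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
    }

def summarize(per_fold: list) -> dict:
    # Aggregate each metric as mean +/- standard deviation over folds,
    # matching the "0.93 +/- 0.01"-style figures reported above.
    return {
        k: (float(np.mean([m[k] for m in per_fold])),
            float(np.std([m[k] for m in per_fold])))
        for k in per_fold[0]
    }
```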
Conclusion: An AI model was able to automatically detect postoperative incisions and infections across a large, diverse dataset of patient-submitted wound images. This model has the potential to reduce clinical workload, improve postoperative monitoring, and enhance the patient experience.