Situating Artificial Intelligence In Surgery: A Focus On Disease Severity
James R Korndorffer, Jr., Mary T Hawn, David A Spain, *Lisa M Knowlton, *Dan E Azagury, *Aussama K Nassar, *James N Lau, *Katherine Arnow, *Amber Trickey, Carla M Pugh
Stanford University, Stanford, CA
OBJECTIVES: Artificial intelligence (AI) has numerous applications in surgical quality assurance. We assessed AI accuracy in evaluating the critical view of safety (CVS) and intraoperative events during laparoscopic cholecystectomy and hypothesized that AI accuracy and intraoperative events are associated with disease severity.
METHODS: 1051 laparoscopic cholecystectomy videos were annotated by AI for disease severity (Parkland Scale), CVS achievement (Strasberg Criteria), and intraoperative events. Surgeons performed focused video review on procedures with ≥1 intraoperative event (n=225). AI vs. surgeon annotations of CVS components and intraoperative events were compared. For all cases (n=1051), the association of intraoperative events with CVS achievement and disease severity was examined using ordinal logistic regression.
RESULTS: With AI, surgeons reviewed 50 videos/hr. CVS was achieved in <10% of cases. Visualization of the hepatocystic triangle and cystic plate was achieved more often in low-severity cases (p<0.03). AI-surgeon agreement for all CVS components exceeded 75% (kappa 0.19-0.54), with higher agreement in low-severity cases (p<0.03). Surgeons agreed with 99% of AI-annotated intraoperative events. AI-annotated intraoperative events were associated with both disease severity and the number of CVS components not achieved (Table). Intraoperative events occurred more frequently in high-severity vs. low-severity cases (0.98 vs. 0.40 events/case, p<0.001).
CONCLUSIONS: AI annotation allows for efficient video review and is a promising quality assurance tool. Disease severity may limit its use, and surgeon oversight is still required in complex cases. Continued refinement may improve AI applicability and allow for automated assessment.