Constantinos Loukas, Athanasios Gazis, Meletios A Kanakis
November 2020. JSLS Journal of the Society of Laparoscopic & Robotic Surgeons 24(4):e2020.00057
Background and Objectives: Current approaches to surgical skills assessment employ virtual reality simulators, motion sensors, and task-specific checklists. Although accurate, these methods can make the generated performance measures difficult to interpret. The aim of this study is to propose an alternative methodology for skills assessment and classification, based on video annotation of laparoscopic tasks.

Methods: Two groups of 32 trainees (students and residents) performed two laparoscopic tasks: peg transfer (PT) and knot tying (KT). Each task was annotated with video analysis software using a vocabulary of eight surgical gestures (surgemes) that denote the elementary gestures required to perform the task. The extracted metrics included the duration and count of each surgeme, penalty events, and counts of sequential surgemes (transitions). Our analysis focused on comparing and classifying the trainees' skill levels using a nearest-neighbor approach. Classification was assessed via accuracy, sensitivity, and specificity.

Results: For PT, almost all metrics showed a significant performance difference between the two groups (p < 0.001). Residents completed the task with fewer, shorter surgemes and fewer penalty events. Moreover, residents performed significantly fewer transitions (p < 0.05). For KT, residents performed two surgemes in significantly shorter time (p < 0.05). The metrics derived from the video annotations were also able to recognize the trainees' skill level with 0.71–0.86 accuracy, 0.80–1.00 sensitivity, and 0.60–0.80 specificity.

Conclusion: The proposed technique provides a tool for skills assessment and experience classification of surgical trainees, as well as an intuitive way of describing which surgemes are performed and how.
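The nearest-neighbor classification step described in the Methods can be sketched as follows. This is a minimal illustration only: the feature vectors (task time, surgeme count, penalty events), group sizes, labels, and the choice of a 1-NN Euclidean classifier are all assumptions for the sketch, not the study's actual data or configuration.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nn_classify(query, train_X, train_y):
    # 1-nearest-neighbor: assign the label of the closest training vector
    dists = [euclidean(query, x) for x in train_X]
    return train_y[dists.index(min(dists))]

# Hypothetical per-trainee feature vectors:
# [total task time (s), total surgeme count, penalty events]
# Values are illustrative, not taken from the study.
train_X = [
    [310, 42, 5], [295, 40, 4], [330, 45, 6],   # students (label 0)
    [180, 28, 1], [170, 26, 0], [195, 30, 1],   # residents (label 1)
]
train_y = [0, 0, 0, 1, 1, 1]

# Two held-out trainees with known labels, to score the classifier
test_X = [[320, 44, 5], [175, 27, 1]]
test_y = [0, 1]

preds = [nn_classify(q, train_X, train_y) for q in test_X]

# Accuracy, sensitivity, specificity from the confusion counts
tp = sum(p == 1 and t == 1 for p, t in zip(preds, test_y))
tn = sum(p == 0 and t == 0 for p, t in zip(preds, test_y))
accuracy = (tp + tn) / len(test_y)
print(preds, accuracy)
```

Sensitivity (true-positive rate on one class) and specificity (true-negative rate on the other) follow from the same confusion counts, which matches how the abstract reports the three scores together.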