Improving Surgical Workflow Recognition with Neural Networks

Published on September 7, 2022

Computer-assisted surgery has made leaps and bounds, driving advancements in methodology and technology. Computer vision-based methods are widely used to recognize surgical workflows, but collecting annotated data poses challenges. In this study, researchers focus on the problem of limited labeled data and propose a knowledge transfer learning method using a Convolutional-De-Convolutional (CDC) neural network. This network performs semantic abstraction through convolution in space and recovers frame-level resolution through de-convolution in time. Fine-tuned via transfer learning, the CDC network classifies surgical phases with an accuracy of 91.4%. The proposed method extracts essential surgical features and accurately identifies the phase of the procedure, addressing the issue of data deficiency. To further explore this topic, check out the full article.

Computer-assisted surgery (CAS) occupies an important position in modern surgery and continues to stimulate progress in methodology and technology. In recent years, a large number of computer vision-based methods have been applied to surgical workflow recognition tasks. Training these models requires large amounts of annotated data; however, annotating surgical data demands expert knowledge and is therefore difficult and time-consuming. In this paper, we focus on the problem of data deficiency and propose a knowledge transfer learning method based on an artificial neural network to compensate for the small amount of labeled training data. Specifically, we propose an unsupervised method for pre-training a Convolutional-De-Convolutional (CDC) neural network on sequences of surgical workflow frames; the network performs convolution in space (for semantic abstraction) and de-convolution in time (for frame-level resolution) simultaneously. Through transfer learning, we then only need to fine-tune the pre-trained CDC network to classify the surgical phase. We performed experiments to validate the model, and the results show that it can effectively extract surgical features and determine the surgical phase. The accuracy (Acc), recall, and precision (Prec) of our model reached 91.4%, 78.9%, and 82.5%, respectively.
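
To make the architecture and the fine-tuning step concrete, the sketch below shows one way a CDC-style network for frame-level phase classification could look in PyTorch. The layer sizes, the number of phases, and the choice to freeze the spatial feature extractor while fine-tuning only the classifier are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of a CDC-style network for surgical phase recognition.
    # All layer sizes, names, and the fine-tuning routine are illustrative
    # assumptions, not the authors' exact architecture.
    import torch
    import torch.nn as nn

    class CDCPhaseNet(nn.Module):
        """Convolution in space for semantic abstraction, de-convolution in
        time to recover frame-level resolution, then a per-frame phase head."""

        def __init__(self, num_phases: int = 7, feat_dim: int = 256):
            super().__init__()
            # Spatial convolution: abstracts each frame into a feature vector.
            self.spatial = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
                nn.Flatten(),
                nn.Linear(64, feat_dim),
            )
            # Temporal convolution then de-convolution: downsample the clip to
            # gather context, then upsample back to one output per frame.
            self.temporal_down = nn.Conv1d(feat_dim, feat_dim, kernel_size=4, stride=2, padding=1)
            self.temporal_up = nn.ConvTranspose1d(feat_dim, feat_dim, kernel_size=4, stride=2, padding=1)
            # Frame-level phase classifier (the part fine-tuned with labels).
            self.classifier = nn.Conv1d(feat_dim, num_phases, kernel_size=1)

        def forward(self, clips: torch.Tensor) -> torch.Tensor:
            # clips: (batch, time, channels, height, width)
            b, t, c, h, w = clips.shape
            feats = self.spatial(clips.reshape(b * t, c, h, w))   # (b*t, feat_dim)
            feats = feats.reshape(b, t, -1).transpose(1, 2)       # (b, feat_dim, t)
            hidden = torch.relu(self.temporal_down(feats))        # (b, feat_dim, t/2)
            restored = torch.relu(self.temporal_up(hidden))       # (b, feat_dim, t)
            return self.classifier(restored)                      # (b, num_phases, t)

    if __name__ == "__main__":
        model = CDCPhaseNet(num_phases=7)
        # Transfer-learning step: keep the pre-trained feature extractor frozen
        # and fine-tune only the frame-level classifier on the labeled phases.
        for p in model.spatial.parameters():
            p.requires_grad = False
        optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
        dummy = torch.randn(2, 16, 3, 112, 112)   # 2 clips of 16 frames each
        logits = model(dummy)                     # (2, 7, 16): one phase logit per frame
        print(logits.shape)

The key property of the CDC design is visible in the tensor shapes: the temporal convolution compresses the clip to capture context, and the de-convolution restores one prediction per input frame, so every frame receives its own phase label.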

Read Full Article (External Site)
