Online Poster Portal

  • Author
    Varun Gudapati
  • Title

    Machine-Learning Enabled Interactive Virtual Reality Simulator for Preoperative Planning of Endoscopic Sinus and Skull Base Surgery

  • Abstract

    Interactive virtual reality (VR) is poised to transform surgical education but has yet to realize its potential as a clinical tool due to barriers in image processing. Current simulators, which rely on manually crafted models, do not allow users to view patient-specific anatomy for preoperative planning. Endoscopic sinus surgery presents intricate, challenging anatomy that varies greatly between patients and cannot be comprehensively captured by standardized models. We aim to harness our novel, data-efficient subspace approximation with augmented kernels (Saak) transform-based machine learning method to automatically segment sinus CT scans and prepare a VR tool for exploring a specific patient’s anatomy prior to endoscopic sinus surgery.

    Materials and Methods

    We manually segmented the soft tissue components of 548 images derived from CT scans using Amira. Our Saak transform-based machine learning algorithm consisted of a multi-stage feed-forward Saak transform, based on the Karhunen–Loève transform (KLT), equipped with a random forest classifier. We randomly selected images to train our Saak-based machine learning segmentation algorithm and then used it to segment the remaining images in a scan. We validated the segmentation results using the Dice similarity coefficient (DSC) and intersection over union (IoU) before exporting the segmented scans to a Unity interface that we designed to enable endoscopic exploration and mapping.
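    The classification step above can be sketched in simplified form. This is a single-stage illustration only, not the authors' implementation: the actual method uses a multi-stage Saak transform, and the function name `saak_stage`, the kernel count, and the toy data below are assumptions for illustration. A Saak stage projects patches onto KLT (PCA) kernels, augments the kernels with their negations, and applies ReLU; the resulting features feed a random forest.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.ensemble import RandomForestClassifier

    def saak_stage(patches, n_kernels=4):
        """One simplified Saak stage: KLT (PCA) projection, kernel
        augmentation with negated kernels, then ReLU."""
        pca = PCA(n_components=n_kernels)
        resp = pca.fit_transform(patches)            # KLT projection
        aug = np.concatenate([resp, -resp], axis=1)  # augmented kernels
        return np.maximum(aug, 0.0)                  # ReLU keeps responses non-negative

    # Toy stand-in for CT patches: 200 flattened 3x3 patches with
    # hypothetical foreground/background labels.
    rng = np.random.default_rng(0)
    patches = rng.normal(size=(200, 9))
    labels = (patches.mean(axis=1) > 0).astype(int)

    features = saak_stage(patches)                   # 4 kernels -> 8 augmented features
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(features, labels)
    pred = clf.predict(features)
    ```

    In the full pipeline, several such stages are stacked feed-forward so each stage operates on spatially pooled responses of the previous one.
    
    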

    Results and Discussion

    Our Saak transform-based machine learning method produced DSC and IoU values of 0.982 ± 0.009 and 0.965 ± 0.018, respectively, suggesting a high degree of accuracy. More importantly, qualitative examination of the segmented results revealed minimal noise, making the scans well suited for use in the VR domain. The automatic segmentation method successfully enabled the design of a virtual reality simulator with a controllable endoscopic camera and accurate mapping of the probe’s real-time location to 2D axial CT slices, as shown in the figure below.
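    The two validation metrics reported above have standard definitions for binary segmentation masks; a minimal sketch (the function name and toy masks are illustrative, not from the study):

    ```python
    import numpy as np

    def dice_iou(pred, truth):
        """Dice similarity coefficient (DSC) and intersection over
        union (IoU) for binary segmentation masks."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        inter = np.logical_and(pred, truth).sum()
        dsc = 2.0 * inter / (pred.sum() + truth.sum())
        iou = inter / np.logical_or(pred, truth).sum()
        return dsc, iou

    # Toy masks: the prediction recovers 3 of 4 true foreground pixels.
    truth = np.array([[1, 1], [1, 1]])
    pred = np.array([[1, 1], [1, 0]])
    dsc, iou = dice_iou(pred, truth)  # DSC = 6/7, IoU = 3/4
    ```

    The two metrics are monotonically related (DSC = 2·IoU / (1 + IoU)), so high values of one imply high values of the other, as seen in the reported 0.982 and 0.965.
    
    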

    Automatic segmentation allows us to efficiently generate patient-specific anatomical models, taking a key step in making VR an accessible and clinically relevant tool. With continuing strides in machine learning-based image processing and VR technology, these simulators may allow us to not just visualize anatomy but also rehearse full surgeries on specific patients.
