This contribution presents a novel method for image-guided navigation in oncological liver surgery. It maintains the registration for deeply located intrahepatic structures during the resection. For this purpose, navigation aids localizable by an electromagnetic tracking system are anchored within the liver. Position and orientation data obtained from the navigation aids are used to parameterize a real-time deformation model. This approach enables, for the first time, real-time monitoring of target structures even in the depth of the intraoperatively deformed liver. The dynamic behavior of the deformation model has been evaluated with a silicone phantom. First experiments have been carried out with pig livers ex vivo.
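The abstract does not specify the deformation model itself. As a purely illustrative sketch, one simple way to propagate the measured displacements of implanted navigation aids to a deep target point is inverse-distance weighting; the function and sample coordinates below are assumptions, not the authors' model.

```python
import math

def deform(target, aids_ref, aids_now):
    """Displace `target` by the inverse-distance-weighted mean displacement
    of the navigation aids (reference vs. current tracked positions)."""
    weights, displacements = [], []
    for ref, now in zip(aids_ref, aids_now):
        d = math.dist(target, ref)
        weights.append(1.0 / max(d, 1e-6))   # closer aids dominate
        displacements.append(tuple(n - r for n, r in zip(now, ref)))
    total = sum(weights)
    shift = tuple(sum(w * dp[i] for w, dp in zip(weights, displacements)) / total
                  for i in range(3))
    return tuple(t + s for t, s in zip(target, shift))

# Two aids (coordinates in mm) both shift 5 mm along x; the target follows:
aids_ref = [(0.0, 0.0, 0.0), (40.0, 0.0, 0.0)]
aids_now = [(5.0, 0.0, 0.0), (45.0, 0.0, 0.0)]
print(deform((20.0, 10.0, 0.0), aids_ref, aids_now))  # ≈ (25.0, 10.0, 0.0)
```

A real-time model would of course be driven by the aids' full position and orientation data; this sketch uses positions only.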
We propose a procedure for the intraoperative generation of attributed relational vessel graphs. It establishes the prerequisite for a vessel-based registration of a virtual, patient-individual, preoperative, three-dimensional liver model with the intraoperatively deformed liver by graph matching. An image processing pipeline is proposed to extract an abstract representation of the vascular anatomy from intraoperatively acquired three-dimensional ultrasound. The procedure is transferable to other vascularized soft tissues such as the brain or the kidneys. We believe that our approach is suitable for intraoperative application as the basis for efficient vessel-based registration of the surgical volume of interest. By reducing the problem of intraoperative registration in visceral surgery to the mapping of corresponding attributed relational vessel graphs, a fast and reliable registration seems feasible even in the depth of deformed vascularized soft tissues such as the human liver.
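To illustrate the core idea of mapping corresponding attributed vessel graphs, the sketch below greedily pairs preoperative and intraoperative branch points by attribute similarity (vessel radius and branching degree). The node names, attributes, threshold, and greedy strategy are illustrative assumptions and not the matching algorithm of the paper, which would also exploit the relational (edge) structure.

```python
def attribute_distance(a, b):
    """Dissimilarity of two branch points described by radius and degree."""
    return abs(a["radius"] - b["radius"]) + abs(a["degree"] - b["degree"])

def greedy_match(graph_pre, graph_intra, max_dist=1.0):
    """Greedily pair preoperative and intraoperative branch points."""
    matches, used = {}, set()
    for name_pre, attr_pre in graph_pre.items():
        best, best_d = None, max_dist
        for name_intra, attr_intra in graph_intra.items():
            if name_intra in used:
                continue
            d = attribute_distance(attr_pre, attr_intra)
            if d < best_d:
                best, best_d = name_intra, d
        if best is not None:
            matches[name_pre] = best
            used.add(best)
    return matches

# Toy graphs: branch points attributed with vessel radius (mm) and degree.
pre = {"b1": {"radius": 4.0, "degree": 3}, "b2": {"radius": 2.1, "degree": 3}}
intra = {"c1": {"radius": 2.0, "degree": 3}, "c2": {"radius": 3.9, "degree": 3}}
print(greedy_match(pre, intra))  # → {'b1': 'c2', 'b2': 'c1'}
```

Matched branch points then serve as corresponding landmarks for the actual vessel-based registration.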
KEYWORDS: Image segmentation, 3D modeling, Liver, Visualization, 3D visualizations, 3D image processing, Medical imaging, 3D acquisition, Image processing, Kidney
In medical imaging, segmentation is an important step for many visualization tasks and image-guided procedures. Except in very rare cases, automatic segmentation methods cannot guarantee a correct segmentation. Therefore, for clinical usage, physicians insist on full control over the segmentation result, i.e., on verifying and interactively correcting the segmentation if necessary. Display and interaction in 2D slices (original or multi-planar reformatted) are more precise than in 3D visualizations and therefore indispensable for segmentation, verification, and correction. Using slices in more than one orientation (multi-planar reformatted slices) helps to avoid inconsistencies between 2D segmentation results in neighboring slices. For the verification and correction of three-dimensional segmentations, as well as for generating a new 3D segmentation, it is therefore desirable to have a method that constructs a new or improved 3D segmentation from 2D segmentation results. The proposed method makes it possible to quickly extend segmentations performed on intersecting slices of arbitrary orientation to a three-dimensional surface model by means of interpolation with specialized Coons patches. It can be used as a segmentation tool in its own right, as well as to make more sophisticated segmentation methods (those that need an initialization close to the boundary to be detected) feasible for clinical routine.
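The paper uses specialized Coons patches; as background, the sketch below evaluates the textbook bilinearly blended Coons patch, which interpolates a surface from four boundary curves (here the straight edges of a unit square, an illustrative assumption).

```python
def lerp(p, q, t):
    """Linear interpolation between two points p and q at parameter t."""
    return tuple(pi + t * (qi - pi) for pi, qi in zip(p, q))

def coons(c0, c1, d0, d1, u, v):
    """Evaluate a bilinearly blended Coons patch at (u, v) in [0,1]^2.
    c0(u), c1(u): bottom/top boundary curves; d0(v), d1(v): left/right."""
    p00, p10, p01, p11 = c0(0), c0(1), c1(0), c1(1)
    ruled_v = lerp(c0(u), c1(u), v)                        # blend bottom->top
    ruled_u = lerp(d0(v), d1(v), u)                        # blend left->right
    bilin = lerp(lerp(p00, p10, u), lerp(p01, p11, u), v)  # corner correction
    return tuple(a + b - c for a, b, c in zip(ruled_v, ruled_u, bilin))

# Boundary curves of a unit square in the z = 0 plane:
c0 = lambda u: (u, 0.0, 0.0)   # bottom edge
c1 = lambda u: (u, 1.0, 0.0)   # top edge
d0 = lambda v: (0.0, v, 0.0)   # left edge
d1 = lambda v: (1.0, v, 0.0)   # right edge
print(coons(c0, c1, d0, d1, 0.5, 0.5))  # → (0.5, 0.5, 0.0), the center
```

In the segmentation setting, the boundary curves would be the 2D contours drawn on intersecting slices, and patches of this kind fill in the surface between them.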
A substantial component of an image-guided surgery system (IGSS) is the kind of three-dimensional (3D) presentation offered to the surgeon, because visual depth perception of the complex anatomy is of significant relevance for orientation. Therefore, for this contribution we examined four different visualization techniques, which were evaluated by eight surgeons. The IGSS developed by our group supports the intraoperative orientation of the surgeon by presenting a visualization of the spatially tracked surgical instruments with respect to vitally important intrahepatic vessels, the tumor, and preoperatively calculated resection planes. In the preliminary trial presented here, we examined the human ability to perceive an intraoperative virtual scene and to solve given navigation tasks. The focus of the experiments was to measure the ability of eight surgeons to orient themselves intrahepatically and to transfer the perceived virtual spatial relations to movements in real space. With auto-stereoscopic visualization making use of a prism-based display, navigation can be performed faster and more accurately than with the other visualization techniques.
In this contribution, a postprocessing method is introduced that enables dynamic accuracy examinations of position and angle measurements of two non-interfering localizing systems describing the same subspace of the Euclidean space R3. Furthermore, the method can be used to realize a hybrid localizing system, given a common temporal synchronization of the measurements. This article thus provides a flexible method for examining the influence of the operating room on magnetic tracking by dynamic comparison with reference measurements of an optical localizing system.
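Given a common temporal synchronization, one basic ingredient of such a dynamic comparison is resampling the magnetic measurements onto the timestamps of the optical reference and differencing them. The 1-D setup and sample data below are illustrative assumptions, not the authors' postprocessing pipeline.

```python
def interpolate(ts, xs, t):
    """Linearly interpolate samples (ts, xs) at time t (ts sorted ascending)."""
    for (t0, x0), (t1, x1) in zip(zip(ts, xs), zip(ts[1:], xs[1:])):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0)
            return x0 + w * (x1 - x0)
    raise ValueError("t outside measurement interval")

def dynamic_errors(t_opt, x_opt, t_mag, x_mag):
    """Position differences optical minus (resampled) magnetic,
    evaluated at the optical reference timestamps."""
    return [x - interpolate(t_mag, x_mag, t) for t, x in zip(t_opt, x_opt)]

# Optical reference (s, mm) and magnetic samples at offset timestamps,
# here simulated with a constant +0.2 mm bias in the magnetic data:
t_opt, x_opt = [0.1, 0.2, 0.3], [10.0, 20.0, 30.0]
t_mag, x_mag = [0.05, 0.15, 0.25, 0.35], [5.2, 15.2, 25.2, 35.2]
print(dynamic_errors(t_opt, x_opt, t_mag, x_mag))  # each error ≈ -0.2
```

Angle measurements can be compared analogously, e.g. by interpolating orientations and differencing, and per-timestamp errors like these allow the magnetic system's distortion by the operating-room environment to be examined over time.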