Reliable transbronchial access of peripheral lung lesions is desirable for the diagnosis and potential treatment
of lung cancer. This procedure can be difficult, however, because accessory devices (e.g., needle or forceps)
cannot be reliably localized while deployed. We present a fluoroscopic image-guided intervention (IGI) system
for tracking such bronchoscopic accessories. Fluoroscopy, an imaging technology currently utilized by many
bronchoscopists, has a fundamental shortcoming: many lung lesions are invisible in its images. Our IGI
system aligns a digitally reconstructed radiograph (DRR) defined from a pre-operative computed tomography
(CT) scan with live fluoroscopic images. Radiopaque accessory devices are readily apparent in fluoroscopic video,
while lesions lacking a fluoroscopic signature but identifiable in the CT scan are superimposed in the scene. The
IGI system processing steps consist of: (1) calibrating the fluoroscopic imaging system; (2) registering the CT
anatomy with its depiction in the fluoroscopic scene; and (3) using optical tracking to continually update the DRR and
target positions as the fluoroscope is moved about the patient. The end result is a continuous correlation of the
DRR and projected targets with the anatomy depicted in the live fluoroscopic video feed. Because both targets
and bronchoscopic devices are readily apparent in arbitrary fluoroscopic orientations, multiplane guidance is
straightforward. The system tracks in real time with no computational lag. We have measured a mean projected
tracking accuracy of 1.0 mm in a phantom and present results from an in vivo animal study.
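As a rough illustration of what the DRR computation involves, the sketch below integrates CT attenuation along a viewing direction under a simplified parallel-beam model. The HU-to-attenuation scaling and the rotation-based geometry are illustrative assumptions, not the calibrated cone-beam model the described system would use.

```python
# Minimal sketch: a parallel-beam digitally reconstructed radiograph (DRR).
# A clinical system would model the fluoroscope's cone-beam geometry from the
# calibration step; here the viewing direction is approximated by rotating the
# CT volume and integrating attenuation along one axis.
import numpy as np
from scipy import ndimage

def simple_drr(ct_hu, gantry_angle_deg=0.0):
    """Project a CT volume (in Hounsfield units) into a 2D DRR.

    ct_hu            : 3D numpy array, axes ordered (z, y, x)
    gantry_angle_deg : rotation of the simulated fluoroscope about the z axis
    """
    # Convert HU to an approximate, water-referenced linear attenuation.
    mu = np.clip((ct_hu + 1000.0) / 1000.0, 0.0, None)
    # Orient the volume for the requested viewing angle.
    rotated = ndimage.rotate(mu, gantry_angle_deg, axes=(1, 2),
                             reshape=False, order=1)
    # Integrate along the beam direction (parallel-beam approximation).
    line_integrals = rotated.sum(axis=1)
    # Map to film-like intensities with the Beer-Lambert relation.
    return np.exp(-0.02 * line_integrals)   # 0.02: arbitrary scaling constant

# Example: a synthetic 64^3 volume with a dense "lesion" in the middle.
vol = np.full((64, 64, 64), -1000.0)
vol[28:36, 28:36, 28:36] = 200.0
image = simple_drr(vol, gantry_angle_deg=30.0)
```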
Past work has shown that guidance systems help improve both the navigation through airways and final biopsy of regions
of interest via bronchoscopy. We have previously proposed an image-based bronchoscopic guidance system. The system,
however, has three issues that arise during navigation: 1) sudden disorienting changes can occur in endoluminal views; 2) the system provides limited feedback during navigation; and 3) the system's graphical user interface (GUI) does not conveniently support smooth navigation between bifurcations. To alleviate these issues, we present an improved navigation
system. The improvements offer the following: 1) an enhanced visual presentation; 2) smooth navigation; 3) an interface
for handling registration errors; and 4) improved bifurcation-point identification. The improved navigation system thus
provides significant ergonomic and navigational advantages over the previous system.
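One way "smooth navigation" between bifurcations could be realized is to interpolate the virtual camera along the path joining two bifurcation points instead of jumping discretely between them. The sketch below shows a simple linear version of that idea; the waypoints are illustrative, and the actual system's interpolation scheme is not specified in the abstract.

```python
# Minimal sketch: interpolate the virtual-camera position between two
# bifurcation points so the endoluminal view changes gradually rather than
# abruptly. Waypoint coordinates are placeholders.
import numpy as np

def interpolate_camera_path(p_start, p_end, n_steps=20):
    """Return evenly spaced camera positions from one bifurcation to the next."""
    p_start, p_end = np.asarray(p_start, float), np.asarray(p_end, float)
    t = np.linspace(0.0, 1.0, n_steps)[:, None]     # interpolation parameter
    return (1.0 - t) * p_start + t * p_end          # (n_steps, 3) positions

path = interpolate_camera_path([0, 0, 0], [12.0, 3.0, -25.0], n_steps=5)
```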
Bronchoscopy is often performed for diagnosing lung cancer. The recent development of multidetector CT (MDCT) scanners and ultrathin bronchoscopes now enable the bronchoscopic biopsy and treatment of peripheral regions of interest (ROIs). Because the peripheral ROIs are often located several generations within the airway tree, careful planning is required prior to a procedure. The current practice for planning peripheral bronchoscopic procedures, however, is difficult, error-prone, and time-consuming. We propose a system for planning peripheral bronchoscopic procedures using patient-specific MDCT chest scans. The planning process begins with a semi-automatic segmentation of ROIs. The remaining system components are completely automatic, beginning with a new strategy for tracheobronchial airway-tree segmentation. The system then uses a new locally-adaptive approach for finding the interior airway-wall surfaces. From the polygonal airway-tree surfaces, a centerline-analysis method extracts the central axes of the airway tree. The system's route-planning component then analyzes the data generated in the previous stages to determine an appropriate path through the airway tree to the ROI. Finally, an automated report generator gives quantitative data about the route and both static and dynamic previews of the procedure. These previews consist of virtual bronchoscopic endoluminal renderings at bifurcations encountered along the route and renderings of the airway tree and ROI at the suggested biopsy location. The system is currently in use for a human lung-cancer patient pilot study involving the planning and subsequent live image-based guidance of suspect peripheral cancer nodules.
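The route-planning component is not specified in detail above; the following sketch illustrates one plausible formulation, assuming the airway centerlines are available as a weighted graph: a shortest-path search from the trachea to the centerline node nearest the ROI centroid. Node names, coordinates, and branch lengths are illustrative placeholders.

```python
# Minimal sketch of airway route planning: find the path from the trachea to
# the centerline node closest to the ROI centroid, using Dijkstra's algorithm
# over the centerline graph. All node data here are illustrative.
import heapq
import numpy as np

def plan_route(nodes, edges, roi_centroid, start="trachea"):
    """nodes: {name: (x, y, z)}, edges: {name: [(neighbor, length_mm), ...]}"""
    # Dijkstra from the trachea to every reachable centerline node.
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (d + w, v))
    # Destination: the reachable node nearest (Euclidean) to the ROI centroid.
    target = min(dist, key=lambda n: np.linalg.norm(np.array(nodes[n]) - roi_centroid))
    # Walk back to recover the route.
    route = [target]
    while route[-1] != start:
        route.append(prev[route[-1]])
    return route[::-1], dist[target]

nodes = {"trachea": (0, 0, 0), "carina": (0, 0, -100),
         "RUL": (40, 10, -130), "RB1": (60, 20, -150)}
edges = {"trachea": [("carina", 100)], "carina": [("RUL", 55)], "RUL": [("RB1", 30)]}
route, length_mm = plan_route(nodes, edges, roi_centroid=np.array([65, 25, -155]))
```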
Previous research has indicated that use of guidance systems during endoscopy can improve the performance
and decrease the skill variation of physicians. Current guidance systems, however, rely on
computationally intensive registration techniques or costly and error-prone electromagnetic (E/M)
registration techniques, neither of which fits seamlessly into the clinical workflow. We have previously
proposed a real-time image-based registration technique that addresses both of these problems. We
now propose a system-level approach that incorporates this technique into a complete paradigm for
real-time image-based guidance in order to provide a physician with continuously-updated navigational
and guidance information. At the core of the system is a novel strategy for guidance of endoscopy. Additional
elements such as global surface rendering, local cross-sectional views, and pertinent distances
are also incorporated into the system to provide further utility to the physician. Phantom results
were generated using bronchoscopy performed on a rapid prototype model of a human tracheobronchial
airway tree. The system has also been evaluated in ongoing live human tests. Thus far, ten such tests, focused on bronchoscopic intervention in pulmonary patients, have been run successfully.
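The registration technique itself is not detailed above; as a rough illustration of the general image-based idea, the sketch below scores candidate virtual-camera poses against a video frame with normalized cross-correlation and keeps the best match. The `render_endoluminal` stand-in and the one-dimensional pose search are assumptions made only for this example.

```python
# Minimal sketch of image-based registration: score candidate virtual-camera
# poses by normalized cross-correlation (NCC) between the bronchoscopic video
# frame and a virtual endoluminal rendering, and keep the best-scoring pose.
import numpy as np

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def register_frame(video_frame, candidate_poses, render_endoluminal):
    scores = [(ncc(video_frame, render_endoluminal(p)), p) for p in candidate_poses]
    return max(scores, key=lambda s: s[0])

# Toy example: the "renderer" just shifts a template; the true pose is x = 3.
rng = np.random.default_rng(0)
template = rng.random((64, 64))

def render_endoluminal(pose_x):
    return np.roll(template, pose_x, axis=1)

frame = np.roll(template, 3, axis=1) + 0.05 * rng.random((64, 64))
best_score, best_pose = register_frame(frame, range(-5, 6), render_endoluminal)
```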
KEYWORDS: 3D image processing, Visualization, Image segmentation, Data mining, Visual analytics, Surgery, 3D visualizations, Computing systems, Image processing, 3D scanning
Modern micro-CT scanners produce very large 3D digital images of arterial trees. A typical 3D micro-CT image can consist of several hundred megabytes of image data, with a voxel resolution on the order of ten microns. The analysis and subsequent visualization of such images poses a considerable challenge. We describe a computer-based system for analyzing and visualizing such large 3D data sets. The system, dubbed the Tree Analyzer, processes an image in four major stages. In the first two stages, a series of automated 3D image-processing operations are applied to an input 3D digital image to produce a raw arterial tree and several supplemental data structures describing the tree (central-axis structure, surface rendering polygonal data, quantitative description of all tree branches). Next, the human interacts with the system to visualize and correct potential defects in the extracted raw tree. A series of sophisticated 3D editing tools and automated operations are available for this step. Finally, the corrected tree can be visualized and manipulated for data mining, using a large number of graphics-based rendering tools, such as 3D stereo viewing, global and local surface rendering, sliding-thin slabs, multiplanar reformatted views, projection images, and an interactive tree map. Quantitative data can also be perused for the tree. Results are presented for 3D micro-CT images of the heart and liver.
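To make the "quantitative description of all tree branches" concrete, the sketch below computes a few simple per-branch measurements (arc length, mean radius, tortuosity) from an ordered set of central-axis points. The chosen metrics and the input arrays are illustrative, not the Tree Analyzer's actual output format.

```python
# Minimal sketch of per-branch quantitative measurements derived from a
# branch's central axis. Point coordinates and radii are placeholders.
import numpy as np

def branch_metrics(centerline_points_mm, radii_mm):
    """centerline_points_mm: (N, 3) ordered axis points; radii_mm: (N,) radii."""
    pts = np.asarray(centerline_points_mm, dtype=float)
    segment_lengths = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    length = segment_lengths.sum()                           # arc length
    mean_radius = float(np.mean(radii_mm))
    tortuosity = length / np.linalg.norm(pts[-1] - pts[0])   # >= 1.0
    return {"length_mm": length, "mean_radius_mm": mean_radius,
            "tortuosity": tortuosity}

pts = [(0, 0, 0), (0.5, 0.1, 0), (1.0, 0.3, 0.1), (1.5, 0.4, 0.2)]
print(branch_metrics(pts, radii_mm=[0.12, 0.11, 0.10, 0.09]))
```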
The standard procedure for diagnosing lung cancer involves two
stages. First, the physician evaluates a high-resolution three-dimensional (3D) computed-tomography (CT) chest image to produce a procedure plan. Next, the physician performs bronchoscopy on the patient, which involves navigating the bronchoscope through the airways to planned biopsy sites. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. In addition, these data sources differ greatly in the information they provide, and no true 3D tools exist for planning and guiding procedures. This makes it difficult for the physician to translate a CT-based procedure plan to the video domain of the bronchoscope. Thus, the physician must essentially perform biopsy blindly, and skill levels vary greatly between physicians. We describe a system that enables direct 3D CT-based procedure planning and provides direct 3D guidance during bronchoscopy. 3D CT-based information on biopsy sites is provided interactively as the physician moves the bronchoscope. Moreover, graphical information derived from a live fusion of the 3D CT data and bronchoscopic video is provided during the procedure. This information is coupled with a series of computer-graphics tools to give the physician a greatly augmented view of the patient's interior anatomy during a procedure. Through a series of controlled tests and studies with human lung-cancer patients, we have found that the system not only reduces the variation in skill level between physicians but also increases the biopsy success rate.
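One concrete element of such CT-video fusion is projecting a CT-defined biopsy site into the camera view once the bronchoscope pose is known. The sketch below applies a standard pinhole projection under an assumed CT-to-camera pose and camera intrinsics; all numeric values are placeholders rather than parameters of the described system.

```python
# Minimal sketch: overlay a CT-defined biopsy site on the live video by
# transforming the CT-space point into the camera frame and applying a pinhole
# projection. Pose and intrinsics below are illustrative placeholders.
import numpy as np

def project_site(site_ct_mm, R, t, fx, fy, cx, cy):
    """R, t: CT-to-camera rotation (3x3) and translation (3,); fx..cy: intrinsics."""
    p_cam = R @ np.asarray(site_ct_mm, dtype=float) + t
    if p_cam[2] <= 0:                      # behind the camera: not visible
        return None
    u = fx * p_cam[0] / p_cam[2] + cx      # pixel column
    v = fy * p_cam[1] / p_cam[2] + cy      # pixel row
    return u, v

R = np.eye(3)                              # assume camera axes aligned with CT axes
t = np.array([0.0, 0.0, 30.0])             # site 30 mm ahead of the lens
print(project_site([2.0, -1.0, 0.0], R, t, fx=500, fy=500, cx=320, cy=240))
```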
Very large 3D digital images of arterial trees can be produced by many imaging scanners. While many automatic approaches have been proposed that can begin the process of defining the 3D arterial tree captured in such an image, none guarantee complete, accurate definition. This paper proposes semi-automatic techniques for coming closer to the ultimate goal of defining a complete and accurate 3D arterial tree. As pointed out previously, automated techniques are essential for beginning the process of extracting a complex 3D arterial tree from a large 3D micro-CT image. Yet many problems arise in this definition of the tree. Our system, first described in an earlier effort, uses a series of interactive and semi-automatic tools to examine and correct the identified problems. The system provides 3D graphical tools for viewing global and local renderings of the extracted tree, sliding thin-slab views, maximum-intensity projections, multiplanar reformatted slices, and a global/local 2D graphical tree map. The user can also invoke several semi-automatic tools for modifying the tree. The presented results demonstrate the potential of the system.
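Among the viewing tools listed above, a sliding thin-slab view is simple to illustrate: the sketch below computes a thin-slab maximum-intensity projection around a chosen slice, with array shapes and slab thickness chosen arbitrarily for the example.

```python
# Minimal sketch of a sliding thin-slab maximum-intensity projection (MIP):
# collapse a few adjacent slices into one image so small vessels stand out.
import numpy as np

def thin_slab_mip(volume, center_slice, half_thickness=3, axis=0):
    """Maximum-intensity projection of a slab centered on `center_slice`."""
    lo = max(center_slice - half_thickness, 0)
    hi = min(center_slice + half_thickness + 1, volume.shape[axis])
    slab = np.take(volume, indices=range(lo, hi), axis=axis)
    return slab.max(axis=axis)

rng = np.random.default_rng(1)
vol = rng.random((40, 128, 128))
mip = thin_slab_mip(vol, center_slice=20, half_thickness=2)   # 128 x 128 image
```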
Modern micro-CT and multidetector helical CT scanners can produce high-resolution 3D digital images of various anatomical tree structures, such as the coronary or hepatic vasculature and the airway tree. The sheer size and complexity of these trees make it essentially impossible to define them interactively. Automatic approaches, using techniques such as image segmentation, thinning, and centerline definition, have been proposed for a few specific problems. None of these approaches, however, can guarantee extracting geometrically accurate multigenerational tree structures. This limits their utility for detailed quantitative analysis of a tree. This paper proposes an approach for accurately defining 3D trees depicted in large 3D CT images. Our approach utilizes a three-stage analysis paradigm: (1) Apply an automated technique to make a "first cut" at defining the tree. (2) Analyze the automatically defined tree to identify possible errors. (3) Use a series of interactive tools to examine and correct each of the identified errors. At the end of this analysis, in principle, a more useful tree will be defined. Our paper will present a preliminary description of this paradigm and give some early results with 3D micro-CT images.
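Stage (2) of the paradigm, identifying possible errors, could be driven by simple plausibility checks; the sketch below flags branches that are implausibly short or whose radius exceeds the parent's by a large factor. The thresholds and branch records are illustrative assumptions, not the criteria actually used in the paper.

```python
# Minimal sketch of automatic error identification in an extracted tree:
# flag branches that are suspiciously short or wider than their parent.
# Thresholds and the branch records are arbitrary illustrations.
def flag_suspect_branches(branches, min_length_mm=1.0, max_radius_ratio=1.5):
    """branches: {id: {"length_mm": float, "radius_mm": float, "parent": id or None}}"""
    suspects = []
    for bid, b in branches.items():
        if b["length_mm"] < min_length_mm:
            suspects.append((bid, "too short"))
        parent = b.get("parent")
        if parent is not None:
            ratio = b["radius_mm"] / branches[parent]["radius_mm"]
            if ratio > max_radius_ratio:
                suspects.append((bid, "radius larger than parent"))
    return suspects

tree = {0: {"length_mm": 20.0, "radius_mm": 2.0, "parent": None},
        1: {"length_mm": 0.4,  "radius_mm": 1.8, "parent": 0},
        2: {"length_mm": 8.0,  "radius_mm": 3.5, "parent": 0}}
print(flag_suspect_branches(tree))   # flags branch 1 (short) and branch 2 (radius)
```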