Transbronchial needle aspiration (TBNA) is a common method for collecting tissue to diagnose various chest
diseases and to stage lung cancer, but the procedure has technical limitations, mostly related to
the difficulty of accurately placing the biopsy needle in the target mass. Currently, pulmonologists plan TBNA by
examining a series of computed tomography (CT) slices before the operation; they then maneuver the
bronchoscope down the respiratory tract and direct the biopsy blindly. As a result, the biopsy success rate is low:
the diagnostic yield of TBNA is approximately 70 percent.
To enhance the accuracy of TBNA, we developed a TBNA needle whose tip position can be electromagnetically
tracked. The needle was used to estimate the bronchoscope's tip position and to enable the creation of corresponding
virtual bronchoscopic images from a preoperative CT scan. The needle consists of a flexible catheter
embedding a Wang Transbronchial Histology Needle and a sensor tracked by an electromagnetic field generator; we
used the Aurora system for electromagnetic tracking.
We also constructed an image-guided research prototype system incorporating the needle and providing a user-friendly
interface to assist the pulmonologist in targeting lesions. To assess the accuracy of the newly developed
electromagnetically tracked needle, a phantom study was conducted in the interventional suite at Georgetown University
Hospital: five TBNA simulations were performed on a custom-made phantom containing a bronchial tree. The experimental
results show that our device has the potential to enhance the accuracy of TBNA.
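Although the abstract does not describe the registration procedure, image-guided systems of this kind typically align tracker and CT coordinates with a point-based rigid registration of fiducials before mapping the tracked needle tip into the CT volume. The following sketch illustrates that step with a standard Kabsch/Horn least-squares fit; the fiducial coordinates, transform, and function names are illustrative, not taken from the paper.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid fit (Kabsch/Horn): find R, t with dst ~ R @ src + t."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)           # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc

# Hypothetical fiducials localized in tracker space (mm) ...
fids_tracker = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0],
                         [0, 0, 100], [50, 50, 0]], dtype=float)
# ... and the same fiducials in the preoperative CT; here generated
# from a known rotation/translation purely for demonstration.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 10.0])
fids_ct = fids_tracker @ R_true.T + t_true

R, t = rigid_transform(fids_tracker, fids_ct)
tip_tracker = np.array([30.0, 40.0, 20.0])  # electromagnetically tracked tip
tip_ct = R @ tip_tracker + t                # tip expressed in CT coordinates
```

Once the tip is expressed in CT coordinates, the corresponding virtual bronchoscopic view can be rendered from the preoperative scan at that position.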
KEYWORDS: High dynamic range imaging, Video, Cameras, Bronchoscopy, Light sources and illumination, Camera shutters, Video acceleration, Image registration, 3D image processing, 3D modeling
In this paper, we present the design and implementation of a new rendering method based on high dynamic range (HDR)
lighting and exposure control. This rendering method is applied to create video images for a 3D virtual bronchoscopy
system. One of the main optical parameters of a bronchoscope's camera is the sensor exposure. The exposure adjustment
is needed since the dynamic range of most digital video cameras is narrower than the high dynamic range of real scenes.
The dynamic range of a camera is defined as the ratio of the brightest point of an image to the darkest point of the same
image where details are present. In a video camera, exposure is controlled by the shutter speed and the lens aperture. To
create the virtual bronchoscopic images, we first rendered a raw image in absolute units (luminance); then, we simulated
exposure by mapping the computed values to the values appropriate for video-acquired images using a tone mapping
operator. We generated several images with HDR and others with low dynamic range (LDR), and then compared their
quality by applying them to a 2D/3D video-based tracking system. We conclude that images with HDR are closer to real
bronchoscopy images than those with LDR, and thus, that HDR lighting can improve the accuracy of image-based
tracking.
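The exposure-simulation step can be illustrated with a simple global tone mapping operator. The abstract does not name the operator used, so the sketch below substitutes the well-known Reinhard global operator L/(1+L) as a stand-in; the exposure scaling and 8-bit quantization mimic a video camera's limited dynamic range.

```python
import numpy as np

def simulate_exposure(luminance, exposure=1.0):
    """Map HDR luminance (absolute units) to an 8-bit video image.

    The raw rendered luminance is scaled by a simulated exposure, compressed
    with the Reinhard global operator L / (1 + L), and quantized to [0, 255].
    """
    L = np.asarray(luminance, dtype=float) * exposure
    Ld = L / (1.0 + L)  # display luminance in [0, 1)
    return np.clip(np.round(Ld * 255.0), 0, 255).astype(np.uint8)

# Raising the exposure brightens mid-tones while highlights saturate softly.
hdr_patch = np.array([0.0, 0.5, 1.0, 10.0, 100.0])  # hypothetical luminances
normal = simulate_exposure(hdr_patch, exposure=1.0)
bright = simulate_exposure(hdr_patch, exposure=2.0)
```

Varying the `exposure` parameter reproduces the brightening and saturation behavior a real bronchoscope camera exhibits when its automatic exposure adapts to the scene.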
Dynamic or 4D images (in which a section of the body is repeatedly imaged in order to capture physiological motion) are becoming increasingly important in medicine. These images are especially critical to the field of image-guided therapy, because they enable treatment planning that reflects the realistic motion of the therapy target. Although it is possible to acquire static images and deform them based on generalized assumptions of normal motion, such an approach does not account for variability in the individual patient. To enable the most effective treatments, it is necessary to be able to image each patient and characterize their unique respiratory motion, but software specifically designed around the needs of 4D imaging is not widely available. We have constructed an open source application that allows a user to manipulate and analyze 4D image data. This interface can load DICOM images into memory, reorder/rebin them if necessary, and then apply deformable registration methods to derive the respiratory motion. The interface allows output and display of the deformation field, display of images with the deformation field as an overlay, and tables and graphs of motion versus time. The registration is based on the open source Insight Toolkit (ITK) and the interface is constructed using the open source GUI tool FLTK, which will make it easy to distribute and extend this software in the future.
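One of the outputs mentioned above, the table or graph of motion versus time, can be derived directly from the deformation fields produced by the registration. The sketch below assumes a hypothetical array layout (phases x voxels x 3 displacement components, in mm); it stands in for, and is not, the application's actual ITK-based code.

```python
import numpy as np

def motion_vs_phase(fields):
    """Mean displacement magnitude per respiratory phase.

    fields: array of shape (T, Z, Y, X, 3), the deformable-registration
    displacement (mm) of each voxel at each phase relative to a reference
    phase. Returns a length-T curve suitable for a motion-vs-time graph.
    """
    mags = np.linalg.norm(fields, axis=-1)  # per-voxel displacement magnitude
    return mags.reshape(fields.shape[0], -1).mean(axis=1)

# Tiny synthetic example: no motion at phase 0, uniform 3 mm and 4 mm
# superior-inferior displacement at phases 1 and 2.
fields = np.zeros((3, 2, 2, 2, 3))
fields[1, ..., 2] = 3.0
fields[2, ..., 2] = 4.0
curve = motion_vs_phase(fields)  # -> [0.0, 3.0, 4.0]
```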
Volume measurement plays an important role in many medical applications in which physicians need to quantify tumor growth over time. For example, tumor volume estimation can help physicians diagnose patients and evaluate the effects of therapy. These measurements can also help researchers compare segmentation methods. For researchers to quickly check the results of volume data processing, they need a graphical interface with volume visualization features. VolView is an interactive visualization environment which provides such an interface. The "plug-in" architecture of VolView allows it to be used as a visualization platform for evaluation of advanced image processing algorithms. In this work, we implemented VolView plug-ins for two volume measurement algorithms and three volume comparison algorithms. One volume measurement algorithm involves voxel counting and the other provides finer volume measurement by anti-aliasing the tumor volume. The three volume comparison methods are a maximum surface distance measure, mean absolute surface distance, and a volumetric overlap measure. In this implementation, we rely heavily on software components from the open source Insight Segmentation and Registration Toolkit (ITK). The paper also presents the use of the VolView environment to evaluate liver tumor segmentation based on level set techniques. The simultaneous truth and performance level estimation (STAPLE) method was used to evaluate the estimated ground truth from multiple radiologists.
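For reference, the two simplest metrics mentioned above, voxel-counting volume and volumetric overlap, reduce to a few lines of array code. The sketch below is a plain NumPy illustration of the definitions, not the actual ITK/VolView plug-in code, and the Jaccard form of the overlap is an assumption (the abstract does not spell out the exact overlap formula).

```python
import numpy as np

def volume_mm3(mask, spacing):
    """Voxel-counting volume: number of segmented voxels times voxel volume."""
    return float(mask.sum()) * float(np.prod(spacing))

def volumetric_overlap(a, b):
    """Jaccard volumetric overlap |A intersect B| / |A union B| of two masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

# Hypothetical segmentations on a 4x4x4 grid with 1 x 1 x 2 mm voxels.
spacing = (1.0, 1.0, 2.0)
a = np.zeros((4, 4, 4), dtype=bool)
a[:2, :2, :2] = True   # 8 voxels
b = np.zeros((4, 4, 4), dtype=bool)
b[:2, :2, :] = True    # 16 voxels

vol_a = volume_mm3(a, spacing)      # 8 voxels * 2 mm^3 = 16.0 mm^3
overlap = volumetric_overlap(a, b)  # 8 / 16 = 0.5
```

The anti-aliased volume measurement and the surface-distance measures require a surface representation of the mask and are correspondingly more involved, which is why the plug-ins build on ITK components.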
4D images (3 spatial dimensions plus time) using CT or MRI will play a key role in radiation medicine as techniques for respiratory motion compensation become more widely available. Advance knowledge of the motion of a tumor and its surrounding anatomy will allow the creation of highly conformal dose distributions in organs such as the lung, liver, and pancreas. However, many of the current investigations into 4D imaging rely on synchronizing the image acquisition with an external respiratory signal such as skin motion, tidal flow, or lung volume, which typically requires specialized hardware and modifications to the scanner. We propose a novel method for 4D image acquisition that does not require any specific gating equipment and is based solely on open source image registration algorithms. Specifically, we use the Insight Toolkit (ITK) to compute the normalized mutual information (NMI) between images taken at different times and use that value as an index of respiratory phase. This method has the advantages of (1) being able to be implemented without any hardware modification to the scanner, and (2) basing the respiratory phase on changes in internal anatomy rather than external signal. We have demonstrated the capabilities of this method with CT fluoroscopy data acquired from a swine model.
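The phase index described above has a closed form in terms of marginal and joint entropies. The sketch below is a minimal NumPy version of that computation, using the common definition NMI = (H(A) + H(B)) / H(A, B); it stands in for, and is not, the ITK metric used in the paper, and the bin count is an arbitrary choice.

```python
import numpy as np

def nmi(a, b, bins=32):
    """Normalized mutual information (H(A) + H(B)) / H(A, B) of two images.

    Identical images give 2.0; unrelated images approach 1.0. Evaluating this
    against a reference frame yields a curve that tracks respiratory phase.
    """
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)

    def entropy(q):
        q = q[q > 0]
        return -(q * np.log(q)).sum()

    return (entropy(px) + entropy(py)) / entropy(p)

# A frame compared with itself scores 2.0; a scrambled frame scores lower,
# just as frames far from the reference respiratory phase would.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
scrambled = rng.permutation(frame.ravel()).reshape(frame.shape)
```

Plotting `nmi(reference, frame_t)` over the acquisition time then gives the respiratory-phase index without any external gating signal.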